Predictions regarding undergraduate enrollment are dire. As Kelderman (2019) predicted, we are already seeing a precipitous drop in enrollment. Coupled with existing problems in persistence and completion and with shrinking state revenue streams, this decline has heightened concerns about the sustainability of higher education. Because colleges and universities cannot rely on consistent first-year enrollments, much less show growth, the necessity of ensuring the success of those who are enrolled has never been more pronounced (Field, 2018).

To respond to these pressures, colleges and universities have implemented a plethora of initiatives. Many of these have focused on student characteristics that correlate with college attrition. Such programs have been predicated on the notion that characteristics associated with attrition can be mitigated by specialized programs and resources. For example, mentoring for first-generation students or gathering spaces to foster transfer student belonging are common institutional initiatives.

Programs that address unfavorable student characteristics are associated with a deficit model (Prime, 1982). Deficit models assume that these characteristics must be mitigated or “programmed out.” However, key findings from a number of retention and student success studies have demonstrated that incoming student characteristics are not something that must be overcome. Rather, the processes by which students make meaningful connections to campus, deploy effort, engage in learning, and access resources drive student success, regardless of entry characteristics (Kinzie, 2012).

Thus, student success initiatives that address core college experiences are emerging. For example, the Bill and Melinda Gates Foundation’s (2020) Frontier Set initiative focuses on advising and the use of technology for learning. Learning experiences are also becoming a “buzzing hive” of efforts to improve student outcomes (Condon et al., 2016, p. ix). The Association of College and University Educators course in Effective Teaching Practices (2018) and the Gateways 2 Completion (2017) course redesign initiative are two such examples.

The shift to focusing on the experiences students are having in courses puts the attention where it matters most (Koch, 2017). This may be particularly true when gateway courses are involved. Gateway courses are operationally defined as lower-division courses known to create student success bottlenecks (EAB, 2018, 2019; Koch et al., 2018). These bottlenecks can impede college completion (EAB, 2018, 2019; Koch et al., 2018) and/or college access (EAB, 2018).

Gateway courses are garnering attention from educational development initiatives focused on course redesign. Course design institutes (CDIs)—a workshop-style endeavor in which faculty can examine key curricular, instructional, and/or pedagogical aspects of their courses—are not a new idea (Chism et al., 2012; Palmer et al., 2016). Nor is it particularly new to use a CDI model to drive increases in student academic outcomes (Chism et al., 2012).

In their literature review, Chism et al. (2012) note that CDIs can impact academic outcomes such as “the organization and its mission” (Kirkpatrick, 1998), “change in student learning” (Guskey, 2000), and the “institution” (Chism & Szabo, 1998). Thus, we distinguish CDIs whose focus is an impact on student success as a student success course redesign (SSCR) (Campbell & Blankenship, 2020). (See Aitken, 2005, for an early model of a CDI that includes impacts on student success.)

SSCR programs are both similar to and different from a traditional CDI. Both CDI and SSCR programs use educational development opportunities to examine the relationships between the course curriculum, instruction, and learning (Campbell & Blankenship, 2020). However, a SSCR differs in that its outcomes must include student success metrics in addition to pedagogical change. Other differences are noted in the redesign program particulars:

  1. situating the locus of change as the course rather than the faculty;

  2. identifying which courses need to be redesigned rather than faculty choosing based on preference or willingness;

  3. crafting the length, leadership, and curriculum of the SSCR to support student success outcomes; and

  4. including all sections of a course-line (i.e., a specific disciplinary course that is part of the curriculum) and assembling course-line teams of all faculty associated with that course-line (Campbell & Blankenship, 2020).

Need for the Study

Faculty participation and engagement are requisites for any type of CDI. Assessment of faculty participation and engagement has focused on faculty satisfaction, student learning, faculty behavior, teaching attitudes, institutional impact (Chism et al., 2012), and syllabus changes (Palmer et al., 2016). These studies have yielded valuable findings for educational developers to use for continual improvement of CDIs.

However, Condon et al. (2016) and Chism et al. (2012) have noted that it is challenging to trace the effects of educational development on classroom and learning experiences. For example, faculty may leave the CDI satisfied with their experience but without the knowledge, skills, or dispositions to effect change (Condon et al., 2016). By looking at an earlier moment in the course redesign process, this research is a step toward understanding how a CDI might affect student outcomes: the decision-making that occurs during the development of an implementation plan for course redesign can be explored as scholarship of teaching and learning (SoTL) data.

The foundation of faculty participation and engagement relies on a number of decisions. Hypothetically, those decisions might include those depicted in Figure 1. At decision-making Point 3, faculty may elect to implement none, some, or all of the CDI strategies. Free will, level of effort, academic freedom, and the complexity of faculty’s pedagogical knowledge base (Shulman, 1986) may all influence the inclusion of strategies recommended during the CDI. Where the purpose of the CDI is to provide educational development for faculty, the degree of implementation may be less important. However, if the CDI is a SSCR, then the implementation of effective practices matters a great deal for driving increases in student success.

Figure 1.

Faculty Decision-Making Related to Course Design Institute Participation

Figure 2 depicts the potential outcomes for varying degrees of implementation. Therefore, the development of an implementation plan is a critical step in course redesign. The plan could also provide useful feedback for educational developers because it reveals the number, diversity, and combination of the strategies selected. For example, data that demonstrate that certain strategies were chosen over others could be used as feedback for modifying how the underutilized strategies are included in future CDIs.

Figure 2.

Potential Outcomes Following Participation in a Student Success Course Redesign

Relevant to course design, Shulman (1986) describes four domains of knowledge that shape how faculty conceptualize teaching: (1) subject matter content knowledge: how information in the discipline is organized; (2) general pedagogical knowledge: pedagogical knowledge that is cross-disciplinary; (3) curriculum knowledge: understanding of the programs and resources within a discipline; and (4) pedagogical content knowledge: understanding of strategies for translating discipline-specific concepts to learners.

CDIs typically present general pedagogical knowledge and assume that faculty have the subject matter, curriculum, and pedagogical content knowledge to proceed with the redesign. While an advanced degree can likely serve as a proxy measure for subject matter content knowledge, it is not clear that faculty have either curriculum or pedagogical content knowledge. Indeed, it is a known concern that graduate education does not develop the teaching expertise required later for faculty roles (Flaherty, 2019).

Therefore, one purpose of this study was to identify potential differences between the implementation plans of science, technology, engineering, and mathematics (STEM) and non-STEM faculty. Implementation plans that demonstrate differences between STEM and non-STEM faculty may connote differences in curriculum knowledge and pedagogical content knowledge that might be better served by either separate STEM and non-STEM redesign tracks or adjustments within interdisciplinary models.

Thus, another purpose of this research was to explore the course redesign strategies identified on the implementation plans generated by faculty participating in a SSCR (see Figure 1, Decision #3). The research questions were:

  1. What types of course redesign strategies did faculty identify on their SSCR implementation plans?

    1a. Was there a difference between the types of strategies included on the SSCR implementation plans for STEM and non-STEM courses?

  2. On the SSCR implementation plans, how were the Learning redesign strategies described?

    2a. Was there a difference in the Learning redesign strategies identified for STEM and non-STEM courses?

  3. Did the Learning redesign strategies depict practices that required in-class or out-of-class implementation?

    3a. Was there a difference between STEM and non-STEM courses in their inclusion of practices that required in-class or out-of-class implementation?

Method

Context

Field (2018) describes Gateways to Completion (G2C) as one of the five most popular programs to improve first-year retention. The initiative, developed by the John N. Gardner Institute for Excellence in Undergraduate Education (2017), is a comprehensive course redesign process focused specifically on student success in gateway courses. Gateway courses are lower-division, introductory courses that frequently enroll large numbers of students (Koch & Rodier, 2014). They serve as an unfortunate gatekeeping mechanism by creating a bottleneck (EAB, 2018) to degree completion because of their high DFWI rates (i.e., D or F grades, Ws/withdrawals, or Is/incompletes; Koch & Rodier, 2014).

Closely aligned with what Chism et al. (2012) label a “project-based community,” the G2C initiative includes elements of a traditional CDI but also leverages the four SSCR particulars (Campbell & Blankenship, 2020); the curriculum, length, and leadership are carefully plotted; and the course is positioned as the locus of change. Because the data from this study were from the G2C initiative, each of the effective SSCR particulars will be described, and an overview of the G2C three-year model will be provided.

Length of the G2C program

The G2C program uses a blended, consultancy-guided/self-study model. The model is structured as three year-long phases: “analyze and plan,” “act and monitor,” and “act and refine” (see Figure 3). The three-year length provides sufficient time for the SSCR both to remodel the gateway courses and to transform the campus culture around academic experiences and student success.

Figure 3.

Gateways to Completion Course Design Model

Leadership and curriculum of the G2C program

During all three years of G2C, the campus collaborates with Gardner Institute advisors to mentor and guide the process. This collaboration occurs through online consultations as well as key educational development experiences at critical points in the process. For example, during Year 1, G2C faculty participate in a Teaching and Learning Academy that facilitates their development of effective teaching pedagogies (John N. Gardner Institute for Excellence in Undergraduate Education, 2017). In this way, the redesign curriculum is based on the expertise of the G2C team.

Identification of courses and course-line teams

During the “analyze and plan” phase in Year 1, the G2C advisors work with the campus to extract and analyze student success patterns in course outcome data. In partnership with the campus, DFWI data are used to inform decisions about which courses need to be redesigned. Once the courses are identified, the faculty associated with them form the course-line redesign team.

The course as the locus of redesign

By using DFWI data, the G2C model ensures that the courses that need attention are those that receive attention. In other CDIs, courses are often involved because their faculty volunteered. In the G2C model, everyone who teaches the course becomes a de facto member of the course-line team. This avoids blame situated around notions of “inept faculty” and shifts the ethos to a “better course experience.” This shift is critical for building both campus community and faculty agency for change.

G2C’s three-year self-study program

Once courses and teams are identified, the “analyze and plan” phase begins (see Figure 3). During this phase, course-line teams begin the self-study process by examining their gateway courses relative to six key performance indicators (KPIs). The KPIs are touchstones that unpack the intricate relationship between course experiences and broader student success outcomes. These indicators are (1) Academic Policy and Practice, (2) Faculty/Instructors, (3) Learning, (4) Monitoring Student Performance, (5) Improvement, and (6) External Student Support Resources. Due to the indicators’ breadth, a course redesigned through G2C can influence institutional, student, and cultural outcomes.

In Year 2, “act and monitor,” the redesigned courses are taught and the redesign strategies are monitored for efficacy. During “act and refine” in Year 3, the redesign strategies are further honed and assessed. Data from Years 2 and 3 were not part of this study.

Sample and Coding Methods

At the conclusion of Year 1, course-line teams submitted course reports that included descriptions of the team’s implementation plan. The sample was composed of implementation plans from 105 course reports from 27 different institutions between 2012 and 2018. Participating institutions redesigned as few as one and as many as seven different course-lines.

Coding was conducted by the authors, both knowledgeable in curriculum and instruction as well as undergraduate teaching. Both coders have graduate-level education in curriculum, instruction, and learning that provided the expert knowledge to meet the familiarity criteria for coding consistency (Krippendorff, 2018). Coders had similar cultural, educational, and professional backgrounds, which also enhanced coding reliability (Krippendorff, 2018). The lead author’s previous expertise in content analysis methodology (Campbell et al., 2013; Denzine et al., 1996) was used to guide the project.

Procedure

The 105 implementation plans were analyzed via a three-step process. This included Step 1: parsing the implementation plans; Step 2: deductively coding using the six KPIs; and Step 3: inductively coding the Learning KPI.

Step 1: Parsing the implementation plans

The redesign strategies described in the implementation plan sections of the course reports were extracted as data and entered into a database. They were then parsed such that each strategy was isolated as a single data point. For example, “Faculty should have the opportunity to discuss problems and concerns, as well as strategies and approaches to identify best practice” was parsed into “Faculty should have the opportunity to discuss problems and concerns” and “strategies and approaches to identify best practice” as representing two different ideas. Reliability for this step was addressed during Step 2, when each statement was checked to ensure that it represented only one idea prior to coding for KPIs. At the conclusion of Step 1, the implementation plans had been parsed into 1,373 individual course redesign strategies.

Step 2: Deductive coding based on the KPIs

The 1,373 individual strategies were deductively coded using the six KPIs from the self-study done in Year 1. Reliability was established by independent coding of a subsample of 50 redesign strategies. The researchers then met and compared coding, discussed and clarified discrepancies, and evaluated the coding process itself. At this point, the operational definitions of the KPIs were refined to generate mutually exclusive categories (Krippendorff, 2018), which allowed the researchers to reach complete agreement on the subsample coding. For example, a strategy of “addressing DFWI rates” in order to “share, compare and understand DFWI rates” was originally related to both the Faculty/Instructors indicator and the Improvement indicator. The final KPI definitions were refined (see Table 1) such that the strategic use of DFWI rates was an example of a redesign strategy of only the Improvement KPI.

Table 1.

Refined Operational Definitions of Key Performance Indicators With Example Indicators

Key performance indicator | Refined operational definition | Example indicators
Academic Policy and Practice | Formal policies that promote student success in gateway courses. Policies are effectively communicated, inform academic practice at all levels, clearly articulate the link between policy and practice, and are consistently implemented. | Use of placement tests, alignment of pre- and post-requisites, and uniform textbook selection
Faculty/Instructors | There is dedication to instructional excellence in gateway courses. Institutions and departments intentionally select gateway course faculty based on academically sound criteria, support ongoing professional development, and reward exemplary teaching in gateway courses. | Faculty development focused on pedagogy, increased faculty awards and recognition, and levels of faculty collaboration
Learning | There is commitment to authentic student learning in gateway courses. Institutions, departments, and faculty articulate clear learning goals and expectations, ensure timely and frequent feedback, and provide opportunities to demonstrate content mastery. | Design of a clear and concise syllabus, aligned relationships with lab or recitation sections, and articulation of clear learning outcomes that are mapped to assessment
Improvement | There is a culture of ongoing quality improvement to advance student success in gateway courses. Institutions and departments use multiple data sources to better understand student and faculty performance and encourage knowledge about and sharing of best practices in undergraduate teaching and learning. | Campus-based definitions of gateway courses, sharing and collaborative analysis of DFWI data, and the overall use of data to generate continued improvement
Monitoring Student Performance | The performance of students in gateway courses is monitored. Institutions and departments analyze and use student data to provide appropriate support based on both student characteristics and specific learning environments. | Targeting support outreach efforts based on real-time student performance, at-risk demographic characteristics (e.g., first-generation status), or monitoring students’ use of support resources (e.g., tutoring)
External Student Support Resources | There is a commitment to providing students coordinated and effective support to strengthen academic skills needed for success in gateway courses. Institutions and departments deliver timely support in collaboration with other relevant units. | Summer Bridge Programs, supplemental academic support (e.g., tutoring, SI), and tailored support for courses offered in different formats (e.g., online)

Next, the researchers independently coded the remaining 1,323 course redesign strategies. The resulting codes were compared, and discrepancies were discussed. The majority of discrepancies stemmed from inconsistent application of the operational definitions for the KPIs and were resolved through discussion. The reliability of this phase of the coding was 99.70%, which the researchers deemed acceptable for proceeding with the final phase of analysis.
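Although the coding itself was done by hand, a percent-agreement check of the kind described above is straightforward to reproduce computationally. The following is a minimal, illustrative sketch (not the authors’ actual procedure); the file and column names are hypothetical and assume one parsed strategy per row with each coder’s KPI assignment in its own column.

```python
# Illustrative sketch of an intercoder percent-agreement check (hypothetical file/column names).
import pandas as pd

KPIS = {
    "Academic Policy and Practice", "Faculty/Instructors", "Learning",
    "Monitoring Student Performance", "Improvement",
    "External Student Support Resources",
}

def percent_agreement(a: pd.Series, b: pd.Series) -> float:
    """Percentage of strategies assigned the same KPI by both coders."""
    return float((a == b).mean() * 100)

df = pd.read_csv("coded_strategies.csv")      # hypothetical: one row per parsed strategy
assert df["kpi_coder_1"].isin(KPIS).all()     # guard against stray labels
assert df["kpi_coder_2"].isin(KPIS).all()

print(f"Agreement: {percent_agreement(df['kpi_coder_1'], df['kpi_coder_2']):.2f}%")

# Rows where the coders disagree would be flagged for the discussion-and-resolution step described above.
disagreements = df[df["kpi_coder_1"] != df["kpi_coder_2"]]
```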

Step 3: Inductive coding of the Learning redesign strategies

The redesign strategies coded as the Learning KPI were analyzed as a subset. The Learning subset consisted of 336 individual redesign strategies, gleaned from 86 course-lines, representing 26 different institutions. One researcher inductively coded clusters of strategies based on similarity of approach. Each cluster was given a category descriptor that depicted its operational definition. The categorical clusters were then reviewed by the second researcher for confirmation. The researchers met to refine both category descriptions as well as category membership. The reliability of the coding was 95.00% prior to discussion and 99.40% after discussion. Three strategies that were deemed too vague for coding were removed from the dataset.

During Step 3, the strategy clusters were also conceptually organized into a larger thematic framework. The names of the themes and categories were then added as variables to the database. The final dataset included descriptive information about the courses as well as the following variables (a schematic sketch of this record structure follows the list):

  • course redesign strategy number (non-coded)

  • course name and number (non-coded)

  • course title (non-coded)

  • university (non-coded)

  • subject (coded)

  • disciplinary focus (coded)

  • key performance indicator (coded)

  • learning themes for strategies coded as the Learning KPI (coded)

  • STEM vs. non-STEM (coded)

  • in-class vs. outside of class pedagogy change (coded)
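As referenced above, the sketch below illustrates how one record in such a dataset might be organized, with one row per coded redesign strategy. The field names and values are invented for illustration and do not reproduce the authors’ actual database.

```python
# Illustrative sketch of the analysis dataset: one row per parsed redesign strategy.
# Field names and values are hypothetical.
import pandas as pd

records = [
    {
        "strategy_number": 1,                       # non-coded
        "course_name_number": "CHEM 101",           # non-coded (hypothetical)
        "course_title": "General Chemistry",        # non-coded (hypothetical)
        "university": "Institution A",              # non-coded (anonymized)
        "subject": "Chemistry",                     # coded
        "disciplinary_focus": "Natural Sciences",   # coded
        "kpi": "Learning",                          # coded
        "learning_theme": "Assessment Redesign",    # coded (only for Learning KPI strategies)
        "stem": True,                               # coded: STEM vs. non-STEM
        "in_class_change": False,                   # coded: in-class vs. out-of-class pedagogy change
    },
]
df = pd.DataFrame(records)
```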

Results

The content analysis yielded a variety of interesting findings that are organized by research question in the sections that follow.

Question 1

Research Question 1 was addressed through analysis of the 1,373 course redesign strategies gleaned from the implementation plans. Most strategies focused on the Learning KPI, while the fewest focused on the Monitoring Student Performance KPI (see Figure 4).

Figure 4.

Proportions of Each Key Performance Indicator
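A tabulation like the one summarized in Figure 4 could be produced from the coded dataset as follows. This is an illustrative sketch with hypothetical file and column names, not the authors’ code.

```python
# Illustrative sketch: share of the 1,373 strategies coded to each KPI (cf. Figure 4).
import pandas as pd

df = pd.read_csv("coded_strategies.csv")      # hypothetical file: one row per coded strategy
kpi_shares = (
    df["kpi"]
    .value_counts(normalize=True)             # proportion of strategies per KPI
    .mul(100)
    .round(2)
    .sort_values(ascending=False)
)
print(kpi_shares)  # Learning largest; Monitoring Student Performance smallest
```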

Question 1a

Using the National Science Foundation’s (Gonzalez & Kuenzi, 2012) definition, each course-line was coded by discipline as either STEM or non-STEM. This definition categorizes mathematics, natural sciences, engineering, computer and information sciences, and social and behavioral sciences (e.g., psychology, economics, sociology, and political science) as STEM courses. There were no appreciable differences in the redesign strategies identified for STEM courses versus non-STEM courses (see Table 2).

Table 2.

Key Performance Indicators Percentages by Non-STEM and STEM Courses

Key performance indicators  Non-STEM  STEM
 Academic Policy and Practice  19.53%  20.17%
 Faculty/Instructors  19.76%  18.15%
 Improvement  17.18%  16.35%
 Learning  25.18%  24.31%
 External Student Support Resources  12.24%  13.38%
 Monitoring Student Performance  6.12%  7.64%
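A comparison like Table 2 amounts to a column-wise percentage cross-tabulation of KPI by disciplinary group. A minimal sketch, again with hypothetical column names and assuming a boolean STEM indicator:

```python
# Illustrative sketch: KPI percentages within non-STEM and STEM courses (cf. Table 2).
import pandas as pd

df = pd.read_csv("coded_strategies.csv")      # hypothetical file: one row per coded strategy
table2 = (
    pd.crosstab(df["kpi"], df["stem"], normalize="columns")  # percentages within each column
    .mul(100)
    .round(2)
)
print(table2)
```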

Question 2

The inductive coding of the Learning KPI yielded five major themes, each of which could be described by additional categories (see Table 3).

Table 3.

Learning Themes and Operational Definitions With Number and Percentage of Responses

Learning themes | Theme definitions | N | %
Content Redesign | Content Redesign strategies recommended reconstruction or improvement to the course content. Examples of content strategies included resequencing content, eliminating content, and improving content consistency across sections. | 107 | 31.85%
Assessment Redesign | Assessment Redesign strategies recommended updating existing assessments, increasing assessment frequency, or leveraging assessment feedback for future learning. Examples of assessment strategies included adding frequent, low-stakes assessments, creating consistent assessments across sections, and developing rubrics. | 86 | 25.60%
Pedagogy Redesign | Pedagogy Redesign strategies recommended incorporating pedagogical techniques that were known to have a positive effect on academic success. Examples of pedagogical strategies included increasing the use of active pedagogies, such as exam wrappers and clickers, and including Peer Assistants. | 64 | 19.05%
Syllabus Redesign | Syllabus Redesign strategies recommended reconstruction or updating of the syllabus. Examples of syllabus redesign strategies included refining existing learning outcomes, creating consistent learning outcomes across all sections, and adding student resource information. | 44 | 13.10%
Student Success Redesign | Student Success Redesign strategies recommended incorporating effective feedback practices about academic success status within the course. Examples of student success strategies included using campus early alert software and increasing course messages about student success. | 35 | 10.42%

Content Redesign

Content Redesign was the largest theme within the Learning KPI (see Table 4). Content Redesign strategies made up 31.85% of the Learning indicator, and the open coding for Content Redesign revealed three categories: Content Design (53.27%), Curriculum Alignment (27.10%), and Content Consistency (19.63%).

Table 4.

Analysis of Learning Indicator With Number and Percentages of Responses

Redesign themes, subcategories, & groups N  % w/group  % w/theme
Content Redesign 107
  Content design 57  53.27
    Content reorganization 21  36.84  19.63
    Embed discipline specific skills 15  26.32  14.02
    Create value for content 11  19.30  10.28
    Textbook selection 5  8.77  4.67
    Specific content changes 5  8.77  4.67
  Curriculum alignment 29  27.10
    Pre/post-requisite content alignment 15  51.72  14.02
    Lab/recitation alignment to course 14  48.28  13.08
  Content consistency 21  19.63
    Common content 19  90.48  17.76
    Common textbook 2  9.52  1.87
Assessment Redesign 86
  Student diagnostics 49  56.98
    Early and often assessment 23  46.94  26.74
    Mastery feedback approaches 23  46.94  26.74
    Feedback communication of grades 3  6.12  3.49
  Assessment design 27  31.40
    Specific assessment revisions 14  51.85  16.28
    Rubrics and grading 13  48.15  15.12
  Assessment alignment 10  11.63
    Common assessments 6  60.00  6.98
    Pre/post-diagnostic assessment 4  40.00  4.65
Pedagogy Redesign 64
  Pedagogical approaches 38  59.38
    Active learning strategies 23  58.97  35.94
    Exam wrappers 10  25.64  15.63
    Clickers 5  12.82  7.81
  Implement peer assistance 23  100.00  35.94
  Social learning approaches 3  100.00  4.69
Syllabus Redesign 44
  Course consistency 27  61.36
    Common learning outcomes 21  77.78  47.73
    Common syllabus 6  22.22  13.64
  Syllabus revisions 17  38.64
    Syllabus review 7  41.18  15.91
    Embed student support on syllabus 10  58.82  22.72
Student Success Redesign 35
  Course specific student success communications 28  80.00
  Utilize early alert system 7  20.00

The category Content Design consisted of five groups and represented redesign strategies that described revisions to the organization or representation of course content.

  • The Content Reorganization (19.63%) group described redesign strategies that addressed changes to the order in which content was presented or the inclusion of topics within the course (e.g., “examine the materials with an eye to reduce the volume of content” and “create a modular, self-paced version”).

  • The group Embed Discipline Specific Skills (14.02%) connoted the faculty’s desire to include discipline-specific skills (e.g., “include scientific vocabulary as part of course content” and “incorporate career-relevant math skills”) into the course.

  • Faculty described the need to communicate to students the purpose and significance of the course in the Create Value for Content group (10.28%) (e.g., “help faculty who teach the course develop strategies to ‘sell it’ and ‘why this course matters should be conveyed to students’ ”).

  • More purposeful selection of course texts and readings was identified in the group Textbook Selection (4.67%) (e.g., “continue to use an OER textbook” and “build master course and instructional resources”).

  • In the group Specific Content Changes (4.67%), faculty described redesign strategies that addressed unique changes to existing topics, theories, and concepts within the course (e.g., “examine more social and cultural history” and “integrate a number of additional supports into the course”).

The Curriculum Alignment category consisted of two groups and represented redesign strategies that focused on the relationship of course content to other discipline-based experiences.

  • The Pre/Post-Requisite Content Alignment group (14.02%) was a set of redesign strategies that would examine course content with respect to the course’s pre- and post-requisites to ensure that the sequencing was logical (e.g., “align common course objectives with general education outcomes” and “connect with other departments to see what students need in their upper level courses”).

  • The Lab/Recitation Alignment to Course group (13.08%) consisted of redesign strategies that would examine course content related to laboratory or recitations that were co-requisite with the course (e.g., “lab instruction should be better aligned with what is being taught in the classroom” and “add a required recitation”).

The category Content Consistency consisted of two groups that described the faculty’s desire to develop a more homogeneous content experience across all sections of the same course.

  • The Common Content (17.76%) group consisted of redesign strategies for faculty to use the same content across all sections of the course (e.g., “synchronize the delivery of content among sections” and “make the teaching more nearly ‘common,’ since the exams are common”).

  • The Common Textbook (1.87%) group consisted of redesign strategies for all faculty to use the same text or choose between a small number of agreed-upon texts (e.g., “recommend ONE new textbook for all sections” and “use the same online textbook”).

Assessment Redesign

Assessment Redesign was the second largest of the themes within the Learning KPI accounting for 25.60% of the Learning redesign strategies (see Table 4). The content analysis for Assessment Redesign resulted in three categories: Student Diagnostics (56.98%), Assessment Design (31.40%), and Assessment Alignment (11.63%).

The category Student Diagnostics consisted of three groups and represented redesign strategies that described implementing assessments that changed how students were provided with feedback about course progress.

  • The Early and Often Assessment group (26.74%) represented a course redesign strategy that provides more frequent, low-stakes assessments to students (e.g., “include early and often assessments” and “include regular/early quizzes so students get constant feedback on where they stand”).

  • The Mastery Feedback Approaches group (26.74%) represented a course redesign strategy that would implement assessments that provide students specific assessment of their mastery of course knowledge or skills (e.g., “developing a resource of program-specific writing samples for students” and “allowed students to take quizzes until a 100% score is achieved”).

  • The redesign strategies within the Feedback and Communication of Grades (3.49%) group connoted a faculty commitment to providing increased dissemination of course grades (e.g., “instructors should keep their grades on Blackboard” and “begin discussion on ways to make feedback more effective and efficient”).

The category Assessment Design consisted of two groups and represented redesign strategies that describe revisions to specific assessment experiences.

  • In the Specific Assessment Revision group (16.28%), faculty described redesign strategies that addressed unique changes to existing assessment instruments (e.g., “assessment model was also transformed” and “continue to work on improving assessments”).

  • In the Rubrics and Grading group (15.12%), faculty described implementation of rubrics where none previously existed or changes to existing rubrics and/or grading criteria (e.g., “add rubrics to course templates to improve consistency” and “institute a common framework for evaluating oral presentations and research papers”).

The final Assessment Redesign category, Assessment Alignment, consisted of two groups that described the faculty’s desire to develop more homogeneous assessment activities across all sections of the same course.

  • The Common Assessment group (6.98%) connoted faculty’s desire to administer some common assessments across all sections of the course (e.g., “use common mid-term exam” and “uniform exams for upcoming semesters”).

  • The Pre/Post Diagnostic Assessment group (4.65%) described the use of diagnostic instruments that ensured that the alignment of the course within the program was appropriate or that students’ readiness level for the course was congruous (e.g., “use a diagnostic test at the beginning of a semester” and “give assignment that stands as a diagnostic tool to identify writing skills”).

Pedagogy Redesign

Pedagogy Redesign was the third largest of the themes within the Learning KPI (see Table 4). Pedagogy Redesign strategies made up 19.05% of the Learning indicator, and the results of the content analysis yielded three categories within Pedagogy Redesign: Pedagogical Approaches (59.38%), Implement Peer Assistance (35.94%), and Social Learning Approaches (4.69%).

The category Pedagogical Approaches was composed of redesign strategies focused on specific teaching and learning techniques.

  • The Active Learning Strategies (35.94%) group consisted of redesign strategies that included the use of more active or engaged learning strategies (e.g., “search for active learning techniques” and “implement collaborative learning exercises”).

  • The Exam Wrappers (15.63%) group focused on the use of a specific learning strategy that engages students in planning, goal setting, and reflection for exams (e.g., “use Exam Wrapper activity for first two exams” and “implement exam wrappers throughout the semester”).

  • The group Clickers (7.81%) was focused on the use of clickers during class (e.g., “utilize a clicker technology” and “all instructors should be encouraged to use teaching aids such as clickers and Sakai”).

The category Implement Peer Assistance had no subgroups and represented redesign strategies that described the inclusion of peers for teaching, learning, mentoring, or coaching the students enrolled in the course (e.g., “employ Teaching Assistants to participate in class activities” and “we recommend TA support for this to occur in the large lectures”).

The category Social Learning Approaches also had no subgroups and described the use of strategies that fostered peer-to-peer interaction (e.g., “implement a chat room” and “use the classroom to provide a sense of community”).

Syllabus Redesign

The fourth largest of the themes within the Learning KPI described changes to the course syllabus (see Table 4). The Syllabus Redesign theme made up 13.10% of the Learning redesign strategies, and the content analysis for Syllabus Redesign revealed two categories: Course Consistency (61.36%) and Syllabus Revisions (38.64%).

The category Course Consistency consisted of two groups that described the faculty’s desire to increase commonality for syllabi.

  • The Common Learning Outcomes (47.73%) group described the need to include the same learning outcomes on all syllabi across all sections of the course (e.g., “syllabi should list learning goals” and “unify learning outcomes”).

  • The Common Syllabus (13.64%) group took consistency a step further and described the goal of having all sections of a course use the same syllabus.

The category Syllabus Revisions consisted of two groups and represented redesign strategies that described changes to the content of the syllabus.

  • The Syllabus Review (15.91%) group consisted of faculty’s commitment to scrutinize their syllabus in order to develop more effective language (e.g., “course description needs to be revised” and “establish a course-wide syllabi review committee”).

  • The Embed Student Support on Syllabus (22.72%) group represented the need to include information about student support (e.g., tutoring, supplemental instruction) or other resources (e.g., the learning assistance center) within the syllabus itself (e.g., “all tutoring information should be included in the syllabus” and “add the link to the Writing Center”).

Student Success Redesign

Student Success Redesign was the final and smallest of the themes within the Learning KPI. Student Success Redesign strategies made up 10.42% of the Learning indicator and consisted of two categories: Course Specific Student Success Communications (80.00%) and Utilize Early Alert Systems (20.00%). Table 4 presents the different categories and groups within the Student Success Redesign theme.

The category Utilize Early Alert Systems (20.00%) had no groups and consisted of the commitment to use the early alert system offered by the university (e.g., Starfish, Beacon) or to provide students with other warning type feedback (e.g., “an effective early alert system needs to be developed” and “establish an early warning benchmark”).

The category Course Specific Student Success Communications (80.00%) had no subgroups and represented the idea that faculty would verbally speak with students about techniques or resources that would facilitate their success (e.g., “faculty should highlight support services orally” and “invite peer tutors to address class”).

Question 2a

Question 2a was addressed using the STEM and non-STEM variable codes. As indicated in Figure 5, the only difference between STEM and non-STEM courses was in the Pedagogy theme. The faculty redesigning non-STEM courses included proportionally fewer pedagogical redesign strategies (13.21%) than the faculty redesigning STEM courses (21.74%).

Figure 5.

Differences Between STEM and Non-STEM Courses in the Pedagogy Theme

Question 3

To address Question 3, the 336 Learning redesign strategies were classified as either requiring a pedagogical change in class or as a change that could be implemented out of class. In-class strategies included approaches such as Active Learning or using Clickers. Out-of-class strategies included approaches such as Syllabus Revisions or Assessment Alignment. The results demonstrated that the majority (57.73%) of the Learning redesign strategies represented out-of-class approaches.
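The in-class/out-of-class split reported above, along with the disciplinary comparison taken up in Question 3a, can be expressed as a simple proportion and cross-tabulation over the Learning subset. The sketch below is illustrative only, with hypothetical column names and an assumed boolean in-class indicator.

```python
# Illustrative sketch: share of Learning strategies requiring out-of-class vs. in-class changes.
import pandas as pd

df = pd.read_csv("coded_strategies.csv")      # hypothetical file: one row per coded strategy
learning = df[df["kpi"] == "Learning"]        # subset of Learning KPI strategies

out_of_class_share = (~learning["in_class_change"]).mean() * 100
print(f"Out-of-class share: {out_of_class_share:.2f}%")   # reported above as 57.73%

# The same split broken out by STEM vs. non-STEM (the comparison in Question 3a).
by_discipline = (
    pd.crosstab(learning["stem"], learning["in_class_change"], normalize="index")
    .mul(100)
    .round(2)
)
print(by_discipline)
```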

Question 3a

The STEM and non-STEM course categories were cross tabulated with the in-class and out-of-class strategy classifications of the Learning theme categories. The two differences that emerged are highlighted in Table 5.

Table 5.

Percentages of Learning Theme Redesign Strategies That Must Be Implemented in Class for Non-STEM and STEM Courses

Learning themes & groups  Non-STEM STEM
Content  44.00%  31.00%
  Embed discipline/content-specific skills into course  13.33%  9.28%
  Specific content changes  11.11%  0.00%
  Use of same content  13.33%  13.40%
  Value  6.67%  8.25%
Pedagogy  31.00%  56.00%
  Active learning strategies  15.56%  16.49%
  Clickers  2.22%  4.12%
  Exam wrappers  4.44%  8.25%
  Social learning  2.22%  2.06%
  Use peers in the classroom  6.67%  20.62%
Student Success  24.44%  17.53%

Among the Learning strategies requiring in-class implementation, the non-STEM faculty favored the Content (44.00%) and Student Success (24.44%) themes more than their STEM peers. The most disparate categories were the non-STEM course recommendations to make Specific Content Changes (11.11% vs. 0.00%) and to Embed Discipline/Content Specific Skills (13.33% vs. 9.28%).

In contrast, for the Pedagogy theme, the STEM course faculty (56.00%) planned more in-class changes than the non-STEM course faculty (31.00%). Most of this difference was in STEM course faculty planning to Implement Peer Assistance (20.62%) and Active Learning Strategies (16.49%) (see Table 5).

In summary, the results from the content analysis demonstrated that the majority of the redesign strategies included on the implementation plans focused on the Learning KPI, while the Monitoring Student Performance KPI was addressed least often. For STEM and non-STEM courses, there were no appreciable differences in the types of KPIs included on the implementation plans.

Five themes emerged from the granular analysis of the Learning KPI. In descending order of frequency, those themes were Content Redesign, Assessment Redesign, Pedagogy Redesign, Syllabus Redesign, and Student Success Redesign. Within the Learning KPI, there were differences between STEM and non-STEM courses in the strategies related to Pedagogy Redesign. Specifically, non-STEM courses included fewer changes to pedagogical approaches than STEM courses. The redesign strategies in the Learning KPI were also classified as to whether the strategy would require an actual in-class change. Results demonstrated that the faculty gravitated toward planning for more out-of-class than in-class strategies. There were again differences between STEM and non-STEM courses: non-STEM faculty planned more strategies related to Content Redesign shifts and Student Success approaches, and STEM faculty favored Implement Peer Assistance and Active Learning Strategies.

Discussion

Implications of the Research Questions

Overall, the distribution of the planned-for strategies was fairly evenly spread across four of the six KPIs (Question 1), with no differences between STEM and non-STEM faculty planning (Question 1a). This suggests that the curriculum and approach of the SSCR do not need to be tailored to the two different audiences. However, the less frequent inclusion of the External Student Support Resources and Monitoring Student Performance KPIs indicates that the SSCR curriculum for those indicators may need revision. Perhaps faculty did not fully appreciate the value of students’ use of external support systems or were not able to visualize themselves making those referrals.

Within the Learning KPI, faculty planned to implement strategies associated with backwards design (Question 2) (Wiggins & McTighe, 2011). That is, the goal of the course (Content Redesign, 31.85%), the assessment of the course (Assessment Redesign, 25.60%), and instructional design (Pedagogy Redesign, 19.05%) represented the majority of planned-for strategies. While Syllabus Redesign (13.10%) and Student Success Redesign (10.42%) were not indicated on as many plans, they both implicitly support and actualize the other strategies. For example, redesigned assessments would require corresponding changes to the syllabus. Assessment conducted via a learning management system would de facto result in feedback to students.

A limitation of this study is that the analysis examined only the implementation plans. The degree or efficacy of the actual implementation was not investigated, nor were specific course profiles analyzed. It was possible for course-line teams to exclude strategies from the Learning indicator and/or to exclude features related to backwards design or constructive alignment. If the “order of operations” matters, this is problematic. Bowen and Watson (2017) describe pedagogical design as a highly recursive process, but one that begins with the end in mind. This study looked at decision-making across a set of implementation plans, but future research might address the characteristics within a plan.

In the Learning KPI, there were differences between STEM and non-STEM faculty plans regarding pedagogical approaches (Question 2a). This finding runs contrary to the implication from Question 1a, as this disciplinary difference suggests that the SSCR may need to split into different disciplinary audience tracks. Future research might focus on why those differences exist and whether different SSCR tracks are more efficacious.

The faculty implementation plans indicated a preference for strategies that did not require them to alter their in-class approaches to teaching (Question 3). One explanation for this finding is that the out-of-class strategies represented more straightforward tasks. For instance, the process of syllabus revision may have appeared more concrete than using clickers or active learning strategies. This implies that the SSCR curriculum might need to provide more granular information about implementing the different strategic approaches.

The differences between STEM and non-STEM plans for using in-class or out-of-class Learning strategies (Question 3a) also offer useful feedback for SSCR design. For example, none of the STEM faculty included making Specific Content Changes to their courses. This reluctance may be indicative of their need to adhere to curriculum maps that prepare students for pre-professional exams rather than of obstinacy. As SSCR design feedback, that trend indicates that the SSCR curriculum should not include that type of redesign approach.

In conclusion, the analysis of the faculty implementation plans, post SSCR or CDI, offers helpful feedback for educational developers. Moreover, the practice of reviewing implementation plans to inform CDI curriculum and design also appears to be a useful process.

Implications

The results of the content analysis also demonstrated some broad patterns and implications. Generally, the course redesign strategies tended to focus on approaches that faculty have conventionally used, to maintain disciplinary isolation, and to be described in vague, non-committal terms.

Conventional Thinking

The majority of the planned-for strategies focused on approaches that have traditionally been within the faculty’s wheelhouse. As such, the strategies focused on the Learning and Academic Policy and Practice KPIs and on matters related to instructional staffing (Faculty/Instructors KPI). Within the Learning KPI, faculty planned to redesign content, assessment, and/or pedagogical approaches. These are all topics that have conventionally been within the faculty’s sphere. The lesser planned-for strategies, such as using data to improve student success, are approaches that have not typically been part of faculty duties.

The tendency to stay within conventional boundaries could be due to several factors. The CDI itself may have privileged the more familiar strategies within the agenda of the institute. Additionally, faculty may have had to defer to conventional strategies due to constraints on their time as they balanced teaching, service, and scholarship.

The implementation plans also demonstrated a trend toward more conventional thinking in which the regular classroom routine was left sacrosanct. For example, syllabus and assessment revisions were preferred over activities that required more in-person execution (e.g., collaborative learning). This conventional thinking around out-of-class time also manifested itself as a disciplinary difference with STEM faculty planning to Implement Peer Assistance more than their non-STEM colleagues.

Demonstrating another conventional thinking pattern as well as a disciplinary difference, STEM faculty were less likely to plan changes to content. This could be due to their standing as pipeline courses that prepare students for professional entrance exams such as the Medical College Admissions Test (MCAT) or the Dental Admission Test (DAT). Because general education courses do not typically serve in a pipeline role, non-STEM faculty have more flexibility in making content decisions and stepping outside of conventional approaches to presenting content.

The lack of content flexibility in the STEM courses may also be due to the more sequential and cumulative nature of these disciplines. For example, solving problems in physical chemistry requires the knowledge of balancing chemical equations learned in general chemistry. Coppola and Jacobs (2002) note that the chemistry curriculum is influenced by the American Chemical Society, which supports a vertical presentation of the content that requires students to complete courses emphasizing foundational facts and skills before proceeding. Chemistry textbooks reflect this structure.

The conventional thinking pattern is reinforced by Turner’s (2009) position that faculty do not see themselves as “risk takers.” There is safety and comfort in the role of lecturer as the disseminator of wisdom, and the premise of academic freedom can provide a free pass not to change. Faculty may also maintain conventions rather than take risks because of some contentiousness about their required SSCR participation. Because the G2C process is adopted by upper-level leadership and imposed on the faculty teams, there may be a tendency to plan for strategies that minimize change or risk-taking. This is an area ripe for further research to explore faculty perceptions of redesign programs, risk-taking, motivations, and decision-making.

Isolationist

The redesign strategies on the implementation plans kept faculty in disciplinary isolation. For example, the plans showed a preference for discipline-specific skills over partnerships with outside academic support resources. Similarly, faculty preferred strategies related to Curriculum Alignment and Content Consistency, both of which require only discipline-based faculty collaboration.

Beyer et al. (2013) support this finding, noting that faculty often do their pedagogical work in isolation. Likewise, Roth (2005) noted that faculty consciously shape their pedagogy around the disciplinary thinking reflected in their learning outcomes.

The Assessment Redesign theme took isolation to another level by demonstrating a pattern of separation even within a discipline. Strategies that required within-discipline collaboration (e.g., using common assessments) were selected least frequently. Instead, assessment strategies that did not necessitate implementation across all sections of a course (e.g., Early and Often Assessment and Mastery Feedback Approaches) were selected more frequently.

Non-Committal

Regarding changes to pedagogy, Active Learning Strategies represented the majority of the planned-for strategies. On the surface, this outcome appears to be very positive feedback about the course redesign program. However, the wording of the actual recommendations was very non-committal. These vague, hedgy descriptions included language such as “search for active learning techniques,” where “search for” hedges an actual commitment to implement the strategy and “active learning techniques” does not really describe what would change. Other implementation plans included non-committal phrases such as “recommend,” “explore,” and “encourage” the use of active learning strategies. This finding aligns with Zito’s (2019) recommendation that educational developers explicate the concrete skills that are abstracted by the bigger conceptual descriptions about the ways of thinking and feeling.

Similarly, the categories Implement Peer Assistance and Social Learning Approaches were also vague in terms of how the strategies were described. For example, the recommendations “we recommend TA support for this to occur in the large lectures” and “peer-assisted learning would be beneficial for our students” do not provide sufficient clarity to understand what will actually take place. More precise descriptions of what the peers would actually do in the classroom (e.g., lead discussions, assist with problem-solving, micro-teaching) or how students will actually interact (e.g., case studies, note-taking reviews) would build greater confidence in the SSCR’s ability to positively influence student success.

The patterns that emerged from the results—conventional thinking, isolationism, and non-committal language—are all useful feedback for the construction of course redesign programs. For instance, the vagueness in the implementation plans suggests that faculty were gravitating toward buzzwords such as “active,” “engaged,” or “early and often” but needed different educational development approaches to create plans that were more specific. Deconstructing the steps of in-class, active approaches and creating a safe space for dress rehearsals might mitigate the unclear planning that could lead to even more unfocused implementation.

Similarly, the alliance with conventional approaches and tendencies toward discipline-specific practices also suggest modifications to the SSCR curriculum. For example, are the hows and whats of the strategies fully explicated during the SSCR? If a faculty team is going to develop common learning outcomes, do they have a process for how to come together to efficiently and effectively collaborate? Do faculty understand what constitutes a quality learning outcome? When a course-line team doesn’t fully understand the hows and whats, the lack of risk-taking or interdisciplinary collaboration is understandable. Thus, even where redesign strategies appeared more concrete (e.g., develop common learning outcomes), there still seems to be a need for educational developers to offer a high degree of both strategy detail and implementation structure.

One promising approach for including detail and structure is the “backwards design planner” developed by Reynolds and Kearns (2017). Their planner uses a template to prompt the faculty to develop granular plans for instructional design. The template prompts the inclusion of elements such as “first exposure” to material, “homework,” and “passive versus active” work (p. 19) and provides cues for faculty to be specific. The template approach would negate the potential effects of the non-committal language on implementation plans by providing redesign teams a road map and a pathway for self-assessment. Another promising approach would be to use an engagement rubric that facilitates creating detailed implementation plans (Mellow et al., 2015).

As noted by Kuh and Kinzie (2018), it is the quality of the implementation that matters. Unique to this study was its examination of implementation plans created during an SSCR type of CDI. Because implementation is important in effecting student success outcomes, the faculty’s implementation plans are an important indicator of their intentions. Using these implementation plans as assessment feedback for the educational developers leading the CDI produced some interesting patterns related to future iterations of the program. Moreover, examining the implementation plans as a mechanism for assessment and continuous improvement proved to be a fruitful and replicable assessment strategy.

Biographies

Rebecca Campbell, Ph.D., is the Associate Provost for Academic Administration at New Mexico State University and Professor Emerita at Northern Arizona University. Her work focuses on how university policies and practices influence student success.

Benjamin Buck Blankenship, Ph.D., is a senior lecturer at Northern Arizona University. He works in First Year Experience in the College of Education, and his research interests are educational interventions for student success.

References

Aitken, N. D. (2005). The large lecture Course Redesign Project: Pedagogical goals and assessment results. College Teaching Methods & Styles Journal, 1(2), 61–68.

Association of College and University Educators. (2018). Effective teaching practices. https://acue.org/?acue_courses=effective-teaching-practices

Beyer, C. H., Taylor, E., & Gillmore, G. M. (2013). Inside the undergraduate teaching experience: The University of Washington’s growth in faculty teaching study. State University of New York Press.

Bill and Melinda Gates Foundation. (2020). Frontier Set. https://postsecondary.gatesfoundation.org/areas-of-focus/transformation/institutional-partnerships/frontier-set/

Bowen, J. A., & Watson, C. E. (2017). Teaching naked techniques: A practical guide to designing better classes. Jossey-Bass.

Campbell, R., & Blankenship, B. B. (2020). Leveraging the power of course redesign for student success. To Improve the Academy, 39(2). https://doi.org/10.3998/tia.17063888.0039.203

Campbell, R. P., Saltonstall, M., & Buford, B. (2013, Spring). The scholarship of a movement: A 24-year content analysis of the Journal of The First-Year Experience & Students in Transition. Journal of The First-Year Experience & Students in Transition, 25(1), 13–34.

Chism, N. V. N., Holley, M., & Harris, C. J. (2012). Researching the impact of educational development: Basis for informed practice. To Improve the Academy, 31(1), 129–145. https://doi.org/10.1002/j.2334-4822.2012.tb00678.x

Chism, N., & Szabo, B. (1998). How faculty development programs evaluate their services. Journal of Staff, Program, and Organization Development, 15(2), 55–62.

Condon, W., Iverson, E. R., Manduca, C. A., Rutz, C., & Willett, G. (2016). Faculty development and student learning: Assessing the connections. Indiana University Press.

Coppola, B. P., & Jacobs, D. C. (2002). Is the scholarship of teaching and learning new to chemistry? In Huber, M. T. & Morreale, S. P. (Eds.), Disciplinary styles in the scholarship of teaching and learning: Exploring common ground (pp. 197–216). American Association for Higher Education and the Carnegie Foundation for the Advancement of Teaching.

Denzine, G. M., Cole, R. P., & Spining, J. (1996, October). Academic fears of incoming undergraduate students: Issues of range and content validity [Paper presentation]. Meeting of the Arizona Educational Research Organization, Phoenix, AZ, United States.

EAB. (2018). It matters who is teaching your 101 classes—and how: 3 ways to reduce course and instructor variation. https://www.eab.com/blogs/institutional-analytics-blog/07/course-section-variation

EAB. (2019). Navigate the bottleneck courses in your institution. https://www.eab.com/technology/academic-performance-solutions/resources/infographics/bottlenecks

Field, K. (2018, June 3). A third of your freshmen disappear: How can you keep them? The Chronicle of Higher Education. https://www.chronicle.com/article/A-Third-of-Your-Freshmen/243560

Flaherty, C. (2019, December 13). Required pedagogy. Inside Higher Ed. https://www.insidehighered.com/news/2019/12/13/online-conversation-shines-spotlight-graduate-programs-teach-students-how-teach

Gonzalez, H. B., & Kuenzi, J. J. (2012). STEM education: A primer. Congressional Research Service. https://archive.org/details/R42642ScienceTechnologyEngineeringandMathematicsSTEMEducationAPrimer-crs

Guskey, T. R. (2000). Evaluating professional development. Sage Publications.

John N. Gardner Institute for Excellence in Undergraduate Education. (2017). Gateways to completion: Overview, evidence of strength of components & summary of outcomes to date (pp. 1–6). https://s3.amazonaws.com/jngi_pub/hlc16/Overview+of+G2C+Evidence+of+the+Strength+of+the+G2C+Components+&+G2C+Outcomes+to+Date.pdf

Kelderman, E. (2019). A turbulent future for enrollment: The looming enrollment crisis. The Chronicle of Higher Education.

Kinzie, J. (2012). Introduction: A new view of student success. In Schreiner, L. A., Louis, M. C., & Nelson, D. D. (Eds.), Thriving in transitions: A research-based approach to college student success. Stylus Publishing.

Kirkpatrick, D. L. (1998). Evaluating training programs: The four levels (2nd ed.). Berrett-Koehler.

Koch, A. K. (2017). It’s about the gateway courses: Defining and contextualizing the issue. New Directions for Higher Education, 180, 11–17. http://doi.org/10.1002/he.20257

Koch, A. K., Rife, M. C., & Hanson, M. (2018). Killer course correction: Using self-studies to transform gateway courses [Paper presentation]. Higher Learning Commission Annual Conference, Chicago, IL, United States.

Koch, A. K., & Rodier, R. (2014). Gateways to completion guidebook. John N. Gardner Institute for Excellence in Undergraduate Education.

Krippendorff, K. (2018). Content analysis: An introduction to its methodology (4th ed.). Sage Publications.

Kuh, G. D., & Kinzie, J. (2018, May 1). What really makes a “high-impact” practice high impact? Inside Higher Ed. http://www.insidehighered.com/views/2018/05/01/kuh-and-kinzie-respond-essay-questioning-high-impact-practices-opinion

Mellow, G. O., Woolis, D. D., Klages-Bombich, M., & Restler, S. (2015). Taking college teaching seriously: Pedagogy matters! Stylus Publishing.

Palmer, M. S., Streifer, A. C., & Williams-Duncan, S. (2016). Systematic assessment of a high-impact course design institute. To Improve the Academy, 35(2), 339–361. https://doi.org/10.1002/tia2.20041

Prime, D. G. (1982). Last word: A missing element in the retention discussion. Black Issues in Higher Education, 18(21), 6.

Reynolds, H. L., & Kearns, K. D. (2017). A planning tool for incorporating backward design, active learning, and authentic assessment in the college classroom. College Teaching, 65(1), 17–27. https://doi.org/10.1080/87567555.2016.1222575

Roth, T. (2005). Introduction. In Riordan, T. & Roth, J. (Eds.), Disciplines as frameworks for student learning (pp. xi–xix). Stylus Publishing.

Shulman, L. (1986). Those who understand: Knowledge growth in teaching. Educational Researcher, 15(2), 4–14.

Turner, P. M. (2009). Next generation: Course redesign. Change: The Magazine of Higher Learning, 41(6), 10–16. https://doi.org/10.1080/00091380903297642

Wiggins, G., & McTighe, J. (2011). The understanding by design guide to creating high-quality units. Association for Supervision and Curriculum Development.

Zito, A. J. (2019). Broaching threshold concepts: The trouble with “skills” language in defining student learning goals. To Improve the Academy, 38(1), 67–81. https://doi.org/10.1002/tia2.20086