Educational developers and those examining systemic change know that sometimes it takes a significant external push for educators to reexamine long-held practices and make substantial changes. In 2020, COVID-19 challenged nearly all institutions of higher education to re-envision and re-engage with teaching and learning in innovative ways. Centers for teaching and learning (CTLs) across the nation and globally played a significant role in helping institutions navigate these changes. Faculty sought ways to pivot their courses to a remote learning environment, rethinking content, class policies, assessment, and other course design elements.

As many faculty adopted teaching practices that facilitated a more equitable, student-centered experience (Purcell & Lumbreras, 2021; Supiano, 2021), many leaders in higher education hoped these positive changes would persist post-pandemic (Darby, 2021; Denial et al., 2022; Gohardani, 2022; Radwan, 2022; Zhao & Watterston, 2021). As Darby (2021) noted, faculty should not retreat to previously held teaching methods but instead see the opportunity for what it is and retain equitable teaching practices put in place during the pandemic. While many speculated that the pandemic and renewed focus on student-centered practices could be a “new normal” (e.g., more remote teaching, more empathy for students, more streamlined and thoughtful uses of learning management systems), questions remain: To what extent did the pandemic change teaching approaches, and do these changes have permanence? Given the continuing influence of the pandemic, is there a new normal for higher education, a return to pre-pandemic practices, or a mixture of both?

Pandemic Practices and Student-Centered Syllabi

Student-centered learning experiences are rooted in the philosophy that students are at the heart of the learning process. Hoidn and Reusser (2021) expanded on this idea: “Student-centeredness focuses not only on individual learners and their learning processes but on the whole learning context and issues of content, culture, community and instructional practice (e.g., activities, assignments) informed by educational constructivism, a theory of knowledge and learning” (p. 18). The pandemic certainly initiated calls for student-centered teaching practices, which included adjustments to syllabus language. The degree to which new practices and corresponding syllabus language were sustained during and after the pandemic is the focus of this article.

The researchers adapted the Student-Centered Syllabus Rubric (SCSR; Palmer et al., 2014) to document shifts in student-centeredness across time: pre-, mid-, and post-COVID-19.1 As Heim et al. (2019) pointed out, reliability and validity analyses make the SCSR a promising tool for research into student-centered classroom experiences (p. 404). Heim et al. used the SCSR to evaluate the learner-centeredness of introductory biology courses, finding that syllabus rubric scores correlated with the Reformed Teaching Observation Protocol and portions of the Approaches to Teaching Inventory.

Palmer et al. (2014) cautioned that the “syllabus is only a proxy to actual classroom practices and student learning and is in itself neither a measure of teaching effectiveness nor necessarily an accurate reflection of an instructor’s values.” While the syllabus cannot be considered a mirror of faculty practices, several researchers have noted the syllabus’s potential to document faculty practices and approaches to teaching and learning (Cullen & Harris, 2009; Favre et al., 2021; Parkes & Harris, 2002; Stanny et al., 2015; Willingham-McLain, 2011). Of note, however, is research from Heim et al. (2019) that used the SCSR along with other instruments to measure student-centeredness. Their results found strong positive correlations between the syllabus rubric scores and other observation scores. While their study was limited by small sample size and to a single institution, it is worthwhile to consider that the syllabus has the potential to be an indicator of faculty’s student-centeredness. A syllabus can provide us with one artifact that can be measured across time and provide a glimpse into student-centered changes during the pandemic.

The researchers do not presume that a syllabus mirrors teaching practice in any given term; syllabi may in fact not mirror a faculty member’s pre-, mid-, and post-pandemic teaching practices. Instead, shifts in student-centeredness are being measured and analyzed: measuring changes between syllabi can help identify shifts in faculty perspective (such as the need to provide additional support for remote students), consideration of context (such as the need for more flexible attendance policies to account for illness and isolation policies), responses to feedback from students, and the like. Analyzing changes to syllabi over time, while not documenting specific classroom practices, can provide valuable insights into faculty adaptation and responsiveness to changing educational landscapes, particularly amid the challenges posed by the pandemic.

Methods

Developed to measure the success of a course design institute, the SCSR assesses 13 criteria across four major categories: learning goals and objectives, assessment activities, schedule, and overall learning environment. We reduced the SCSR to the five components we identified as having the most potential to reflect student-centered changes in response to pandemic conditions (see Appendix A):

  1. Clearly defined summative assessments

  2. Frequent formative assessments, with feedback

  3. Positive, respectful, inviting tone

  4. Learning orientation is emphasized, with positive motivation

  5. High expectations and confidence in students through hard work

To validate these components, a principal components analysis (PCA) was conducted. Each of the five components used a 3-point range from 0 (low evidence/negative result) to 2 (very strong evidence/positive result), the same range used by Palmer et al. (2014). Furthermore, we conducted reliability analyses to ensure the internal consistency of our scale and to validate that our questions and time ratings were consistently measuring the intended construct. Finally, we conducted a multivariate analysis of variance (MANOVA) to detect changes over time.

The PCA and MANOVA offer statistical techniques for understanding the way teachers present essential information about their courses, including expressions of student-centeredness. PCA can be used to reduce the complexity of data by identifying patterns and key variables that explain the most variance in syllabi during and after the pandemic. This reduction enables a clearer comparison of pre-, mid-, and post-pandemic differences. MANOVA, in turn, allows for testing differences across multiple dependent variables simultaneously (e.g., summative and formative assessments, positive tone, high expectations) between different time periods. By comparing these variables across the pre-, mid-, and post-pandemic periods, MANOVA can help determine whether changes are statistically significant, suggesting a permanent change or a return to pre-pandemic status. Together, these analyses can illuminate to what extent student-centeredness changed during and after the pandemic.

To illustrate scoring using the SCSR rubric, we provide below three samples of faculty syllabi scoring low, moderate, and strong evidence for Component 1: Clearly defined summative assessments. Because other components may require holistic analysis of the entire syllabus to score, we chose Component 1, which was primarily measured by analyzing the summative assessment section of a syllabus. Each sample syllabus below received matching scores from two reviewers, so no third scorer was required.

Figure 1 shows an example of a syllabus section where summative assessments are not fully defined, leading to scores of “low evidence.” In particular, the two projects are mentioned only briefly, and exams are described as closed-book tests with no other supportive details for students.

Figure 1. Syllabus Example, Low Evidence for Component 1

Figure 2 shows a syllabus receiving scores of “moderate evidence” for summative assessments; it provides details on each of the main categories of assessments: exams, homework, and participation. Figure 3 presents a syllabus receiving a score of “strong evidence,” largely due to the amount of detail provided for each assessment category, which goes significantly beyond the syllabi excerpted in Figures 1 and 2.

Figure 2. Syllabus Example, Moderate Evidence for Component 1

Figure 3. Syllabus Example, Strong Evidence for Component 1

Research Site and Participants

Researchers gathered syllabi from 110 faculty across the two residential campuses of Embry-Riddle Aeronautical University (ERAU), a private, STEM-focused, predominantly white institution located in the Southwestern and Southeastern United States. The Daytona Beach, Florida, and Prescott, Arizona, campuses combined enroll approximately 10,000 students and employ approximately 700 faculty. The 110 faculty participants were randomly selected from 190 eligible faculty who had taught the same course pre-, mid-, and post-pandemic (Fall 2019, Fall 2020, and Fall 2022, respectively).

Data Collection

A statistical power analysis was performed to identify a reliable sample size. A medium effect size of 0.5 was used, as recommended in the G*Power software. With alpha = .05, power = 0.80, and an allocation ratio of 1, the projected sample size for this independent samples t test was approximately N = 102 overall. This sample size also allowed for expected attrition, subgroup analyses, and other analyses controlling for possible mediating/moderating factors. Researchers selected 110 faculty to ensure an appropriate sample size.
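
The sketch below is one way to reproduce this a priori calculation in Python rather than in G*Power; the use of statsmodels and the one-tailed setting are assumptions made here for illustration (with these inputs, a one-tailed independent samples t test yields roughly 51 per group, about 102 in total).

```python
# Minimal power-analysis sketch (statsmodels assumed available); not the
# authors' original G*Power session.
import math
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
# d = 0.5, alpha = .05, power = .80, equal group sizes (ratio = 1).
# alternative="larger" (one-tailed) is an assumption for illustration.
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                   ratio=1.0, alternative="larger")
print(math.ceil(n_per_group), 2 * math.ceil(n_per_group))  # ~51 per group, ~102 total
```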

Syllabus data for the 330 syllabi (110 faculty across all three time periods) were de-identified, and each syllabus was assigned a numeric code. Four faculty (two from each campus) were hired as independent scorers, along with the researchers. Scorers participated in two 1-hour calibration sessions, using sample syllabi not in the data set to norm scoring with the five-component rubric.

Syllabi were not separated by time period for scoring; instead, syllabi were randomly assigned to two raters. If the two scores for a rubric component conflicted in the extreme (scores of 0 and 2), that component’s scores were replaced by a third rater’s score. Replacing the score rather than taking the mean in cases of extreme discrepancies helped prevent inflation of Type I error and ensured that results were not distorted by averages that accurately reflected neither rater’s perspective.
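
The sketch below is a hypothetical helper (not the scoring code actually used) that illustrates this reconciliation rule; averaging the two ratings in non-extreme cases is assumed from the description above.

```python
# Hypothetical reconciliation of two SCSR component ratings (0-2 scale).
def resolve_component_score(rater1, rater2, rater3=None):
    """Return the final score for one rubric component."""
    if abs(rater1 - rater2) == 2:      # extreme conflict: one 0 and one 2
        if rater3 is None:
            raise ValueError("Extreme disagreement requires a third rater.")
        return float(rater3)           # third rater's score replaces both
    return (rater1 + rater2) / 2       # otherwise, assumed: use the mean

print(resolve_component_score(1, 2))      # 1.5
print(resolve_component_score(0, 2, 1))   # 1.0 (third rater decides)
```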

Results and Discussion

Analyses were performed to assess inter-rater reliability and to compare syllabus rubric ratings across the three time periods. Each rating was made on a Likert-type scale ranging from 0 (low evidence/negative result) to 2 (very strong evidence/positive result). First, a reliability analysis and PCA were conducted to test the reliability and validity of the rating questions. A one-way, within-subjects MANOVA was then performed to compare ratings across time points and to determine differences in ratings between questions. Results are reported as descriptive statistics (means and standard deviations for each question/time-point rating), multivariate within-subjects tests, between-subjects tests comparing questions, and quadratic contrasts identifying the major differences for each syllabus/question type.

PCA results show that the rating instrument is measuring a construct well. The MANOVA indicates that the time ratings differed from one another for each of the questions, although these results should be interpreted cautiously given the nonsignificant overall multivariate test and the violation of sphericity, which can inflate F values. In addition, the time ratings for each question accounted for only a small amount of variance. Interpretations and figures for these analyses are presented in the sections below, followed by an overall conclusion of the findings.

The reliability analysis was run with all questions and time ratings (N = 15 items). The analysis yielded a Cronbach’s alpha of .854, which is considered very good and reliable. In other words, the scale is measuring the construct it is supposed to be measuring; the items function together as a reliable scale, and its results can be trusted.
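
The sketch below shows a standard Cronbach’s alpha computation for a matrix with one row per faculty member and one column per question/time rating; the helper function and the small demo values are hypothetical, not the study data, so the printed alpha will not match .854.

```python
# Cronbach's alpha over item columns (e.g., Q1_T1 ... Q5_T3, scored 0-2).
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Tiny illustrative input; the study's matrix is 110 rows x 15 columns.
demo = pd.DataFrame({"Q1_T1": [0, 1, 2, 2], "Q1_T2": [1, 1, 2, 2],
                     "Q2_T1": [0, 1, 1, 2], "Q2_T2": [0, 2, 1, 2]})
print(round(cronbach_alpha(demo), 3))
```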

Data Analysis

A PCA and a one-way within-subjects MANOVA were conducted. For the MANOVA, there were five dependent variables (the Likert-scale question ratings for each syllabus) and one within-subjects independent variable, time, with three levels (Times 1, 2, and 3). Descriptive results portrayed in Table 1 include the means and standard deviations for each question and rating time. Means range from a high of M = .9682 (SD = .63635) for the Question 1, Time 2 rating to a low of M = .4227 (SD = .53410) for the Question 4, Time 3 rating.
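
A minimal sketch of how a descriptives table like Table 1 can be produced from a wide-format score matrix follows; the data generated here are synthetic placeholders, so the printed values will not match the table below.

```python
# Per-column mean, SD, and N for a wide matrix of SCSR ratings (0-2 scale).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
cols = [f"Q{q}_T{t}" for q in range(1, 6) for t in range(1, 4)]  # Q1_T1 ... Q5_T3
wide = pd.DataFrame(rng.integers(0, 3, size=(110, 15)), columns=cols)  # placeholder data

table1 = pd.DataFrame({"Mean": wide.mean(),
                       "Std. deviation": wide.std(ddof=1),
                       "N": wide.count()})
print(table1.round(4))
```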

Table 1.

Descriptive Statistics Table

| Question_Time | Mean | Std. deviation | N |
|---|---|---|---|
| Q1_T1 | .8136 | .65266 | 110 |
| Q1_T2 | .9682 | .63635 | 110 |
| Q1_T3 | .8273 | .67537 | 110 |
| Q2_T1 | .8364 | .65001 | 110 |
| Q2_T2 | .8818 | .60928 | 110 |
| Q2_T3 | .7364 | .63067 | 110 |
| Q3_T1 | .4545 | .55663 | 110 |
| Q3_T2 | .5727 | .55783 | 110 |
| Q3_T3 | .4636 | .60846 | 110 |
| Q4_T1 | .4500 | .53410 | 110 |
| Q4_T2 | .5364 | .52771 | 110 |
| Q4_T3 | .4227 | .53410 | 110 |
| Q5_T1 | .4682 | .53449 | 110 |
| Q5_T2 | .5227 | .52193 | 110 |
| Q5_T3 | .4727 | .54114 | 110 |

Principal Components Analysis and Reliability Analysis

The PCA was run with an orthogonal varimax rotation. Initial results reveal a Kaiser-Meyer-Olkin measure of sampling adequacy (KMO) of .873, which is considered very good, as it indicates the proportion of variance in the variables that might be explained by underlying factors. Within the PCA results, Bartlett’s Test of Sphericity was significant, χ2(105) = 1109.09, p < .001, indicating that the variables in the data set are intercorrelated rather than independent.

Given these intercorrelations, alternative dimension-reduction approaches could also be considered, such as oblique rotation methods within PCA, or the variables could be investigated further to understand and potentially remedy the high intercorrelations. Scree plot results indicate five major components, corresponding to the five Likert-scale rating questions. Upon further evaluation of the rotated component matrix (Table 2), however, the variables intended to represent a specific question did not load highly on that question’s component.
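
The sketch below shows one way to run the same sequence of checks (KMO, Bartlett’s test, and a varimax-rotated five-component solution) in Python with the factor_analyzer package; the data are synthetic placeholders, and the principal-axis setting approximates rather than reproduces the original SPSS PCA output.

```python
# KMO, Bartlett's test, and a rotated five-component solution (factor_analyzer
# assumed available); placeholder data, so values will not match those reported.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (calculate_bartlett_sphericity,
                                             calculate_kmo)

rng = np.random.default_rng(1)
cols = [f"Q{q}_T{t}" for q in range(1, 6) for t in range(1, 4)]
wide = pd.DataFrame(rng.integers(0, 3, size=(110, 15)).astype(float), columns=cols)

chi_square, p_value = calculate_bartlett_sphericity(wide)   # Bartlett's test
kmo_per_item, kmo_total = calculate_kmo(wide)               # sampling adequacy

fa = FactorAnalyzer(n_factors=5, rotation="varimax", method="principal")
fa.fit(wide)
loadings = pd.DataFrame(fa.loadings_, index=cols,
                        columns=[f"Component {i}" for i in range(1, 6)])
print(round(kmo_total, 3), round(chi_square, 2), round(p_value, 3))
print(loadings.round(3))
```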

Table 2.

Principal Components Analysis—Rotated Component Matrix

| Items | Component 1 | Component 2 | Component 3 | Component 4 | Component 5 |
|---|---|---|---|---|---|
| Q2_T3 | .789 | - | - | - | - |
| Q4_T3 | .779 | - | - | - | - |
| Q5_T3 | .749 | - | - | - | - |
| Q3_T3 | .711 | - | - | - | - |
| Q4_T2 | - | .843 | - | - | - |
| Q5_T2 | - | .827 | - | - | - |
| Q3_T2 | - | .776 | - | - | - |
| Q5_T1 | - | - | .844 | - | - |
| Q3_T1 | - | - | .819 | - | - |
| Q4_T1 | - | - | .811 | - | - |
| Q1_T2 | - | - | - | .849 | - |
| Q1_T3 | - | - | - | .628 | - |
| Q1_T1 | - | - | - | .593 | - |
| Q2_T1 | - | - | - | - | .769 |
| Q2_T2 | - | - | - | - | .732 |

    Note. Rotation Method: Varimax with Kaiser Normalization. Rotation converged in seven iterations. Only the highest loading value for each item is shown.

More specifically, for Component 1 (Question 1), Variable Q2_T3 (Question 2, Time point 3 ratings data) loads highest; for Component 2, Variable Q4_T2 has the highest loading value; for Component 3, Variable Q5_T1 loads highest; for Component 4, Variable Q1_T2 has the highest loading value; and for Component 5, Variable Q2_T1 loads highest. In other words, the variables (rating results for each of the questions) do not appear to match with the question they are supposed to be measuring.

Overall, the results from the PCA offer a nuanced view of changes in the student-centeredness of syllabi. The mismatch between expected and actual patterns, where variables didn’t align neatly with the questions we thought they would, reveals that changes in syllabi may not be straightforward. In practical terms, this analysis helps us understand that the pandemic has led to many faculty reevaluating and adjusting syllabi, with some changes likely to persist due to their effectiveness or necessity. At the same time, the resilience of certain pre-pandemic syllabus practices suggests that not everything has changed. The ongoing influence of the pandemic might mean that faculty continue to adjust, reflecting a new normal that is not static, but instead characterized by ongoing adaptation and reevaluation of practices.

Multivariate Data

Sphericity is the equality of the variances of the differences between each pair of time points. Mauchly’s Test of Sphericity showed that sphericity was not violated for any of the question measures except Question 5, which was significant (p < .05); a Greenhouse-Geisser epsilon of .888 was therefore used for that measure. Because this value is less than 1, the assumption of sphericity was not met, and the corresponding F values may be inflated.
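
A minimal sketch, assuming the pingouin package and a long-format table of one question’s ratings, shows how Mauchly’s test and a Greenhouse-Geisser-corrected within-subjects test can be run; the data below are synthetic placeholders.

```python
# Sphericity check and GG-corrected repeated-measures test for one question.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(2)
long = pd.DataFrame({
    "faculty": np.repeat(np.arange(110), 3),
    "time": np.tile(["T1", "T2", "T3"], 110),
    "rating": rng.integers(0, 3, size=330).astype(float),   # 0-2 SCSR scale
})

print(pg.sphericity(long, dv="rating", within="time", subject="faculty"))  # Mauchly's W
print(pg.rm_anova(long, dv="rating", within="time", subject="faculty",
                  correction=True))  # applies the Greenhouse-Geisser correction
```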

Wilks’ lambda ranges from 0 to 1, with values closer to 0 indicating stronger evidence that the explanatory variable has a statistically significant effect on the response variables. For the present results, found in Table 3, Wilks’ lambda showed a nonsignificant effect of time, Λ = .849, F(10, 100) = 1.783, p = .073.

Table 3.

Multivariate Data Table

| Multivariate test | Value | F | Hypothesis df | Error df | Sig. | Partial eta squared | Observed powerb |
|---|---|---|---|---|---|---|---|
| Wilks’ lambda | .849 | 1.783a | 10.000 | 100.000 | .073 | .151 | .802 |

    Note. Each F tests the multivariate effect of time. These tests are based on linearly independent pairwise comparisons among the estimated marginal means.
    a. Exact statistic.
    b. Computed using alpha = .05.

Based on the multivariate results, significant values found in the test of within-subjects contrasts table (Table 4) and the pairwise comparisons table (see Appendix B) may be inflated.

The test of within-subjects contrasts table shows significant results for time ratings within Questions 1, 3, and 4. Overall, partial eta squared (ηp²) values are very low, showing that little variance is accounted for by the analysis.

Table 4.

Test of Within-Subjects Contrasts Table

| Source | Measure | Type | Type III SS | df | Mean square | F | Sig. | Partial eta squared | Observed powera |
|---|---|---|---|---|---|---|---|---|---|
| Time | Q1 | Linear | .010 | 1 | .010 | .037 | .848 | .000 | .054 |
| | | Quadratic | 1.600 | 1 | 1.600 | 8.431 | .004 | .072 | .821 |
| | Q2 | Linear | .550 | 1 | .550 | 1.937 | .167 | .017 | .281 |
| | | Quadratic | .668 | 1 | .668 | 3.311 | .072 | .029 | .438 |
| | Q3 | Linear | .005 | 1 | .005 | .024 | .877 | .000 | .053 |
| | | Quadratic | .947 | 1 | .947 | 6.819 | .010 | .059 | .735 |
| | Q4 | Linear | .041 | 1 | .041 | .284 | .595 | .003 | .083 |
| | | Quadratic | .733 | 1 | .733 | 5.323 | .023 | .047 | .628 |
| | Q5 | Linear | .001 | 1 | .001 | .006 | .938 | .000 | .051 |
| | | Quadratic | .200 | 1 | .200 | 1.735 | .191 | .016 | .257 |

    Note. a. Computed using alpha = .05.

Within the pairwise comparisons table, significant results are indicated when p < .05. Significant results indicate that those variable results are different from one another. Therefore, within Appendix B, pairwise comparisons show significance between:

  • Question 1 ratings for Times 1 and 2, and between 2 and 3;

  • Question 2 ratings for Times 2 and 3;

  • Question 3 ratings for Times 1 and 2, and between 2 and 3; and

  • Question 4 ratings between Times 2 and 3.

Ratings for Questions 1 through 4 show significant differences between Times 1 and 2 and/or between Times 2 and 3, representing an inverted-U curve, which can be seen for Question 3 in Figure 4.

Figure 4. Question 3 Time Ratings Curve

Between-Subjects Effects

Comparisons of the overall results for each of the questions show significant differences. Results found in Table 5 portray significant F and partial eta squared values, meaning that each question’s overall ratings differ significantly from one another, and the model accounts for over 50% of the variance.

Table 5.

Data of Between-Subjects Effects per Question

| Source | Measure | Type III sum of squares | df | Mean square | F | Sig. | Partial eta squared | Noncent. parameter | Observed powera |
|---|---|---|---|---|---|---|---|---|---|
| Intercept | Q1 | 249.603 | 1 | 249.603 | 304.905 | .000 | .737 | 304.905 | 1.000 |
| | Q2 | 220.909 | 1 | 220.909 | 313.023 | .000 | .742 | 313.023 | 1.000 |
| | Q3 | 81.503 | 1 | 81.503 | 123.107 | .000 | .530 | 123.107 | 1.000 |
| | Q4 | 72.803 | 1 | 72.803 | 127.587 | .000 | .539 | 127.587 | 1.000 |
| | Q5 | 78.548 | 1 | 78.548 | 143.210 | .000 | .568 | 143.210 | 1.000 |

    Note. a. Computed using alpha = .05.

The MANOVA results offer insightful clues into how syllabi shifted during the pandemic and whether these changes have staying power. First, the issue with sphericity, where variances among some measures were not equal, suggests that the data for Question 5 (high expectations and confidence in students through hard work) behaved differently from the others. This discrepancy indicates that the impact of the pandemic on various aspects of student-centeredness did not follow a uniform pattern; some changes were more pronounced or variable than others. Additionally, Wilks’ lambda indicated that, broadly speaking, the differences in syllabi over time were not statistically significant across the board, suggesting that changes may not be as sweeping or uniform as one might expect.

However, significant results in specific pairwise comparisons, particularly the changes in ratings from one time period to another for certain questions, highlight that there were indeed notable shifts in some areas of syllabi. These significant pairwise differences, especially where we see an inverted-U curve for some questions, indicate that some representations of student-centeredness initially changed quite a bit as the pandemic began, then settled into a new pattern or returned closer to pre-pandemic methods over time. This pattern could reflect an initial rush to adapt to remote teaching or other pandemic-related changes, followed by a period of adjustment and settling into more effective or sustainable practices.

The significant differences found in overall ratings for each question, and the fact that these differences account for a substantial amount of variance, reinforce the idea that the pandemic had a tangible impact on syllabi. However, the specific nature of these impacts varies by question, suggesting that some areas were more affected than others.

To delineate faculty changes into distinct cohorts, specific threshold values were employed. After calculating the differences between Time 1 and Time 2 (ΔTime1-Time2), Time 2 and Time 3 (ΔTime2-Time3), and Time 1 and Time 3 (ΔTime1-Time3), four cohorts were established (a classification sketch follows the list):

  • Increase – Those faculty members whose scores consistently demonstrated improvement over time, indicated by a ΔTime1-Time2, ΔTime2-Time3, and ΔTime1-Time3 greater than 1 (N = 28).

  • Decrease – Faculty participants displaying a consistent decline in scores, reflected by a ΔTime1-Time2, ΔTime2-Time3, and ΔTime1-Time3 less than –1 (N = 28).

  • Static – Participants whose scores exhibited minimal variation over time, falling within the range of -0.5 to +0.5 for ΔTime1-Time2, ΔTime2-Time3, and ΔTime1-Time3. This cohort was further subdivided into two subcohorts: “Student-Centered” and “Content-Centered.” The Student-Centered cohort encompassed individuals whose scores exhibited sustained high scores despite negligible changes over time (n = 4). The Static, Content-Centered subcohort pertained to individuals whose scores remained consistently low (n = 11).

  • Elastic – Individuals whose scores increased then decreased, or vice versa. The Increase to Decrease subcohort were classified by a ΔTime1-Time2 equal to or greater than +1 and a ΔTime2-Time3 equal to or less than -1 (n = 27). The Decrease to Increase subcohort were classified by a ΔTime1-Time2 equal to or less than -1 and a ΔTime2-Time3 equal to or greater than +1 (n = 12).
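
The classification sketch below is a hypothetical helper illustrating the rules above; Δ is taken as the later score minus the earlier score, and the Static cohort’s Student-/Content-Centered split (which depends on score level rather than change) is left as a comment.

```python
# Hypothetical cohort classification from total SCSR scores at Times 1-3.
def classify_cohort(t1, t2, t3):
    d12, d23, d13 = t2 - t1, t3 - t2, t3 - t1
    if d12 > 1 and d23 > 1 and d13 > 1:
        return "Increase"
    if d12 < -1 and d23 < -1 and d13 < -1:
        return "Decrease"
    if all(-0.5 <= d <= 0.5 for d in (d12, d23, d13)):
        # Static; a score-level threshold (not specified here) would further split
        # this cohort into Student-Centered (high) and Content-Centered (low).
        return "Static"
    if d12 >= 1 and d23 <= -1:
        return "Elastic: Increase, then Decrease"
    if d12 <= -1 and d23 >= 1:
        return "Elastic: Decrease, then Increase"
    return "Unclassified"

print(classify_cohort(3, 5, 7))   # Increase
print(classify_cohort(4, 6, 3))   # Elastic: Increase, then Decrease
```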

An overview of all faculty cohorts can be found in Table 6. Note that half of the syllabi measured showed positive changes in student-centeredness; however, for half of those (25% of all faculty syllabi), the changes did not persist. For the Elastic cohort, representing over one-third of the faculty, results suggest that the pandemic was a significant disruptor that had a temporary impact on syllabi.

Table 6.

Total Faculty Syllabus Changes in Student-Centeredness

| Cohort | Subcohort | # of faculty | % of faculty |
|---|---|---|---|
| Increase | | 28 | 25% |
| Static | Content-Centered | 4 | 4% |
| | Student-Centered | 11 | 10% |
| Decrease | | 28 | 25% |
| Elastic | Increase, then Decrease | 27 | 25% |
| | Decrease, then Increase | 12 | 11% |
| Total | | 110 | 100% |

Notably, a MANOVA conducted to assess the interaction between time and several independent variables (rank, campus, college, the type of student-centered syllabus provided, and participation in a short training course) revealed that, among these factors, only the interaction between time and participation in a short training course on student-centered syllabi was statistically significant. This interaction effect was quantified using Pillai’s trace, which yielded a value of .342, with a corresponding F value of 1.7. The associated significance value (p = .031) falls below the .05 threshold, indicating that the interaction effect is statistically significant. This implies that changes over time in the dependent variables were significantly influenced by participation in a short training course.
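
A minimal sketch, using statsmodels with synthetic placeholder data, shows a MANOVA that reports Pillai’s trace for a time-by-training interaction; note that this treats time as a between-observation factor, so it illustrates the test rather than reproducing the repeated-measures model or the reported values.

```python
# MANOVA with a time x training interaction, reporting Pillai's trace.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(3)
n, times = 110, ["T1", "T2", "T3"]
df = pd.DataFrame({
    "time": np.tile(times, n),
    "training": np.repeat(rng.integers(0, 2, size=n), 3),  # short-course participation
})
for q in range(1, 6):                                      # Q1-Q5 component scores
    df[f"Q{q}"] = rng.integers(0, 3, size=n * 3).astype(float)

model = MANOVA.from_formula("Q1 + Q2 + Q3 + Q4 + Q5 ~ C(time) * C(training)", data=df)
print(model.mv_test())   # includes Pillai's trace (and Wilks' lambda) per term
```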

Limitations

Due to the institution’s profile as a small, private, STEM-focused university, there is a potential bias in the approach to syllabus development. Consequently, findings may not be as generalizable as research conducted in larger, more diverse institutions.

In addition, a larger sample size often enhances the statistical power of a study, increasing the likelihood of detecting true significant effects. When sample size is increased, the estimates derived become more precise, reducing the standard errors. This precision allows for smaller differences between groups or associations to be identified as statistically significant. Moreover, a larger sample size can better represent the population, making the results more generalizable. Thus, if the sample size were larger in this context, it would likely improve the chances of uncovering genuine significant effects that might be missed with a smaller sample due to increased variability or random chance.

Another variable perhaps impacting post-pandemic scores is that the researchers’ campuses adopted the tool Simple Syllabus in 2021, mid-pandemic. Simple Syllabus pre-loads course meeting times and dates, faculty contact information, and student resources, based on university/campus templates and course registration information. The researchers anticipated this impact and redacted the student resources section of all post-pandemic syllabi to limit impact on component scores. However, the Simple Syllabus template could have influenced faculty perceptions of syllabi as something the institution produces more than the individual faculty member. For some faculty, Simple Syllabus may have pre-loaded enough information to serve its purpose and thus those faculty may have felt less inclined to spend time developing a comprehensive syllabus, which could have affected scores on student-centeredness for the Fall 2022 syllabi. Additional research could compare template syllabi to those developed more explicitly by individual faculty.

Another limitation lies in the potential fragmentation of syllabus content by faculty choosing to relocate certain sections to the learning management system (LMS) interface. For instance, faculty may create distinct site pages for individual assignments, extracting this information from a conventional syllabus structure, which could diminish the overall score of a syllabus in terms of student-centeredness. Furthermore, essential components such as classroom policies, grading criteria, and other pertinent details might be dispersed across various sections of the course site within the LMS, thereby transforming the syllabus into a truncated, less comprehensive document for students.

As Palmer et al. (2014) cautioned, syllabi should not be treated as a direct reflection of classroom practices. While Heim et al. (2019) showed correlations between syllabus rubric scores and other instruments measuring classroom practices, additional qualitative research that gathers faculty input, student input, and documentation of classroom practices could inform research in this area. For example, research into faculty motivations for change would add nuance to these results and would also help us understand how to support institutional change.

Implications for Practice

We recommend actions that CTLs can take to identify opportunities on their campuses to address the issues raised by our research.

Analyze Syllabi Locally. Centers, in collaboration with university leaders, could utilize the SCSR to rate syllabus practices at their institutions. Providing administrators with the SCSR as an analytic tool could lead to increased focus on syllabi as an important instrument for student success. At our institution, a college administrator is utilizing the SCSR to identify opportunities for improvement within the college. Their work has led to several initiatives related to course alignment and, more generally, improvements to student-centeredness.

Develop Student-Centered Syllabi. One surprising finding from our research is that the only variable associated with a significant increase in the student-centeredness of syllabi was pre-pandemic participation in a CTL’s program to improve syllabi. Thirty faculty on one campus received training on the SCSR over three 1-hour workshops. After familiarizing themselves with the SCSR’s 13 components, faculty scored sample syllabi, participated in a norming session, and then self-scored their own syllabi, revising them for improvements. Participating faculty’s syllabi were scored three semesters later, revealing that gains made during the intervention were maintained.

These results have implications for the work of CTLs beyond the pandemic. Extended faculty development on syllabi may seem to have limited value beyond the syllabus itself. However, what arose from the short course were conversations about connecting assessments to outcomes, student-centered absence policies, framing a positive tone to engage students, offering pathways for success, providing a learning orientation to the course, and other student-centered practices. While the syllabus itself is worthy of focused programming by a CTL, syllabi programming can also serve as a vehicle for discussing a wide range of student-centered practices.

Conclusion

Examining how syllabi change over time reveals the variety of faculty responses to the pandemic’s educational challenges. While some faculty showed an increase in student-centered approaches, others seem to have returned to familiar, pre-pandemic practices. The Elastic cohort, comprising a substantial portion of the sample, showcased the pandemic’s disruptive but mostly transient impact on syllabus characteristics. Several key questions have arisen from our research.

Why might an instructor be elastic in their practices?

Early analysis suggests faculty could be elastic in their practices for many reasons. The researchers are conducting interviews with select faculty to identify potential driving and resisting factors behind changes in practice. Pandemic practices created a heavier workload for most, which faculty could be eager to unload once campuses returned to a more normal set of operations. In addition, faculty less adept with technologies may be choosing to revert to practices that are more efficient or comfortable for them. Local and disciplinary requirements, such as institutional and/or accreditation requirements, also both drove changes in practice and hindered faculty from making changes.

In addition, the political landscape of higher education cannot be overlooked as a possible influence. Faculty and administrators are still navigating the complexities of how local and national politicians attempted to influence teaching practices during the pandemic (Wippman & Altschuler, 2022). While faculty and students may continue to feel the ongoing impact of the pandemic, institutions could have political mandates and/or incentives to return to pre-pandemic practices (Johnson et al., 2021). Our continuing analysis of driving and resisting factors suggests that the elasticity of instructors’ syllabus practices and policies is shaped by a multitude of considerations, including workload, technological proficiency, and the political dynamics within higher education, highlighting the complex interplay between individual choices and institutional imperatives.

What additional data can inform syllabus analysis?

While analysis of changes in syllabi can offer insights into changes faculty may have made both during and after the pandemic, additional data can help triangulate findings. Faculty interviews, as we mentioned, are helping the researchers identify the values, beliefs, and strategies influencing faculty decision-making during this time; insights from these ongoing interviews promise to shed light on the driving and resisting factors underpinning pedagogical change. In addition, analysis of course content found on LMSs could enrich discussions of faculty student-centeredness. Perhaps most importantly, student voices can provide an important lens into the extent to which a faculty member’s student-centeredness is felt by students in their courses. Midterm collection of student feedback is a popular faculty development service that teaching and learning centers can provide, and gathering student opinions over time can offer an important longitudinal complement to syllabus data.

When defining educational development, most academics focus on classroom impact through innovative pedagogy and improving teaching; equally important is how to support institutional change (Sorcinelli et al., 2005). Systemic or transformative change in higher education is extremely difficult to enact and sustain (Fink & Stoll, 1998) unless there are driving factors facilitating the change. In this case, the pandemic drove systemic institutional change, particularly related to the learning experience. However, our examination of how syllabi evolved over time amid pandemic challenges underscores the diverse responses among faculty. While some continued to embrace student-centered approaches post-pandemic, others reverted to familiar practices, highlighting the elasticity inherent in pedagogical adaptation. Ongoing interviews with faculty members aim to unravel the complex factors influencing these practices. As we delve deeper into these driving and resisting forces, it becomes evident that understanding pedagogical change requires a multifaceted approach, one that integrates a variety of sources. By triangulating these data, we can identify sustainable pedagogical transformations, ensuring the resilience and efficacy of our educational practices in the face of uncertainty.

ERAU’s Institutional Review Board determined on 12/8/23 that no IRB review was required for this research.

Biographies

Lori A. Mumpower is the Executive Director for the Center for the Advancement of Teaching Excellence (CATE) and a Teaching Associate Professor for the Department of English at the University of Illinois Chicago. She previously directed the Center for Teaching and Learning Excellence at Embry-Riddle Aeronautical University, Daytona Beach, Florida. Mumpower’s scholarly research centers on educational development, student-centered teaching practices, and faculty mentoring.

Chad Rohrbacher is a Senior Associate Director of the Center for Teaching and Learning Excellence and Faculty Mentoring at Embry-Riddle Aeronautical University, Daytona Beach, Florida. His research interests include faculty development, faculty mentoring, and scholarship of teaching and learning (SoTL).

Joshua Caulkins is the Director of the Center for Teaching and Learning Excellence at Embry-Riddle Aeronautical University, Prescott, Arizona. His research interests include educational development, students-as-partners, the scholarship of teaching and learning, STEM education, and Geoscience education.

Jenna Korentsides is a PhD candidate at Embry-Riddle Aeronautical University, Daytona Beach, Florida, where she works as a graduate research assistant in the Small Teams Analog Research (STAR) lab. She holds a B.A. in psychology from Stockton University and an M.S. in Human Factors from Embry-Riddle Aeronautical University. Her primary research expertise includes human-computer interaction, human-agent/human-AI interaction and teaming, training, teamwork, statistical analysis and modeling, and user experience.

Acknowledgments

We thank Embry-Riddle Aeronautical University faculty Jon Adams, Jayendra Gokhale, Taylor Joy Mitchell, and Ashley Rea for their contributions to this research in scoring 330 syllabi. In addition, we thank the POD Network for awarding our project a 2022 research grant in support of this project.

Conflict of Interest Statement

The authors have no conflict of interest.

Data Availability

The data reported in this manuscript are available on request by contacting the corresponding author.

Notes

  1. The researchers recognize that for many institutions, the COVID-19 pandemic continues to be a significant influence on teaching practices and the overall learning environment. For the faculty participating in this research, campus policies and support structures have resumed as if we are post-pandemic.

References

Cullen, R., & Harris, M. (2009). Assessing learner-centredness through course syllabi. Assessment & Evaluation in Higher Education, 34(1), 115–125. https://doi.org/10.1080/02602930801956018

Darby, F. (2021). Teaching and learning in the post pandemic college. In Beyond the pandemic: Lessons learned from COVID 19 [ReportOUT, Vol. 8, pp. 41–72]. SynED. https://syned.org/beyond-the-pandemic-lessons-learned-from-covid-19/

Denial, C., Sorensen-Unruh, C., & Lehfeldt, E. A. (2022, February 25). What follows the Great Pivot? The Great Pause: We now need time to rethink our approach to higher ed on every level. The Chronicle of Higher Education, 68(14). https://www.chronicle.com/article/after-the-great-pivot-should-come-the-great-pause

Favre, D. E., Bach, D., & Wheeler, L. B. (2021). Measuring institutional transformation: A multifaceted assessment of a new faculty development program. Journal of Research in Innovative Teaching & Learning, 14(3), 378–398. https://doi.org/10.1108/JRIT-04-2020-0023

Fink, D., & Stoll, L. (1998). Educational change: Easier said than done. In A. Hargreaves, A. Lieberman, M. Fullan, & D. Hopkins (Eds.), International handbook of educational change (pp. 297–321). Springer.

Gohardani, A. S. (2022, July/August). A post-pandemic plan for aerospace education. Aerospace America, 60(7). https://aerospaceamerica.aiaa.org/departments/a-post-pandemic-plan-for-aerospace-education/

Heim, A. B., Aldor, E. R., & Holt, E. A. (2019). The first line of contact: How course syllabi can be used to gauge and reform learner-centeredness in a college classroom. The American Biology Teacher, 81(6), 403–409. https://doi.org/10.1525/abt.2019.81.6.403

Hoidn, S., & Reusser, K. (2021). Foundations of student-centered learning and teaching. In S. Hoidn & M. Klemenčič (Eds.), The Routledge international handbook of student-centered learning and teaching in higher education. Routledge.

Johnson, A. F., Roberto, K. J., & Rauhaus, B. M. (2021). Policies, politics and pandemics: Course delivery method for US higher educational institutions amid COVID-19. Transforming Government, 15(2), 291–303. https://doi.org/10.1108/TG-07-2020-0158

Palmer, M. S., Bach, D. J., & Streifer, A. C. (2014). Measuring the promise: A learning-focused syllabus rubric. To Improve the Academy, 33(1), 14–36. https://doi.org/10.1002/tia2.20004

Parkes, J., & Harris, M. B. (2002). The purposes of a syllabus. College Teaching, 50(2), 55–61. https://doi.org/10.1080/87567550209595875

Purcell, W. M., & Lumbreras, J. (2021). Higher education and the COVID-19 pandemic: Navigating disruption using the sustainable development goals. Discover Sustainability, 2(1), Article 6. https://doi.org/10.1007/s43621-021-00013-2

Radwan, A. (2022). The post-pandemic future of higher education. Dean & Provost, 23(6), 1–5. https://doi.org/10.1002/dap.30987

Sorcinelli, M. D., Austin, A. E., Eddy, P. L., & Beach, A. L. (2005). Creating the future of faculty development: Learning from the past, understanding the present. Anker.

Stanny, C., Gonzalez, M., & McGowan, B. (2015). Assessing the culture of teaching and learning through a syllabus review. Assessment & Evaluation in Higher Education, 40(7), 898–913. https://doi.org/10.1080/02602938.2014.956684

Supiano, B. (2021, November 22). The student-centered syllabus: Pandemic conditions have pushed some faculty members to be more flexible—Even when that’s a little scary. The Chronicle of Higher Education. https://www.chronicle.com/article/the-student-centered-syllabus

Willingham-McLain, L. (2011). Using a university-wide syllabus study to examine learning outcomes and assessment. Journal of Faculty Development, 25(1), 43–53.

Wippman, D., & Altschuler, G. C. (2022, April 10). Political interference in higher ed is becoming endemic. Inside Higher Ed. https://www.insidehighered.com/views/2022/04/11/political-interference-higher-ed-increasing-opinion

Zhao, Y., & Watterston, J. (2021). The changes we need: Education post COVID-19. Journal of Educational Change, 22(1), 3–12. https://doi.org/10.1007/s10833-021-09417-3

Appendix A.  Adapted Student-Centered Syllabus Rubric (SCSR)

  1. The basic features of the major summative assessment activities are clearly defined.

  2. There is evidence of plans for frequent formative assessments with immediate feedback from a variety of sources (e.g., self, peer, instructor, computer generated, community).

  3. The tone of the document is positive, respectful, and inviting and directly addresses the student as a competent, engaged learner.

  4. The syllabus signposts a learning environment that fosters positive motivation, one that promotes a learning orientation rather than a performance one. The document describes the potential value of the course in the learner’s current and post-course life (cognitive, personal, social, civic, and/or professional) in a clear and dynamic way. It clearly communicates that content is used primarily as a vehicle for learning, to understand core principles in the discipline and promote critical thinking and other significant learning objectives.

  5. The syllabus clearly communicates high expectations and projects confidence that students can meet them through hard work.

Appendix B.  Pairwise Comparison Table

| Measure | (I) time | (J) time | Mean difference (I-J) | Std. error | Sig.a | 95% CI lower bounda | 95% CI upper bounda |
|---|---|---|---|---|---|---|---|
| Q1 | 1 | 2 | -.155* | .063 | .015 | -.278 | -.031 |
| | | 3 | -.014 | .071 | .848 | -.155 | .127 |
| | 2 | 1 | .155* | .063 | .015 | .031 | .278 |
| | | 3 | .141* | .062 | .024 | .019 | .263 |
| | 3 | 1 | .014 | .071 | .848 | -.127 | .155 |
| | | 2 | -.141* | .062 | .024 | -.263 | -.019 |
| Q2 | 1 | 2 | -.045 | .063 | .473 | -.171 | .080 |
| | | 3 | .100 | .072 | .167 | -.042 | .242 |
| | 2 | 1 | .045 | .063 | .473 | -.080 | .171 |
| | | 3 | .145* | .064 | .025 | .019 | .272 |
| | 3 | 1 | -.100 | .072 | .167 | -.242 | .042 |
| | | 2 | -.145* | .064 | .025 | -.272 | -.019 |
| Q3 | 1 | 2 | -.118* | .054 | .031 | -.225 | -.011 |
| | | 3 | -.009 | .059 | .877 | -.126 | .107 |
| | 2 | 1 | .118* | .054 | .031 | .011 | .225 |
| | | 3 | .109* | .051 | .035 | .008 | .210 |
| | 3 | 1 | .009 | .059 | .877 | -.107 | .126 |
| | | 2 | -.109* | .051 | .035 | -.210 | -.008 |
| Q4 | 1 | 2 | -.086 | .051 | .092 | -.187 | .014 |
| | | 3 | .027 | .051 | .595 | -.074 | .129 |
| | 2 | 1 | .086 | .051 | .092 | -.014 | .187 |
| | | 3 | .114* | .050 | .025 | .015 | .212 |
| | 3 | 1 | -.027 | .051 | .595 | -.129 | .074 |
| | | 2 | -.114* | .050 | .025 | -.212 | -.015 |
| Q5 | 1 | 2 | -.055 | .055 | .326 | -.164 | .055 |
| | | 3 | -.005 | .058 | .938 | -.120 | .111 |
| | 2 | 1 | .055 | .055 | .326 | -.055 | .164 |
| | | 3 | .050 | .042 | .240 | -.034 | .134 |
| | 3 | 1 | .005 | .058 | .938 | -.111 | .120 |
| | | 2 | -.050 | .042 | .240 | -.134 | .034 |

    Note. Based on estimated marginal means. Lower and upper bounds give the 95% confidence interval for the difference.
    * The mean difference is significant at the .05 level.
    a. Adjustment for multiple comparisons: least significant difference (equivalent to no adjustments).