As centers for teaching and learning (CTLs) expand in the United States, so, too, does the need and desire for those working in the field of educational development to provide evidence of the impact of our work (Beach et al., 2016; Chism et al., 2012; Hines, 2011; Kucsera & Svinicki, 2010).1 But doing so is not easy. As scholars and practitioners have noted, gathering and analyzing evidence of a CTL’s impact on teaching practice, beyond counting participants, is difficult (Cardamone & Dwyer, 2023; Chalmers & Gardiner, 2015; Hines, 2017; Hurney et al., 2016; Kreber et al., 2001). Thus, when developing programming to support faculty to teach during the COVID-19 pandemic, we identified an opportunity to study the impact of that training on teaching practice.
In April of 2020, our CTL and the eCampus Center (the unit that focuses on assisting faculty with fully online teaching and learning) partnered to prepare faculty for teaching in the fall of 2020.2 These efforts resulted in the development and implementation of an intensive professional development program, the Flexible Teaching for Student Success (FTSS) initiative. The overarching goal of FTSS was to prepare faculty to teach flexibly, meaning they could pivot their teaching approaches as necessitated by pandemic conditions (Bose & Nyland, 2021). The initiative offered faculty a choice of participating in one of three tiers of faculty support:
Tier 1: A 3-week, facilitated, asynchronous online course that required a minimum of 36 hours of faculty participation and submission of a detailed course map, the Flexible Learning and Instruction Plan (FLIP) (Madsen et al., 2021). Participants who completed the training and submitted the final deliverable received a $1,000 stipend.
Tier 2: A menu of six 1-week, facilitated, asynchronous online modules, each of which took approximately 3 hours to complete and concluded with a specific deliverable. Faculty who completed at least three of these modules and submitted a version of the FLIP received a $250 stipend.
Tier 3: A collection of online resources and help sessions designed for “just-in-time” support for faculty as they prepared for teaching. This tier had no deliverables or stipend.
Regardless of the tier, the FTSS learning outcomes were that participants would be able to:
write measurable course learning outcomes that can be met in flexible ways;
design alternative assessments that demonstrate student achievement of those outcomes;
develop a variety of activities that engage students and scaffold growth toward the learning outcomes; and
choose strategies to create an inclusive and engaging learning community, in both synchronous and asynchronous settings.
While designing FTSS, we were also aware of the intersection of these outcomes with the goals of another grant-funded initiative, the WIDER PERSIST project (PERSIST).3 The primary focus of PERSIST was to increase faculty use of evidence-based instructional practices (EBIPs) (Landrum et al., 2017). Similarly, many of the teaching practices promoted in FTSS were EBIPs (Bose et al., 2020). EBIPs are defined as:
an evidence-based instructional practice or approach that has a demonstrated record of success. That is, there is reliable, valid empirical evidence to suggest that when instructors use EBIPs, student learning is supported, and it is implied that EBIPs are more effective than standard traditional lecture and discussion methods (Groccia & Buskist, 2011). Active learning techniques are often EBIPs, such as just-in-time teaching, process oriented guided inquiry learning (POGIL), think-pair-share, cooperative learning, team-based learning, peer instruction, service learning, and many others. (Landrum et al., 2017)
The PERSIST team had started collecting faculty self-reported data about teaching practices and EBIP adoption in 2014, through an annual survey. The 2020 version of that survey closed in early March—right before our campus shifted to virtual learning due to the pandemic. While we planned and carried out assessments related to how FTSS met its stated outcomes (Bose & Nyland, 2021), we realized that when the PERSIST team surveyed faculty again in Spring 2021, we could also explore the results for potential evidence of the impact of FTSS.
This study aims to investigate the following questions:
Did participation in a professional development opportunity designed as a response to the pandemic impact faculty adoption of EBIPs? And if so,
Did the level of intensity and commitment of the professional development opportunity impact the amount of change in teaching practice?
The study also contributes to the literature on the impact of educational development by adding an institution-wide, quantitative study of faculty practices comparing before and after participation in professional development.
Methods
Recruitment and Demographics
FTSS participants were recruited through campus-wide email campaigns and encouragement from the provost, deans, and department chairs. Teaching faculty from all colleges were invited, but not required, to participate. Faculty self-selected the tier they thought best fit their needs. Factors influencing that decision included the stipends, which were intended to motivate participation and compensate faculty for the time and effort needed to complete a tier's requirements during the summer.
A total of 306 faculty completed Tier 1, and 90 completed Tier 2. Although three tiers were offered, we chose to focus on Tiers 1 and 2 for this study because they offered specific content, were intentionally facilitated, and required a commitment of time and effort from participants. We excluded Tier 3, which was limited to providing just-in-time resources and was not a structured professional development program.4 The 396 faculty who completed Tier 1 or Tier 2 represent nearly 29% of the instructional faculty for the 2020–2021 school year.
Table 1 shows an overview of the teaching experience of FTSS Tier 1 and Tier 2 participants and of non-participants.5 Tier 1 and Tier 2 participants reported similar mean years of teaching experience, and non-participants reported the highest mean. However, these differences were not statistically significant, as determined by a Kruskal-Wallis test for small sample sizes (H = 2.45, p = .294).
Table 1. Participant Years of Teaching Experience
| # of years teaching | Tier 1 (n = 76) | Tier 2 (n = 17) | Non-participants (n = 145) |
| --- | --- | --- | --- |
| 0–5 years | 17.0% | 22.2% | 11.6% |
| 6–10 years | 28.4% | 22.2% | 31.4% |
| 11–15 years | 15.9% | 16.7% | 15.7% |
| 16–20 years | 14.8% | 16.7% | 15.1% |
| 21–25 years | 13.6% | 11.1% | 13.4% |
| 26+ years | 10.2% | 11.1% | 12.8% |
| Mean | 13.9 years | 13.6 years | 15.1 years |
Table 2 shows the faculty rank of participants, based on university records at the time faculty enrolled in FTSS. As Table 2 shows, more Tier 2 participants were tenured or tenure-track faculty (71%) than Tier 1 participants (46%) or non-participants (45%). Conversely, non-tenure-track faculty made up 54% of Tier 1 participants but only 29% of Tier 2 participants.
Table 2. Faculty Rank
| Faculty rank | Tier 1 (n = 76) | Tier 2 (n = 17) | Non-participants (n = 145) |
| --- | --- | --- | --- |
| Tenure/tenure track | 46% | 71% | 45% |
| Non-tenure track | 54% | 29% | 55% |
Table 3. Corresponding Survey Participation Data
| Response year | Tier 1 | Tier 2 | Non-participants |
| --- | --- | --- | --- |
| 2017 (n = 11) | 2 | 0 | 9 |
| 2018 (n = 42) | 15 | 2 | 25 |
| 2020 (n = 186) | 59 | 15 | 112 |
| Total | 76 | 17 | 146 |
Note. Cell values are counts of 2021 survey respondents in each group, broken out by the year of their corresponding pre-training survey response.
Data Collection
For this study, we utilized the responses to the EBIP adoption scale instrument, which is one of three batteries deployed in the PERSIST survey.6 The EBIP adoption scale was designed specifically to allow “a faculty member to self-identify their level or stage of adoption of evidence-based instructional practices” (Landrum et al., 2017). The EBIP adoption scale contains six items and is measured using a Guttman (0–1) scale. The survey items are shown in Table 4.
Table 4. EBIP Mean Score Change by Item
| Item | Tier 1 Pre (n = 76) | Tier 1 Post | Tier 2 Pre (n = 17) | Tier 2 Post | Non-participants Pre (n = 146) | Non-participants Post |
| --- | --- | --- | --- | --- | --- | --- |
| Prior to this survey, I already knew about evidence-based instructional practices (EBIPs). | 0.92* | 1.00* | 1.00 | 1.00 | 0.88 | 0.89 |
| I have thought about how to implement EBIPs in my courses. | 0.85* | 0.95* | 0.94 | 1.00 | 0.81 | 0.87 |
| I have spent time learning about EBIPs (e.g., attended workshops, experimented in class, read education literature, learned from a colleague), and I am prepared to use EBIPs. | 0.82* | 0.95* | 1.00 | 1.00 | 0.73 | 0.70 |
| I consistently use EBIPs in my course. | 0.72* | 0.85* | 0.82 | 0.88 | 0.64 | 0.64 |
| I consistently use EBIPs in my course, and I continue to learn about and experiment with new EBIPs. | 0.64* | 0.76* | 0.65 | 0.76 | 0.60 | 0.55 |
| I have evidence that student outcomes have improved since I started using EBIPs. | 0.42 | 0.44 | 0.41 | 0.41 | 0.35 | 0.33 |
| Scale score | 4.34* | 4.95* | 4.82 | 5.06 | 4.01 | 3.98 |
Note. Values are group mean scores; for each item, the highest possible individual score is 1, indicating agreement with that item. Asterisks indicate a statistically significant difference between pre-test and post-test (p < .05). Neither non-participants nor Tier 2 participants showed a statistically significant change on any individual EBIP adoption scale item.
We collected individual EBIP scores for any faculty member who took the survey between 2017 and 2020 (pre-training) and completed it again in 2021 (post-training). Faculty without both responses (a pre-training survey response and a post-training 2021 response) were excluded from the analysis, regardless of FTSS status. To compare pre-training and post-training EBIP adoption results, respondents to the 2021 survey were categorized into three comparison groups: Tier 1 participants, Tier 2 participants, and non-participants.7 Table 3 provides the number of faculty who completed the survey in 2021 and for whom we have corresponding survey data from 2017 to 2020.
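To make the matching step concrete, the following is a minimal sketch of how it might be implemented in Python with pandas. The long-format layout and the column names (faculty_id, survey_year, ftss_tier) are assumptions for illustration only and do not reflect the actual structure of the PERSIST data.

```python
# Sketch of the matching step described above. Assumes a long-format table with
# one row per survey response and hypothetical columns: faculty_id, survey_year,
# and ftss_tier (1, 2, or missing for non-participants).
import pandas as pd

def build_matched_groups(responses: pd.DataFrame) -> pd.DataFrame:
    """Keep faculty with both a pre-training (2017-2020) and a 2021 response,
    then label each respondent as Tier 1, Tier 2, or Non-participant."""
    pre = responses[responses["survey_year"].between(2017, 2020)]
    post = responses[responses["survey_year"] == 2021]

    # Faculty must appear in both windows to be retained, regardless of FTSS status.
    matched_ids = set(pre["faculty_id"]) & set(post["faculty_id"])
    matched = responses[responses["faculty_id"].isin(matched_ids)].copy()

    # Assign comparison groups from the (assumed) FTSS participation column.
    matched["group"] = (
        matched["ftss_tier"].map({1: "Tier 1", 2: "Tier 2"}).fillna("Non-participant")
    )
    return matched
```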
Data Analysis
To explore the differences in EBIP adoption after completing FTSS, we first compared pre-training and post-training EBIP scores between non-participants and FTSS participants (Tiers 1 and 2); we also compared scores between Tier 1 participants, Tier 2 participants, and non-participants. Each of the six EBIP scale items was measured using a Guttman scale, where affirmative (yes) responses were assigned a value of 1 and negative (no) responses a value of 0. Pre-training and post-training EBIP sum scores were calculated for each respondent by summing the individual values of the six items in the validated EBIP scale, yielding a score between 0 and 6, with a higher score indicating a higher level of EBIP adoption. Non-parametric comparative analyses (Kruskal-Wallis tests) for small or uneven sample sizes were used to explore preexisting differences in total EBIP scores between groups. Next, comparative analyses were used to identify significant changes in scale sum scores and individual scale item scores between pre-training and post-training. For Tier 1 participants and non-participants, repeated-measures t tests were conducted; for Tier 2, Wilcoxon signed-rank tests for non-parametric and small samples were conducted to account for the small sample size. Comparing pre-training and post-training EBIP total scores allows for measurement of overall significant change, while identifying significant change within individual scale items allows for exploration of which items may be driving any overall change in EBIP adoption scores.
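As a rough illustration of this analysis pipeline, the sketch below (continuing the hypothetical data layout from the previous sketch) computes the Guttman sum scores and runs the same families of tests with scipy. Column names such as item_1_pre and item_1_post are assumptions, and tie-handling and other test options are left at scipy defaults rather than matching the authors' exact settings.

```python
# Illustrative sketch of the analyses described above. Assumes one row per
# respondent with 0/1 item columns item_1_pre ... item_6_post and a "group" label.
import pandas as pd
from scipy import stats

ITEMS = [f"item_{i}" for i in range(1, 7)]

def add_sum_scores(df: pd.DataFrame) -> pd.DataFrame:
    """Sum the six Guttman-scored (0/1) items into 0-6 EBIP adoption sum scores."""
    df = df.copy()
    df["ebip_pre"] = df[[f"{item}_pre" for item in ITEMS]].sum(axis=1)
    df["ebip_post"] = df[[f"{item}_post" for item in ITEMS]].sum(axis=1)
    return df

def run_tests(df: pd.DataFrame) -> None:
    groups = dict(tuple(df.groupby("group")))

    # Preexisting differences in pre-training sum scores across the three groups.
    h, p = stats.kruskal(*(g["ebip_pre"] for g in groups.values()))
    print(f"Kruskal-Wallis on pre-training scores: H = {h:.2f}, p = {p:.3f}")

    # Pre/post change in sum scores within each group: paired t test for the
    # larger groups, Wilcoxon signed-rank test for the small Tier 2 group.
    for name, g in groups.items():
        if name == "Tier 2":
            stat, p = stats.wilcoxon(g["ebip_pre"], g["ebip_post"])
        else:
            stat, p = stats.ttest_rel(g["ebip_pre"], g["ebip_post"])
        print(f"{name} sum score change: statistic = {stat:.3f}, p = {p:.3f}")

    # Item-level pre/post comparisons (Wilcoxon signed-rank) within each group.
    for name, g in groups.items():
        for item in ITEMS:
            diffs = g[f"{item}_post"] - g[f"{item}_pre"]
            if (diffs != 0).any():  # wilcoxon errors out if every difference is zero
                stat, p = stats.wilcoxon(g[f"{item}_pre"], g[f"{item}_post"])
                print(f"{name} {item}: W = {stat:.2f}, p = {p:.3f}")
```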
Results
Our first research question asked whether a professional development program designed as a response to the COVID-19 pandemic impacted faculty's adoption of EBIPs. In comparing pre-training and post-training EBIP adoption scores between non-participants and FTSS participants (both Tier 1 and Tier 2), we found no statistically significant difference in pre-training scores. We did find a statistically significant increase from pre-training to post-training in mean EBIP adoption scores for FTSS participants, whereas non-participants showed no increase (Figure 1). This suggests the programming did lead to increased adoption of EBIPs for FTSS participants overall, compared to non-participants.
Since our analysis indicated that the professional development positively impacted participants' EBIP adoption, we then investigated whether the level of intensity and commitment of the professional development opportunity impacted the amount of change in teaching practice. Comparative analysis indicated that after FTSS, Tier 1 participants reported a statistically significant change in EBIP adoption scores (t(75) = –3.108, p = 0.001). Tier 2 participants also reported higher post-training EBIP adoption scores, but the change was not statistically significant. Non-participants reported lower post-training scores, also not at a statistically significant level (Figure 2). While Tier 1 and Tier 2 participants demonstrated higher preexisting EBIP scores than non-participants, these between-group differences were not statistically significant, as determined by a Kruskal-Wallis test for small or unequal sample sizes (H = 3.02, p = .221).
We therefore conclude that the level of intensity of the programming led to differential outcomes for participants, as the more intensive programming (Tier 1) resulted in statistically significant changes in EBIP adoption scores, whereas the changes in EBIP adoption for the less intensive programming (Tier 2), though positive, were not statistically significant. However, it is important to note Tier 2 participants’ pre-training EBIP adoption scores were the highest of the three groups (Figure 2), limiting the amount of change that could be observed among this group of participants.
Since our analysis indicated a difference in EBIP adoption scores based on participation in each tier, we completed non-parametric comparative analyses (Wilcoxon signed-rank tests) of individual items within the EBIP scale to explore potential significant changes in item scores by tier (Table 4). We found that five of the six EBIP adoption scale item scores increased significantly for Tier 1 participants after completing FTSS training: Item 1 (Z = –2.45, p = .01); Item 2 (Z = –2.54, p = .01); Item 3 (Z = –2.89, p = .004); Item 4 (Z = –2.89, p = .004); and Item 5 (Z = –2.07, p = .04).
Discussion
The first goal of this study was to determine whether the intensive professional development that we offered in response to the COVID-19 pandemic had a positive impact on faculty's adoption of EBIPs. The results confirmed that FTSS participants showed a statistically significant change in their EBIP adoption scores. This study supports other recent studies (Eddy et al., 2021; Perry, 2023) that found evidence of the impact of professional development opportunities created in response to COVID-19.
The second goal of this study was to determine if the level of intensity and commitment of the professional development opportunity impacted the amount of change in teaching practice. Analysis of the data shows a statistically significant increase in Tier 1 participants’ EBIP adoption scores in almost every item in the EBIP adoption scale. We did not see a statistically significant change in the Tier 2 participant scores. Though we cannot assume causation between the level of intensity of the professional development and its impact, given the evidence in previous studies about the deeper impact of more sustained faculty development efforts (Cardamone & Dwyer, 2023; Condon et al., 2016; Wheeler & Bach, 2021), a bigger change in practice for participants who spent more time in FTSS is not unexpected.
In addition to Tier 2 being less intense, another potential reason Tier 2 participants did not demonstrate a statistically significant change in EBIP adoption scores is this group's prior knowledge. The item-level changes observed for Tier 1 participants suggest the programming was effective at increasing both awareness and use of EBIPs, leading to statistically significant results. In comparison, all Tier 2 participants (n = 17) were already aware of EBIPs and had spent time learning about them. The data suggest that there was some increased use of, and continued learning about, EBIPs among Tier 2 participants, which contributed to the change in the group's overall mean EBIP adoption score; however, these changes were not statistically significant. This finding might indicate that trainings are most effective for those who begin with a lower level of knowledge of, or experience with, the topic, which could be an area for future research.
Limitations
One limitation of this study is that the number of participants, particularly in Tier 2, is small, and the group sizes were disproportionate. While this raises concerns about the statistical validity of the results, few studies currently provide scholarly evidence of the impact of the work done by educational developers on an institutional scale. Future research should focus on finding additional evidence of the institutional impacts of this type of work.
Another limitation is that the PERSIST survey that produced the data analyzed in this study was not developed specifically to find evidence of the impact of the FTSS program; the EBIP adoption scale was only one of three batteries deployed in the survey.
Additionally, the study’s results are based on faculty self-reported data, which has been questioned as a reliable assessment tool in past research (e.g., Ebert-May et al., 2011). However, Durham et al. (2017) found that when investigating the frequency of scientific teaching practices as reported by students, instructors, and observers, answers reported by faculty closely matched those of a third-party observer.
Finally, a factor that we could not account for in our research was the influence of the pandemic itself. How much of faculty's change in practice can be attributed to the fact that the COVID-19 pandemic forced them to change? What we can say is that participants in Tier 1 did show a positive change in their EBIP adoption scores, while non-participants showed no positive change (and a non-statistically significant drop). This could indicate that, absent participation in FTSS, the pandemic had a neutral or even negative impact on faculty adoption of EBIPs, but certainly other factors (e.g., caring for children/loved ones while teaching, difficulty teaching certain content using remote/online learning, the instructor's own health) could have played a role.
Conclusion
Through this study we found that participation in a professional development opportunity designed as a response to the COVID-19 pandemic positively impacted faculty adoption of EBIPs. We also found that the level of intensity and commitment of the professional development opportunity impacted the amount of change in teaching practice. Faculty who participated in the Tier 1 training did show an increase in their EBIP adoption scores, whereas Tier 2 participants did not report a significant increase in scores. This could be because Tier 2 was a less intense training and/or Tier 2 participants began FTSS with higher knowledge of, and experience using, EBIPs. Our study informs the discussion of the impact of CTL work and how it can evolve in the future. It also bolsters support for the professional development work being done in CTLs. Lastly, the data collected for this study is part of an extensive longitudinal data set, which offers many more opportunities to explore the impact of CTL professional development work over time. While long-term data collection may be time and labor intensive, it provides opportunities for showcasing the impact of our professional development work.
Biographies
Teresa Focarile is Director of Educational Development for the Boise State University Center for Teaching and Learning. Her scholarly work has focused on how educational developers can support institutional efforts such as program assessment and concurrent enrollment as well as designing programs for adjunct faculty. She has taught at the college level for 18 years, the past 12 for Boise State University, and the previous 6 for the University of Connecticut.
Sarah Lausch is an Educational Development Consultant for the Boise State University Center for Teaching and Learning. Her research projects have focused on developing strategies to address impostor feelings in students, implement mindful teaching practices, and support students' journeys toward self-authorship. For the CTL, Sarah manages the Mid-Semester Assessment Protocol program and the Course Design Academy. She has taught classes in various departments at Boise State University for 5 years.
Meagan Haynes is an independent researcher specializing in evaluating education-focused initiatives. She collaborates with several universities and national funding organizations to design, implement, and assess programs that improve both student and community outcomes. Prior publications focus on effective approaches to advancing educational practices, including social-emotional learning strategies, enhancing access to and efficacy of learning materials, and improving outcomes for diverse student populations.
Brittnee Earl is Program Coordinator for the Boise State University Center for Teaching and Learning. She has provided support and oversight for several National Science Foundation grants administered through the CTL and also contributes to institutional initiatives and research focused on improving undergraduate education. Her scholarly work has focused on institutional change and STEM education reform.
Susan Shadle serves as the Vice Provost for Undergraduate Studies and Distinguished Professor of Chemistry at Boise State University. Previously, she served as the Executive Director of the Center for Teaching and Learning at Boise State University from 2006 to 2020. Her scholarly work has focused on understanding and facilitating institutional change, with a particular focus in STEM education reform and creating environments that foster student success.
Lisa Berry is Associate Director of Instructional Design Services for the eCampus Center at Boise State University. Her focus is on assisting faculty to provide students successful online learning experiences. She has taught both in person and online at all levels from middle school to the graduate level.
Notes
1. Like Linse and Hood (2022), we use the term CTLs to be “inclusive of centers of one (or less than one), medium and large centers, as well as units that encompass educational technology or student support” (p. 6).
2. The use of “we” in this paper therefore refers to the partnership between these two units.
3. WIDER-PERSIST was funded through the National Science Foundation, #DUE-1347830.
4. Another reason for not including Tier 3 was that this tier focused on providing resources that faculty could use any time. While we could determine the number of faculty who accessed those materials (through website analytics), we can’t know which faculty accessed them, so there would be no way to connect resource usage and EBIP scores.
5. For this study, non-participants are defined as those who completed the PERSIST survey both between 2017 and 2020 and in 2021 but did not participate in Tier 1 or 2 of FTSS.
6. The EBIP adoption scale was validated as part of the development process, as described in Landrum et al. (2017). The other batteries of the PERSIST survey, the Teaching Practices Inventory and the Current Instructional Climate Survey, did not align with the outcomes of FTSS and therefore were not included in our study.
7. We separately categorized Tier 1 and Tier 2 participants in order to identify possible outcome differences between the two forms of training.
Acknowledgments
The authors acknowledge the time and effort spent by the 44 faculty and staff who designed and/or facilitated activities in FTSS. The impact of this work could not have happened without them.
This material is partially based upon work supported by the National Science Foundation under grant #DUE-1347830.
Conflict of Interest Statement
The authors have no conflict of interest.
Data Availability
The data reported in this manuscript are available on request by contacting the corresponding author.
References
Beach, A. L., Sorcinelli, M. D., Austin, A. E., & Rivard, J. K. (2016). Faculty development in the age of evidence: Current practices, future imperatives. Stylus Publishing.
Bose, D., Berry, L., Nyland, R., Saba, A., & Focarile, T. (2020). Flexible teaching for student success: A three-tiered initiative to prepare faculty for flexible teaching. Journal on Centers for Teaching and Learning, 12, 87–135. https://openjournal.lib.miamioh.edu/index.php/jctl/article/view/211
Bose, D., & Nyland, R. (2021). Are faculty prepared to teach flexibly? Results from an evaluation study. Journal on Centers for Teaching and Learning, 13, 60–91. https://openjournal.lib.miamioh.edu/index.php/jctl/article/view/224/127
Cardamone, C. N., & Dwyer, H. (2023). A mixed methods study of faculty experiences in a course design institute. To Improve the Academy: A Journal of Educational Development, 42(1), 8. http://doi.org/10.3998/tia.2108
Chalmers, D., & Gardiner, D. (2015). An evaluation framework for identifying the effectiveness and impact of academic teacher development programmes. Studies in Educational Evaluation, 46, 81–91. http://doi.org/10.1016/j.stueduc.2015.02.002
Chism, N. V. N., Holley, M., & Harris, C. J. (2012). Researching the impact of educational development: Basis for informed practice. To Improve the Academy: A Journal of Educational Development, 31(1), 129–145. http://doi.org/10.1002/j.2334-4822.2012.tb00678.x
Condon, W., Iverson, E. R., Manduca, C. A., Rutz, C., & Willett, G. (2016). Faculty development and student learning: Assessing the connections. Indiana University Press.
Durham, M. F., Knight, J. K., & Couch, B. A. (2017). Measurement Instrument for Scientific Teaching (MIST): A tool to measure the frequencies of research-based teaching practices in undergraduate science courses. CBE—Life Sciences Education, 16(4). http://doi.org/10.1187/cbe.17-02-0033
Ebert-May, D., Derting, T. L., Hodder, J., Momsen, J. L., Long, T. M., & Jardeleza, S. E. (2011). What we say is not what we do: Effective evaluation of faculty professional development programs. BioScience, 61(7), 550–558. http://doi.org/10.1525/bio.2011.61.7.9
Eddy, P. L., Macdonald, R. H., & Baer, E. M. D. (2021). Professional development during a crisis and beyond: Lessons learned during COVID. New Directions for Community Colleges, 2021(195), 199–212. http://doi.org/10.1002/cc.20477
Groccia, J. E., & Buskist, W. (2011). Need for evidence-based teaching. New Directions for Teaching and Learning, 2011(128), 5–11. http://doi.org/10.1002/tl.463
Hines, S. R. (2011). How mature teaching and learning centers evaluate their services. To Improve the Academy: A Journal of Educational Development, 30(1), 277–289. http://doi.org/10.1002/j.2334-4822.2011.tb00663.x
Hines, S. R. (2017). Evaluating centers for teaching and learning: A field-tested model. To Improve the Academy: A Journal of Educational Development, 36(2), 89–100. http://doi.org/10.3998/tia.17063888.0036.202
Hurney, C. A., Brantmeier, E. J., Good, M. R., Harrison, D., & Meixner, C. (2016). The faculty learning outcome assessment framework. The Journal of Faculty Development, 30(2), 69–77.
Kreber, C., & Brook, P. (2001). Impact evaluation of educational development programmes. International Journal for Academic Development, 6(2), 96–108. http://doi.org/10.1080/13601440110090749
Kucsera, J. V., & Svinicki, M. (2010). Rigorous evaluations of faculty development programs. The Journal of Faculty Development, 24(2), 5–18.
Landrum, R. E., Viskupic, K., Shadle, S. E., & Bullock, D. (2017). Assessing the STEM landscape: The current instructional climate survey and the evidence-based instructional practices adoption scale. International Journal of STEM Education, 4. http://doi.org/10.1186/s40594-017-0092-1
Linse, A. R., & Hood, L. N. (2022). Building a strategic plan that guides assessment: A case study from a teaching and learning center. Journal on Centers for Teaching and Learning, 14, 4–38.
Madsen, L., Focarile, T., Souza, T., & Berry, L. (2021, April 1). FLIPping the script on course design: Integrating UDL and student centeredness into the course design table. Academic Impressions. https://www.academicimpressions.com/blog/flipping-the-script-on-course-design
Perry, E. (2023). Teacher professional development in changing circumstances: The impact of COVID-19 on schools’ approaches to professional development. Education Sciences, 13(1), 48. http://doi.org/10.3390/educsci13010048
Wheeler, L. B., & Bach, D. (2021). Understanding the impact of educational development interventions on classroom instruction and student success. International Journal for Academic Development, 26(1), 24–40. http://doi.org/10.1080/1360144X.2020.1777555