<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.2 20120330//EN" "http://jats.nlm.nih.gov/publishing/1.2/JATS-journalpublishing1.dtd">
<!--<?xml-stylesheet type="text/xsl" href="article.xsl"?>-->
<article article-type="research-article" dtd-version="1.2" xml:lang="en" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<front>
<journal-meta>
<journal-id journal-id-type="issn">2334-4822</journal-id>
<journal-title-group>
<journal-title>To Improve the Academy (TIA)</journal-title>
</journal-title-group>
<issn pub-type="epub">2334-4822</issn>
<publisher>
<publisher-name>Michigan Publishing</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3998/tia.5595</article-id>
<article-categories>
<subj-group>
<subject>Research</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>They came, they engaged, they changed: Evidence of impact of intensive pandemic professional development</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<contrib-id contrib-id-type="orcid">https://orcid.org/0000-0003-0521-457X</contrib-id>
<name>
<surname>Focarile</surname>
<given-names>Teresa</given-names>
</name>
<email>teresafocarile@boisestate.edu</email>
<xref ref-type="aff" rid="aff-1">1</xref>
</contrib>
<contrib contrib-type="author">
<contrib-id contrib-id-type="orcid">https://orcid.org/0000-0003-3079-5900</contrib-id>
<name>
<surname>Lausch</surname>
<given-names>Sarah</given-names>
</name>
<email>sarahlausch@boisestate.edu</email>
<xref ref-type="aff" rid="aff-1">1</xref>
</contrib>
<contrib contrib-type="author">
<contrib-id contrib-id-type="orcid">https://orcid.org/0009-0001-5775-5370</contrib-id>
<name>
<surname>Haynes</surname>
<given-names>Meagan</given-names>
</name>
<email>meaganhaynes@boisestate.edu</email>
</contrib>
<contrib contrib-type="author">
<contrib-id contrib-id-type="orcid">https://orcid.org/0000-0002-1906-5256</contrib-id>
<name>
<surname>Earl</surname>
<given-names>Brittnee</given-names>
</name>
<email>brittneeearl@boisestate.edu</email>
<xref ref-type="aff" rid="aff-1">1</xref>
</contrib>
<contrib contrib-type="author">
<contrib-id contrib-id-type="orcid">https://orcid.org/0000-0002-4875-0893</contrib-id>
<name>
<surname>Shadle</surname>
<given-names>Susan</given-names>
</name>
<email>sshadle@boisestate.edu</email>
<xref ref-type="aff" rid="aff-1">1</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Berry</surname>
<given-names>Lisa</given-names>
</name>
<email>lisaberry@boisestate.edu</email>
<xref ref-type="aff" rid="aff-2">2</xref>
</contrib>
</contrib-group>
<aff id="aff-1"><label>1</label>Center for Teaching and Learning, Boise State University</aff>
<aff id="aff-2"><label>2</label>eCampus Center, Boise State University</aff>
<pub-date publication-format="electronic" date-type="pub" iso-8601-date="2025-03-11">
<day>11</day>
<month>03</month>
<year>2025</year>
</pub-date>
<pub-date pub-type="collection">
<year>2025</year>
</pub-date>
<volume>44</volume>
<issue>1</issue>
<fpage>1</fpage>
<lpage>17</lpage>
<permissions>
<copyright-statement>Copyright: &#x00A9; 2025 The Author(s)</copyright-statement>
<copyright-year>2025</copyright-year>
<license license-type="open-access" xlink:href="https://creativecommons.org/licenses/by-nc-nd/4.0/">
<license-p>This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND 4.0), which permits non-commercial use, distribution, and reproduction in any medium, provided the original author and source are credited and the work is not altered. See <uri xlink:href="https://creativecommons.org/licenses/by-nc-nd/4.0/">https://creativecommons.org/licenses/by-nc-nd/4.0/</uri>.</license-p>
</license>
</permissions>
<self-uri xlink:href="https://journals.publishing.umich.edu/tia/10.3998/tia.5595/"/>
<abstract>
<p>As centers for teaching and learning (CTLs) expand in the United States, so, too, does the need and desire for those of us working in the field of educational development to provide evidence of the impact of our work. In response to the COVID-19 pandemic, our CTL and eCampus Center, like many faculty support units across the country, developed and implemented an intensive professional development program to prepare faculty to teach flexibly in the fall of 2020. Using faculty self-reported data about the adoption of evidence-based instructional practices (EBIPs), collected annually since 2014, we investigated changes over time and sought evidence of the impact the professional development program had on faculty&#8217;s use of EBIPs. The study shows that intensive professional development programs have a positive effect on faculty&#8217;s use of EBIPs. We also found that the level of improvement was higher for those who participated in the more intensive version of the program. This study contributes to the existing literature on the impact of educational development initiatives by demonstrating a positive correlation between educational development and instructor practice, and adds to it by examining an institution-level program.</p>
</abstract>
<kwd-group>
<kwd>evidence-based instructional practices</kwd>
<kwd>impact of faculty development</kwd>
<kwd>centers for teaching and learning</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<p>As centers for teaching and learning (CTLs) expand in the United States, so, too, does the need and desire for those working in the field of educational development to provide evidence of the impact of our work (<xref ref-type="bibr" rid="B1">Beach et al., 2016</xref>; <xref ref-type="bibr" rid="B6">Chism et al., 2012</xref>; <xref ref-type="bibr" rid="B12">Hines, 2011</xref>; <xref ref-type="bibr" rid="B16">Kucsera &amp; Svinicki, 2010</xref>).<xref ref-type="fn" rid="n1">1</xref> But doing so is not easy. As scholars and practitioners have noted, gathering and analyzing evidence of a CTL&#8217;s impact on teaching practice, beyond counting participants, is difficult (<xref ref-type="bibr" rid="B4">Cardamone &amp; Dwyer, 2023</xref>; <xref ref-type="bibr" rid="B5">Chalmers &amp; Gardiner, 2015</xref>; <xref ref-type="bibr" rid="B13">Hines, 2017</xref>; <xref ref-type="bibr" rid="B14">Hurney et al., 2016</xref>; <xref ref-type="bibr" rid="B15">Kreber et al., 2001</xref>). Thus, when developing programming to support faculty to teach during the COVID-19 pandemic, we identified an opportunity to study the impact of that training on teaching practice.</p>
<p>In April of 2020, our CTL and the eCampus Center (the unit that focuses on assisting faculty with fully online teaching and learning) partnered to prepare faculty for teaching in the fall of 2020.<xref ref-type="fn" rid="n2">2</xref> These efforts resulted in the development and implementation of an intensive professional development program, the Flexible Teaching for Student Success (FTSS) initiative. The overarching goal of FTSS was to prepare faculty to teach flexibly, meaning they could pivot their teaching approaches as necessitated by pandemic conditions (<xref ref-type="bibr" rid="B3">Bose &amp; Nyland, 2021</xref>). The initiative offered faculty a choice of participating in one of three tiers of faculty support:</p>
<list list-type="bullet">
<list-item><p>Tier 1: A 3-week, facilitated, asynchronous online course that required a minimum of 36 hours of faculty participation and submission of a detailed course map, the Flexible Learning and Instruction Plan (FLIP) (<xref ref-type="bibr" rid="B19">Madsen et al., 2021</xref>). Participants who completed the training and submitted the final deliverable received a $1,000 stipend.</p></list-item>
<list-item><p>Tier 2: A menu of six 1-week, facilitated, asynchronous online modules, each of which took approximately 3 hours to complete and concluded with a specific deliverable. Faculty who completed at least three of those modules and submitted a version of the FLIP received a $250 stipend.</p></list-item>
<list-item><p>Tier 3: A collection of online resources and help sessions designed for &#8220;just-in-time&#8221; support for faculty as they prepared for teaching. This tier had no deliverables or stipend.</p></list-item>
</list>
<p>Regardless of the tier, the FTSS learning outcomes were that participants would be able to:</p>
<list list-type="bullet">
<list-item><p>write measurable course learning outcomes that can be met in flexible ways;</p></list-item>
<list-item><p>design alternative assessments that demonstrate student achievement of those outcomes;</p></list-item>
<list-item><p>develop a variety of activities that engage students and scaffold growth toward the learning outcomes; and</p></list-item>
<list-item><p>choose strategies to create an inclusive and engaging learning community, in both synchronous and asynchronous settings.</p></list-item>
</list>
<p>While designing FTSS, we were also aware of the intersection of these outcomes with the goals of another grant-funded initiative, the WIDER PERSIST project (PERSIST).<xref ref-type="fn" rid="n3">3</xref> The primary focus of PERSIST was to increase faculty use of evidence-based instructional practices (EBIPs) (<xref ref-type="bibr" rid="B17">Landrum et al., 2017</xref>). Similarly, many of the teaching practices promoted in FTSS were EBIPs (<xref ref-type="bibr" rid="B2">Bose et al., 2020</xref>). EBIPs are defined as:</p>
<disp-quote>
<p>an evidence-based instructional practice or approach that has a demonstrated record of success. That is, there is reliable, valid empirical evidence to suggest that when instructors use EBIPs, student learning is supported, and it is implied that EBIPs are more effective than standard traditional lecture and discussion methods (<xref ref-type="bibr" rid="B11">Groccia &amp; Buskist, 2011</xref>). Active learning techniques are often EBIPs, such as just-in-time teaching, process oriented guided inquiry learning (POGIL), think-pair-share, cooperative learning, team-based learning, peer instruction, service learning, and many others. (<xref ref-type="bibr" rid="B17">Landrum et al., 2017</xref>)</p>
</disp-quote>
<p>The PERSIST team had started collecting faculty self-reported data about teaching practices and EBIP adoption in 2014, through an annual survey. The 2020 version of that survey closed in early March&#8212;right before our campus shifted to virtual learning due to the pandemic. While we planned and carried out assessments related to how FTSS met its stated outcomes (<xref ref-type="bibr" rid="B3">Bose &amp; Nyland, 2021</xref>), we realized that when the PERSIST team surveyed faculty again in Spring 2021, we could also explore the results for potential evidence of the impact of FTSS.</p>
<p>This study aims to investigate the following questions:</p>
<list list-type="bullet">
<list-item><p>Did participation in a professional development opportunity designed as a response to the pandemic impact faculty adoption of EBIPs? And if so,</p></list-item>
<list-item><p>Did the level of intensity and commitment of the professional development opportunity impact the amount of change in teaching practice?</p></list-item>
</list>
<p>The study also contributes to the literature on the impact of educational development by adding an institution-wide, quantitative study comparing faculty practices before and after participation in professional development.</p>
<sec>
<title>Methods</title>
<sec>
<title><italic>Recruitment and Demographics</italic></title>
<p>FTSS participants were recruited through campus-wide email campaigns and encouragement from the provost, deans, and department chairs. Teaching faculty from all colleges were invited, but not required, to participate. Faculty self-selected the tier they thought best fit their needs. Factors influencing their decision included stipends, which motivated and compensated them for the time and effort required to complete the tier&#8217;s requirements during the summer.</p>
<p>A total of 306 faculty completed Tier 1, and 90 completed Tier 2. While three tiers were offered to faculty, we chose to focus on Tiers 1 and 2 for this study because they offered specific content, were intentionally facilitated, and required a commitment of time and effort from participants. We excluded Tier 3, which was limited to providing just-in-time resources and was not a structured professional development program.<xref ref-type="fn" rid="n4">4</xref> The total number of registrants for both Tiers 1 and 2 (396) represents nearly 29% of the instructional faculty for the 2020&#8211;2021 school year.</p>
<p><xref ref-type="table" rid="T1">Table 1</xref> shows an overview of FTSS Tier 1 and 2 participants&#8217; and non-participants&#8217; teaching experience.<xref ref-type="fn" rid="n5">5</xref> Tier 1 and Tier 2 participants had a similar mean of years teaching. Non-participants reported the highest mean of teaching years. However, these differences were not statistically significant as determined by a Kruskal-Wallis test for small sample sizes (<italic>H</italic> = 2.45, <italic>p</italic> = .294).</p>
<table-wrap id="T1">
<caption>
<p><italic>Table 1. Participant Years of Teaching Experience</italic></p>
</caption>
<table>
<thead>
<tr>
<td align="left" valign="top"><bold># of years teaching</bold></td>
<td align="left" valign="top"><bold>Tier 1 (<italic>n</italic> = 76)</bold></td>
<td align="left" valign="top"><bold>Tier 2 (<italic>n</italic> = 17)</bold></td>
<td align="left" valign="top"><bold>Non-participants (<italic>n</italic> = 145)</bold></td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">0&#8211;5 years</td>
<td align="left" valign="top">17.0%</td>
<td align="left" valign="top">22.2%</td>
<td align="left" valign="top">11.6%</td>
</tr>
<tr>
<td align="left" valign="top">6&#8211;10 years</td>
<td align="left" valign="top">28.4%</td>
<td align="left" valign="top">22.2%</td>
<td align="left" valign="top">31.4%</td>
</tr>
<tr>
<td align="left" valign="top">11&#8211;15 years</td>
<td align="left" valign="top">15.9%</td>
<td align="left" valign="top">16.7%</td>
<td align="left" valign="top">15.7%</td>
</tr>
<tr>
<td align="left" valign="top">16&#8211;20 years</td>
<td align="left" valign="top">14.8%</td>
<td align="left" valign="top">16.7%</td>
<td align="left" valign="top">15.1%</td>
</tr>
<tr>
<td align="left" valign="top">21&#8211;25 years</td>
<td align="left" valign="top">13.6%</td>
<td align="left" valign="top">11.1%</td>
<td align="left" valign="top">13.4%</td>
</tr>
<tr>
<td align="left" valign="top">26+ years</td>
<td align="left" valign="top">10.2%</td>
<td align="left" valign="top">11.1%</td>
<td align="left" valign="top">12.8%</td>
</tr>
<tr>
<td align="left" valign="top">Mean</td>
<td align="left" valign="top">13.9 years</td>
<td align="left" valign="top">13.6 years</td>
<td align="left" valign="top">15.1 years</td>
</tr>
</tbody>
</table>
</table-wrap>
<p><xref ref-type="table" rid="T2">Table 2</xref> shows the faculty rank of the participants based on university records at the time faculty enrolled in FTSS. As <xref ref-type="table" rid="T2">Table 2</xref> shows, more participants in Tier 2 were tenure/tenure-track faculty (71%) than in Tier 1 (46%) or among non-participants (45%). In comparison, the representation of non-tenure-track faculty in Tier 1 was 54%, whereas in Tier 2 it was 29%.</p>
<table-wrap id="T2">
<caption>
<p><italic>Table 2. Faculty Rank</italic></p>
</caption>
<table>
<thead>
<tr>
<td align="left" valign="top"><bold>Faculty rank</bold></td>
<td align="left" valign="top"><bold>Tier 1 (<italic>n</italic> = 76)</bold></td>
<td align="left" valign="top"><bold>Tier 2 (<italic>n</italic> = 17)</bold></td>
<td align="left" valign="top"><bold>Non-participants (<italic>n</italic> = 145)</bold></td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">Tenure/tenure track</td>
<td align="left" valign="top">46%</td>
<td align="left" valign="top">71%</td>
<td align="left" valign="top">45%</td>
</tr>
<tr>
<td align="left" valign="top">Non-tenure track</td>
<td align="left" valign="top">54%</td>
<td align="left" valign="top">29%</td>
<td align="left" valign="top">55%</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="T3">
<caption>
<p><italic>Table 3. Corresponding Survey Participation Data</italic></p>
</caption>
<table>
<thead>
<tr>
<td align="left" valign="top" rowspan="2"><bold>Response year</bold></td>
<td align="left" valign="top" colspan="3"><bold>2021 survey response</bold></td>
</tr>
<tr>
<td align="left" valign="top"><bold>Tier 1</bold></td>
<td align="left" valign="top"><bold>Tier 2</bold></td>
<td align="left" valign="top"><bold>Non-participants</bold></td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">2017 (<italic>n</italic> = 11)</td>
<td align="left" valign="top">2</td>
<td align="left" valign="top">0</td>
<td align="left" valign="top">9</td>
</tr>
<tr>
<td align="left" valign="top">2018 (<italic>n</italic> = 42)</td>
<td align="left" valign="top">15</td>
<td align="left" valign="top">2</td>
<td align="left" valign="top">25</td>
</tr>
<tr>
<td align="left" valign="top">2020 (<italic>n</italic> = 186)</td>
<td align="left" valign="top">59</td>
<td align="left" valign="top">15</td>
<td align="left" valign="top">112</td>
</tr>
<tr>
<td align="left" valign="top">Total</td>
<td align="left" valign="top">76</td>
<td align="left" valign="top">17</td>
<td align="left" valign="top">146</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec>
<title><italic>Data Collection</italic></title>
<p>For this study, we utilized the responses to the EBIP adoption scale instrument, which is one of three batteries deployed in the PERSIST survey.<xref ref-type="fn" rid="n6">6</xref> The EBIP adoption scale was designed specifically to allow &#8220;a faculty member to self-identify their level or stage of adoption of evidence-based instructional practices&#8221; (<xref ref-type="bibr" rid="B17">Landrum et al., 2017</xref>). The EBIP adoption scale contains six items and is measured using a Guttman (0&#8211;1) scale. The survey items are shown in <xref ref-type="table" rid="T4">Table 4</xref>.</p>
<table-wrap id="T4">
<caption>
<p><italic>Table 4. EBIP Mean Score Change by Item</italic></p>
</caption>
<table>
<thead>
<tr>
<td align="left" valign="top" rowspan="2"><bold>Item</bold></td>
<td align="left" valign="top" colspan="2"><bold>Tier 1 (<italic>n</italic> = 76)</bold></td>
<td align="left" valign="top" colspan="2"><bold>Tier 2 (<italic>n</italic> = 17)</bold></td>
<td align="left" valign="top" colspan="2"><bold>Non-participants (<italic>n</italic> = 146)</bold></td>
</tr>
<tr>
<td align="left" valign="top"><bold>Pre</bold></td>
<td align="left" valign="top"><bold>Post</bold></td>
<td align="left" valign="top"><bold>Pre</bold></td>
<td align="left" valign="top"><bold>Post</bold></td>
<td align="left" valign="top"><bold>Pre</bold></td>
<td align="left" valign="top"><bold>Post</bold></td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">Prior to this survey, I already knew about evidence-based instructional practices (EBIPs).</td>
<td align="left" valign="top"><bold>0.92*</bold></td>
<td align="left" valign="top"><bold>1.00*</bold></td>
<td align="left" valign="top">1.00</td>
<td align="left" valign="top">1.00</td>
<td align="left" valign="top">0.88</td>
<td align="left" valign="top">0.89</td>
</tr>
<tr>
<td align="left" valign="top">I have thought about how to implement EBIPs in my courses.</td>
<td align="left" valign="top"><bold>0.85*</bold></td>
<td align="left" valign="top"><bold>0.95*</bold></td>
<td align="left" valign="top">0.94</td>
<td align="left" valign="top">1.00</td>
<td align="left" valign="top">0.81</td>
<td align="left" valign="top">0.87</td>
</tr>
<tr>
<td align="left" valign="top">I have spent time learning about EBIPs (e.g., attended workshops, experimented in class, read education literature, learned from a colleague), and I am prepared to use EBIPs.</td>
<td align="left" valign="top"><bold>0.82*</bold></td>
<td align="left" valign="top"><bold>0.95*</bold></td>
<td align="left" valign="top">1.00</td>
<td align="left" valign="top">1.00</td>
<td align="left" valign="top">0.73</td>
<td align="left" valign="top">0.70</td>
</tr>
<tr>
<td align="left" valign="top">I consistently use EBIPs in my course.</td>
<td align="left" valign="top"><bold>0.72*</bold></td>
<td align="left" valign="top"><bold>0.85*</bold></td>
<td align="left" valign="top">0.82</td>
<td align="left" valign="top">0.88</td>
<td align="left" valign="top">0.64</td>
<td align="left" valign="top">0.64</td>
</tr>
<tr>
<td align="left" valign="top">I consistently use EBIPs in my course, and I continue to learn about and experiment with new EBIPs.</td>
<td align="left" valign="top"><bold>0.64*</bold></td>
<td align="left" valign="top"><bold>0.76*</bold></td>
<td align="left" valign="top">0.65</td>
<td align="left" valign="top">0.76</td>
<td align="left" valign="top">0.60</td>
<td align="left" valign="top">0.55</td>
</tr>
<tr>
<td align="left" valign="top">I have evidence that student outcomes have improved since I started using EBIPs.</td>
<td align="left" valign="top">0.42</td>
<td align="left" valign="top">0.44</td>
<td align="left" valign="top">0.41</td>
<td align="left" valign="top">0.41</td>
<td align="left" valign="top">0.35</td>
<td align="left" valign="top">0.33</td>
</tr>
<tr>
<td align="left" valign="top">Scale score</td>
<td align="left" valign="top"><bold>4.34*</bold></td>
<td align="left" valign="top"><bold>4.95*</bold></td>
<td align="left" valign="top">4.82</td>
<td align="left" valign="top">5.06</td>
<td align="left" valign="top">4.01</td>
<td align="left" valign="top">3.98</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn><p><italic>Note.</italic> For each item on the scale, the highest possible score is 1, indicating agreement with that item. Asterisks indicate a statistically significant difference between pre-test and post-test (<italic>p</italic> &lt; .001). Neither non-participants nor Tier 2 participants showed a statistically significant change on any individual EBIP adoption scale item.</p></fn>
</table-wrap-foot>
</table-wrap>
<p>We collected individual EBIP scores for any faculty member who took the survey between 2017 and 2020 (pre-training) and completed the survey in 2021 (post-training). Faculty without two survey responses available for use (a pre-training survey response and a post-training 2021 response) were excluded from analysis altogether, regardless of FTSS status. In order to compare pre-training and post-training EBIP adoption results, respondents to the 2021 survey were categorized into three different comparison groups: Tier 1 participants, Tier 2 participants, and non-participants.<xref ref-type="fn" rid="n7">7</xref> <xref ref-type="table" rid="T3">Table 3</xref> provides the number of faculty who completed the survey in 2021 for whom we have corresponding survey data from 2017 to 2020.</p>
</sec>
<sec>
<title><italic>Data Analysis</italic></title>
<p>To explore the differences in EBIP adoption after completing FTSS, we first compared pre-training and post-training EBIP scores between non-participants and FTSS participants (Tiers 1 and 2); we also compared scores between Tier 1 and Tier 2 participants and non-participants. Each of the six EBIP scale items was measured using a Guttman scale where affirmative (yes) responses were assigned a value of 1 and negative (no) responses were assigned a value of 0. Pre-training and post-training EBIP sum scores were calculated for each respondent by summing the individual values of the six scale items in the validated EBIP scale. Each respondent was assigned a calculated EBIP sum score between 0 and 6, with a higher EBIP score indicating a higher level of EBIP adoption. Non-parametric comparative analyses (Kruskal-Wallis tests) for small or uneven sample sizes were used to explore preexisting differences in total EBIP scores between groups. Next, comparative analyses were used to identify significant changes in scale sum scores and individual scale item scores between pre-training and post-training scores. For Tier 1 and non-participants, repeated measures <italic>t</italic> tests were conducted. For Tier 2, Wilcoxon signed-rank tests for non-parametric and small samples were conducted to account for the small sample size. Comparing differences between pre-training and post-training EBIP total scores allows for measurement of overall significant change. Identifying significant change within individual scale items allows for exploration of the individual items that are possibly driving any overall change in EBIP adoption scores.</p>
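A minimal sketch of the scoring and testing procedure described above, written in Python with SciPy. All respondent data are invented for illustration, and the helper `ebip_sum_score` is a hypothetical function, not part of the PERSIST instrument.

```python
# Hypothetical sketch of the analysis pipeline; data are invented.
from scipy import stats

# Each of the six EBIP items is Guttman-scaled: yes = 1, no = 0.
# The EBIP sum score is the count of affirmative responses (0-6).
def ebip_sum_score(items):
    return sum(items)

# Invented pre- and post-training item responses for five respondents.
pre  = [[1, 1, 1, 0, 0, 0],
        [1, 1, 0, 0, 0, 0],
        [1, 1, 1, 1, 0, 0],
        [1, 0, 0, 0, 0, 0],
        [1, 1, 1, 1, 1, 0]]
post = [[1, 1, 1, 1, 0, 0],
        [1, 1, 1, 0, 0, 0],
        [1, 1, 1, 1, 1, 1],
        [1, 1, 0, 0, 0, 0],
        [1, 1, 1, 1, 1, 1]]

pre_scores  = [ebip_sum_score(r) for r in pre]   # [3, 2, 4, 1, 5]
post_scores = [ebip_sum_score(r) for r in post]  # [4, 3, 6, 2, 6]

# Larger groups (Tier 1, non-participants): repeated measures t test.
t_res = stats.ttest_rel(pre_scores, post_scores)

# Small groups (Tier 2): Wilcoxon signed-rank test.
w_res = stats.wilcoxon(pre_scores, post_scores)

# Preexisting between-group differences: Kruskal-Wallis on pre scores.
nonparticipant_pre = [2, 3, 1, 4, 2, 3, 0, 5]  # invented comparison group
H, p_kw = stats.kruskal(pre_scores, nonparticipant_pre)
```

The signed-rank test is used in place of the paired <italic>t</italic> test for the smallest group because it does not assume normally distributed score differences, matching the rationale given above for Tier 2.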
</sec>
</sec>
<sec>
<title>Results</title>
<p>Our first research question asked if a professional development program designed as a response to the COVID-19 pandemic impacted faculty&#8217;s adoption of EBIPs. In comparing pre-training and post-training EBIP adoption scores between non-participants and FTSS participants (both Tier 1 and Tier 2), we found that there was no statistically significant difference in their pre-training scores. We did find a statistically significant increase in post-training mean EBIP adoption scores for the FTSS participants. The non-participants had no increase (<xref ref-type="fig" rid="F1">Figure 1</xref>). This suggests the programming did lead to increased adoption of EBIPs for FTSS participants overall, compared to non-participants.</p>
<fig id="F1">
<caption>
<p><italic>Figure 1. EBIP Mean Scale Scores Pre-training and Post-training</italic></p>
<p><italic>Note.</italic> Asterisks indicate statistically significant differences between pre-training and post-training mean scores as identified by a paired samples <italic>t</italic> test (<italic>t</italic>(92) = 3.76, <italic>p</italic> &lt; .01).</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="tia-5595_focarile-g1.png"/>
</fig>
<p>Since our analysis did indicate that the professional development positively impacted participants&#8217; EBIP adoption, we then investigated whether the level of intensity and commitment of the professional development opportunity impacted the amount of change in teaching practice. Comparative analysis indicated that after FTSS, Tier 1 participants reported a statistically significant increase in EBIP adoption scores (<italic>t</italic>(75) = &#8211;3.108, <italic>p</italic> = .001). Tier 2 participants also reported a higher EBIP adoption score, but the change was not statistically significant. Non-participants reported a lower EBIP adoption score, also not at a statistically significant level (<xref ref-type="fig" rid="F2">Figure 2</xref>). While Tier 1 and Tier 2 participants demonstrated higher preexisting EBIP scores than non-participants, these differences between groups were not statistically significant as determined by a Kruskal-Wallis test for small or unequal sample sizes (<italic>H</italic> = 3.02, <italic>p</italic> = .221).</p>
<fig id="F2">
<caption>
<p><italic>Figure 2. Pre-training and Post-training EBIP Mean Scale Scores</italic></p>
<p><italic>Note.</italic> Asterisks indicate statistically significant differences between pre-training and post-training mean scores (<italic>p</italic> &lt; .01).</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="tia-5595_focarile-g2.png"/>
</fig>
<p>We therefore conclude that the level of intensity of the programming led to differential outcomes for participants, as the more intensive programming (Tier 1) resulted in statistically significant changes in EBIP adoption scores, whereas the changes in EBIP adoption for the less intensive programming (Tier 2), though positive, were not statistically significant. However, it is important to note Tier 2 participants&#8217; pre-training EBIP adoption scores were the highest of the three groups (<xref ref-type="fig" rid="F2">Figure 2</xref>), limiting the amount of change that could be observed among this group of participants.</p>
<p>Since our analysis did indicate that there was a difference in EBIP adoption scores based on participation in each Tier, we completed non-parametric comparative analyses (Wilcoxon signed-rank tests) for individual items within the EBIP scale to explore potential significant changes in item scores by tier (<xref ref-type="table" rid="T4">Table 4</xref>). We found that five of the six EBIP adoption scale item scores were significantly increased for Tier 1 trainees after completing FTSS training: Item 1 (<italic>Z</italic> = &#8211;2.45, <italic>p</italic> = .01); Item 2 (<italic>Z</italic> = &#8211;2.54, <italic>p</italic> = .01); Item 3 (<italic>Z</italic> = &#8211;2.89, <italic>p</italic> = .004); Item 4 (<italic>Z</italic> = &#8211;2.89, <italic>p</italic> = .004); and Item 5 (<italic>Z</italic> = &#8211;2.07, <italic>p</italic> = .04).</p>
</sec>
<sec>
<title>Discussion</title>
<p>The first goal of this study was to determine if the intensive professional development that we offered in response to the COVID-19 pandemic had a positive impact on faculty&#8217;s adoption of EBIPs. The results of the study confirmed that FTSS participants showed a statistically significant increase in their EBIP adoption scores. This study supports the findings of other recent studies (<xref ref-type="bibr" rid="B10">Eddy et al., 2021</xref>; <xref ref-type="bibr" rid="B20">Perry, 2023</xref>) that found evidence of the impact of COVID-19-induced professional development opportunities on teaching practice.</p>
<p>The second goal of this study was to determine if the level of intensity and commitment of the professional development opportunity impacted the amount of change in teaching practice. Analysis of the data shows a statistically significant increase in Tier 1 participants&#8217; EBIP adoption scores in almost every item in the EBIP adoption scale. We did not see a statistically significant change in the Tier 2 participant scores. Though we cannot assume causation between the level of intensity of the professional development and its impact, given the evidence in previous studies about the deeper impact of more sustained faculty development efforts (<xref ref-type="bibr" rid="B4">Cardamone &amp; Dwyer, 2023</xref>; <xref ref-type="bibr" rid="B7">Condon et al., 2016</xref>; <xref ref-type="bibr" rid="B21">Wheeler &amp; Bach, 2021</xref>), a bigger change in practice for participants who spent more time in FTSS is not unexpected.</p>
<p>In addition to Tier 2 being less intensive, another potential reason Tier 2 participants did not demonstrate a statistically significant change in EBIP adoption scores is the prior knowledge of this group. The item-level changes observed for Tier 1 participants suggest the programming was effective at increasing both awareness and use of EBIPs, leading to statistically significant results. In comparison, all Tier 2 participants (<italic>n</italic> = 17) were already aware of EBIPs and had spent time learning about them. The data suggest that there was some increased use of and continued learning about EBIPs among Tier 2 participants, which contributed to the change in the overall mean EBIP adoption score for this group. However, the changes lack statistical significance. This finding might indicate that trainings are most effective for those who begin with a lower level of knowledge of or experience with the topic, which could be an area for future research.</p>
</sec>
<sec>
<title>Limitations</title>
<p>One limitation of this study is that the number of participants, specifically in Tier 2, is low. The group sizes were also disproportionate. While this can raise concerns about the statistical validity of the results, few studies currently provide scholarly evidence of the impact of the work done by educational developers at an institutional scale. Future research should focus on finding additional evidence of the institutional impacts of this type of work.</p>
<p>Another limitation is that the PERSIST survey that produced the data analyzed in this study was not specifically developed to find evidence of the impact of the FTSS program; the EBIP adoption scale was only one of three batteries deployed in the survey.</p>
<p>Additionally, the study&#8217;s results are based on faculty self-reported data, which has been questioned as a reliable assessment tool in past research (e.g., <xref ref-type="bibr" rid="B9">Ebert-May et al., 2011</xref>). However, Durham et al. (<xref ref-type="bibr" rid="B8">2017</xref>) found that when investigating the frequency of scientific teaching practices as reported by students, instructors, and observers, answers reported by faculty closely matched those of a third-party observer.</p>
<p>Finally, a factor that we could not account for in our research was the influence of the pandemic. How much of faculty&#8217;s change in practice can be attributed to the fact that the COVID-19 pandemic forced them to change? What we can say is that participants in Tier 1 did show a positive change in their EBIP adoption scores, while non-participants showed no positive change (and a non-statistically significant drop). This finding could indicate that the pandemic had a neutral or negative impact on faculty adoption of EBIPs <italic>unless</italic> they participated in FTSS, but certainly other factors (e.g., caring for children/loved ones while teaching, difficulty teaching certain content using remote/online learning, the instructor&#8217;s own health) could have played a role.</p>
</sec>
<sec>
<title>Conclusion</title>
<p>Through this study we found that participation in a professional development opportunity designed as a response to the COVID-19 pandemic positively impacted faculty adoption of EBIPs. We also found that the level of intensity and commitment of the professional development opportunity impacted the amount of change in teaching practice. Faculty who participated in the Tier 1 training showed an increase in their EBIP adoption scores, whereas Tier 2 participants did not report a significant increase in scores. This could be because Tier 2 was a less intense training and/or because Tier 2 participants began FTSS with higher knowledge of, and experience using, EBIPs. Our study informs the discussion of the impact of CTL work and how it can evolve in the future. It also bolsters support for the professional development work being done in CTLs. Lastly, the data collected for this study is part of an extensive longitudinal data set, which offers many more opportunities to explore the impact of CTL professional development work over time. While long-term data collection may be time- and labor-intensive, it provides opportunities for showcasing the impact of our professional development work.</p>
</sec>
</body>
<back>
<sec>
<title>Biographies</title>
<p><bold>Teresa Focarile</bold> is Director of Educational Development for the Boise State University Center for Teaching and Learning. Her scholarly work has focused on how educational developers can support institutional efforts such as program assessment and concurrent enrollment as well as designing programs for adjunct faculty. She has taught at the college level for 18 years, the past 12 for Boise State University, and the previous 6 for the University of Connecticut.</p>
<p><bold>Sarah Lausch</bold> is an Educational Development Consultant for the Boise State University Center for Teaching and Learning. Her research projects have focused on developing strategies to address impostor feelings in students, implement mindful teaching practices, and support students&#8217; journeys toward self-authorship. For the CTL, Sarah manages the Mid-Semester Assessment Protocol program and the Course Design Academy. She has taught classes in various departments at Boise State University for 5 years.</p>
<p><bold>Meagan Haynes</bold> is an independent researcher specializing in evaluating education-focused initiatives. She collaborates with several universities and national funding organizations to design, implement, and assess programs that improve both student and community outcomes. Prior publications focus on effective approaches to advancing educational practices, including social-emotional learning strategies, enhancing access to and efficacy of learning materials, and improving outcomes for diverse student populations.</p>
<p><bold>Brittnee Earl</bold> is Program Coordinator for the Boise State University Center for Teaching and Learning. She has provided support and oversight for several National Science Foundation grants administered through the CTL and also contributes to institutional initiatives and research focused on improving undergraduate education. Her scholarly work has focused on institutional change and STEM education reform.</p>
<p><bold>Susan Shadle</bold> serves as the Vice Provost for Undergraduate Studies and Distinguished Professor of Chemistry at Boise State University. Previously, she served as the Executive Director of the Center for Teaching and Learning at Boise State University from 2006 to 2020. Her scholarly work has focused on understanding and facilitating institutional change, with a particular focus in STEM education reform and creating environments that foster student success.</p>
<p><bold>Lisa Berry</bold> is Associate Director of Instructional Design Services for the eCampus Center at Boise State University. Her focus is on assisting faculty to provide students successful online learning experiences. She has taught both in person and online at all levels from middle school to the graduate level.</p>
</sec>
<fn-group>
<fn id="n1"><p>Like Linse and Hood (<xref ref-type="bibr" rid="B18">2022</xref>), we use the term <italic>CTLs</italic> to be &#8220;inclusive of centers of one (or less than one), medium and large centers, as well as units that encompass educational technology or student support&#8221; (p. 6).</p></fn>
<fn id="n2"><p>The use of &#8220;we&#8221; in this paper therefore refers to the partnership between these two units.</p></fn>
<fn id="n3"><p>WIDER-PERSIST was funded through the National Science Foundation, #DUE-1347830.</p></fn>
<fn id="n4"><p>Another reason for not including Tier 3 was that this tier focused on providing resources that faculty could use any time. While we could determine the number of faculty who accessed those materials (through website analytics), we can&#8217;t know which faculty accessed them, so there would be no way to connect resource usage and EBIP scores.</p></fn>
<fn id="n5"><p>For this study, non-participants are defined as those who completed the PERSIST survey both between 2017&#8211;2020 and in 2021 but did not participate in Tier 1 or 2 of FTSS.</p></fn>
<fn id="n6"><p>The EBIP adoption scale was validated as part of the development process, as described in Landrum et al. (<xref ref-type="bibr" rid="B17">2017</xref>). The other batteries of the PERSIST survey, Teaching Practices Inventory and Current Instructional Climate Survey, did not align with the outcomes of FTSS and therefore were not included in our study.</p></fn>
<fn id="n7"><p>We separately categorized Tier 1 and Tier 2 participants in order to identify possible outcome differences between the two forms of training.</p></fn>
</fn-group>
<sec>
<title>Acknowledgments</title>
<p>The authors acknowledge the time and effort spent by the 44 faculty and staff who designed and/or facilitated activities in FTSS. The impact of this work could not have happened without them.</p>
<p>This material is partially based upon work supported by the National Science Foundation under grant #DUE-1347830.</p>
</sec>
<sec>
<title>Conflict of Interest Statement</title>
<p>The authors have no conflicts of interest to declare.</p>
</sec>
<sec>
<title>Data Availability</title>
<p>The data reported in this manuscript are available on request by contacting the corresponding author.</p>
</sec>
<ref-list>
<ref id="B1"><mixed-citation publication-type="book"><string-name><surname>Beach</surname>, <given-names>A. L.</given-names></string-name>, <string-name><surname>Sorcinelli</surname>, <given-names>M. D.</given-names></string-name>, <string-name><surname>Austin</surname>, <given-names>A. E.</given-names></string-name>, &amp; <string-name><surname>Rivard</surname>, <given-names>J. K.</given-names></string-name> (<year>2016</year>). <source>Faculty development in the age of evidence: Current practices, future imperatives</source>. <publisher-name>Stylus Publishing</publisher-name>.</mixed-citation></ref>
<ref id="B2"><mixed-citation publication-type="webpage"><string-name><surname>Bose</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Berry</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Nyland</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Saba</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Focarile</surname>, <given-names>T.</given-names></string-name> (<year>2020</year>). <article-title>Flexible teaching for student success: A three-tiered initiative to prepare faculty for flexible teaching</article-title>. <source>Journal on Centers for Teaching and Learning</source>, <volume>12</volume>, <fpage>87</fpage>&#8211;<lpage>135</lpage>. <uri>https://openjournal.lib.miamioh.edu/index.php/jctl/article/view/211</uri></mixed-citation></ref>
<ref id="B3"><mixed-citation publication-type="webpage"><string-name><surname>Bose</surname>, <given-names>D.</given-names></string-name>, &amp; <string-name><surname>Nyland</surname>, <given-names>R.</given-names></string-name> (<year>2021</year>). <article-title>Are faculty prepared to teach flexibly? Results from an evaluation study</article-title>. <source>Journal on Centers for Teaching and Learning</source>, <volume>13</volume>, <fpage>60</fpage>&#8211;<lpage>91</lpage>. <uri>https://openjournal.lib.miamioh.edu/index.php/jctl/article/view/224/127</uri></mixed-citation></ref>
<ref id="B4"><mixed-citation publication-type="journal"><string-name><surname>Cardamone</surname>, <given-names>C. N.</given-names></string-name>, &amp; <string-name><surname>Dwyer</surname>, <given-names>H.</given-names></string-name> (<year>2023</year>). <article-title>A mixed methods study of faculty experiences in a course design institute</article-title>. <source>To Improve the Academy: A Journal of Educational Development</source>, <volume>42</volume>(<issue>1</issue>), <elocation-id>8</elocation-id>. <pub-id pub-id-type="doi">10.3998/tia.2108</pub-id></mixed-citation></ref>
<ref id="B5"><mixed-citation publication-type="journal"><string-name><surname>Chalmers</surname>, <given-names>D.</given-names></string-name>, &amp; <string-name><surname>Gardiner</surname>, <given-names>D.</given-names></string-name> (<year>2015</year>). <article-title>An evaluation framework for identifying the effectiveness and impact of academic teacher development programmes</article-title>. <source>Studies in Educational Evaluation</source>, <volume>46</volume>, <fpage>81</fpage>&#8211;<lpage>91</lpage>. <pub-id pub-id-type="doi">10.1016/j.stueduc.2015.02.002</pub-id></mixed-citation></ref>
<ref id="B6"><mixed-citation publication-type="journal"><string-name><surname>Chism</surname>, <given-names>N. V. N.</given-names></string-name>, <string-name><surname>Holley</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Harris</surname>, <given-names>C. J.</given-names></string-name> (<year>2012</year>). <article-title>Researching the impact of educational development: Basis for informed practice</article-title>. <source>To Improve the Academy: A Journal of Educational Development</source>, <volume>31</volume>(<issue>1</issue>), <fpage>129</fpage>&#8211;<lpage>145</lpage>. <pub-id pub-id-type="doi">10.1002/j.2334-4822.2012.tb00678.x</pub-id></mixed-citation></ref>
<ref id="B7"><mixed-citation publication-type="book"><string-name><surname>Condon</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Iverson</surname>, <given-names>E. R.</given-names></string-name>, <string-name><surname>Manduca</surname>, <given-names>C. A.</given-names></string-name>, <string-name><surname>Rutz</surname>, <given-names>C.</given-names></string-name>, &amp; <string-name><surname>Willett</surname>, <given-names>G.</given-names></string-name> (<year>2016</year>). <source>Faculty development and student learning: Assessing the connections</source>. <publisher-name>Indiana University Press</publisher-name>.</mixed-citation></ref>
<ref id="B8"><mixed-citation publication-type="journal"><string-name><surname>Durham</surname>, <given-names>M. F.</given-names></string-name>, <string-name><surname>Knight</surname>, <given-names>J. K.</given-names></string-name>, &amp; <string-name><surname>Couch</surname>, <given-names>B. A.</given-names></string-name> (<year>2017</year>). <article-title>Measurement Instrument for Scientific Teaching (MIST): A tool to measure the frequencies of research-based teaching practices in undergraduate science courses</article-title>. <source>CBE&#8212;Life Sciences Education</source>, <volume>16</volume>(<issue>4</issue>). <pub-id pub-id-type="doi">10.1187/cbe.17-02-0033</pub-id></mixed-citation></ref>
<ref id="B9"><mixed-citation publication-type="journal"><string-name><surname>Ebert-May</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Derting</surname>, <given-names>T. L.</given-names></string-name>, <string-name><surname>Hodder</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Momsen</surname>, <given-names>J. L.</given-names></string-name>, <string-name><surname>Long</surname>, <given-names>T. M.</given-names></string-name>, &amp; <string-name><surname>Jardeleza</surname>, <given-names>S. E.</given-names></string-name> (<year>2011</year>). <article-title>What we say is not what we do: Effective evaluation of faculty professional development programs</article-title>. <source>BioScience</source>, <volume>61</volume>(<issue>7</issue>), <fpage>550</fpage>&#8211;<lpage>558</lpage>. <pub-id pub-id-type="doi">10.1525/bio.2011.61.7.9</pub-id></mixed-citation></ref>
<ref id="B10"><mixed-citation publication-type="journal"><string-name><surname>Eddy</surname>, <given-names>P. L.</given-names></string-name>, <string-name><surname>Macdonald</surname>, <given-names>R. H.</given-names></string-name>, &amp; <string-name><surname>Baer</surname>, <given-names>E. M. D.</given-names></string-name> (<year>2021</year>). <article-title>Professional development during a crisis and beyond: Lessons learned during COVID</article-title>. <source>New Directions for Community Colleges</source>, <volume>2021</volume>(<issue>195</issue>), <fpage>199</fpage>&#8211;<lpage>212</lpage>. <pub-id pub-id-type="doi">10.1002/cc.20477</pub-id></mixed-citation></ref>
<ref id="B11"><mixed-citation publication-type="journal"><string-name><surname>Groccia</surname>, <given-names>J. E.</given-names></string-name>, &amp; <string-name><surname>Buskist</surname>, <given-names>W.</given-names></string-name> (<year>2011</year>). <article-title>Need for evidence-based teaching</article-title>. <source>New Directions for Teaching and Learning</source>, <volume>2011</volume>(<issue>128</issue>), <fpage>5</fpage>&#8211;<lpage>11</lpage>. <pub-id pub-id-type="doi">10.1002/tl.463</pub-id></mixed-citation></ref>
<ref id="B12"><mixed-citation publication-type="journal"><string-name><surname>Hines</surname>, <given-names>S. R.</given-names></string-name> (<year>2011</year>). <article-title>How mature teaching and learning centers evaluate their services</article-title>. <source>To Improve the Academy: A Journal of Educational Development</source>, <volume>30</volume>(<issue>1</issue>), <fpage>277</fpage>&#8211;<lpage>289</lpage>. <pub-id pub-id-type="doi">10.1002/j.2334-4822.2011.tb00663.x</pub-id></mixed-citation></ref>
<ref id="B13"><mixed-citation publication-type="journal"><string-name><surname>Hines</surname>, <given-names>S. R.</given-names></string-name> (<year>2017</year>). <article-title>Evaluating centers for teaching and learning: A field-tested model</article-title>. <source>To Improve the Academy: A Journal of Educational Development</source>, <volume>36</volume>(<issue>2</issue>), <fpage>89</fpage>&#8211;<lpage>100</lpage>. <pub-id pub-id-type="doi">10.3998/tia.17063888.0036.202</pub-id></mixed-citation></ref>
<ref id="B14"><mixed-citation publication-type="journal"><string-name><surname>Hurney</surname>, <given-names>C. A.</given-names></string-name>, <string-name><surname>Brantmeier</surname>, <given-names>E. J.</given-names></string-name>, <string-name><surname>Good</surname>, <given-names>M. R.</given-names></string-name>, <string-name><surname>Harrison</surname>, <given-names>D.</given-names></string-name>, &amp; <string-name><surname>Meixner</surname>, <given-names>C.</given-names></string-name> (<year>2016</year>). <article-title>The faculty learning outcome assessment framework</article-title>. <source>The Journal of Faculty Development</source>, <volume>30</volume>(<issue>2</issue>), <fpage>69</fpage>&#8211;<lpage>77</lpage>.</mixed-citation></ref>
<ref id="B15"><mixed-citation publication-type="journal"><string-name><surname>Kreber</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Brook</surname>, <given-names>P.</given-names></string-name>, &amp; <collab>Educational Policy</collab>. (<year>2001</year>). <article-title>Impact evaluation of educational development programmes</article-title>. <source>International Journal for Academic Development</source>, <volume>6</volume>(<issue>2</issue>), <fpage>96</fpage>&#8211;<lpage>108</lpage>. <pub-id pub-id-type="doi">10.1080/13601440110090749</pub-id></mixed-citation></ref>
<ref id="B16"><mixed-citation publication-type="journal"><string-name><surname>Kucsera</surname>, <given-names>J. V.</given-names></string-name>, &amp; <string-name><surname>Svinicki</surname>, <given-names>M.</given-names></string-name> (<year>2010</year>). <article-title>Rigorous evaluations of faculty development programs</article-title>. <source>The Journal of Faculty Development</source>, <volume>24</volume>(<issue>2</issue>), <fpage>5</fpage>&#8211;<lpage>18</lpage>.</mixed-citation></ref>
<ref id="B17"><mixed-citation publication-type="journal"><string-name><surname>Landrum</surname>, <given-names>R. E.</given-names></string-name>, <string-name><surname>Viskupic</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Shadle</surname>, <given-names>S. E.</given-names></string-name>, &amp; <string-name><surname>Bullock</surname>, <given-names>D.</given-names></string-name> (<year>2017</year>). <article-title>Assessing the STEM landscape: The current instructional climate survey and the evidence-based instructional practices adoption scale</article-title>. <source>International Journal of STEM Education</source>, <volume>4</volume>. <pub-id pub-id-type="doi">10.1186/s40594-017-0092-1</pub-id></mixed-citation></ref>
<ref id="B18"><mixed-citation publication-type="journal"><string-name><surname>Linse</surname>, <given-names>A. R.</given-names></string-name>, &amp; <string-name><surname>Hood</surname>, <given-names>L. N.</given-names></string-name> (<year>2022</year>). <article-title>Building a strategic plan that guides assessment: A case study from a teaching and learning center</article-title>. <source>Journal on Centers for Teaching and Learning</source>, <volume>14</volume>, <fpage>4</fpage>&#8211;<lpage>38</lpage>.</mixed-citation></ref>
<ref id="B19"><mixed-citation publication-type="webpage"><string-name><surname>Madsen</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Focarile</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Souza</surname>, <given-names>T.</given-names></string-name>, &amp; <string-name><surname>Berry</surname>, <given-names>L.</given-names></string-name> (<year>2021</year>, <month>April</month> <day>1</day>). <article-title>FLIPping the script on course design: Integrating UDL and student centeredness into the course design table</article-title>. <source>Academic Impressions</source>. <uri>https://www.academicimpressions.com/blog/flipping-the-script-on-course-design</uri></mixed-citation></ref>
<ref id="B20"><mixed-citation publication-type="journal"><string-name><surname>Perry</surname>, <given-names>E.</given-names></string-name> (<year>2023</year>). <article-title>Teacher professional development in changing circumstances: The impact of COVID-19 on schools&#8217; approaches to professional development</article-title>. <source>Education Sciences</source>, <volume>13</volume>(<issue>1</issue>), <elocation-id>48</elocation-id>. <pub-id pub-id-type="doi">10.3390/educsci13010048</pub-id></mixed-citation></ref>
<ref id="B21"><mixed-citation publication-type="journal"><string-name><surname>Wheeler</surname>, <given-names>L. B.</given-names></string-name>, &amp; <string-name><surname>Bach</surname>, <given-names>D.</given-names></string-name> (<year>2021</year>). <article-title>Understanding the impact of educational development interventions on classroom instruction and student success</article-title>. <source>International Journal for Academic Development</source>, <volume>26</volume>(<issue>1</issue>), <fpage>24</fpage>&#8211;<lpage>40</lpage>. <pub-id pub-id-type="doi">10.1080/1360144X.2020.1777555</pub-id></mixed-citation></ref>
</ref-list>
</back>
</article>