<?xml version="1.0" encoding="utf-8"?>
<article xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="JATS-journalpublishing1-mathml3.xsd" dtd-version="1.2" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">tia</journal-id>
<journal-title-group>
<journal-title>To Improve the Academy: A Journal of Educational Development</journal-title>
</journal-title-group>
<issn pub-type="epub"></issn>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">926</article-id>
<article-id pub-id-type="manuscript">data-driven-iterative-refinements_cleanedup.docx</article-id>
<article-id pub-id-type="doi">10.3998/tia.926</article-id>
<title-group>
<article-title>Data-Driven Iterative Refinements to Educational Development Services: Directly Measuring the Impacts of Consultations on Course and Syllabus Design</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes" equal-contrib="yes">
<name>
<surname>Hershock</surname>
<given-names>Chad</given-names>
</name>
<email>hershock@andrew.cmu.edu</email>
<xref rid="aff1" ref-type="aff"/>
<xref rid="bio1" ref-type="bio"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Pottmeyer</surname>
<given-names>Laura Ochs</given-names>
</name>
<email>lpottmey@andrew.cmu.edu</email>
<xref rid="aff1" ref-type="aff"/>
<xref rid="bio2" ref-type="bio"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Harrell</surname>
<given-names>Jessica</given-names>
</name>
<email>jbharrel@andrew.cmu.edu</email>
<xref rid="aff1" ref-type="aff"/>
<xref rid="bio3" ref-type="bio"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>le Blanc</surname>
<given-names>Sophie</given-names>
</name>
<email>sleblanc@andrew.cmu.edu</email>
<xref rid="aff1" ref-type="aff"/>
<xref rid="bio4" ref-type="bio"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Rodriguez</surname>
<given-names>Marisella</given-names>
</name>
<email>marisella@berkeley.edu</email>
<xref rid="aff2" ref-type="aff"/>
<xref rid="bio5" ref-type="bio"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Stimson</surname>
<given-names>Jacqueline</given-names>
</name>
<email>jstimson@andrew.cmu.edu</email>
<xref rid="aff1" ref-type="aff"/>
<xref rid="bio6" ref-type="bio"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Walsh</surname>
<given-names>Katharine Phelps</given-names>
</name>
<email>kpwalsh@andrew.cmu.edu</email>
<xref rid="aff1" ref-type="aff"/>
<xref rid="bio7" ref-type="bio"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Weiss</surname>
<given-names>Emily Daniels</given-names>
</name>
<email>eedaniel@andrew.cmu.edu</email>
<xref rid="aff1" ref-type="aff"/>
<xref rid="bio8" ref-type="bio"/>
</contrib>
</contrib-group>
<aff id="aff1">
<institution>Carnegie Mellon University</institution>
<institution content-type="position"></institution>
<institution content-type="dept"></institution>
<addr-line content-type="addrline1"></addr-line>
<country></country>
<addr-line content-type="city"></addr-line>
<addr-line content-type="zipcode"></addr-line>
<phone content-type="primary"></phone>
</aff>
<aff id="aff2">
<institution>University of California, Berkeley</institution>
<institution content-type="position"></institution>
<institution content-type="dept"></institution>
<addr-line content-type="addrline1"></addr-line>
<country></country>
<addr-line content-type="city"></addr-line>
<addr-line content-type="zipcode"></addr-line>
<phone content-type="primary"></phone>
</aff>
<pub-date>
<day>31</day>
<month>10</month>
<year>2022</year>
</pub-date>
<volume>41</volume>
<issue>2</issue>
<history>
<date date-type="received">
<day></day>
<month></month>
<year></year>
</date>
<date date-type="rev-recd">
<day></day>
<month></month>
<year></year>
</date>
<date date-type="accepted">
<day></day>
<month></month>
<year></year>
</date>
</history>
<permissions>
<license><license-p>CC BY-NC-ND 4.0</license-p></license>
</permissions>
<abstract id="ABS1">
<p id="P1">Evidence-based practice in educational development includes leveraging data to iteratively refine center for teaching and learning (CTL) services. However, CTL data collection is often limited to counts and satisfaction surveys rather than direct measures of outcomes. To directly assess impacts of consultations on course and syllabus design, we analyzed 94 clients&#x2019; syllabi (32 faculty, 62 graduate students and postdocs) before and after consultations. Faculty and non-faculty clients demonstrated significant change following consultations (6% and 10% gains in syllabus rubric scores, representing 50% and 31% of possible gains and effect sizes of 0.73 and 1.04 standard deviations, respectively). We compared faculty clients to quasi-experimental control groups that did not receive consultations. Syllabi from non-clients scored lower and did not demonstrate similar changes across semesters. Attendance at a CTL seminar on course and syllabus design did not explain variation in clients&#x2019; syllabi. We discuss implications for assessment of CTL services and how we leveraged formative assessments to inform and iteratively refine our educational development practices.</p>
</abstract>
<kwd-group>
<kwd>educational development</kwd>
<kwd>formative outcomes assessment</kwd>
<kwd>centers for teaching and learning</kwd>
</kwd-group>
<funding-group />
<counts>
<fig-count count="3" />
</counts>
<custom-meta-group>
<custom-meta id="competing-interest">
<meta-name></meta-name>
<meta-value></meta-value>
</custom-meta>
</custom-meta-group>
</article-meta>
</front>
<body>
<p id="P2">How can centers for teaching and learning (CTLs) measure the outcomes of their services to inform their educational development practices? For example, CTL services frequently disseminate effective course and syllabus design (C&amp;SD) principles (e.g., see <xref rid="R1" ref-type="bibr">Ambrose et al., 2010</xref>; <xref rid="R14" ref-type="bibr">Fink, 2013</xref>; <xref rid="R35" ref-type="bibr">Wiggins &amp; McTighe, 2005</xref>). Common approaches include one-on-one consultations, stand-alone seminars, orientation events, multi-day institutes, and/or web resources. Our CTL allocates significant resources to all of the above but disproportionately more to consultations. Naturally, we wondered whether our consultation services enhance clients&#x2019; C&amp;SDs. Furthermore, how could we leverage outcomes assessments to continuously improve our educational development practices? This article describes how we used syllabi analyses as direct outcomes measures to formatively assess and reflect on our C&amp;SD consultation services. It also helps fill a gap in the scholarship of educational development (SoED) literature on outcomes assessment.</p>
<p id="P3">Scholarship on assessing CTL impacts identifies multiple possible levels of outcomes analyses (<xref rid="R36" ref-type="bibr">Haras et al., 2017</xref>; <xref rid="R17" ref-type="bibr">Hines, 2015</xref>; <xref rid="R19" ref-type="bibr">Kreber et al., 2001</xref>):
<list list-type="order" id="L1">
<list-item><p id="P4">counts of instructors served;</p></list-item>
<list-item><p id="P5">instructor satisfaction with CTL services;</p></list-item>
<list-item><p id="P6">instructor beliefs about teaching and learning;</p></list-item>
<list-item><p id="P7">instructor perceptions of institutional culture;</p></list-item>
<list-item><p id="P8">instructor learning gains;</p></list-item>
<list-item><p id="P9">instructor course and syllabus designs;</p></list-item>
<list-item><p id="P10">instructor teaching practices during and between class sessions; and</p></list-item>
<list-item><p id="P11">student outcomes, including learning, persistence, or attitudes.</p></list-item>
</list></p>
<p id="P12">Previous theoretical and empirical SoED aggregates Outcomes 6 and 7 broadly as teaching behaviors or practices. We intentionally separate and delineate these outcomes because CTL services may target one or both, and different data sources may be required for each.</p>
<p id="P13">Typical CTL evaluation practices focus on Outcomes 1 and 2 (<xref rid="R2" ref-type="bibr">Beach et al., 2016</xref>; <xref rid="R36" ref-type="bibr">Haras et al., 2017</xref>; <xref rid="R16" ref-type="bibr">Hines, 2011</xref>). These data are easy to collect and can indicate the reach and relevance of CTL services. Survey data can provide useful formative feedback (e.g., perceived strengths and weaknesses of services and suggestions for change). Nevertheless, neither client counts nor typical CTL evaluation forms <italic>directly</italic> measure educational development impacts. For instance, feedback surveys may ask instructors to self-report changes regarding Outcome 7. Unfortunately, studies comparing instructors&#x2019; self-reports to classroom observations suggest self-reports are unreliable as indirect measures of the impacts of professional development programs (<xref rid="R9" ref-type="bibr">Ebert-May et al., 2011</xref>). Similarly, self-reports of confidence or perceived ability to implement evidence-based practices encountered via CTL services are indirect measures of outcomes. They do not directly measure changes in knowledge, skill, or practice. Because counts, satisfaction data, and self-reports alone fail to adequately inform how best to iteratively refine CTL services or fully demonstrate the value added by CTLs, recent reports argue for incorporating more and broader data sources directly measuring educational development outcomes (<xref rid="R2" ref-type="bibr">Beach et al., 2016</xref>; <xref rid="R29" ref-type="bibr">POD Network, 2018</xref>). Historically, SoED studies omit Outcomes 5&#x2013;8 above (<xref rid="R4" ref-type="bibr">Chism et al., 2012</xref>; <xref rid="R32" ref-type="bibr">Stes et al., 2010</xref>; but for recent exceptions, see <xref rid="R26" ref-type="bibr">Palmer et al., 2016</xref>; <xref rid="R33" ref-type="bibr">Tomkin et al., 2019</xref>; <xref rid="R34" ref-type="bibr">Wheeler &amp; Bach, 2021</xref>).
Here, we focus on CTL outcomes analyses regarding changes in instructor course designs as documented in syllabi (Outcome 6). Our study also illustrates our use of syllabi as data sources to formatively inform and refine educational development practices.</p>
<p id="P14">Syllabi analyses may reveal the current state of C&amp;SD elements, such as learning objectives, assessments, pedagogical methods, course policies, and alignment among course design features (e.g., <xref rid="R6" ref-type="bibr">Cullen &amp; Harris, 2009</xref>; <xref rid="R7" ref-type="bibr">Doolittle &amp; Siudzinski, 2010</xref>; <xref rid="R18" ref-type="bibr">Homa et al., 2013</xref>; <xref rid="R22" ref-type="bibr">McGowan et al., 2016</xref>; <xref rid="R26" ref-type="bibr">Palmer et al., 2016</xref>; <xref rid="R31" ref-type="bibr">Stanny et al., 2015</xref>). Syllabi may also provide data on teaching and learning constructs that are not contextually dependent, such as time on task and level of expectations (<xref rid="R3" ref-type="bibr">Campbell et al., 2019</xref>) or learner centeredness (<xref rid="R26" ref-type="bibr">Palmer et al., 2016</xref>). Syllabi can be readily compared across CTL clients, regardless of their disciplines and teaching contexts. Of course, syllabi have limitations as data sources. Instructors may use practices not documented in syllabi or fail to use practices documented in syllabi. For classroom teaching practices, specifically, another data source, such as an observation or analyses of other teaching artifacts, may be more appropriate. Regardless, educational developers can still gain valuable insights from syllabi analyses regarding impacts of CTL services, especially those targeting elements of C&amp;SD (see above) rather than, or in addition to, Outcome 7 above. Furthermore, when services target adoption of evidence-based teaching strategies, the work of explicitly and intentionally articulating related C&amp;SD elements in writing can be an important step. It can support instructor commitment to adoption and foster reflection on implementation, especially if instructors perceive syllabi as a public agreement with students. 
Additionally, syllabi analyses are amenable to pre/post analyses of interventions and to outcomes assessment at scale, especially when resources are limited.</p>
<p id="P15">Surprisingly, few studies analyze syllabi to measure the impact of an intervention on instructor C&amp;SD (<xref rid="R4" ref-type="bibr">Chism et al., 2012</xref>; <xref rid="R32" ref-type="bibr">Stes et al., 2010</xref>). <xref rid="R25" ref-type="bibr">Palmer et al. (2014)</xref> developed and validated a rubric to measure the degree to which syllabi are learning centered. Researchers used this rubric to compare changes in syllabi before and after a week-long course design institute (<xref rid="R26" ref-type="bibr">Palmer et al., 2016</xref>). This approach illustrates the value of capturing pre/post measurements. However, including a comparison group (e.g., instructors not enrolled in the program) would strengthen inferences regarding causality. Research on CTL impacts rarely includes comparison groups (<xref rid="R4" ref-type="bibr">Chism et al., 2012</xref>; <xref rid="R32" ref-type="bibr">Stes et al., 2010</xref>; but see <xref rid="R23" ref-type="bibr">Meizlish et al., 2018</xref>; <xref rid="R34" ref-type="bibr">Wheeler &amp; Bach, 2021</xref>).</p>
<p id="P16">Our study directly assesses the impact of a CTL consultation service on C&amp;SD as documented in syllabi via pre/post analyses and includes a comparison group. We analyzed syllabi from 94 clients (32 faculty, 54 graduate students, and eight postdocs) before and after each received a CTL consultation on course and syllabus design. Using our institution&#x2019;s syllabus registry, we generated two quasi-experimental comparison groups of faculty who did not receive a consultation: (a) individual course syllabi from 32 faculty members, and (b) pairs of syllabi from 10 faculty members who taught the same course in consecutive semesters. We matched all faculty client and comparison group syllabi on discipline and course level and, when possible, employment track and faculty rank. Our analyses included data on prior attendance at CTL C&amp;SD seminars to statistically &#x201C;control&#x201D; for seminar effects on pre-/post-consultation data.</p>
<sec id="S1">
<title>Research Questions and Significance of Study</title>
<p id="P17">Our study contributes to the literature on CTL outcomes assessments by exploring the following research questions. To what extent:
<list list-type="order" id="L2">
<list-item><p id="P18">do CTL consultation services influence instructor course design, as documented in their course syllabi?</p></list-item>
<list-item><p id="P19">do the impacts of CTL consultations on C&amp;SD differ between faculty and graduate student/postdoc clients?</p></list-item>
<list-item><p id="P20">does participation in a CTL C&amp;SD seminar, prior to a CTL consultation, influence:
<list list-type="alpha-lower" id="L3">
<list-item><p id="P21">instructors&#x2019; C&amp;SD practices, as documented in their pre-consultation syllabi?</p></list-item>
<list-item><p id="P22">the impact of a consultation, as documented by pre/post changes in syllabi?</p></list-item>
</list></p></list-item>
</list></p>
<p id="P23">These questions are relevant for educational developers for several reasons. First, our study helps fill two well-documented gaps in the literature: the need for (a) additional evaluations of the impacts of CTL services beyond counts of clients and satisfaction data (<xref rid="R2" ref-type="bibr">Beach et al., 2016</xref>; <xref rid="R36" ref-type="bibr">Haras et al., 2017</xref>; <xref rid="R20" ref-type="bibr">Kucsera &amp; Svinicki, 2010</xref>) and (b) assessments of CTL impacts using direct outcomes measures, pre/post data, and comparison groups (<xref rid="R4" ref-type="bibr">Chism et al., 2012</xref>; <xref rid="R32" ref-type="bibr">Stes et al., 2010</xref>; <xref rid="R34" ref-type="bibr">Wheeler &amp; Bach, 2021</xref>). Second, we compare CTL impacts between current and future faculty. Some CTLs offer consultation services to both constituencies, such as consultations on C&amp;SD, classroom observations of teaching, or Small Group Instructional Diagnosis (SGID; see <xref rid="R12" ref-type="bibr">Finelli et al., 2008</xref>; <xref rid="R13" ref-type="bibr">Finelli et al., 2011</xref>). However, scholarship comparing impacts across types of clients is lacking (<xref rid="R4" ref-type="bibr">Chism et al., 2012</xref>; <xref rid="R32" ref-type="bibr">Stes et al., 2010</xref>). Action research (<xref rid="R15" ref-type="bibr">Hershock et al., 2011</xref>) regarding how each constituency responds to consultation services can help CTLs tailor their service models to best meet clients&#x2019; needs. Third, we provide a data-driven model for how to formatively evaluate and iteratively refine the delivery of CTL services.</p>
</sec>
<sec id="S2">
<title>Methods</title>
<p id="P24">We conducted this study at the Eberly Center for Teaching Excellence and Educational Innovation, the CTL at Carnegie Mellon University (CMU). CMU is a research-intensive, private institution with approximately 14,000 graduate and undergraduate students and 1,400 faculty. Consultations represent a large component of the CTL&#x2019;s portfolio. During the academic year 2019&#x2013;2020, the CTL consulted with 470 faculty and 151 graduate student clients. This study addresses the extent to which clients&#x2019; syllabi change following a C&amp;SD consultation.</p>
<sec id="S3">
<title>C&amp;SD Consultations</title>
<p id="P25">In this section, we compare and contrast our CTL&#x2019;s C&amp;SD consultation service models for faculty clients and graduate students/postdocs participating in our Future Faculty Program (FFP). Similarities are numerous. All consultations are voluntary, confidential, and provided by full-time educational developers. Consultants receive the same training, including shadowing and being shadowed by more experienced colleagues on consultations and participating in monthly professional development sessions to compare strategies and discuss challenging consultation scenarios. Clients meet with consultants one or more times. Discussions typically focus on learning objectives, assessments, instructional strategies, alignment of course design features, and/or course policies. Clients may be designing a course from scratch, revising a syllabus inherited from previous instructors, or drawing inspiration from syllabi encountered elsewhere. Consultants provide resources, as needed, such as a syllabus template, a set of heuristic questions, and/or recommendations from our institution&#x2019;s Faculty Senate (<xref rid="R10" ref-type="bibr">Eckhardt, 2017</xref>). Rather than follow a strict protocol, consultants provide feedback on clients&#x2019; C&amp;SDs by drawing on the same established C&amp;SD and learning principles (e.g., <xref rid="R1" ref-type="bibr">Ambrose et al., 2010</xref>; <xref rid="R14" ref-type="bibr">Fink, 2013</xref>; <xref rid="R35" ref-type="bibr">Wiggins &amp; McTighe, 2005</xref>) regarding learning objectives, assessments, alignment, inclusive teaching, course policies, and how students learn. However, to strategically &#x201C;meet the client where they are,&#x201D; including deciding what to prioritize, how much feedback to provide, and when to gently push (or not), consultants also rely on intuition and remain flexible in their approach. After receiving feedback, clients may revise syllabi and receive additional feedback. 
Because contextual, client-specific factors influence the exact content of a consultation, some variation in the client experience is expected, which makes our tests of the impacts of consultations conservative. Detecting a strong signal regarding syllabi improvements after consultations, or relative to a comparison group, despite this potential variation strengthens interpretations of the benefits of the fundamental, shared parameters of the C&amp;SD service model.</p>
<p id="P26">C&amp;SD consultations may differ between faculty and FFP clients. Time on task regarding C&amp;SD tends to be greater during FFP consultations. FFP consultations focus exclusively on C&amp;SD, because it&#x2019;s a specific requirement of the FFP program. Faculty consultations often include other CTL services and have shorter timelines for iteration. Many faculty clients request a C&amp;SD consultation proximal to the start of a semester when they will teach the targeted course. Fewer FFP clients are (or will be) teaching the courses they are designing at CMU. Thus, consultants may need to prioritize feedback differently for faculty, based on time constraints and what&#x2019;s possible or most critical.</p>
<p id="P27">Additionally, FFP consultations tend to be more scaffolded. Unlike faculty, FFP clients participate in a structured program designed to support skill development and success as educators in a faculty career (<xref rid="R8" ref-type="bibr">Eberly Center, 2021</xref>). FFP participants must (a) complete a course and syllabus (re)design project; (b) attend at least eight CTL seminars on evidence-based teaching and learning; (c) receive at least two teaching feedback consultations; and (d) write a teaching philosophy statement. FFP clients receive feedback on syllabi drafts using a rubric assessing course descriptions, learning objectives, description of assessments, assessment criteria for evaluation (if applicable), course policies, and more (see <xref ref-type="app" rid="APP1">Appendix A</xref>). Faculty clients may also receive feedback on their syllabi, but it is not scaffolded using the FFP rubric. Typically, FFP clients do not receive the rubric prior to submitting their first draft. However, consultants may conduct a generative interview with FFP clients before they draft their syllabi. Generative interviews do not &#x201C;workshop&#x201D; elements of course or syllabus design in detail. Instead, they pose guiding questions to help clients get started. Finally, after FFP syllabi pass minimum requirements (on the first draft or after iteration), all FFP clients complete a written reflection on their course designs. This assignment challenges clients to communicate the alignment of course design features, especially regarding instructional strategies, assessments, and learning objectives, because this information is rarely explicit in a traditional course syllabus. Consultants do not request this reflection document from faculty clients.</p>
</sec>
<sec id="S4">
<title>Study Design and Data Sources</title>
<p id="P28">We compared syllabi for 94 CTL clients before and after they received a C&amp;SD consultation. Syllabus pairs came from the same course.</p>
<p id="P29">Two-thirds of our sample came from FFP clients. From January 2017 through June 2018, 62 FFP participants (16 master&#x2019;s students, 38 doctoral students, and eight postdocs) received a C&amp;SD consultation and submitted syllabi before and after consultations (<xref rid="T1" ref-type="table">Table 1</xref>).</p>
<table-wrap id="T1" position="anchor" orientation="portrait">
<label>Table 1.</label><caption><p id="P80">FFP Clients Sample</p></caption>
<table frame="hsides" rules="groups">
<colgroup>
<col align="left" valign="middle"/>
<col align="left" valign="middle"/>
</colgroup>
<thead>
<tr>
<th align="left" valign="top">Discipline/college</th>
<th align="center" valign="top">Proportion of clients (%)<break/><inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="M42"><mml:mrow><mml:mi>n</mml:mi><mml:mo mathvariant='bold'>=</mml:mo><mml:mn mathvariant='bold'>62</mml:mn></mml:mrow></mml:math></inline-formula></th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="bottom">Engineering</td>
<td align="center" valign="bottom">18 (29.0)</td>
</tr>
<tr>
<td align="left" valign="bottom">Humanities and social sciences</td>
<td align="center" valign="bottom">14 (22.6)</td>
</tr>
<tr>
<td align="left" valign="bottom">Fine arts</td>
<td align="center" valign="bottom">13 (21.0)</td>
</tr>
<tr>
<td align="left" valign="bottom">Computer science</td>
<td align="center" valign="bottom">8 (12.9)</td>
</tr>
<tr>
<td align="left" valign="bottom">Science</td>
<td align="center" valign="bottom">5 (8.1)</td>
</tr>
<tr>
<td align="left" valign="bottom">Business</td>
<td align="center" valign="bottom">3 (4.8)</td>
</tr>
<tr>
<td align="left" valign="bottom">Information systems and public policy</td>
<td align="center" valign="bottom">1 (1.6)</td>
</tr>
<tr>
<td align="left" valign="bottom">Participation in additional CTL C&amp;SD services</td>
<td align="center" valign="bottom"></td>
</tr>
<tr>
<td align="left" valign="bottom">C&amp;SD seminar</td>
<td align="center" valign="bottom">36 (58.1)</td>
</tr>
</tbody>
</table>
</table-wrap>
<p id="P30">One-third of our sample came from faculty clients. We identified 57 unique faculty-course combinations for which faculty requested a C&amp;SD consultation between August 2017 and June 2019 and shared syllabi with consultants at the beginning of the process. We emailed these clients to request the syllabi they implemented after the consultation and received syllabi from 43 client-course combinations. Eleven respondents submitted syllabi for multiple courses receiving consultations. For repeat clients, we analyzed only syllabus pairs from their earliest consultation, resulting in a final sample of 32 faculty clients (<xref rid="T2" ref-type="table">Table 2</xref>).</p>
<table-wrap id="T2" position="anchor" orientation="portrait">
<label>Table 2.</label><caption><p id="P81">Faculty Clients Sample</p></caption>
<table frame="hsides" rules="groups">
<colgroup>
<col align="left" valign="middle"/>
<col align="left" valign="middle"/>
<col align="left" valign="middle"/>
<col align="left" valign="middle"/>
</colgroup>
<thead>
<tr>
<th rowspan="3" align="left" valign="top">Discipline/college</th>
<th align="center" valign="top">CTL clients (<inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="M43"><mml:mrow><mml:mi>n</mml:mi><mml:mo mathvariant='bold'>=</mml:mo><mml:mn mathvariant='bold'>32</mml:mn></mml:mrow></mml:math></inline-formula>)</th>
<th align="center" valign="top">Syllabus registry, 1 syllabus (<inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="M44"><mml:mrow><mml:mi>n</mml:mi><mml:mo mathvariant='bold'>=</mml:mo><mml:mn mathvariant='bold'>32</mml:mn></mml:mrow></mml:math></inline-formula>)</th>
<th align="center" valign="top">Syllabus registry &#x201C;drift,&#x201D; 2 syllabi (<inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="M45"><mml:mrow><mml:mi>n</mml:mi><mml:mo mathvariant='bold'>=</mml:mo><mml:mn mathvariant='bold'>10</mml:mn></mml:mrow></mml:math></inline-formula>)</th>
</tr>
<tr>
<th colspan="3" align="left" valign="top"><hr/></th>
</tr>
<tr>
<th align="center" valign="top">Proportion of faculty (%)</th>
<th align="center" valign="top">Proportion of faculty (%)</th>
<th align="center" valign="top">Proportion of faculty (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">Humanities and social sciences</td>
<td align="center" valign="top">11 (34.4)</td>
<td align="center" valign="top">11 (34.4)</td>
<td align="center" valign="top">1 (10)</td>
</tr>
<tr>
<td align="left" valign="top">Engineering</td>
<td align="center" valign="top">6 (18.8)</td>
<td align="center" valign="top">6 (18.8)</td>
<td align="center" valign="top">&#x2013;</td>
</tr>
<tr>
<td align="left" valign="top">Fine arts</td>
<td align="center" valign="top">5 (15.6)</td>
<td align="center" valign="top">5 (15.6)</td>
<td align="center" valign="top">3 (30)</td>
</tr>
<tr>
<td align="left" valign="top">Information systems and public policy</td>
<td align="center" valign="top">4 (12.5)</td>
<td align="center" valign="top">4 (12.5)</td>
<td align="center" valign="top">2 (20)</td>
</tr>
<tr>
<td align="left" valign="top">Science</td>
<td align="center" valign="top">2 (6.3)</td>
<td align="center" valign="top">2 (6.3)</td>
<td align="center" valign="top">2 (20)</td>
</tr>
<tr>
<td align="left" valign="top">Business</td>
<td align="center" valign="top">2 (6.3)</td>
<td align="center" valign="top">2 (6.3)</td>
<td align="center" valign="top">1 (10)</td>
</tr>
<tr>
<td align="left" valign="top">Computer science</td>
<td align="center" valign="top">1 (3.1)</td>
<td align="center" valign="top">1 (3.1)</td>
<td align="center" valign="top">&#x2013;</td>
</tr>
<tr>
<td align="left" valign="top">Other</td>
<td align="center" valign="top">1 (3.1)</td>
<td align="center" valign="top">1 (3.1)</td>
<td align="center" valign="top">1 (10)</td>
</tr>
<tr>
<td align="left" valign="top">Employment track</td>
<td align="center" valign="top"></td>
<td align="center" valign="top"></td>
<td align="center" valign="top"></td>
</tr>
<tr>
<td align="left" valign="top">Tenure track</td>
<td align="center" valign="top">11 (34.4)</td>
<td align="center" valign="top">13 (40.6)</td>
<td align="center" valign="top">3 (30)</td>
</tr>
<tr>
<td align="left" valign="top">Teaching track</td>
<td align="center" valign="top">9 (28.1)</td>
<td align="center" valign="top">8 (25.0)</td>
<td align="center" valign="top">2 (20)</td>
</tr>
<tr>
<td align="left" valign="top">Research track</td>
<td align="center" valign="top">1 (3.1)</td>
<td align="center" valign="top">1 (3.1)</td>
<td align="center" valign="top">&#x2013;</td>
</tr>
<tr>
<td align="left" valign="top">Adjunct, special, or visiting</td>
<td align="center" valign="top">11 (34.4)</td>
<td align="center" valign="top">10 (31.3)</td>
<td align="center" valign="top">5 (50)</td>
</tr>
<tr>
<td align="left" valign="top">Rank</td>
<td align="left" valign="top"></td>
<td align="left" valign="top"></td>
<td align="left" valign="top"></td>
</tr>
<tr>
<td align="left" valign="top">Assistant Professor</td>
<td align="center" valign="top">15 (50.0)</td>
<td align="center" valign="top">6 (18.8)</td>
<td align="center" valign="top">1 (10)</td>
</tr>
<tr>
<td align="left" valign="top">Associate Professor</td>
<td align="center" valign="top">4 (12.5)</td>
<td align="center" valign="top">8 (25.0)</td>
<td align="center" valign="top">2 (20)</td>
</tr>
<tr>
<td align="left" valign="top">Full Professor</td>
<td align="center" valign="top">1 (3.1)</td>
<td align="center" valign="top">7 (21.9)</td>
<td align="center" valign="top">2 (20)</td>
</tr>
<tr>
<td align="left" valign="top">Adjunct, Special, Visiting, or Research Professor</td>
<td align="center" valign="top">12 (37.5)</td>
<td align="center" valign="top">11 (34.4)</td>
<td align="center" valign="top">5 (50)</td>
</tr>
<tr>
<td align="left" valign="top">Participation in additional CTL C&amp;SD services</td>
<td align="center" valign="top"></td>
<td align="center" valign="top"></td>
<td align="center" valign="top"></td>
</tr>
<tr>
<td align="left" valign="top">C&amp;SD seminar</td>
<td align="center" valign="top">13 (40.6)</td>
<td align="center" valign="top">0</td>
<td align="center" valign="top">0</td>
</tr>
</tbody>
</table>
</table-wrap>
<p id="P31">To better assess CTL impacts, we also analyzed syllabi from two comparison groups of faculty who did not receive a C&amp;SD consultation. In 2018, CMU established a course syllabus registry to better support its students. We identified registry syllabi from faculty who did not receive CTL consultations from 2016 through June 2019. We closely matched client and registry syllabi on course discipline and level as well as faculty employment track and rank (<xref rid="T2" ref-type="table">Table 2</xref>). When multiple registry matches occurred, we randomly selected one course. Our first comparison group included a single syllabus from each of 32 unique faculty. Ten faculty matches in the course registry posted syllabi for the same course taught in consecutive semesters. These faculty formed our second comparison group, which accounts for &#x201C;syllabus drift&#x201D;: ambient changes in syllabi across semesters as instructors independently revise their courses. Together, these comparison groups provide reference points for comparing faculty clients&#x2019; syllabi before versus after a consultation. We could not procure a viable comparison group for graduate students and postdocs who did not receive a consultation.</p>
<p id="P32">Additionally, we included data on clients&#x2019; attendance at CTL seminars on C&amp;SD prior to the completion of consultations. Seminar attendance data spanned July 2016 to the time of consultation. For faculty, we used attendance at the CTL&#x2019;s annual Incoming Faculty Orientation event, which contains a seminar devoted to C&amp;SD. We excluded faculty attending these seminars from our syllabus registry comparison groups. No other CTL events for faculty focused exclusively on C&amp;SD during this period. For FFP participants, we used attendance data from the CTL&#x2019;s annual Graduate Student Seminar Series, which offers at least one seminar on C&amp;SD each year.</p>
</sec>
<sec id="S5">
<title>Syllabi Coding</title>
<p id="P33">The FFP consultation rubric was not designed as a research instrument. Therefore, to assess the quality of syllabi in our study, we created a different rubric informed by course design and evidence-based teaching literature (<xref rid="R1" ref-type="bibr">Ambrose et al., 2010</xref>; <xref rid="R25" ref-type="bibr">Palmer et al., 2014</xref>; <xref rid="R35" ref-type="bibr">Wiggins &amp; McTighe, 2005</xref>) and CMU Faculty Senate recommendations (<xref rid="R10" ref-type="bibr">Eckhardt, 2017</xref>).</p>
<p id="P34">Six of the authors (C&amp;SD experts) served as syllabus coders and calibrated by individually coding two anonymous syllabi not included in this study. Two of the remaining authors (assessment experts) compiled the codes, identified areas of disagreement, and facilitated a discussion in which the coders revised the rubric. With the revised rubric, the coders independently scored a third anonymous syllabus, followed by another round of rubric revision. The final rubric included six broad categories&#x2014;<italic>course description</italic>, <italic>learning objectives</italic>, <italic>assessments</italic>, <italic>assessment and grading policies</italic>, <italic>other policies</italic>, and <italic>organization</italic>&#x2014;each with subcategories for specific features (see <xref ref-type="app" rid="APP1">Appendix A</xref>).</p>
<p id="P35">Given the consensus among raters following the norming activities, the six coders each independently scored a unique subset of the 220 syllabi in the study. Syllabi were randomly assigned, but we ensured that no coder analyzed a syllabus from one of their own clients or both the pre- and post-consultation syllabi from the same client. Prior to coding, we de-identified syllabi with respect to instructor identity and study condition.</p>
<p id="P36">The rubric comprised 20 items worth a maximum of 56 points. Coders rated every syllabus on 17 of the 20 rubric items (<xref ref-type="app" rid="APP1">Appendix A</xref>) but rated the remaining three items (9 total points possible) only when those items were present in the syllabus. Coders agreed that these items (participation grading, other policies, and explanation for other policies) were not necessarily essential in syllabi and thus &#x201C;optional&#x201D; from a consultant&#x2019;s perspective. Consequently, we did not count their absence against the score of a syllabus; instead, we reduced the total possible points for that syllabus. We converted rubric scores to proportions to account for the variation in possible point totals and to enable comparisons across rubric categories.</p>
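The scoring scheme above can be sketched in a few lines. This is a minimal illustration, not the authors' actual scoring code; the function name, the example inputs, and the assumed per-item maximum of 3 points for each optional item (inferred from the 9 optional points across 3 items) are hypothetical.

```python
# Minimal sketch of the rubric scoring described above (hypothetical names).
# 17 core items (47 points) are always scored; the 3 "optional" items
# (9 points total, assumed 3 points each) count only when present, so an
# absent optional item shrinks the denominator rather than the score.

def syllabus_proportion(core_points, optional_points=None):
    """Convert raw rubric points to a proportion of possible points."""
    CORE_MAX = 47      # 56 total minus the 9 optional points
    ITEM_MAX = 3       # assumed maximum per optional item (9 / 3 items)
    optional_points = optional_points or {}
    earned = core_points + sum(optional_points.values())
    possible = CORE_MAX + ITEM_MAX * len(optional_points)
    return earned / possible

# A syllabus scoring 40/47 on core items plus 2/3 on one optional item:
score = syllabus_proportion(40, {"participation_grading": 2})  # 42/50 = 0.84
```

Because the denominator varies per syllabus, the resulting proportions are directly comparable across syllabi and rubric categories.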
</sec>
<sec id="S6">
<title>Data Analysis</title>
<p id="P37">To investigate the impact of consultations on C&amp;SD practices (Research Question 1), we conducted separate analyses for faculty and FFP clients because we were unable to obtain an FFP comparison group. We analyzed FFP syllabus scores using a within-subjects, repeated-measures ANOVA with syllabus condition (pre-consultation, post-consultation) as a fixed factor. We applied a general linear model to faculty client data and the first syllabus registry comparison group (<inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="M1"><mml:mrow><mml:mi>n</mml:mi><mml:mo>=</mml:mo><mml:mn>32</mml:mn></mml:mrow></mml:math></inline-formula> matched course syllabi). Independent variables included syllabus condition (pre-consultation, post-consultation, or syllabus registry Comparison Group 1) as a fixed factor and unique faculty identity (dummy coded) as random-effects variables. The faculty identity variables account for repeated measures across clients&#x2019; pre- and post-consultation syllabi. Because we identified only 10 syllabus registry matched pairs meeting the selection criteria (Comparison Group 2), we deemed this sample too small for a reliable statistical test. Instead, we include those data below as an observational comparison.</p>
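The dummy coding described above can be sketched as follows. This is a hypothetical illustration (invented data; the real analyses were run in standard statistical software): each level of syllabus condition and each unique faculty identity expands into a 0/1 indicator column, so the pre- and post-consultation syllabi from the same client share the same identity indicators.

```python
# Hypothetical sketch of the dummy (indicator) coding described above.

def dummy_code(values):
    """Return (levels, rows): each row is a 0/1 indicator vector over
    the sorted unique levels of `values`."""
    levels = sorted(set(values))
    rows = [[1 if v == lvl else 0 for lvl in levels] for v in values]
    return levels, rows

# Five syllabi: one client's pre/post pair, a registry match, another pair.
conditions = ["pre", "post", "registry", "pre", "post"]
faculty = ["f1", "f1", "f2", "f3", "f3"]

_, cond_dummies = dummy_code(conditions)
_, fac_dummies = dummy_code(faculty)

# Each design-matrix row concatenates condition and faculty indicators;
# rows 0 and 1 (the same client) share the same faculty indicator block.
design = [c + f for c, f in zip(cond_dummies, fac_dummies)]
```

The shared faculty block is what lets the model treat a client's two syllabi as repeated measures rather than independent observations.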
<p id="P38">To determine the extent to which syllabus scores and rates of change differed between faculty or FFP clients (Research Question 2) as well as the influence of CTL seminars on the consultation outcomes (Research Question 3.b), we conducted a three-way mixed ANOVA on syllabus rubric scores. Independent variables included syllabus condition (pre-consultation, post-consultation) as a repeated measures factor as well as client type (faculty, FFP) and prior seminar attendance (yes, no) as between-subjects factors. We included all possible two-way and three-way interactions in the model. The <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="M2"><mml:mrow><mml:mi>s</mml:mi><mml:mi>y</mml:mi><mml:mi>l</mml:mi><mml:mi>l</mml:mi><mml:mi>a</mml:mi><mml:mi>b</mml:mi><mml:mi>u</mml:mi><mml:mi>s</mml:mi><mml:mo>&#x2009;</mml:mo><mml:mi>c</mml:mi><mml:mi>o</mml:mi><mml:mi>n</mml:mi><mml:mi>d</mml:mi><mml:mi>i</mml:mi><mml:mi>t</mml:mi><mml:mi>i</mml:mi><mml:mi>o</mml:mi><mml:mi>n</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mi>c</mml:mi><mml:mi>l</mml:mi><mml:mi>i</mml:mi><mml:mi>e</mml:mi><mml:mi>n</mml:mi><mml:mi>t</mml:mi><mml:mo>&#x2009;</mml:mo><mml:mi>t</mml:mi><mml:mi>y</mml:mi><mml:mi>p</mml:mi><mml:mi>e</mml:mi></mml:mrow></mml:math></inline-formula> interaction term directly tests the null hypothesis of Research Question 2&#x2014;that is, impacts of consultations on syllabi are independent of the type of client. 
The <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="M3"><mml:mrow><mml:mi>s</mml:mi><mml:mi>y</mml:mi><mml:mi>l</mml:mi><mml:mi>l</mml:mi><mml:mi>a</mml:mi><mml:mi>b</mml:mi><mml:mi>u</mml:mi><mml:mi>s</mml:mi><mml:mo>&#x2009;</mml:mo><mml:mi>c</mml:mi><mml:mi>o</mml:mi><mml:mi>n</mml:mi><mml:mi>d</mml:mi><mml:mi>i</mml:mi><mml:mi>t</mml:mi><mml:mi>i</mml:mi><mml:mi>o</mml:mi><mml:mi>n</mml:mi><mml:mo>&#x2009;</mml:mo><mml:mo>&#x00D7;</mml:mo><mml:mo>&#x2009;</mml:mo><mml:mi>s</mml:mi><mml:mi>e</mml:mi><mml:mi>m</mml:mi><mml:mi>i</mml:mi><mml:mi>n</mml:mi><mml:mi>a</mml:mi><mml:mi>r</mml:mi><mml:mo>&#x00A0;</mml:mo><mml:mi>a</mml:mi><mml:mi>t</mml:mi><mml:mi>t</mml:mi><mml:mi>e</mml:mi><mml:mi>n</mml:mi><mml:mi>d</mml:mi><mml:mi>a</mml:mi><mml:mi>n</mml:mi><mml:mi>c</mml:mi><mml:mi>e</mml:mi></mml:mrow></mml:math></inline-formula> interaction term directly tests the null hypothesis of Research Question 3.b&#x2014;that is, impacts of consultations on syllabi are independent of whether clients previously attended a CTL seminar on C&amp;SD. 
The three-way interaction of <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="M4"><mml:mrow><mml:mi>s</mml:mi><mml:mi>y</mml:mi><mml:mi>l</mml:mi><mml:mi>l</mml:mi><mml:mi>a</mml:mi><mml:mi>b</mml:mi><mml:mi>u</mml:mi><mml:mi>s</mml:mi><mml:mo>&#x00A0;</mml:mo><mml:mi>c</mml:mi><mml:mi>o</mml:mi><mml:mi>n</mml:mi><mml:mi>d</mml:mi><mml:mi>i</mml:mi><mml:mi>t</mml:mi><mml:mi>i</mml:mi><mml:mi>o</mml:mi><mml:mi>n</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mi>c</mml:mi><mml:mi>l</mml:mi><mml:mi>i</mml:mi><mml:mi>e</mml:mi><mml:mi>n</mml:mi><mml:mi>t</mml:mi><mml:mo>&#x00A0;</mml:mo><mml:mi>t</mml:mi><mml:mi>y</mml:mi><mml:mi>p</mml:mi><mml:mi>e</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mi>s</mml:mi><mml:mi>e</mml:mi><mml:mi>m</mml:mi><mml:mi>i</mml:mi><mml:mi>n</mml:mi><mml:mi>a</mml:mi><mml:mi>r</mml:mi><mml:mo>&#x00A0;</mml:mo><mml:mi>a</mml:mi><mml:mi>t</mml:mi><mml:mi>t</mml:mi><mml:mi>e</mml:mi><mml:mi>n</mml:mi><mml:mi>d</mml:mi><mml:mi>a</mml:mi><mml:mi>n</mml:mi><mml:mi>c</mml:mi><mml:mi>e</mml:mi></mml:mrow></mml:math></inline-formula> directly tests whether impacts of consultations are independent of both client type and previous seminar attendance.</p>
<p id="P39">To measure the influence of CTL workshops on the initial condition of clients&#x2019; syllabi (Research Question 3.a), we conducted a two-way ANOVA on pre-consultation syllabus scores. Independent variables included client type (faculty, FFP) and previous seminar attendance (yes, no) as between-subjects factors.</p>
<p id="P40">In all statistical analyses, we checked model assumptions by evaluating boxplots of residuals to identify outliers, the Shapiro-Wilk test and Q-Q plots of residuals to verify normality, Levene&#x2019;s test for homogeneity of variance, and Box&#x2019;s M test for homogeneity of covariances. We did not identify outliers or violations of homogeneity of variance or covariance in our data sets. We only detected violations of normality in the post-consultation syllabus scores in the three-way ANOVA. We proceeded with the ANOVA because this type of analysis is robust to violations of normality (<xref rid="R24" ref-type="bibr">Norman, 2010</xref>).</p>
</sec>
</sec>
<sec id="S7">
<title>Results</title>
<sec id="S8">
<title>RQ1: To what extent do CTL consultation services influence instructor course design, as documented in their course syllabi?</title>
<sec id="S9">
<title>Faculty Syllabi</title>
<p id="P41">Total syllabus scores differed significantly across the three syllabus conditions (pre-consultation, post-consultation, and registry), <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="M5"><mml:mrow><mml:mi>F</mml:mi><mml:mfenced><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>31</mml:mn></mml:mrow></mml:mfenced><mml:mo>=</mml:mo><mml:mn>17.14</mml:mn></mml:mrow></mml:math></inline-formula>, <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="M6"><mml:mrow><mml:mi>p</mml:mi><mml:mo>&#x003C;</mml:mo><mml:mn>.001</mml:mn></mml:mrow></mml:math></inline-formula>, partial <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="M7"><mml:mrow><mml:msup><mml:mi>&#x03B7;</mml:mi><mml:mn>2</mml:mn></mml:msup><mml:mo>=</mml:mo><mml:mn>.356</mml:mn></mml:mrow></mml:math></inline-formula> (<xref rid="F1" ref-type="fig">Figure 1</xref>). This result indicates that approximately 36% of the variance in total syllabus scores depends on syllabus condition (pre-consultation, post-consultation, or registry). A post hoc Tukey HSD test found that the mean differences for all three pairwise combinations of conditions were statistically significant (<xref rid="T3" ref-type="table">Table 3</xref>). 
Clients&#x2019; syllabi scored higher after a consultation (<inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="M8"><mml:mrow><mml:mi>M</mml:mi><mml:mo>=</mml:mo><mml:mn>.87</mml:mn></mml:mrow></mml:math></inline-formula>, <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="M9"><mml:mrow><mml:mi>S</mml:mi><mml:mi>D</mml:mi><mml:mo>=</mml:mo><mml:mn>.08</mml:mn></mml:mrow></mml:math></inline-formula>) than before (<inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="M10"><mml:mrow><mml:mi>M</mml:mi><mml:mo>=</mml:mo><mml:mn>.81</mml:mn></mml:mrow></mml:math></inline-formula>, <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="M11"><mml:mrow><mml:mi>S</mml:mi><mml:mi>D</mml:mi><mml:mo>=</mml:mo><mml:mn>.10</mml:mn></mml:mrow></mml:math></inline-formula>). Clients&#x2019; pre- and post-consultation syllabi both scored higher than syllabi from non-clients matched from the syllabus registry, whether non-clients posted a single syllabus (Comparison Group 1: <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="M12"><mml:mrow><mml:mi>M</mml:mi><mml:mo>=</mml:mo><mml:mn>.70</mml:mn></mml:mrow></mml:math></inline-formula>, <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="M13"><mml:mrow><mml:mi>S</mml:mi><mml:mi>D</mml:mi><mml:mo>=</mml:mo><mml:mn>.11</mml:mn></mml:mrow></mml:math></inline-formula>) or pairs of syllabi for the same course in consecutive semesters (Comparison Group 2: Semester 1 <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="M14"><mml:mrow><mml:mi>M</mml:mi><mml:mo>=</mml:mo><mml:mn>.74</mml:mn></mml:mrow></mml:math></inline-formula>, <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" 
id="M15"><mml:mrow><mml:mi>S</mml:mi><mml:mi>D</mml:mi><mml:mo>=</mml:mo><mml:mn>.05</mml:mn></mml:mrow></mml:math></inline-formula>; Semester 2 <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="M16"><mml:mrow><mml:mi>M</mml:mi><mml:mo>=</mml:mo><mml:mn>.74</mml:mn></mml:mrow></mml:math></inline-formula>, <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="M17"><mml:mrow><mml:mi>S</mml:mi><mml:mi>D</mml:mi><mml:mo>=</mml:mo><mml:mn>.80</mml:mn></mml:mrow></mml:math></inline-formula>). Due to small sample size, we do not present statistical analyses including the syllabus drift comparison group. Observationally, in contrast to clients, syllabus drift samples from non-clients appear to show little or no change across consecutive semesters, as measured by our syllabus rubric. Faculty clients&#x2019; syllabi also consistently score higher than syllabus drift samples. After consultations, faculty syllabi scored significantly higher in three rubric categories: assessments, assessment and grading policies, and other course policies (<xref rid="T4" ref-type="table">Table 4</xref>).</p>
<fig id="F1" position="anchor">
<label>Figure 1.</label>
<caption>
<p id="P77">Mean Scores for Faculty Clients&#x2019; Pre- and Post-Consultation Syllabi and Matched Non-Client Samples (error bars represent 95% confidence intervals)</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="tia.926-f0001.jpg"/>
</fig>
<table-wrap id="T3" position="anchor" orientation="portrait">
<label>Table 3.</label><caption><p id="P82">Multiple Pairwise Comparisons (Tukey HSD tests) for Scores Across Combinations of Three Faculty Syllabus Conditions</p></caption>
<table frame="hsides" rules="groups">
<colgroup>
<col align="left" valign="middle"/>
<col align="left" valign="middle"/>
<col align="left" valign="middle"/>
<col align="left" valign="middle"/>
<col align="left" valign="middle"/>
</colgroup>
<thead>
<tr>
<th align="left" valign="top">Condition A</th>
<th align="center" valign="top">Condition B</th>
<th align="center" valign="top">Mean difference (A&#x2013;B)</th>
<th align="center" valign="top"><italic>p</italic></th>
<th align="center" valign="top">95% CI</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">Pre-consultation</td>
<td align="left" valign="top">Post-consultation</td>
<td align="left" valign="top">&#x2212;.057</td>
<td align="center" valign="top">.001<xref rid="tfn2" ref-type="table-fn">*</xref></td>
<td align="left" valign="top">[&#x2212;.09,&#x2212;.02]</td>
</tr>
<tr>
<td align="left" valign="top">Post-consultation</td>
<td align="left" valign="top">Registry</td>
<td align="left" valign="top">.167</td>
<td align="center" valign="top">&#x003C; .001<xref rid="tfn2" ref-type="table-fn">*</xref></td>
<td align="left" valign="top">[.13,.20]</td>
</tr>
<tr>
<td align="left" valign="top">Registry</td>
<td align="left" valign="top">Pre-consultation</td>
<td align="left" valign="top">&#x2212;.110</td>
<td align="center" valign="top">&#x003C; .001<xref rid="tfn2" ref-type="table-fn">*</xref></td>
<td align="left" valign="top">[&#x2212;.14,&#x2212;.08]</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn id="tfn2"><p id="P83">*Statistically significant.</p>
</fn>
</table-wrap-foot>
</table-wrap>
<table-wrap id="T4" position="anchor" orientation="portrait">
<label>Table 4.</label><caption><p id="P84">Mean Syllabus Scores for Non-Clients Posting a Single Registry Syllabus and Faculty Clients (t-tests performed on client data only)</p></caption>
<table frame="hsides" rules="groups">
<colgroup>
<col align="left" valign="middle"/>
<col align="left" valign="middle"/>
<col align="left" valign="middle"/>
<col align="left" valign="middle"/>
<col align="left" valign="middle"/>
<col align="left" valign="middle"/>
<col align="left" valign="middle"/>
</colgroup>
<thead>
<tr>
<th align="left" valign="top">Category</th>
<th align="center" valign="top">Registry proportion of points scored (0&#x2013;1) (<italic>SD</italic>)</th>
<th colspan="2" align="center" valign="top">Clients&#x2019; proportion of points scored (0&#x2013;1) (<italic>SD</italic>)</th>
<th align="center" valign="top"><italic>t</italic> (client pre/post)</th>
<th align="center" valign="top"><italic>p</italic></th>
<th align="center" valign="top">Effect size <italic>(d)</italic></th>
</tr>
<tr>
<th align="left" valign="top"></th>
<th align="center" valign="top"></th>
<th align="center" valign="top">Pre-consult</th>
<th align="center" valign="top">Post-consult</th>
<th align="center" valign="top"></th>
<th align="center" valign="top"></th>
<th align="center" valign="top"></th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">Course description</td>
<td align="center" valign="top">.76 (.17)</td>
<td align="center" valign="top">.76 (.18)</td>
<td align="center" valign="top">.81 (.18)</td>
<td align="center" valign="top">&#x2212;1.54</td>
<td align="center" valign="top">.13</td>
<td align="center" valign="top">&#x2013;</td>
</tr>
<tr>
<td align="left" valign="top">Learning objectives</td>
<td align="center" valign="top">.78 (.20)</td>
<td align="center" valign="top">.86 (.13)</td>
<td align="center" valign="top">.88 (.14)</td>
<td align="center" valign="top">&#x2212;0.71</td>
<td align="center" valign="top">.48</td>
<td align="center" valign="top">&#x2013;</td>
</tr>
<tr>
<td align="left" valign="top">Assessments</td>
<td align="center" valign="top">.74 (.16)</td>
<td align="center" valign="top">.80 (.16)</td>
<td align="center" valign="top">.87 (.14)</td>
<td align="center" valign="top">&#x2212;3.21</td>
<td align="center" valign="top">&#x003C; .001<xref rid="tfn3" ref-type="table-fn">*</xref></td>
<td align="center" valign="top">&#x2212;.57</td>
</tr>
<tr>
<td align="left" valign="top">Assessment and grading policies</td>
<td align="center" valign="top">.72 (.21)</td>
<td align="center" valign="top">.77 (.19)</td>
<td align="center" valign="top">.84 (.16)</td>
<td align="center" valign="top">&#x2212;3.30</td>
<td align="center" valign="top">&#x003C; .001<xref rid="tfn3" ref-type="table-fn">*</xref></td>
<td align="center" valign="top">&#x2212;.58</td>
</tr>
<tr>
<td align="left" valign="top">Other course policies</td>
<td align="center" valign="top">.72 (.19)</td>
<td align="center" valign="top">.82 (.18)</td>
<td align="center" valign="top">.89 (.10)</td>
<td align="center" valign="top">&#x2212;2.59</td>
<td align="center" valign="top">.01<xref rid="tfn3" ref-type="table-fn">*</xref></td>
<td align="center" valign="top">&#x2212;.46</td>
</tr>
<tr>
<td align="left" valign="top">Organization</td>
<td align="center" valign="top">.88 (.18)</td>
<td align="center" valign="top">.93 (.14)</td>
<td align="center" valign="top">.94 (.13)</td>
<td align="center" valign="top">&#x2212;.44</td>
<td align="center" valign="top">.66</td>
<td align="center" valign="top">&#x2013;</td>
</tr>
<tr>
<td align="left" valign="top"><bold>Total</bold></td>
<td align="center" valign="top"><bold>.70 (.11)</bold></td>
<td align="center" valign="top"><bold>.81 (.10)</bold></td>
<td align="center" valign="top"><bold>.87 (.08)</bold></td>
<td align="center" valign="top"><bold>&#x2212;4.13</bold></td>
<td align="center" valign="top"><bold>&#x003C; .001</bold><xref rid="tfn3" ref-type="table-fn">*</xref></td>
<td align="center" valign="top"><bold>.73</bold></td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn id="tfn3"><p id="P85">*Statistically significant.</p>
</fn>
</table-wrap-foot>
</table-wrap>
</sec>
<sec id="S10">
<title>FFP Syllabi</title>
<p id="P42">FFP clients receiving C&amp;SD consultations scored significantly higher on post-consultation syllabi (<inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="M18"><mml:mrow><mml:mi>M</mml:mi><mml:mo>=</mml:mo><mml:mn>0.90</mml:mn></mml:mrow></mml:math></inline-formula>, <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="M19"><mml:mrow><mml:mi>S</mml:mi><mml:mi>D</mml:mi><mml:mo>=</mml:mo><mml:mn>.066</mml:mn></mml:mrow></mml:math></inline-formula>) than pre-consultation syllabi (<inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="M20"><mml:mrow><mml:mi>M</mml:mi><mml:mo>=</mml:mo><mml:mn>0.80</mml:mn></mml:mrow></mml:math></inline-formula>, <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="M21"><mml:mrow><mml:mi>S</mml:mi><mml:mi>D</mml:mi><mml:mo>=</mml:mo><mml:mn>.095</mml:mn></mml:mrow></mml:math></inline-formula>), <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="M22"><mml:mrow><mml:mi>t</mml:mi><mml:mfenced><mml:mrow><mml:mn>61</mml:mn></mml:mrow></mml:mfenced><mml:mo>=</mml:mo><mml:mo>&#x2212;</mml:mo><mml:mn>8.17</mml:mn></mml:mrow></mml:math></inline-formula>, <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="M23"><mml:mrow><mml:mi>p</mml:mi><mml:mo>&#x003C;</mml:mo><mml:mn>.001</mml:mn></mml:mrow></mml:math></inline-formula>, with a large effect size, <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="M24"><mml:mrow><mml:mi>d</mml:mi><mml:mo>=</mml:mo><mml:mn>1.04</mml:mn></mml:mrow></mml:math></inline-formula> (<xref rid="T5" ref-type="table">Table 5</xref>). 
Additionally, FFP clients demonstrated statistically significant increases in the following rubric categories: course description, learning objectives, assessments, assessment and grading policies, and other course policies.</p>
<table-wrap id="T5" position="anchor" orientation="portrait">
<label>Table 5.</label><caption><p id="P86">FFP Clients&#x2019; Mean Syllabus Scores</p></caption>
<table frame="hsides" rules="groups">
<colgroup>
<col align="left" valign="middle"/>
<col align="left" valign="middle"/>
<col align="left" valign="middle"/>
<col align="left" valign="middle"/>
<col align="left" valign="middle"/>
<col align="left" valign="middle"/>
</colgroup>
<thead>
<tr>
<th align="left" valign="top">Category</th>
<th align="center" valign="top">Pre&#x2013;proportion of points scored (0&#x2013;1)</th>
<th align="center" valign="top">Post&#x2013;proportion of points scored (0&#x2013;1)</th>
<th align="center" valign="top"><italic>t</italic></th>
<th align="center" valign="top"><italic>p</italic></th>
<th align="center" valign="top">Effect size <italic>(d)</italic></th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">Course description</td>
<td align="center" valign="top">.76 (.18)</td>
<td align="center" valign="top">.87 (.14)</td>
<td align="center" valign="top">&#x2212;3.65</td>
<td align="center" valign="top">.001<xref rid="tfn4" ref-type="table-fn">*</xref></td>
<td align="center" valign="top">.46</td>
</tr>
<tr>
<td align="left" valign="top">Learning objectives</td>
<td align="center" valign="top">.83 (.20)</td>
<td align="center" valign="top">.94 (.08)</td>
<td align="center" valign="top">&#x2212;4.46</td>
<td align="center" valign="top">&#x003C; .001<xref rid="tfn4" ref-type="table-fn">*</xref></td>
<td align="center" valign="top">.57</td>
</tr>
<tr>
<td align="left" valign="top">Assessments</td>
<td align="center" valign="top">.81 (.15)</td>
<td align="center" valign="top">.92 (.10)</td>
<td align="center" valign="top">&#x2212;5.24</td>
<td align="center" valign="top">&#x003C; .001<xref rid="tfn4" ref-type="table-fn">*</xref></td>
<td align="center" valign="top">.67</td>
</tr>
<tr>
<td align="left" valign="top">Assessment and grading policies</td>
<td align="center" valign="top">.78 (.17)</td>
<td align="center" valign="top">.89 (.12)</td>
<td align="center" valign="top">&#x2212;4.93</td>
<td align="center" valign="top">&#x003C; .001<xref rid="tfn4" ref-type="table-fn">*</xref></td>
<td align="center" valign="top">.63</td>
</tr>
<tr>
<td align="left" valign="top">Other course policies</td>
<td align="center" valign="top">.76 (.14)</td>
<td align="center" valign="top">.88 (.10)</td>
<td align="center" valign="top">&#x2212;6.31</td>
<td align="center" valign="top">&#x003C; .001<xref rid="tfn4" ref-type="table-fn">*</xref></td>
<td align="center" valign="top">.80</td>
</tr>
<tr>
<td align="left" valign="top">Organization</td>
<td align="center" valign="top">.90 (.15)</td>
<td align="center" valign="top">.93 (.15)</td>
<td align="center" valign="top">&#x2212;1.10</td>
<td align="center" valign="top">.28</td>
<td align="center" valign="top">&#x2013;</td>
</tr>
<tr>
<td align="left" valign="top"><bold>Total</bold></td>
<td align="center" valign="top"><bold>.80 (.09)</bold></td>
<td align="center" valign="top"><bold>.90 (.07)</bold></td>
<td align="center" valign="top"><bold>&#x2212;8.17</bold></td>
<td align="center" valign="top"><bold>&#x003C; .001</bold><xref rid="tfn4" ref-type="table-fn">*</xref></td>
<td align="center" valign="top"><bold>1.04</bold></td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn id="tfn4"><p id="P87">*Statistically significant.</p></fn>
</table-wrap-foot>
</table-wrap>
</sec>
</sec>
<sec id="S11">
<title>RQ2: To what extent do the impacts of CTL consultations on C&amp;SD differ between faculty and graduate student/postdoc clients?</title>
<p id="P43">Overall changes in syllabus scores following consultations yielded large effect sizes for both faculty and non-faculty clients (<xref rid="T4" ref-type="table">Tables 4</xref> and <xref rid="T5" ref-type="table">5</xref>), representing increases of 0.73 and 1.04 standard deviations, respectively. However, changes in syllabus scores following consultations differed between faculty and FFP clients. The three-way mixed ANOVA on all client syllabus scores did not exhibit a statistically significant three-way interaction among syllabus condition (pre-consultation, post-consultation), client type (faculty, FFP), and previous seminar attendance (yes, no), <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="M25"><mml:mrow><mml:mi>F</mml:mi><mml:mfenced><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>90</mml:mn></mml:mrow></mml:mfenced><mml:mo>&#x003C;</mml:mo><mml:mn>0.01</mml:mn></mml:mrow></mml:math></inline-formula>, <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="M26"><mml:mrow><mml:mi>p</mml:mi><mml:mo>=</mml:mo><mml:mn>.995</mml:mn></mml:mrow></mml:math></inline-formula>. 
However, this ANOVA revealed a statistically significant interaction between client type (faculty, FFP) and syllabus condition (pre-consultation, post-consultation), <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="M27"><mml:mrow><mml:mi>F</mml:mi><mml:mfenced><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>92</mml:mn></mml:mrow></mml:mfenced><mml:mo>=</mml:mo><mml:mn>5.109</mml:mn></mml:mrow></mml:math></inline-formula>, <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="M28"><mml:mrow><mml:mi>p</mml:mi><mml:mo>=</mml:mo><mml:mn>.026</mml:mn></mml:mrow></mml:math></inline-formula>, partial <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="M29"><mml:mrow><mml:msup><mml:mi>&#x03B7;</mml:mi><mml:mn>2</mml:mn></mml:msup><mml:mo>=</mml:mo><mml:mn>.053</mml:mn></mml:mrow></mml:math></inline-formula> (<xref rid="F2" ref-type="fig">Figure 2</xref>). While pre-consultation syllabus scores did not differ between client types (FFP <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="M30"><mml:mrow><mml:mi>M</mml:mi><mml:mo>=</mml:mo><mml:mn>.80</mml:mn></mml:mrow></mml:math></inline-formula>, <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="M31"><mml:mrow><mml:mi>S</mml:mi><mml:mi>D</mml:mi><mml:mo>=</mml:mo><mml:mn>.09</mml:mn></mml:mrow></mml:math></inline-formula>; faculty <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="M32"><mml:mrow><mml:mi>M</mml:mi><mml:mo>=</mml:mo><mml:mn>.81</mml:mn></mml:mrow></mml:math></inline-formula>, <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="M33"><mml:mrow><mml:mi>S</mml:mi><mml:mi>D</mml:mi><mml:mo>=</mml:mo><mml:mn>.10</mml:mn></mml:mrow></mml:math></inline-formula>), FFP clients received higher post-consultation scores 
(<inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="M34"><mml:mrow><mml:mi>M</mml:mi><mml:mo>=</mml:mo><mml:mn>.90</mml:mn></mml:mrow></mml:math></inline-formula>, <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="M35"><mml:mrow><mml:mi>S</mml:mi><mml:mi>D</mml:mi><mml:mo>=</mml:mo><mml:mn>.07</mml:mn></mml:mrow></mml:math></inline-formula>) than faculty clients (<inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="M36"><mml:mrow><mml:mi>M</mml:mi><mml:mo>=</mml:mo><mml:mn>.87</mml:mn></mml:mrow></mml:math></inline-formula>, <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="M37"><mml:mrow><mml:mi>S</mml:mi><mml:mi>D</mml:mi><mml:mo>=</mml:mo><mml:mn>.08</mml:mn></mml:mrow></mml:math></inline-formula>). While total syllabus scores improved for both types of clients, FFP clients improved more than faculty (10 vs. 6 percentage points, respectively). When calculated as a relative growth rate,
<disp-formula id="FD1">
<mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="block" id="M38">
<mml:mrow><mml:mfenced><mml:mrow><mml:mi>F</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mi>I</mml:mi></mml:mrow></mml:mfenced><mml:mo>&#x00F7;</mml:mo><mml:mfenced><mml:mrow><mml:mi>M</mml:mi><mml:mi>a</mml:mi><mml:mi>x</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mi>I</mml:mi></mml:mrow></mml:mfenced></mml:mrow></mml:math>
</disp-formula>
where <italic>F</italic> = final syllabus score, <italic>I</italic> = initial syllabus score, and <italic>Max</italic> = maximum possible syllabus score, FFP and faculty clients demonstrated 50% and 31% gains, respectively, relative to possible growth. Faculty clients&#x2019; growth was driven primarily by categories related to assessments and policies (<xref rid="T4" ref-type="table">Table 4</xref>), whereas FFP clients&#x2019; scores increased in almost every category (<xref rid="T5" ref-type="table">Table 5</xref>).</p>
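As an illustrative sketch (not part of the original analysis), the relative growth rate defined above can be reproduced from the reported mean scores, under the assumption that scores are expressed as proportions of the maximum possible score (i.e., <italic>Max</italic> = 1.0):

```python
def relative_growth(initial, final, maximum=1.0):
    """Gain achieved relative to the maximum possible gain: (F - I) / (Max - I)."""
    return (final - initial) / (maximum - initial)

# Faculty clients: pre-consultation M = .81, post-consultation M = .87
faculty_gain = relative_growth(0.81, 0.87)   # roughly 31% of possible growth

# FFP clients: pre-consultation M = .80, post-consultation M = .90
ffp_gain = relative_growth(0.80, 0.90)       # 50% of possible growth

print(round(faculty_gain, 2), round(ffp_gain, 2))
```

This check simply confirms that the 31% and 50% figures reported in the text follow from the group means and the assumed maximum score of 1.0.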
<fig id="F2" position="anchor">
<label>Figure 2.</label>
<caption>
<p id="P78">Faculty and FFP Clients&#x2019; Mean Syllabus Pre- and Post-Consultation Scores (error bars represent 95% confidence intervals)</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="tia.926-f0002.jpg"/>
</fig>
</sec>
<sec id="S12">
<title>RQ3: To what extent does participation in a CTL C&amp;SD seminar, prior to a CTL consultation, influence:
<list list-type="alpha-lower" id="L4">
<list-item><p id="P45">instructors&#x2019; C&amp;SD practices, as documented in their pre-consultation syllabi?</p></list-item>
<list-item><p id="P46">the impact of a consultation, as documented by pre/post changes in syllabi?</p></list-item>
</list></title>
<p id="P47">Seminar attendance prior to a consultation did not impact clients&#x2019; initial syllabus scores. The two-way ANOVA testing the influence of seminar attendance on clients&#x2019; pre-consultation syllabus scores found no significant interaction between prior attendance at a C&amp;SD seminar and client type (faculty, FFP) on pre-consultation scores <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="M39"><mml:mrow><mml:mi>F</mml:mi><mml:mfenced><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>90</mml:mn></mml:mrow></mml:mfenced><mml:mo>=</mml:mo><mml:mn>.23</mml:mn></mml:mrow></mml:math></inline-formula>, <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="M40"><mml:mrow><mml:mi>p</mml:mi><mml:mo>=</mml:mo><mml:mn>.63</mml:mn></mml:mrow></mml:math></inline-formula> (<xref rid="F3" ref-type="fig">Figure 3</xref>). Additionally, there were no significant main effects for either prior seminar attendance or client type on pre-consultation syllabus scores. Similarly, as reported above, prior seminar attendance did not explain the clients&#x2019; changes in syllabus scores, either within or across client types (i.e., no statistically significant main effects or two- or three-way interactions including seminar condition).</p>
<fig id="F3" position="anchor">
<label>Figure 3.</label>
<caption>
<p id="P79">Faculty and FFP Clients&#x2019; Mean Syllabus Pre- and Post-Consultation Scores and Seminar Attendance (error bars represent 95% confidence intervals)</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="tia.926-f0003.jpg"/>
</fig>
</sec>
</sec>
<sec id="S13">
<title>Discussion</title>
<p id="P48">Our study directly investigated impacts of CTL services on current and future faculty&#x2019;s C&amp;SD practices, as documented in their syllabi before and after a C&amp;SD consultation. Below, we discuss the implications of our data and assessment approach for educational developers. We also discuss alternative data sources for assessing CTL impacts on instructional practices.</p>
<sec id="S14">
<title>Measuring Impacts of CTL Consultations on C&amp;SD</title>
<sec id="S15">
<title>What Changed After a Consultation?</title>
<p id="P49">Following a CTL consultation, both faculty and FFP clients demonstrated significant changes in C&amp;SD practices (<xref rid="T4" ref-type="table">Tables 4</xref> and <xref rid="T5" ref-type="table">5</xref>), as measured by our syllabus rubric (<xref ref-type="app" rid="APP1">Appendix A</xref>). Faculty and FFP clients&#x2019; syllabi scored 31% and 50% higher after a consultation, respectively, relative to the ceiling of possible improvement, with FFP clients demonstrating significantly greater gains. We observed large effect sizes, suggesting changes of 0.73 and 1.04 standard deviations in faculty and FFP clients&#x2019; syllabi, respectively. Both types of clients&#x2019; syllabi changed significantly regarding assessments, assessment and grading policies, and other course policies. Following consultations, instructors included more assessment details (e.g., format, deliverables, timelines) and provided greater clarity regarding expectations and evaluation criteria. Furthermore, in post-consultation syllabi, instructors were more likely to provide a rationale for their policies. FFP clients&#x2019; course descriptions and learning objectives also changed significantly. After consultations, course descriptions contained more information on the types of instructional methods students might experience and how the course might contribute to student development, their discipline, and/or their future career. Learning objectives also became more learner-centered and measurable and represented a broader range of cognitive skills (e.g., application, synthesis, and evaluation, rather than recall or comprehension alone).</p>
</sec>
<sec id="S16">
<title>Is the Consultation the Cause of the Observed Changes?</title>
<p id="P50">Together, our data from controls and comparison groups suggest that CTL consultations on C&amp;SD can directly and positively impact instructors&#x2019; C&amp;SD practices. Specifically, our results suggest that observed client gains are not likely explained by ambient syllabus drift across semesters or by previous participation in a CTL seminar on C&amp;SD.</p>
<p id="P51">Does the magnitude of change observed in clients&#x2019; syllabi differ from that in syllabi of faculty who are not clients? Yes. Unfortunately, our comparison group for ambient syllabus drift was not large enough for a rigorous statistical analysis. However, non-clients&#x2019; syllabi appear to exhibit little change across semesters compared to faculty clients&#x2019; syllabi before and after a consultation (<xref rid="F1" ref-type="fig">Figure 1</xref>). It is unlikely that the observed magnitude of changes in faculty clients&#x2019; syllabi (<inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" display="inline" id="M41"><mml:mrow><mml:mi>S</mml:mi><mml:mi>D</mml:mi><mml:mo>=</mml:mo><mml:mn>0.73</mml:mn></mml:mrow></mml:math></inline-formula>) are due to independent, ambient instructor revisions alone, even if they are a self-selected group.</p>
<p id="P52">Is the improvement observed in clients&#x2019; syllabi after a consultation caused by the prior participation in related CTL programs? No. Prior attendance at a CTL seminar on C&amp;SD did not explain variation in pre-consultation syllabi or changes in syllabi.</p>
</sec>
<sec id="S17">
<title>Do Our Data Suggest That CTL Seminars Do Not Impact Instructors?</title>
<p id="P53">No. Our data only suggest that a stand-alone C&amp;SD seminar may not directly translate to syllabus impacts, especially after a delay. Our CTL&#x2019;s stand-alone 60&#x2013;90 minute C&amp;SD seminars are not designed to develop deep mastery but to introduce basic principles and (hopefully) lay a foundation for subsequent consultations. Seminars combine didactic and hands-on instruction. Even after active learning exercises highlighting effective practices regarding learning objectives, assessments, alignment, policies, and inclusive climate, seminar participants may experience knowledge gains that do not transfer to syllabus creation without additional interventions, like those in a C&amp;SD consultation or institute. Unlike participants in course design institutes (<xref rid="R26" ref-type="bibr">Palmer et al., 2016</xref>) and consultations, seminar participants are not necessarily concurrently working on their C&amp;SD, do not receive feedback on their syllabi, and do not collaboratively workshop C&amp;SD elements with a trained consultant. Moreover, <xref rid="R26" ref-type="bibr">Palmer et al. (2016)</xref> collected syllabi generated by participants at the end of their week-long program. Our clients often (re)design syllabi and request a consultation more than one week after a seminar. Given these two differences between C&amp;SD seminars and institutes, application of principles from seminars represents non-trivial transfer of learning. Based on this article&#x2019;s results, we plan to directly measure instructor learning gains regarding C&amp;SD knowledge and skills in future iterations of C&amp;SD seminars, via authentic pre/post assessments, to better understand their impacts.</p>
</sec>
<sec id="S18">
<title>Why Did Syllabi Change More for FFP Than for Faculty Clients?</title>
<p id="P54">To our knowledge, this empirical study is the first to compare CTL consultation outcomes between current and future faculty. Two conspicuous differences between the service models are the most likely explanations for the observed differences (<xref rid="F2" ref-type="fig">Figure 2</xref>). First, FFP consultations are more scaffolded, leveraging a rubric to provide feedback. FFP clients must then use this rubric-driven feedback to iterate upon drafts until they meet minimum program requirements. The desire to complete program requirements may strongly motivate FFP clients. In contrast, faculty consultations do not necessarily leverage a rubric or involve iterating on drafts based on consultants&#x2019; feedback. Faculty may demonstrate less change due to differing time constraints, priorities, or receptivity to adopting evidence-based practices. However, we hesitate to generalize regarding such differences among faculty and FFP clients without more data. Second, FFP consultations tend to focus exclusively on C&amp;SD, whereas faculty consultations do not and likely devote less time to C&amp;SD. One or both of these factors may drive the observed differences in consultation impacts on clients&#x2019; syllabi. Additionally, for various reasons, some faculty clients have limited agency to alter course descriptions or learning objectives when they inherit a teaching assignment, possibly explaining why syllabi changed less for faculty than for FFP clients for these two syllabus elements.</p>
</sec>
<sec id="S19">
<title>Limitations</title>
<p id="P55">Our study has several limitations. First, we could not identify a viable comparison group for FFP clients. Second, for faculty, given the young age of our institution&#x2019;s syllabus registry, we identified only 10 courses meeting matching criteria that posted syllabi in consecutive semesters. Ideally, we would have included syllabus drift matches for all 32 faculty clients. Given the observed effect sizes for Research Question 1 and the observational data from the syllabus drift comparison group, we believe this limitation does not invalidate our conclusions. Third, because faculty rank was the last of four criteria in our matching protocol, our syllabus registry samples overrepresent senior faculty compared to our client sample (<xref rid="T2" ref-type="table">Table 2</xref>). In our opinion, this makes our exploration of Research Question 1 a conservative test. Although it is likely that our faculty client sample was less experienced than the registry sample, clients&#x2019; syllabi still demonstrated higher initial quality and improvement over time. Finally, as discussed above, documentation of instructional practices in syllabi does not guarantee implementation fidelity (<xref rid="R3" ref-type="bibr">Campbell et al., 2019</xref>). Furthermore, syllabi may not capture certain features of evidence-based instructional practices (e.g., active learning) that may be influenced by CTL consultations, especially regarding classroom teaching strategies. In future studies, another data source, such as an observation of teaching or an alternative teaching artifact (see below), could help mitigate this limitation.</p>
<p id="P56">Note that our study did not attempt to measure classroom teaching practices. Instead, we intentionally focused on C&amp;SD as documented in syllabi to inform our services focused on C&amp;SD and syllabi (as deliverables). Despite these limitations, we believe our data have implications for educational developers regarding outcomes assessments, resource allocation, and consulting practices.</p>
</sec>
</sec>
<sec id="S20">
<title>Using Formative Assessment Data for Iterative Refinement</title>
<p id="P57">Our approach to formative assessment is motivated by our collective desire to iteratively refine our programs and services, informed by data from direct measures of outcomes. The data collected are not used to evaluate the performance of individual consultants. Instead, the primary purpose is to foster a team-based growth mindset and a data-informed, collaborative search for areas for improvement. To establish and maintain buy-in as a team, CTL leadership sought consensus and staff input at every step, from rubric development to study design to data collection, analysis, and interpretation. Both rubric norming and data debrief sessions led to rich exchanges of consulting techniques and experiences as well as discussions about the implementation of service models.</p>
<p id="P58">To illustrate, the data from this study instigated deliberations on several changes to one or both service models. Foremost, discussions of the study rubric led to changes in how FFP consultations are scaffolded. The revised rubric more clearly aligns with our evidence-based objectives and priorities for clients&#x2019; development. We made similar minor changes to our generative interview protocol often used in FFP consultations. The limitations of our data source (syllabi) inspired revisions to the course design reflection memo that is part of the required FFP C&amp;SD project. We clarified prompt questions to foster clients&#x2019; reflections on instructional methods and their alignment with learning objectives, two important components of course design that are rarely captured fully in a syllabus. In future formative assessments, this additional deliverable for FFP clients can complement the data available in syllabi. Furthermore, we reevaluated when to provide the rubric during the FFP service model to best support clients and mirror what we recommend to instructors in general (i.e., provide the rubric in advance, rather than after receiving the client&#x2019;s first draft, as standard practice).</p>
<p id="P59">The difference in faculty and FFP clients&#x2019; gains suggests that components of the FFP service model may be worth incorporating into faculty consultations. In particular, we discussed using the rubric or other resources to scaffold faculty consultations. Time constraints and/or competing priorities often prevent holistic feedback and iteration on faculty syllabi. Previously, consultants varied in what resources they used to support faculty clients. Some employed recommendations from our Faculty Senate regarding suggested syllabus components (<xref rid="R10" ref-type="bibr">Eckhardt, 2017</xref>). None reported leveraging the FFP rubric with faculty. Increasing the scaffolding in faculty consultations may enhance faculty clients&#x2019; gains. Alternatively, increasing the focus of faculty consultations on particular growth areas highlighted in our data (e.g., assessments) may be advantageous for &#x201C;meeting clients where they are&#x201D; and impacting C&amp;SD, given the constraints discussed above.</p>
<p id="P60">Without this project, these team-wide conversations likely would not have happened, at least not in the same inclusive, systematic, data-driven way. Conspicuous benefits of the data-driven approach to reflective practice presented here include the ability to (a) more precisely measure the impact of a CTL service on instructors&#x2019; course design practices; (b) leverage the power of group collaboration to generate data-informed, actionable steps to iteratively enhance a consultation service model; and (c) normalize dialogue across CTL staff members to share and enhance consultation practices and techniques. We acknowledge that this model requires both time and resources and may need to be adjusted to be feasible in other CTL contexts. Nevertheless, we believe the potential benefits merit consideration as one option for formatively evaluating CTL services.</p>
</sec>
<sec id="S21">
<title>Recommendations for CTLs and Educational Developers</title>
<p id="P61">When evaluating CTL services using direct measures of outcomes, it can be difficult to attribute causality without a comparison group. Without comparison groups, outcomes assessments document &#x201C;what is/happens&#x201D; but not &#x201C;what causes.&#x201D; And, as with our FFP client sample, finding a comparison group is non-trivial. Even without a comparison group, pre/post measurements can provide rich, evidence-based fodder for reflection and iterative refinement of CTL practices. However, one must interpret data cautiously regarding causality. Given these challenges, we advise investigating multiple data sources and/or outcomes measures to demonstrate the added value of CTL services.</p>
<p id="P62">What if a CTL has limited resources, including staff with assessment expertise? The approaches reported here would likely provide actionable, formative data with a smaller sample or when only using descriptive statistics or qualitative methods rather than parametric statistical analyses. Using study designs with large samples and inferential statistics to evaluate CTL impacts is rigorous. However, effective, data-driven, reflective practice is not limited to only that particular analytical &#x201C;way of knowing.&#x201D; We recommend starting small with direct outcomes assessments and adopting a formative lens, if possible. CTLs can prioritize efforts by asking two questions: What would you most like to know about client outcomes that you do not already know? And what would you be willing to change in response to those data, if you had them?</p>
</sec>
<sec id="S22">
<title>Future Directions: Leveraging Alternative Teaching Artifacts as Data Sources</title>
<p id="P63">Our formative assessments using syllabus analyses greatly informed our educational development practices regarding C&amp;SD. Consequently, our reflective dialogues also explored other sustainable options for measuring impacts of CTL C&amp;SD services that provide insights unavailable via syllabus analyses.</p>
<p id="P64">Classroom observations can provide systematic, objective data absent from syllabi (<xref rid="R3" ref-type="bibr">Campbell et al., 2019</xref>; <xref rid="R30" ref-type="bibr">Smith et al., 2013</xref>; <xref rid="R31" ref-type="bibr">Stanny et al., 2015</xref>; <xref rid="R34" ref-type="bibr">Wheeler &amp; Bach, 2021</xref>), especially regarding classroom teaching strategies. For instance, observation data can directly measure frequency, duration, and timing of particular teaching strategies (e.g., active learning, inclusive teaching techniques) during synchronous course sessions. Moreover, trained observers can document fine-grained teaching and learning behaviors, including patterns of participation, the nature of instructor-student interactions, how feedback is delivered, student time on task, and more. Observational data are rich and actionable. However, to measure CTL outcomes, especially at scale, observations are a logistically daunting commitment when resources are limited, even with efficient and reliable observation protocols.</p>
<p id="P65">Analyses of other teaching artifacts, above and beyond syllabi, may provide complementary and less resource-intensive alternatives. Collecting assignment prompts and/or grading rubrics from clients before and after participating in CTL services could measure alignment of assessments with learning objectives in syllabi. A similar approach could be taken for analyzing the impacts of CTL services targeting the adoption of multimedia learning principles (<xref rid="R5" ref-type="bibr">Clark &amp; Mayer, 2016</xref>) in the design of online learning modules, instructional videos, or lecture slides. Likewise, one could analyze exam item construction (<xref rid="R27" ref-type="bibr">Parkes &amp; Zimmaro, 2016</xref>; <xref rid="R28" ref-type="bibr">Piontek, 2008</xref>), exam item analytics (<xref rid="R21" ref-type="bibr">Livingston, 2006</xref>), or rubric construction before and after CTL assessment design services. Regarding inclusive teaching, one could measure patterns in the representation of authors, perspectives, or ways of knowing in assigned course readings or materials. For all of the examples above, one could potentially develop evidence-based rubrics for formative outcomes assessment. To our knowledge, no SoED studies have included these data sources in CTL outcomes assessments. We hope future research will explore these possibilities and more.</p>
</sec>
</sec>
<sec id="S23">
<title>Conclusion</title>
<p id="P66">Comprehensive CTL assessment plans should consider evaluating not only outcomes from programs and services but also organizational structure, resource allocation, and infrastructure (<xref rid="R11" ref-type="bibr">Ellis et al., 2020</xref>; <xref rid="R29" ref-type="bibr">POD, 2018</xref>). Furthermore, when strategically planning for CTL outcomes assessments, multiple sources of data from direct measures of outcomes are desirable because each tells a different part of the story and has its own pros and cons. Our study results suggest that (a) direct outcomes assessments based on artifacts of teaching are viable assessment approaches for CTLs; (b) CTL consultations appear to directly and positively impact clients&#x2019; C&amp;SD, as documented in syllabi, above and beyond natural, ambient changes across semesters; and (c) one-off CTL seminars do not conspicuously influence the starting point of C&amp;SD consultations or the magnitude of their impacts. While finding viable comparison groups may be challenging, we argue that periodic, pre/post analyses of teaching artifacts are broadly transferable, sustainable approaches for formative, iterative refinements of CTL services. We hope that future research will build on our results, developing practical approaches to outcomes analyses of teaching artifacts and exploring how best to directly measure other types of CTL impacts.</p>
</sec>
</body>
<back>
<ack id="S24">
<title>Acknowledgments</title>
<p id="P67">The institutional review board (STUDY2016_00000148) approved this research at the university where it was conducted. The authors thank Dr. Marsha Lovett for her feedback that improved our study design and data analyses. We also thank Dr. Alexis Adams and two anonymous reviewers for their thoughtful, specific, actionable, and constructive feedback that helped improve drafts of this manuscript.</p>
</ack>
<bio id="bio1"><title>Biographies</title><p id="P69"><bold>Chad Hershock</bold> is the Director of Evidence-Based Teaching &amp; DEI Initiatives at the Eberly Center for Teaching Excellence and Educational Innovation at Carnegie Mellon University. Prior to 2013, he was an Assistant Director and Coordinator of Science, Health Science, and Instructional Technology Initiatives at the Center for Research on Learning and Teaching at the University of Michigan. His PhD is in Biology from the University of Michigan.</p></bio>
<bio id="bio2"><p id="P70"><bold>Laura Ochs Pottmeyer</bold> is a Data Science Research Associate at the Eberly Center for Teaching Excellence and Educational Innovation at Carnegie Mellon University. Her PhD is in Science Education from the University of Virginia.</p></bio>
<bio id="bio3"><p id="P71"><bold>Jessica Harrell</bold> is a Senior Teaching Consultant at the Eberly Center for Teaching Excellence and Educational Innovation at Carnegie Mellon University. Her PhD is in Rhetoric from Carnegie Mellon University.</p></bio>
<bio id="bio4"><p id="P72"><bold>Sophie le Blanc</bold> is a Senior Teaching Consultant at the Eberly Center for Teaching Excellence and Educational Innovation at Carnegie Mellon University. Her PhD is in Political Science from the University of Delaware.</p></bio>
<bio id="bio5"><p id="P73"><bold>Marisella Rodriguez</bold> is a Senior Consultant at the Center for Teaching and Learning at the University of California, Berkeley. When data from this study was collected, she was a Teaching Consultant at the Eberly Center for Teaching Excellence and Educational Innovation at Carnegie Mellon University. Her PhD is in Political Science from the University of California, Davis.</p></bio>
<bio id="bio6"><p id="P74"><bold>Jacqueline Stimson</bold> is a Senior Teaching Consultant at the Eberly Center for Teaching Excellence and Educational Innovation at Carnegie Mellon University. Her PhD is in Classical Studies from the University of Michigan.</p></bio>
<bio id="bio7"><p id="P75"><bold>Katharine Phelps Walsh</bold> is a Senior Teaching Consultant at the Eberly Center for Teaching Excellence and Educational Innovation at Carnegie Mellon University. Her PhD is in History from the University of Pittsburgh.</p></bio>
<bio id="bio8"><p id="P76"><bold>Emily Daniels Weiss</bold> is a Senior Teaching Consultant at the Eberly Center for Teaching Excellence and Educational Innovation at Carnegie Mellon University. Her PhD is in Chemistry from Carnegie Mellon University.</p></bio>
<ref-list>
<title>References</title>
<ref id="R1"><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Ambrose</surname>, <given-names>S. A.</given-names></string-name>, <string-name><surname>Bridges</surname>, <given-names>M. W.</given-names></string-name>, <string-name><surname>DiPietro</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Lovett</surname>, <given-names>M. C.</given-names></string-name>, &amp; <string-name><surname>Norman</surname>, <given-names>M. K.</given-names></string-name></person-group> (<year>2010</year>). <source>How learning works: Seven research-based principles for smart teaching.</source> <publisher-name>Jossey-Bass</publisher-name>.</mixed-citation></ref>
<ref id="R2"><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Beach</surname>, <given-names>A. L.</given-names></string-name>, <string-name><surname>Sorcinelli</surname>, <given-names>M. D.</given-names></string-name>, <string-name><surname>Austin</surname>, <given-names>A. E.</given-names></string-name>, &amp; <string-name><surname>Rivard</surname>, <given-names>J. K.</given-names></string-name></person-group> (<year>2016</year>). <source>Faculty development in the age of evidence: Current practices, future imperatives.</source> <publisher-name>Stylus Publishing</publisher-name>.</mixed-citation></ref>
<ref id="R3"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Campbell</surname>, <given-names>C. M.</given-names></string-name>, <string-name><surname>Michel</surname>, <given-names>J. O.</given-names></string-name>, <string-name><surname>Patel</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name><surname>Gelashvili</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2019</year>). <article-title>College teaching from multiple angles: A multi-trait multi-method analysis of college courses.</article-title> <source>Research in Higher Education</source>, <volume>60</volume>(<issue>5</issue>), <fpage>711</fpage>&#x2013;<lpage>735</lpage>. <comment><ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1007/s11162-018-9529-8" xlink:type="simple">https://doi.org/10.1007/s11162-018-9529-8</ext-link></comment></mixed-citation></ref>
<ref id="R4"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Chism</surname>, <given-names>N. V. N.</given-names></string-name>, <string-name><surname>Holley</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Harris</surname>, <given-names>C. J.</given-names></string-name></person-group> (<year>2012</year>). <article-title>Researching the impact of educational development: Basis for informed practice.</article-title> <source>To Improve the Academy</source>, <volume>31</volume>(<issue>1</issue>), <fpage>129</fpage>&#x2013;<lpage>145</lpage>. <comment><ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1002/j.2334-4822.2012.tb00678.x" xlink:type="simple">https://doi.org/10.1002/j.2334-4822.2012.tb00678.x</ext-link></comment></mixed-citation></ref>
<ref id="R5"><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Clark</surname>, <given-names>R. C.</given-names></string-name>, &amp; <string-name><surname>Mayer</surname>, <given-names>R. E.</given-names></string-name></person-group> (<year>2016</year>). <source>e-Learning and the science of instruction: Proven guidelines for consumers and designers of multimedia learning</source> (<edition>4th</edition> ed.). <publisher-name>Wiley</publisher-name>.</mixed-citation></ref>
<ref id="R6"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Cullen</surname>, <given-names>R.</given-names></string-name>, &amp; <string-name><surname>Harris</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2009</year>). <article-title>Assessing learner-centredness through course syllabi.</article-title> <source>Assessment &amp; Evaluation in Higher Education</source>, <volume>34</volume>(<issue>1</issue>), <fpage>115</fpage>&#x2013;<lpage>125</lpage>. <comment><ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1080/02602930801956018" xlink:type="simple">https://doi.org/10.1080/02602930801956018</ext-link></comment></mixed-citation></ref>
<ref id="R7"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Doolittle</surname>, <given-names>P. E.</given-names></string-name>, &amp; <string-name><surname>Siudzinski</surname>, <given-names>R. A.</given-names></string-name></person-group> (<year>2010</year>). <article-title>Recommended syllabus components: What do higher education faculty include in their syllabi?</article-title> <source>Journal on Excellence in College Teaching</source>, <volume>21</volume>(<issue>3</issue>), <fpage>29</fpage>&#x2013;<lpage>61</lpage>.</mixed-citation></ref>
<ref id="R8"><mixed-citation publication-type="book"><collab>Eberly Center</collab>. (<year>2021</year>, <month>December</month> <day>10</day>). <source>Learn about our Future Faculty Program.</source> <publisher-name>Eberly Center for Teaching Excellence and Educational Innovation</publisher-name>. <comment><ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="uri" xlink:href="https://www.cmu.edu/teaching/graduatestudentsupport/futurefacultyprogram.html" xlink:type="simple">https://www.cmu.edu/teaching/graduatestudentsupport/futurefacultyprogram.html</ext-link></comment></mixed-citation></ref>
<ref id="R9"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Ebert-May</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Derting</surname>, <given-names>T. L.</given-names></string-name>, <string-name><surname>Hodder</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Momsen</surname>, <given-names>J. L.</given-names></string-name>, <string-name><surname>Long</surname>, <given-names>T. M.</given-names></string-name>, &amp; <string-name><surname>Jardeleza</surname>, <given-names>S. E.</given-names></string-name></person-group> (<year>2011</year>). <article-title>What we say is not what we do: Effective evaluation of faculty professional development programs.</article-title> <source>BioScience</source>, <volume>61</volume>(<issue>7</issue>), <fpage>550</fpage>&#x2013;<lpage>558</lpage>. <comment><ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1525/bio.2011.61.7.9" xlink:type="simple">https://doi.org/10.1525/bio.2011.61.7.9</ext-link></comment></mixed-citation></ref>
<ref id="R10"><mixed-citation publication-type="confproc"><person-group person-group-type="author"><string-name><surname>Eckhardt</surname>, <given-names>D.</given-names></string-name></person-group> (<year>2017</year>, <conf-date>April 4</conf-date>). <source>Syllabus Best Practices Group, resolutions.</source> <conf-name>Faculty Senate Meeting 9. Details omitted for reviewing.</conf-name></mixed-citation></ref>
<ref id="R11"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Ellis</surname>, <given-names>D. E.</given-names></string-name>, <string-name><surname>Brown</surname>, <given-names>V. M.</given-names></string-name>, &amp; <string-name><surname>Tse</surname>, <given-names>C. T.</given-names></string-name></person-group> (<year>2020</year>). <article-title>Comprehensive assessment for teaching and learning centres: A field-tested planning model.</article-title> <source>International Journal for Academic Development</source>, <volume>25</volume>(<issue>4</issue>), <fpage>337</fpage>&#x2013;<lpage>349</lpage>. <comment><ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1080/1360144X.2020.1786694" xlink:type="simple">https://doi.org/10.1080/1360144X.2020.1786694</ext-link></comment></mixed-citation></ref>
<ref id="R12"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Finelli</surname>, <given-names>C. J.</given-names></string-name>, <string-name><surname>Ott</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Gottfried</surname>, <given-names>A. C.</given-names></string-name>, <string-name><surname>Hershock</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>O&#x2019;Neal</surname>, <given-names>C.</given-names></string-name>, &amp; <string-name><surname>Kaplan</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2008</year>). <article-title>Utilizing instructional consultations to enhance the teaching performance of engineering faculty.</article-title> <source>Journal of Engineering Education</source>, <volume>97</volume>(<issue>4</issue>), <fpage>397</fpage>&#x2013;<lpage>411</lpage>. <comment><ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1002/j.2168-9830.2008.tb00989.x" xlink:type="simple">https://doi.org/10.1002/j.2168-9830.2008.tb00989.x</ext-link></comment></mixed-citation></ref>
<ref id="R13"><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Finelli</surname>, <given-names>C. J.</given-names></string-name>, <string-name><surname>Pinder-Grover</surname>, <given-names>T.</given-names></string-name>, &amp; <string-name><surname>Wright</surname>, <given-names>M. C.</given-names></string-name></person-group> (<year>2011</year>). <chapter-title>Consultations on teaching: Using student feedback for instructional improvement.</chapter-title> In <person-group person-group-type="editor"><string-name><surname>Cook</surname>, <given-names>C. E.</given-names></string-name> &amp; <string-name><surname>Kaplan</surname>, <given-names>M.</given-names></string-name></person-group> (Eds.), <source>Advancing the culture of teaching on campus: How a teaching center can make a difference</source> (pp. <fpage>65</fpage>&#x2013;<lpage>79</lpage>). <publisher-name>Stylus Publishing</publisher-name>.</mixed-citation></ref>
<ref id="R14"><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Fink</surname>, <given-names>L. D.</given-names></string-name></person-group> (<year>2013</year>). <source>Creating significant learning experiences: An integrated approach to designing college courses</source> (Rev. and updated ed.). <publisher-name>Jossey-Bass</publisher-name>.</mixed-citation></ref>
<ref id="R36"><mixed-citation publication-type="book"><person-group person-group-type="editor"><string-name><surname>Haras</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Taylor</surname>, <given-names>S. C.</given-names></string-name>, <string-name><surname>Sorcinelli</surname>, <given-names>M. D.</given-names></string-name>, &amp; <string-name><surname>von Hoene</surname>, <given-names>L.</given-names></string-name></person-group> (Eds.). (<year>2017</year>). <source>Institutional commitment to teaching excellence: Assessing the impacts and outcomes of faculty development.</source> <publisher-name>American Council on Education</publisher-name>.</mixed-citation></ref>
<ref id="R15"><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Hershock</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Cook</surname>, <given-names>C. E.</given-names></string-name>, <string-name><surname>Wright</surname>, <given-names>M. C.</given-names></string-name>, &amp; <string-name><surname>O&#x2019;Neal</surname>, <given-names>C.</given-names></string-name></person-group> (<year>2011</year>). <chapter-title>Action research for instructional improvement.</chapter-title> In <person-group person-group-type="editor"><string-name><surname>Cook</surname>, <given-names>C. E.</given-names></string-name> &amp; <string-name><surname>Kaplan</surname>, <given-names>M.</given-names></string-name></person-group> (Eds.), <source>Advancing the culture of teaching on campus: How a teaching center can make a difference</source> (pp. <fpage>167</fpage>&#x2013;<lpage>182</lpage>). <publisher-name>Stylus Publishing</publisher-name>.</mixed-citation></ref>
<ref id="R16"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Hines</surname>, <given-names>S. R.</given-names></string-name></person-group> (<year>2011</year>). <article-title>How mature teaching and learning centers evaluate their services.</article-title> <source>To Improve the Academy</source>, <volume>30</volume>(<issue>1</issue>), <fpage>277</fpage>&#x2013;<lpage>289</lpage>. <comment><ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1002/j.2334-4822.2011.tb00663.x" xlink:type="simple">https://doi.org/10.1002/j.2334-4822.2011.tb00663.x</ext-link></comment></mixed-citation></ref>
<ref id="R17"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Hines</surname>, <given-names>S. R.</given-names></string-name></person-group> (<year>2015</year>). <article-title>Setting the groundwork for quality faculty development evaluation: A five-step approach.</article-title> <source>Journal of Faculty Development</source>, <volume>29</volume>(<issue>1</issue>), <fpage>5</fpage>&#x2013;<lpage>12</lpage>.</mixed-citation></ref>
<ref id="R18"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Homa</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Hackathorn</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Brown</surname>, <given-names>C. M.</given-names></string-name>, <string-name><surname>Garczynski</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Solomon</surname>, <given-names>E. D.</given-names></string-name>, <string-name><surname>Tennial</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Sanborn</surname>, <given-names>U. A.</given-names></string-name>, &amp; <string-name><surname>Gurung</surname>, <given-names>R. A. R.</given-names></string-name></person-group> (<year>2013</year>). <article-title>An analysis of learning objectives and content coverage in introductory psychology syllabi.</article-title> <source>Teaching of Psychology</source>, <volume>40</volume>(<issue>3</issue>), <fpage>169</fpage>&#x2013;<lpage>174</lpage>. <comment><ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1177/0098628313487456" xlink:type="simple">https://doi.org/10.1177/0098628313487456</ext-link></comment></mixed-citation></ref>
<ref id="R19"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Kreber</surname>, <given-names>C.</given-names></string-name>, &amp; <string-name><surname>Brook</surname>, <given-names>P.</given-names></string-name></person-group> (<year>2001</year>). <article-title>Impact evaluation of educational development programmes.</article-title> <source>International Journal for Academic Development</source>, <volume>6</volume>(<issue>2</issue>), <fpage>96</fpage>&#x2013;<lpage>108</lpage>. <comment><ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1080/13601440110090749" xlink:type="simple">https://doi.org/10.1080/13601440110090749</ext-link></comment></mixed-citation></ref>
<ref id="R20"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Kucsera</surname>, <given-names>J. V.</given-names></string-name>, &amp; <string-name><surname>Svinicki</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2010</year>). <article-title>Rigorous evaluations of faculty development programs.</article-title> <source>The Journal of Faculty Development</source>, <volume>24</volume>(<issue>2</issue>), <fpage>5</fpage>&#x2013;<lpage>18</lpage>.</mixed-citation></ref>
<ref id="R21"><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Livingston</surname>, <given-names>S. A.</given-names></string-name></person-group> (<year>2006</year>). <chapter-title>Item analysis.</chapter-title> In <person-group person-group-type="editor"><string-name><surname>Downing</surname>, <given-names>S. M.</given-names></string-name> &amp; <string-name><surname>Haladyna</surname>, <given-names>T. M.</given-names></string-name></person-group> (Eds.), <source>Handbook of test development</source> (pp. <fpage>421</fpage>&#x2013;<lpage>441</lpage>). <publisher-name>Lawrence Erlbaum Associates</publisher-name>.</mixed-citation></ref>
<ref id="R22"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>McGowan</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Gonzalez</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Stanny</surname>, <given-names>C. J.</given-names></string-name></person-group> (<year>2016</year>). <article-title>What do undergraduate course syllabi say about information literacy?</article-title> <source>portal: Libraries and the Academy</source>, <volume>16</volume>(<issue>3</issue>), <fpage>599</fpage>&#x2013;<lpage>617</lpage>.</mixed-citation></ref>
<ref id="R23"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Meizlish</surname>, <given-names>D. S.</given-names></string-name>, <string-name><surname>Wright</surname>, <given-names>M. C.</given-names></string-name>, <string-name><surname>Howard</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name><surname>Kaplan</surname>, <given-names>M. L.</given-names></string-name></person-group> (<year>2018</year>). <article-title>Measuring the impact of a new faculty program using institutional data.</article-title> <source>International Journal for Academic Development</source>, <volume>23</volume>(<issue>2</issue>), <fpage>72</fpage>&#x2013;<lpage>85</lpage>.</mixed-citation></ref>
<ref id="R24"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Norman</surname>, <given-names>G.</given-names></string-name></person-group> (<year>2010</year>). <article-title>Likert scales, levels of measurement and the &#x201C;laws&#x201D; of statistics.</article-title> <source>Advances in Health Sciences Education</source>, <volume>15</volume>(<issue>5</issue>), <fpage>625</fpage>&#x2013;<lpage>632</lpage>. <comment><ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1007/s10459-010-9222-y" xlink:type="simple">https://doi.org/10.1007/s10459-010-9222-y</ext-link></comment></mixed-citation></ref>
<ref id="R25"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Palmer</surname>, <given-names>M. S.</given-names></string-name>, <string-name><surname>Bach</surname>, <given-names>D. J.</given-names></string-name>, &amp; <string-name><surname>Streifer</surname>, <given-names>A. C.</given-names></string-name></person-group> (<year>2014</year>). <article-title>Measuring the promise: A learning-focused syllabus rubric.</article-title> <source>To Improve the Academy</source>, <volume>33</volume>(<issue>1</issue>), <fpage>14</fpage>&#x2013;<lpage>36</lpage>. <comment><ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1002/tia2.20004" xlink:type="simple">https://doi.org/10.1002/tia2.20004</ext-link></comment></mixed-citation></ref>
<ref id="R26"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Palmer</surname>, <given-names>M. S.</given-names></string-name>, <string-name><surname>Streifer</surname>, <given-names>A. C.</given-names></string-name>, &amp; <string-name><surname>Williams-Duncan</surname>, <given-names>S.</given-names></string-name></person-group> (<year>2016</year>). <article-title>Systematic assessment of a high-impact course design institute.</article-title> <source>To Improve the Academy</source>, <volume>35</volume>(<issue>2</issue>), <fpage>339</fpage>&#x2013;<lpage>361</lpage>. <comment><ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.3998/tia.17063888.0035.203" xlink:type="simple">https://doi.org/10.3998/tia.17063888.0035.203</ext-link></comment></mixed-citation></ref>
<ref id="R27"><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Parkes</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name><surname>Zimmaro</surname>, <given-names>D.</given-names></string-name></person-group> (<year>2016</year>). <source>Learning and assessing with multiple-choice questions in college classrooms.</source> <publisher-name>Routledge</publisher-name>.</mixed-citation></ref>
<ref id="R28"><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Piontek</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2008</year>). <source>Best practices for designing and grading exams (Occasional Paper No. 24).</source> <publisher-name>Center for Research on Learning and Teaching, University of Michigan</publisher-name>.</mixed-citation></ref>
<ref id="R29"><mixed-citation publication-type="web"><collab>POD Network</collab>. (<year>2018</year>). <source>Defining what matters: Guidelines for comprehensive center for teaching and learning (CTL) evaluation.</source> <comment><ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="uri" xlink:href="https://podnetwork.org/content/uploads/POD_CTL_Evaluation_Guidelines__2018_.pdf" xlink:type="simple">https://podnetwork.org/content/uploads/POD_CTL_Evaluation_Guidelines__2018_.pdf</ext-link></comment></mixed-citation></ref>
<ref id="R30"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Smith</surname>, <given-names>M. K.</given-names></string-name>, <string-name><surname>Jones</surname>, <given-names>F. H. M.</given-names></string-name>, <string-name><surname>Gilbert</surname>, <given-names>S. L.</given-names></string-name>, &amp; <string-name><surname>Wieman</surname>, <given-names>C. E.</given-names></string-name></person-group> (<year>2013</year>). <article-title>The classroom observation protocol for undergraduate STEM (COPUS): A new instrument to characterize university STEM classroom practices.</article-title> <source>CBE&#x2014;Life Sciences Education</source>, <volume>12</volume>(<issue>4</issue>), <fpage>618</fpage>&#x2013;<lpage>627</lpage>. <comment><ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1187/cbe.13-08-0154" xlink:type="simple">https://doi.org/10.1187/cbe.13-08-0154</ext-link></comment></mixed-citation></ref>
<ref id="R31"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Stanny</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Gonzalez</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>McGowan</surname>, <given-names>B.</given-names></string-name></person-group> (<year>2015</year>). <article-title>Assessing the culture of teaching and learning through a syllabus review.</article-title> <source>Assessment &amp; Evaluation in Higher Education</source>, <volume>40</volume>(<issue>7</issue>), <fpage>898</fpage>&#x2013;<lpage>913</lpage>. <comment><ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1080/02602938.2014.956684" xlink:type="simple">https://doi.org/10.1080/02602938.2014.956684</ext-link></comment></mixed-citation></ref>
<ref id="R32"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Stes</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Min-Leliveld</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Gijbels</surname>, <given-names>D.</given-names></string-name>, &amp; <string-name><surname>Van Petegem</surname>, <given-names>P.</given-names></string-name></person-group> (<year>2010</year>). <article-title>The impact of instructional development in higher education: The state-of-the-art of the research.</article-title> <source>Educational Research Review</source>, <volume>5</volume>(<issue>1</issue>), <fpage>25</fpage>&#x2013;<lpage>49</lpage>. <comment><ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1016/j.edurev.2009.07.001" xlink:type="simple">https://doi.org/10.1016/j.edurev.2009.07.001</ext-link></comment></mixed-citation></ref>
<ref id="R33"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Tomkin</surname>, <given-names>J. H.</given-names></string-name>, <string-name><surname>Beilstein</surname>, <given-names>S. O.</given-names></string-name>, <string-name><surname>Morphew</surname>, <given-names>J. W.</given-names></string-name>, &amp; <string-name><surname>Herman</surname>, <given-names>G. L.</given-names></string-name></person-group> (<year>2019</year>). <article-title>Evidence that communities of practice are associated with active learning in large STEM lectures.</article-title> <source>International Journal of STEM Education</source>, <volume>6</volume>(<issue>1</issue>), <fpage>1</fpage>&#x2013;<lpage>15</lpage>. <comment><ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1186/s40594-018-0154-z" xlink:type="simple">https://doi.org/10.1186/s40594-018-0154-z</ext-link></comment></mixed-citation></ref>
<ref id="R34"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Wheeler</surname>, <given-names>L. B.</given-names></string-name>, &amp; <string-name><surname>Bach</surname>, <given-names>D.</given-names></string-name></person-group> (<year>2021</year>). <article-title>Understanding the impact of educational development interventions on classroom instruction and student success.</article-title> <source>International Journal for Academic Development</source>, <volume>26</volume>(<issue>1</issue>), <fpage>24</fpage>&#x2013;<lpage>40</lpage>. <comment><ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:href="https://doi.org/10.1080/1360144X.2020.1777555" xlink:type="simple">https://doi.org/10.1080/1360144X.2020.1777555</ext-link></comment></mixed-citation></ref>
<ref id="R35"><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Wiggins</surname>, <given-names>G.</given-names></string-name>, &amp; <string-name><surname>McTighe</surname>, <given-names>J.</given-names></string-name></person-group> (<year>2005</year>). <source>Understanding by design</source> (<edition>2nd</edition> ed.). <publisher-name>Pearson</publisher-name>.</mixed-citation></ref>
</ref-list>
<app-group>
<app id="APP1">
<label>Appendix A:</label><title>Future Faculty Program Course &amp; Syllabus Design Rubric</title>
<table-wrap id="T6" position="anchor" orientation="portrait">
<table frame="hsides" rules="groups">
<colgroup>
<col align="left" valign="middle"/>
<col align="left" valign="middle"/>
<col align="left" valign="middle"/>
<col align="left" valign="middle"/>
<col align="left" valign="middle"/>
</colgroup>
<thead>
<tr>
<th align="left" valign="top">Category</th>
<th align="left" valign="top">Category description</th>
<th colspan="3" align="center" valign="top">Ratings</th>
</tr>
</thead>
<tbody>
<tr>
<td colspan="5" align="left" valign="top"><bold>Course description &#x2013; What will students learn (i.e., knowledge, skills, attitudes, as opposed to topics)? Why will this matter to students? How will the course help students develop as scholars, learners, and future professionals? What will students experience in the course? What are the instructional methods, and how will they support student learning?</bold></td>
</tr>
<tr>
<td align="left" valign="top">Course Description 1 (CD1)</td>
<td align="left" valign="top">Basic information about what students will learn in this course</td>
<td align="left" valign="top">Excellent (3)</td>
<td align="left" valign="top">Needs Improvement (2)</td>
<td align="left" valign="top">Poor (1)</td>
</tr>
<tr>
<td align="left" valign="top">Course Description 2 (CD2)</td>
<td align="left" valign="top">How this learning is significant to student development, their discipline, and/or their future career</td>
<td align="left" valign="top">Excellent (3)</td>
<td align="left" valign="top">Needs Improvement (2)</td>
<td align="left" valign="top">Poor (1)</td>
</tr>
<tr>
<td align="left" valign="top">Course Description 3 (CD3)</td>
<td align="left" valign="top">Brief information about instructional methods and what students can expect to experience in the course</td>
<td align="left" valign="top">Excellent (3)</td>
<td align="left" valign="top">Needs Improvement (2)</td>
<td align="left" valign="top">Poor (1)</td>
</tr>
<tr>
<td colspan="5" align="left" valign="top"><bold>Learning objectives &#x2013; What, specifically, will students be able to do or demonstrate once they&#x2019;ve completed the course?</bold></td>
</tr>
<tr>
<td align="left" valign="top">Learning Objective 1 (LO1)</td>
<td align="left" valign="top">Are student centered</td>
<td align="left" valign="top">All (3)</td>
<td align="left" valign="top">Some (2)</td>
<td align="left" valign="top">None (1)</td>
</tr>
<tr>
<td align="left" valign="top">Learning Objective 2 (LO2)</td>
<td align="left" valign="top">Are measurable</td>
<td align="left" valign="top">All (3)</td>
<td align="left" valign="top">Some (2)</td>
<td align="left" valign="top">None (1)</td>
</tr>
<tr>
<td align="left" valign="top">Learning Objective 3 (LO3)</td>
<td colspan="2" align="left" valign="top">Represent a range of cognitive skills and complexity</td>
<td align="left" valign="top">Present (2)</td>
<td align="left" valign="top">Absent (1)</td>
</tr>
<tr>
<td colspan="5" align="left" valign="top"><bold>Assessments</bold></td>
</tr>
<tr>
<td align="left" valign="top">Assessment 1 (A1)</td>
<td align="left" valign="top">The alignment between assessments and learning goals is clear.</td>
<td align="left" valign="top">Excellent (3)</td>
<td align="left" valign="top">Needs Improvement (2)</td>
<td align="left" valign="top">Poor (1)</td>
</tr>
<tr>
<td align="left" valign="top">Assessment 2 (A2)</td>
<td colspan="2" align="left" valign="top">Opportunities for both formative (e.g., quizzes, homework, in-class exercises, reading questions) and summative assessment (e.g., high-stakes/low-stakes tasks, drafts, scaffolding) are included.</td>
<td align="left" valign="top">Present (2)</td>
<td align="left" valign="top">Absent (1)</td>
</tr>
<tr>
<td align="left" valign="top">Assessment 3 (A3)</td>
<td align="left" valign="top">Information about the timing of the assessments is included.</td>
<td align="left" valign="top">All (3)</td>
<td align="left" valign="top">Some (2)</td>
<td align="left" valign="top">None (1)</td>
</tr>
<tr>
<td align="left" valign="top">Assessment 4 (A4)</td>
<td align="left" valign="top">Expectations for student deliverables (i.e., what students will actually do) are explicit.</td>
<td align="left" valign="top">Excellent (3)</td>
<td align="left" valign="top">Needs Improvement (2)</td>
<td align="left" valign="top">Poor (1)</td>
</tr>
<tr>
<td align="left" valign="top">Assessment 5 (A5)</td>
<td colspan="2" align="left" valign="top">Criteria for evaluation are included.</td>
<td align="left" valign="top">Present (2)</td>
<td align="left" valign="top">Absent (1)</td>
</tr>
<tr>
<td align="left" valign="top">Assessment 6 (A6)<sup><xref rid="tfn1" ref-type="table-fn">+</xref></sup></td>
<td align="left" valign="top">Participation grade, if present, is described such that a student would understand what they need to do in order to achieve full participation credit.</td>
<td align="left" valign="top">Excellent (3)</td>
<td align="left" valign="top">Needs Improvement (2)</td>
<td align="left" valign="top">Poor (1) / N/A</td>
</tr>
<tr>
<td colspan="5" align="left" valign="top"><bold>Assessment and grading policies &#x2013; How will grades be calculated for the course? Will students have the opportunity to drop any scores or submit revised drafts? How does recitation/lab/discussion/etc. factor into grading? Will different types of assignments be graded differently? What are your policies for late work, re-grades, makeups?</bold></td>
</tr>
<tr>
<td align="left" valign="top">Grading Policies 1 (GP1)</td>
<td align="left" valign="top">Grade breakdown is clear and easy to understand.</td>
<td align="left" valign="top">Excellent (3)</td>
<td align="left" valign="top">Needs Improvement (2)</td>
<td align="left" valign="top">Poor (1)</td>
</tr>
<tr>
<td align="left" valign="top">Grading Policies 2 (GP2)</td>
<td align="left" valign="top">Relevant policies including makeups, late work, re-grades, etc. are described.</td>
<td align="left" valign="top">Excellent (3)</td>
<td align="left" valign="top">Needs Improvement (2)</td>
<td align="left" valign="top">Poor (1)</td>
</tr>
<tr>
<td colspan="5" align="left" valign="top"><bold>Other course policies</bold></td>
</tr>
<tr>
<td align="left" valign="top">Other Policies 1 (OP1)</td>
<td align="left" valign="top">Accommodations</td>
<td align="left" valign="top">Excellent (3)</td>
<td align="left" valign="top">Needs Improvement (2)</td>
<td align="left" valign="top">Poor (1)</td>
</tr>
<tr>
<td align="left" valign="top">Other Policies 2 (OP2)</td>
<td colspan="2" align="left" valign="top">Student wellness</td>
<td align="left" valign="top">Present (2)</td>
<td align="left" valign="top">Absent (1)</td>
</tr>
<tr>
<td align="left" valign="top">Other Policies 3 (OP3)</td>
<td align="left" valign="top">Academic integrity</td>
<td align="left" valign="top">Excellent (3)</td>
<td align="left" valign="top">Needs Improvement (2)</td>
<td align="left" valign="top">Poor (1)</td>
</tr>
<tr>
<td align="left" valign="top">Other Policies 4 (OP4)<sup><xref rid="tfn1" ref-type="table-fn">+</xref></sup></td>
<td align="left" valign="top">Other policies: If present, the policy and procedure are clear (e.g., attendance, laptops, email, recording lectures).</td>
<td align="left" valign="top">Excellent (3)</td>
<td align="left" valign="top">Needs Improvement (2)</td>
<td align="left" valign="top">Poor (1) / N/A</td>
</tr>
<tr>
<td align="left" valign="top">Other Policies 5 (OP5)<sup><xref rid="tfn1" ref-type="table-fn">+</xref></sup></td>
<td align="left" valign="top">Other policies: If present, the rationale for each policy is explained.</td>
<td align="left" valign="top">Excellent (3)</td>
<td align="left" valign="top">Needs Improvement (2)</td>
<td align="left" valign="top">Poor (1) / N/A</td>
</tr>
<tr>
<td colspan="5" align="left" valign="top"><bold>Organization</bold></td>
</tr>
<tr>
<td align="left" valign="top">Organization 1 (O1)</td>
<td align="left" valign="top">Is the essential information easy to locate? Are the sections of the syllabus presented in a cohesive order?</td>
<td align="left" valign="top">Excellent (3)</td>
<td align="left" valign="top">Needs Improvement (2)</td>
<td align="left" valign="top">Poor (1)</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn id="tfn1"><p id="P68">+This category is optional and was only scored when applicable.</p>
</fn>
</table-wrap-foot>
</table-wrap>
</app>
</app-group>
</back>
</article>
