Winds of Challenge and Opportunity
In the pursuit of evidence-informed practice for enhancing the student experience, the winds and waves of change in higher education offer both challenges and opportunities. The initial rationale for evidence emanated from demands on institutions to illustrate how learning environments are supported and continue to be enhanced. While measurements linking educational development to the betterment of the student experience have been primarily expressed as numbers of faculty engaged in learning improvement, evidence of a different type became an imperative with the call for decolonizing the curriculum, dismantling systemic racism, and achieving social justice, along with the development of student success frameworks and an increasingly diversified array of teaching and learning contexts. Alternate forms of evidence are now necessary to foreground the diverse voices of students and educators, to acknowledge different lived experiences and perspectives, and to capture reflection on the impact of educational development in an equitable and inclusive manner.
The recent pandemic illustrates the need for universities to discover and document the complex and diverse experiences of faculty, staff, and students in both normal and exceptional times. The sudden move to remote delivery and online learning refocused attention on the student and instructor experience in the virtual classroom. Koh and Kan (2021) described faculty preparedness in using learning management systems and the desire of students for more socio-constructivist learning. E-learning imperatives continue to raise issues around sustaining teacher motivation and performance in the long term (Kulikowski et al., 2022). Online learning also requires instructors to understand learning theories well enough that students feel connected, engaged, and cared for, even at a distance (Ananga, 2020).
These concerns are layered on a culture of financial pressures and accountability measures, which has led to rapid change for higher education (Lemoine et al., 2020). The 30-year trend toward cultures of accountability and new public management in many countries (Hazelkorn et al., 2018) has had a profound impact on educational development practices, as educational developers have been drawn away from ad hoc, inward-facing evidence toward more outward-facing, systematic, strategic purposes in the “age of evidence” (Beach et al., 2016, p. 1). Nonetheless, many educational development centers have been relying on relatively superficial metrics to assess their work (Sorcinelli et al., 2017). We use the term evidencing value to avoid the limitations of instrumental and quantitative-only approaches to evaluation and to foster a more nuanced method of demonstrating the worth of educational development. Evidencing value shifts the focus to professional experience, judgment, and knowledge of the context while calling for a context-sensitive process of systematically using and questioning a mix of evidence, harnessing and triangulating the findings, and acting on them to inform future thinking and actions (Bamber, 2013).
To humanize this work and reveal broader perspectives, evidencing value must shift from reliance on quantitative data toward integrating broader empirical considerations of narrative and other qualitative sources (Robertson et al., 2019). Doing so will allow us to hear multiple perspectives and acknowledge their contexts and lived experiences. The true nature of evidencing amid these contextual changes is difficult to assess because we are living through them. However, through examining patterns of change and emerging practices, we seek to envision and prepare for both the challenges and the opportunities.
Although higher education has experienced a push for accountability, there has also been pushback from authors critiquing managerialism (e.g., Gourlay & Stevenson, 2017; Jamieson & Naidoo, 2004). Nonetheless, educational development is unlikely to cut loose from institutional imperatives, which are often and increasingly mandated by senior management (Bamber, 2020). Evidence will need to demonstrate the value of developers’ efforts to support a whole range of institutionally driven policy agendas, relating, for example, to supporting diverse learners and diverse learning contexts, integrating new technologies, supporting quality assurance, and enriching the institution’s teaching and learning profile. That evidence will carry a double load, supporting both the institution’s reporting to external and other stakeholders and the work of the educational development unit itself. The evidence must also have the power to underpin efforts to understand, enhance, decolonize, and humanize student and staff experiences in higher education. With the changing conditions of educational development (e.g., Sorcinelli et al., 2017), maintaining sight of what is valued in this work is essential. At the same time, “there is a real danger that if we do not identify appropriate assessment frameworks and practices, others will do it for us without necessarily understanding what is meaningful in our context” (Ellis et al., 2020). While calls for more diverse, intentional, and inclusive evidencing practices have increased, practical and usable tools to facilitate the work of evidencing value have not kept pace. Educational developers would benefit from a more structured guide or framework highlighting key dimensions of the evidencing process in today’s dynamic higher education climate.
This article expands on the calls made to broaden concepts of evidencing the value of educational development work and puts forward a framework to help guide this process. The authors advocate for a more intentional, holistic, and humanistic approach to articulating why the work of educational developers matters and present an adaptation of an existing framework to serve as a tool to help achieve this goal. In the sections that follow, we describe the context of the framework and each of its key elements, present a case example, and outline some tensions to consider when evidencing educational development work.
Navigating Changing Seas
We looked for a flexible and context-sensitive evidencing framework to “provide a generic context for action in which some recognizable shaping characteristics are evident but within the shape a wide range of actions is possible” (Saunders, 2000, p. 15). The framework we sought needed to engage communities in the spirit of educational development moving from “done to” to “done for” to “done with” (Naylor, 2020, p. 50) and in partnership with staff and students. It needed to tap the “reservoir of experiential and other knowledge” (Saunders, 2000, p. 7) that a community has about what is being evidenced: not just “data,” but the implicit, intangible knowledge that is often locked within a group (Robertson et al., 2019). It needed to include institutional priorities while respecting educational development values and ways of practicing. It needed to make evidencing value an inclusive endeavor.
To meet these needs, we adapted Saunders’s (2000) RUFDATA framework. It is a meta-evaluative and practical framework that facilitates reflexive questioning of the reasons and purposes, uses, foci, data and evidence, audience, timing, and agency of evidencing educational development work. The attraction of RUFDATA is that it provides parameters for evaluating all types of activity, from programs to research projects, from learning support to teaching enhancement. Critically, it is practice facing, and members of this writing group had experience using RUFDATA in different practice applications, such as for reviewing a two-year learning and teaching development course attended by postsecondary instructors and for assessing the outcomes of a large-scale national project. These experiences demonstrated that the framework offered non-expert evaluators a flexible tool with a focus on inclusivity (capturing the views of multiple stakeholders) and proportionality (pragmatically, what level of resource is appropriate and available for this evaluation?).
The next section gives a short explanation of each of the RUFDATA elements, followed by a short case example using the framework in an educational development function.
R: Reasons and Purposes
The first step in planning the evidencing process is to articulate the reasons for and purposes of the evidence: Why are we gathering and presenting this information? Do we aim to meet accountability requirements, to improve practices that enhance student learning, to leverage resources, to impress stakeholders with demonstrable impact and the value of educational development, or to accomplish a mix of these goals? Being clear about purpose fosters clarity in deciding on the other RUFDATA elements. If, for example, we are evidencing for accountability purposes, then we will be looking for data with an audit trail. If we aim to evidence teaching enhancement, we might purposefully collect “value creation stories” (Wenger et al., 2011, p. 33) to illuminate individual learning.
In this “age of evidence” (Beach et al., 2016, p. 1), purposeful evidencing will be our watchword. Educational developers will have real reasons to focus on evidence that is not just nice to collect but that supports value-for-money reporting, demonstrates impact and change, and shows alignment with institutional strategies and senior management agendas.
U: Uses
Once we have articulated the reasons and purpose of the evidence, we need to think carefully about how we use the evidence that we gather. Is it for improving educational development practices, increasing credibility and visibility with stakeholders, articulating the value added, justifying the resources used, assessing enhancements to student learning, or a combination thereof?
Launching and sustaining teaching and learning initiatives requires significant time and resources, making it imperative to know to what extent these initiatives are successful, require further improvement, or need to be discontinued (Kolomitro & Anstey, 2017). Sutherland and Hall (2018) similarly argued that educational developers must demonstrate that their work is “meaningful, valuable, worth the time and effort invested in it” (p. 69). Evidence can also inform future thinking and planning and leverage resources for continual advocacy, promotion, and visibility in support of enhancing teaching and learning processes and experiences. In a context of funding instability, many educational development units question where they stand on the scale of worthiness. Gathering evidence systematically is intended to generate better knowledge of practice that enhances student learning over time, because it can describe the short- and long-term difference a program or initiative intends to make. It also enables the collection of evidence needed to establish the need for a particular program or initiative. Embedding this process into routine unit operations enables educational developers to inform planning and decision-making while anticipating the needs of the institution and seeking to enhance student learning.
F: Foci
Considering the foci for our evaluation involves deciding on specific activities and specific outcomes, including less tangible outcomes, such as cultivating a sense of belonging and well-being, capturing the cross-pollination of ideas, and developing integrated networks.
Defining the foci requires specifying the expected pathway between the activities offered and the change expected; this pathway is called the theory of change (Weiss, 1995). A theory of change explains how activities are understood to produce a series of results that contribute to achieving the final intended results, articulating not just what but also why and how the change might happen in a particular context. The theory of change pathway specifies the strategies or interventions (e.g., workshops and funding), the immediate outputs (e.g., participant numbers, seeing student-centered approaches as relevant), likely short- or medium-term outcomes (e.g., changes in teaching approaches), and ultimate long-term impacts (e.g., improved student learning).
Examples of evidencing the pathway include Condon et al.’s (2016) mixed-methods study across two campuses, which connected faculty development with changes in course materials, student learning, and institutional cultures. Similarly, Dawson et al. (2014) linked instructional skills workshops with increased student-centered approaches to teaching. The Defining What Matters guide for educational development centers (POD, 2018) encourages collecting a range of evidence, articulating short- and long-term indicators, and contextualizing our approach to evidencing value.
D: Data and Evidence
With the diverse and evolving array of foci, the types of data being collected (and, dare we say, valued) have similarly expanded. Pushing beyond headcounts and satisfaction feedback sheets (Groen et al., 2018), varied models and frameworks have emerged over the past decade to better track meaningful change in instructional practices, student learning, and the influence of educational development on institutional cultures of teaching and learning (e.g., Amundsen & Wilson, 2012; Chalmers & Gardiner, 2015; Hines, 2017). This growth in scholarly work and the collection of impact indicators has inspired educational developers to examine changes in participant conceptualizations and practices, as well as how these changes subsequently affect student learning. Necessarily, this expansion has required different data sources, collection instruments, and methods than workshop satisfaction scores. Methods such as ethnography, narrative, and network analysis have seen greater use and acceptance (Gibbs, 2013; Stewart, 2014). In addition, a greater diversity of informants is being included, and voice and perspective are increasingly important parts of the evaluation message (Bélanger et al., 2011).
While literature proposing and discussing diverse types of data has increased over the last decade, it has yet to gain real momentum in educational development practice. Kolomitro and Anstey (2017) highlighted that centers still focus far more on data associated with participant attendance and satisfaction than on change in teaching practice, impact on student learning, or influence on institutional culture. Systematic and comprehensive evaluation processes with multiple and diverse indicators face barriers, such as a lack of capacity and time and challenges in adapting existing evaluative frameworks to specific educational development contexts (Hines, 2017).
A: Audience
The intended audience may include senior administration, faculty, students, community partners, accreditation organizations, or a combination thereof, but having a clear sense of that audience and how they will receive and interpret the evidence is essential. With whom are we communicating, and why? Data collection and subsequent reporting have to respond flexibly to shifting audiences and their needs. Evidence collected for one particular audience may also serve multiple audiences. Communications framed for specific audiences set our context and tell our story so that it is relatable to others.
Certain perspectives will resonate with specific audiences, so being strategic about messaging is essential. While communications about the value of educational development programs must sometimes reach higher administration to renew or secure funding, the intent may also be to entice faculty to engage. This will not always require collecting different data, but it will require being strategic in how we use data.
T: Timing
Given common reporting practices associated with accountability-driven or managerial notions of evaluation, educational development has long featured episodic assessment practices. However, with the aforementioned shifts in the purpose and use of evidencing in educational development toward higher-impact and enhancement-driven data, more longitudinal processes are being integrated into practice, focusing on sustained data collection, analysis, and action toward continuous improvement. By building the systematic collection of data into their habitual actions, educational developers have begun to transform snapshots of practice (often taken every few years) into motion pictures of impact that serve both the growing demands of reporting and the increased interest in processes of continuous improvement.
Proponents of sustained evidencing, such as Hines (2017), propose that cyclical evaluation processes consisting of the regular analysis of evaluation capacity, curriculum conceptualization, evaluation planning, and plan implementation be built into the workflow of educational development practice. Doing so helps create greater ownership of evaluative processes embedded in day-to-day work and gradually makes the collection of evidential data more seamless and efficient. It should be noted that sustained evidencing is often difficult to implement in the current climate, as the nature of the work and stipulated priorities are increasingly in flux (Bamber, 2020).
A: Agency
Agency outlines who will conduct the evaluation, collect the evidence, analyze it, and communicate findings. This could be the educational development unit or other stakeholders, such as faculty, institutional collaborators, community partners, students, an institutional quality assurance unit, or an administrator. The rise of casualized rather than permanent positions reveals the complexities of individual agency in evidencing impact, as projects initiated by one educational developer may disappear when that person leaves.
Institutionally, the complexities of who collects and owns the evidence are intensified by the changing nature of higher education. Agency is a power issue: those who own institutional evidence, such as student survey data, are likely to control the use of that evidence. In institutions where quality assurance may fall to units and departments themselves, faculty and educational developers may have more autonomy to evidence impact of initiatives, programs, and projects that are meaningful to them. Conversely, and increasingly, centralized quality assurance may become more standardized and less departmentally managed.
In addition, educational development centers are often hubs for collaboration within institutions (Sutherland, 2018), with growing institutional and community partnerships (Bamber, 2020). This adds to the complexities of evidencing: partners may wish to collect different information, which can yield richer data but can also dilute opportunities to create cohesive and focused evidence. The advantages of collaborative evidencing include integrating data from a multiplicity of voices and sources in order to reach a more diverse audience and move the work of the partnership in desirable ways. Collaborations that succeed in this may also lead to other productive partnerships and to greater expertise in data collection (Beach et al., 2016), resulting in educational development impacts being communicated more widely. Collaborative evidencing may also prompt educational development units to work to retain the distinct identities, voices, and values they deem important in the evidencing process.
An Example in Practice
Having considered the different elements of the RUFDATA framework, what does this look like in practice? This section provides a case example of using RUFDATA to evaluate a common educational development activity—a formal learning and teaching (L&T) course. The case example illustrates how RUFDATA was used to plan and implement the longitudinal evaluation of an accredited L&T course that was an institutional requirement for all probationary professors. The workload of evidencing was reduced by building evaluative processes into course administration.
The RUFDATA-guided evaluation aimed to assess the effectiveness of the L&T course’s theory of change—that the course would help staff develop as reflective practitioners who gain confidence from their knowledge of L&T and would gradually contribute to a culture that is more student focused and attuned to issues of equity, diversity, inclusion, and decolonization in their subject areas. Change was anticipated at the individual level and, over time, at the discipline and then the institutional level, as a critical mass of staff who had been through the course influenced institutional cultures. Effects were expected to vary from discipline to discipline. In planning the evidencing process, each element of the theory of change was linked to a specific data source. The ensuing planning grid is summarized in Table 1.
Table 1. Planning grid for the L&T course evaluation

| RUFDATA element | Evaluation plan |
|---|---|
| Reasons and purposes of the evaluation | |
| Uses | |
| Foci | |
| Data and evidence | For changes in conceptions and practices: … For subgroup analysis: … |
| Audience | |
| Timing | |
| Agency | Evaluation done by internal educational developers, supported by the reflections of course participants |
The case example demonstrates the application of RUFDATA to a specific educational development function that is common in educational development centers. It illustrates a move away from mere tallies of participation and feedback forms as metrics of success toward examining a suite of reflections and experiences to determine whether, what, and how transformation has occurred. This is achieved by foregrounding different voices and perspectives in a more inclusive manner, thereby helping to triangulate findings and provide a richer understanding. The focus on reflection, deliberation, and discussion with participants, their colleagues, and, eventually, their students helps to humanize the experience, cultivate a sense of community, cross-pollinate ideas, and make the success of the initiative a collective objective.
This article is interested in the waves and winds of change and how these changes affect both the nature of educational development and how we evidence the value of this work. We now use the RUFDATA headings to focus on key aspects of evidencing value that have been affected by the changing higher education landscape and ensuing educational development practices. Table 2 uses the elements of the RUFDATA framework to note how each aspect of the evidencing process is changing as a result of changes in the higher education landscape and how this might affect educational development work. The final column suggests implications for how we are likely to evidence educational development as a result of these changes.
Table 2. The changing higher education landscape and emerging practices in evidencing educational development

| Aspect of the evidencing process | The changing higher education landscape | Impact on educational development of the changing higher education landscape | Emerging practices in evidencing educational development |
|---|---|---|---|
| Reasons and Purposes of the evidence (e.g., goals of accountability or enhancement) | Increased accountability management and also resistance; need to understand diversity of student experiences | Funding-driven priorities; initiative-specific funding; balancing enhancement and accountability in advocacy, planning, and budgeting; supporting new trends as institutional priorities | More reporting and evidencing of contribution, connections, impact, and change; value-for-money reporting; digging into data for more nuanced understandings |
| Uses of the evidence (e.g., strategic planning, improved practice, advocacy, justifying budget) | Increased emphasis on quality assurance in program approvals, accreditation reviews, and government funding agreements; institutional strategic planning with ongoing monitoring, dashboards, and reporting on progress; higher education as marketplace, with public promotion and rankings | Progress toward strategic milestones focusing on program choice and frequent reporting | Reuse and repurposing toward metrics that count for strategic milestones, dashboards, public promotion, or justification of budget |
| Foci of the evidencing (activities, outcomes, and how activities link to outcomes) (e.g., do workshops lead to more student-centered approaches to teaching?) | Responding to societal priorities including reconciliation and shifting demographics; shifts in how we teach and how students learn; shifting definitions of what and how to assess: investment = change; ranking metrics of student engagement | Expanded areas of expertise expected (scholarship of teaching and learning [SoTL], curriculum development, internationalization, Indigenization, blended learning, undergraduate research, securing and allocating resources, etc.); designing activities toward specific change and outcomes | Showing how specific funded activities contribute to specific outcomes (theory of change); attendance/participation is no longer enough; articulating intangible outcomes (being an incubator, social network growth, identity) |
| Data & Evidence collected, analyzed, and reported (e.g., qualitative [case studies, narrative] and quantitative [ratings, patterns]) | Broadened multiple ways of knowing within society and within institutions and disciplines; multiple data forms: numbers plus qualitative for human impact; a focus on highlights to share publicly | Requests from senior leaders or marketing; multiple ways of knowing and voices in co-creating and advocacy; doubled reporting expectations | Expanded ways of knowing and voices in reporting; blended evidencing: numbers plus qualitative; evidencing intangibles: social network analysis and narratives |
| Audience who will receive results (e.g., senior management, press, students) | Rise of strategic institutional communication channels; political and social accountabilities; public data and league tables | Communicating up to senior leaders and across with students, faculty, other academic services, and communities | Audience-focused data; reports to multiple stakeholders; political awareness |
| Timing of the collection and reporting (e.g., early evidencing or implementation) | Continuous quality monitoring at multiple levels; fluctuating institutional priorities; responding to shifting societal and government priorities; responding to viral stories and fast sound bites | ASAP requests for educational development interventions and pivoting resources; quick requests for sound bites and news-friendly success stories | Just-in-time evidencing and reporting, plus systematic and embedded evidencing from the start, planning for the year(s) ahead; longitudinal data |
| Agency of who conducts (e.g., faculty participants, educational development collaborators) | Quality and evidencing is everyone’s role; casualization of workforce rather than permanent roles; collaborative multi-unit and multi-stakeholder initiatives; partners across and from beyond the institution; expanded voices | Expanded collaborations and more partners; joint reporting and evidencing; decolonizing our own practices; (re)centering voice; casualization of roles | Change in evidencing from one person’s role to developing everyone’s expertise in data planning, gathering, analysis, and communication; working collaboratively across university departments and with multiple voices |
Tensions in Evidencing Educational Development
Several tensions emerged as we mapped the challenges of evidencing onto the RUFDATA framework. There is a need to reframe the concept of evidence to shift institutional perspectives from power to people. Evidence, as academics currently use the term, carries value as something provable, tangible, and truthful. These attributes give it power within institutional structures. However, these same institutions are navigating how to humanize education (Hartman, 2020) with the call to embrace all our peoples in their multiplicity. A humanizing pedagogy focuses on the whole person and addresses the well-being of individuals through demonstration of compassion and shared lived experiences (del Carmen Salazar, 2013). Gathering evidence to demonstrate the impact of educational development means a shift in our thinking about the nature of evidence, its many voices, and its ultimate purpose—when examining both common ongoing educational development activities such as the one in the case example above and one-off strategic initiatives.
This shift surfaces several intertwined and synergistic tensions that can collectively serve to reshape our evidencing work. Five particular tensions arose from and across specific aspects of the RUFDATA framework as outlined in Table 2:
Adopting a philosophical standpoint on evidence (arose in the analysis of Reasons and purposes, Foci);
Embracing all voices and reflecting their unique evidence (in Foci, Data and evidence, Agency);
Anticipating and prioritizing the need for evidence (in Reasons and purposes, Uses, Foci, Audience, Timing);
Facilitating evidence within a theory of change framework (in Uses, Data and evidence, Timing, Agency); and
Becoming a catalyst to inspire community and harmonize evidence (in Uses, Agency).
A. Adopting a philosophical standpoint on evidence
There is a philosophical tension between collecting evidence that illustrates value as defined by institutional convention and collecting evidence that demonstrates personal growth and freedom to learn. Institutional accountability focuses on evidence for funding justification, administrative monitoring, stakeholder engagement, quality assurance, and public reputation. Educational development seeks to capture holistic developmental growth to inform individual and shared practice, improve teaching approaches and learning experiences, and prompt scholarly investigation and discovery. In tandem with a move to developmental evaluation that draws on nimble feedback to isolate emerging trends (Patton, 2011), this tension challenges educational developers to acknowledge the reality of educators and students while continuing to demonstrate value for money. Educational developers increasingly find themselves walking a philosophical tightrope between evidence for institutional agendas and priorities and evidence for learning (Bamber, 2020). This is not new, as illustrated by the evidence plan for the L&T course in Table 1, with its political and developmental uses, but educational developers may wish to agree on their own, local philosophy before jumping into specific methods.
The purposes and foci of evidencing need not be mutually exclusive; measures aiming to capture growth, transformation, and change can equally serve purposes of accountability. While more traditional metrics such as frequency of workshops, attendance, and hours of consultation may still feature in types of data requested, measures that evidence experience and transformation can add context and richness by humanizing the numbers. Measures for the latter may include examinations of instructor changes in perspectives toward teaching and learning; changes in pedagogical practice; and impact on student learning via an examination of their performance, motivation, and level of participation or engagement.
B. Embracing all voices and reflecting their unique evidence
Tension also exists regarding who determines the type of evidence that is prioritized. To value the voices of the entire learning community and give expression to their right to be seen, heard, and respected, educational developers may seek to capture both tangible and intangible sources of evidence (Robertson et al., 2019). This tension challenges educational developers to build stories of success by sharing power for expression with the voices of teachers and students. For example, student reflections on past learning experiences can help educational developers trace and identify (or challenge) the meaningfulness of changes in teaching approaches that originated with professional development (e.g., L&T courses) several years prior. This presents real issues around resources and proportionality (collecting and using voices takes time) and questions of how to go beyond the trope of “the student voice.”
Embracing all voices equally means including those for whom educational development work or a promoted instructional practice was less (or not) effective, and considering how these perspectives can be collected and given proportionally equal weight in deliberations and reporting. When seeking to examine and evidence broader issues, it is worth considering the breadth of voices that can inform a truly holistic representation of experience. For instance, an examination of the effectiveness of blended learning modalities and the resources that support them might go beyond instructors and students to include the perspectives of stakeholders such as teaching assistants, departmental administrators, academic counselors, and tech support specialists, revealing insights into what influences the effectiveness of implementation and how this modality is experienced.
C. Anticipating and prioritizing the need for evidence
Providing evidence that reinforces funding-driven priorities, purposes, and outcomes as determined by internal and external stakeholders is in tension with intentionally choosing evidence that anticipates and reflects the impact of teaching and learning processes. A proactive approach can underpin educational development unit goals and affirm the mission and identity of its members. As the case example of the L&T course illustrates, this tension challenges time-poor educational developers to take a potentially time-intensive strategic approach: working out, in advance of an initiative, how they will evidence institutional value while demonstrating and improving value and integrity within the teaching and learning process.
D. Facilitating evidence within a theory of change framework
There is tension between reacting to the push for evidence by administrators and strategically designing a plan for evidence collection. A theory of change model describes the mapping of inputs, outputs, outcomes, and impact so that evidence is named and placed in context (Rogers, 2014). When goals for data collection are analyzed as a cohesive whole, built into a unit’s operations with transparency and intention, and shared among its members, the unit can drive its own evidencing. This tension calls educational developers to shift from reactionary statistics to deliberately designed and implemented evaluation that conveys focused continuous change. What does this look like in practice? The pandemic provides an illustration: having taken emergency measures to support immediate moves to online learning and to ascertain what was helping or inhibiting that learning, educational development teams will now be discussing upcoming digital learning initiatives and deciding, collaboratively, not only the what and how of the work but also how this will be evidenced on an ongoing basis.
This tension emphasizes the need to prioritize the evidencing process by mapping out and setting up an evaluative framework that ensures planned and efficient data collection, analysis, interpretation, reporting, and enhancements to practice that are built into the existing workflow and workload of the educational development unit. It is much simpler to respond to sporadic data requests when strategically populated data sets already exist. Even if a request differs from existing data streams, one may only need to tweak a few inputs in the existing framework rather than design an evaluative process for every request.
E. Becoming a catalyst to inspire community and harmonize evidence
Viewing the educational development unit as one element in a hierarchical chain is in tension with seeing it as central to the whole learning campus. Every discipline has a unique learning context and environment; however, there are challenges in education that are best addressed through a united and well-informed learning front. With the call for humanistic pedagogies, students and teachers need to connect as one entity and one community rather than stand alone within academic silos. Complementing the growth of disciplinary development support groups, an educational development hub could play a key role in setting strategic teaching and learning directions for the institution. This tension challenges educational developers to catalyze knowledge and connect communities through a coordinated campus plan and harmonization of evidence, including a shared vision and leadership for sustained growth and guidance to address change. Taking on this challenge is pivotal if we are to work effectively across institutions, such as when different educational development units collaborate on a specific online learning initiative.
Educational development units are well positioned to overcome institutional and disciplinary silos as their members frequently work as networkers who cross-pollinate ideas, foster collaborations, and connect people. Viewed as expert resources, often at arm’s length from the institutional administration, educational developers are well positioned to gain the trust of instructors and be perceived as a non-threatening hub of instructional innovation. Given this context, how can educational development units harness this to set strategic direction and advancement within an institution’s teaching and learning ethos? How can these units bring together collaborators from across disciplines and sectors of campus to undertake bold new initiatives and innovations? How can they further foster and support a community of evidence-informed pedagogical practice?
A Path Forward
Educational developers have been at the mercy of the turbulent and complex seas of fluctuating institutional priorities, shifting societal expectations, and increased accountability requirements. Now more than ever, educational developers are pressured to report and evidence contributions, impact, and connections. As a promising guiding compass, the RUFDATA framework provides scalable options for evidencing the value of educational development work by encouraging educational developers to consider essential questions about the reasons, uses, foci, types of data, audience, timing, and agency related to the evidencing process, as seen in the case example about an L&T course and the associated planning grid presented in Table 1. The case example demonstrates how using this type of framework in a concerted manner can help to broaden the scope of evidencing practices to meet multiple objectives, include diverse actors and voices, serve multiple audiences, and build agency and community. Beyond its applicability as a tool to guide evidencing processes, the prompts and questions that the framework helps to catalyze (Table 2) guide educational developers to consider the tensions and opportunities that the shifting higher education landscape creates for educational development work, and how they might respond to those changes through evidencing their value.
As the framework demonstrates, while it is important to gather evidence for the purposes of improving our own practices, it is imperative to be thoughtful about the data we collect and about how we use those data. For that to happen, we need to make evidencing value a purposeful, intentional, and inclusive process rather than an ad hoc, occasional engagement. It is incumbent on all of us to create a vision for the age of evidence and start living it. There is a real danger that if we do not take stock of our data now, others will do it for us. Therefore, we cannot wait for calmer seas. The perfect time to systematically chart a course on the waves and winds of change is now.
Biographies
Jovan Groen is the Director of Academic Quality and Enhancement (AQE) at Western University. He and the AQE team work closely with academic leaders across all departments and faculties to direct and support program review and development processes. Jovan also serves as Adjunct Professor at the University of Ottawa’s Faculty of Education, where he is involved in several research projects related to postsecondary pedagogy, evaluation, and educational development.
Carolyn Hoessler, PhD CE, is an Educational Program Designer in the Continuing Medical Education Division of the University of Saskatchewan’s College of Medicine, a past Canadian national leader in educational development, and a credentialed evaluator with the Canadian Evaluation Society.
Carolyn Ives (she/her) is a Coordinator, Learning and Faculty Development at Thompson Rivers University. She is a former faculty member in English at MacEwan University and at Thompson Rivers University. Her previous roles include Academic Integrity Officer, Curriculum Planning and Development Coordinator, and Interim Director, Centre for the Advancement of Faculty Excellence at MacEwan University. Her current work includes decolonizing academic integrity, curricular integration of sustainability and academic integrity, peer review of teaching, assessment practices, scholarship of teaching and learning, and prior learning assessment and recognition.
Veronica Bamber is Professor Emerita of Higher Education at Queen Margaret University, Edinburgh. She has more than 20 years’ experience as an academic developer and extensive experience working in academic enhancement internationally. She chaired the Scottish Enhancement Theme on Student Transitions (2014–2017) and the national “Mastersness” project. Roni’s research focuses on managing enhancement within universities and evidencing the value of that enhancement. Her recent publication, Our Days Are Numbered, deals with the impact of metrics in academic development.
Corinne Laverty is Research and Teaching Librarian for Art, Drama, and Music at Queen’s University. For the past five years, she has supported educational research projects in the Queen’s Centre for Teaching and Learning with a focus on decolonizing teaching.
Klodiana Kolomitro, PhD, is the Special Advisor, Undergraduate Research, and is cross-appointed to the Department of Biomedical and Molecular Sciences at Queen’s University. Her research interests and publications include evaluation of educational development, wellness in higher education, quality assurance, and anatomical education. Klodiana is the recipient of the 2019 Educational Developer Leadership Award from the Educational Developers Caucus in Canada and a co-recipient of the 2021 Principal’s Educational Technology Team Award and the 2023 Indigenous Education Team Award.
References
Amundsen, C., & Wilson, M. (2012). Are we asking the right questions? A conceptual review of the educational development literature in higher education. Review of Educational Research, 82(1), 90–126. https://doi.org/10.3102/0034654312438409
Ananga, P. (2020). Pedagogical considerations of e-learning in education for development in the face of COVID-19. International Journal of Technology in Education and Science, 4(4), 310–321. https://doi.org/10.46328/ijtes.v4i4.123
Bamber, V. (2013). Evidencing the value of educational development (SEDA Special No. 34). SEDA.
Bamber, V. (2020). Our days are numbered: Metrics, managerialism, and academic development (SEDA Paper 125). SEDA.
Beach, A. L., Sorcinelli, M. D., Austin, A. E., & Rivard, J. K. (2016). Faculty development in the age of evidence. Stylus Publishing.
Bélanger, C., Bélisle, M., & Bernatchez, P.-A. (2011). A study of the impact of services of a university teaching centre on teaching practice: Changes and conditions. Journal on Centers for Teaching and Learning, 3, 131–165. https://openjournal.lib.miamioh.edu/index.php/jctl/article/download/121/51
Chalmers, D., & Gardiner, D. (2015). The measurement and impact of university teacher development programs. Educar, 51(1), 53–80. https://ddd.uab.cat/pub/educar/educar_a2015v51n1/educar_a2015v51n1p53.pdf
Condon, W., Iverson, E. R., Manduca, C. A., Rutz, C., & Willett, G. (2016). Faculty development and student learning: Assessing the connections. Indiana University Press.
Dawson, D., Borin, P., Meadows, K., Britnell, J., Olsen, K., & McIntyre, G. (2014, February 24). The impact of the instructional skills workshop on faculty approaches to teaching. Higher Education Quality Council of Ontario. https://heqco.ca/pub/the-impact-of-the-instructional-skills-workshop-on-faculty-approaches-to-teaching/
del Carmen Salazar, M. (2013). A humanizing pedagogy: Reinventing the principles and practice of education as a journey toward liberation. Review of Research in Education, 37(1), 121–148. https://doi.org/10.3102/0091732X12464032
Ellis, D. E., Brown, V. M., & Tse, C. T. (2020). Comprehensive assessment for teaching and learning centres: A field-tested planning model. International Journal for Academic Development, 25(4), 337–349. https://doi.org/10.1080/1360144X.2020.1786694
Gibbs, G. (2013). Reflections on the changing nature of educational development. International Journal for Academic Development, 18(1), 4–14. https://doi.org/10.1080/1360144X.2013.751691
Gourlay, L., & Stevenson, J. (2017). Teaching excellence in higher education: Critical perspectives. Teaching in Higher Education, 22(4), 391–395. https://doi.org/10.1080/13562517.2017.1304632
Groen, J. F., Kolomitro, K., Hoessler, C., Werhun, C., Fenton, N., & Mueller, R. A. (2018, June). Engaging in a national conversation about meaningful, usable, and formative evaluation of educational development [Conference poster]. STLHE Annual Conference, Sherbrooke, QC, Canada.
Hartman, H. J. (2020). Holistic faculty development: A learner-centered approach. In E. Sengupta, P. Blessinger, & M. Makhanya (Eds.), Developing and supporting multiculturalism and leadership development: International perspectives on humanizing higher education (Innovations in Higher Education Teaching and Learning, Vol. 30, pp. 103–125). Emerald Publishing Limited. https://doi.org/10.1108/S2055-364120200000030010
Hazelkorn, E., Coates, H., & McCormick, A. C. (2018). Quality, performance and accountability: Emergent challenges in the global era. In E. Hazelkorn, H. Coates, & A. C. McCormick (Eds.), Research handbook on quality, performance and accountability in higher education (pp. 3–12). Edward Elgar Publishing.
Hines, S. R. (2017). Evaluating centers for teaching and learning: A field-tested model. To Improve the Academy: A Journal of Educational Development, 36(2), 89–100. https://doi.org/10.3998/tia.17063888.0036.202
Jamieson, I., & Naidoo, R. (2004). How the market economy is undermining HE performance. Management in Education, 18(4), 13–16. https://doi.org/10.1177/08920206040180040301
Koh, J. H. L., & Kan, R. Y. P. (2021). Students’ use of learning management systems and desired e-learning experiences: Are they ready for next generation digital learning environments? Higher Education Research & Development, 40(5), 995–1010. https://doi.org/10.1080/07294360.2020.1799949
Kolomitro, K., & Anstey, L. M. (2017). A survey on evaluation practices in teaching and learning centres. International Journal for Academic Development, 22(3), 186–198. https://doi.org/10.1080/1360144X.2017.1313162
Kulikowski, K., Przytuła, S., & Sułkowski, Ł. (2022). E-learning? Never again! On the unintended consequences of COVID-19 forced e-learning on academic teacher motivational job characteristics. Higher Education Quarterly, 76(1), 174–189. https://doi.org/10.1111/hequ.12314
Lemoine, P. A., Waller, R. E., Garretson, C. J., & Richardson, M. D. (2020). Analyzing uncertainty and change in the advancement of global higher education. International Journal of Education Humanities and Social Science, 3(4), 208–223.
Naylor, L. (2020). Swimming with the metric tide. In V. Bamber (Ed.), Our days are numbered: Metrics, managerialism, and academic development (SEDA Paper 125). SEDA.
Patton, M. Q. (2011). Developmental evaluation: Applying complexity concepts to enhance innovation and use. Guilford Press.
POD. (2018). Defining what matters: Guidelines for comprehensive centre for teaching and learning (CTL) evaluation. https://podnetwork.org/content/uploads/POD_CTL_Evaluation_Guidelines__2018_.pdf
Robertson, A., Cleaver, E., & Smart, F. (2019). Beyond the metrics: Identifying, evidencing and enhancing the less tangible assets of higher education. QAA.
Rogers, P. (2014). Theory of change: Methodological briefs—Impact evaluation 2. UNICEF Office of Research, Florence. https://www.unicef-irc.org/publications/747-theory-of-change-methodological-briefs-impact-evaluation-no-2.html
Saunders, M. (2000). Beginning an evaluation with RUFDATA: Theorizing a practical approach to evaluation planning. Evaluation, 6(1), 7–21. https://doi.org/10.1177/13563890022209082
Sorcinelli, M. D., Berg, J. J., Bond, H., & Watson, C. E. (2017). Why now is the time for evidence-based faculty development. In C. Haras, S. C. Taylor, M. D. Sorcinelli, & L. von Hoene (Eds.), Institutional commitment to teaching excellence: Assessing the impacts and outcomes of faculty development (pp. 5–16). American Council on Education.
Stewart, M. (2014). Making sense of a teaching programme for university academics: Exploring the longer-term effects. Teaching and Teacher Education, 38, 89–98. https://doi.org/10.1016/j.tate.2013.11.006
Sutherland, K. A. (2018). Holistic academic development: Is it time to think more broadly about the academic development project? International Journal for Academic Development, 23(4), 261–273. https://doi.org/10.1080/1360144X.2018.1524571
Sutherland, K. A., & Hall, M. (2018). The “impact” of academic development. International Journal for Academic Development, 23(2), 69–71. https://doi.org/10.1080/1360144X.2018.1451595
Weiss, C. (1995). Nothing as practical as good theory: Exploring theory-based evaluation for comprehensive community initiatives for children and families. In J. P. Connell, A. C. Kubisch, L. B. Schorr, & C. H. Weiss (Eds.), New approaches to evaluating community initiatives: Concepts, methods, and contexts. Aspen Institute.
Wenger, E., Trayner, B., & de Laat, M. (2011). Promoting and assessing value creation in communities and networks: A conceptual framework (Rapport 18). Ruud de Moor Centrum, Open University of the Netherlands. https://www.asmhub.mn/uploads/files/11-04-wenger-trayner-delaat-value-creation.pdf