Introduction

While instructors rely on personal reflection and learner assessment to improve their teaching, usability testing offers another evidence-based way to improve lessons and learning objects. Through testing, instructors can build better objects and experiences from the outset rather than working retroactively. Usability testing also provides instructors with opportunities to engage directly with the audiences for which the objects or experiences are intended. Lastly, our approach focuses on aspects of instruction that are rarely usability tested, including learning objects and classroom experiences. When we use learner feedback to improve learning objects and experiences, learners become an integral part of the design process. A frequent comment from student participants is that something “could be better,” and their feedback does, indeed, help us improve the resources we create or share.

It is important to distinguish between usability testing and assessing student learning. Assessment, particularly as it relates to learning objects or instructional materials, focuses on whether students learned what they were intended to learn (Astin & Antonio, 2012, p. 38). As Crowther et al. (2004) noted, “poorly designed instructional applications are unlikely to be instructionally effective” (p. 289); at the same time, making an object usable does not guarantee that students will achieve the outcomes within the assignment (Akpinar, 2008). However, the information gained through usability testing can help correct issues that may inhibit or stall student learning. We aim to incorporate usability testing into the process of creating learning objects and other teaching materials so that, ideally, no object reaches students before it has been usability tested.

In this paper, we will share our approach to usability testing learning objects and experiences, including an overview of our program and practices; a literature review of usability testing in instruction; and case studies describing tests of learning objects, workshops, and handouts. We advocate for a multifaceted approach to user testing teaching materials that incorporates both formal and informal user responses. This approach provides structured feedback that allows space and time for changes, illuminates the ways that our perspective is different from that of our learners, and gives users both a stake and a voice in the creative process of learning design.

Program Background

We are two instruction librarians and an instructional designer who are part of the University Libraries at Virginia Tech Usability Testing Program, which includes other library faculty, staff, and student workers. This group collaboratively designs, conducts, and analyzes user tests for resources or experiences that we have created or support as a library. In the beginning, our testing program focused on user feedback for educational interfaces. As the program has developed, we have expanded to other types of interfaces, such as library webpages and in-house applications, and to physical resources, such as a card game designed to help first-year students learn about library resources. We also began testing learning objects at various stages of their development. Similarly, we have expanded our methodology to include focus group testing and eye tracking software in addition to the more traditional one-on-one usability test. Since the program began in 2017, others across the organization have expressed interest in having our group test their resources, so we have been able to partner with several other departments and teams, both within and beyond our library, to help them better understand the needs of their users.

Foundationally, it is important to distinguish among the various facets upon which a learning experience may be reviewed: assessment of learning, evaluation of the experience, and users’ responses to the experience. Assessment is decentralized in our library, so the Usability Testing Program does not specifically assess learning in any of our tests. However, evaluating the effectiveness and usability of the learning experience is possible in the course of a user test, and creators can apply the findings from testing more broadly to future learning experiences. In practice, we strive for a minimum of one test per month, with a goal of four to six participants over the course of a ninety-minute face-to-face testing session. Our program is based on Krug (2014); we try to test frequently with the goal of making iterative changes based on the responses. Depending on the focus of the test, we might invite a specific audience—like teaching faculty or members of the community—or solicit participants from the common areas in Newman Library. In either case, we try to make sure that participants feel comfortable sharing their feedback by offering a welcoming environment and providing snacks and beverages.

Literature Review

Those concerned with user experience (UX) in libraries have a range of research methods to choose from, depending on the stage of the project and what they wish to learn about their users (Farrell, 2017). Academic librarians use methods such as surveys, usability tests, interviews, field studies, and accessibility evaluations as they seek to better understand their users (Young et al., 2020). Usability testing, “the process of learning about users from users by observing them using a product to accomplish specific goals of interest to them,” requires a product that can be tested (Barnum, 2011, p. 6) and is one of the most popular UX methods used in libraries (Young et al., 2020). Within academic libraries, these products might include the library website or software and web-based tools offered through the library.

The library may have little control over the design and usability of software or web-based tools, but it has somewhat more control over the library website. Accordingly, much of the conversation concerning usability testing in libraries focuses on some aspect of testing an academic library’s website, including case studies of and lessons learned through website redesigns (Becker & Yannotta, 2013; Dominguez et al., 2015; Miller, 2019; Overduin, 2019; Paladino et al., 2017), evaluations of websites’ information architecture (Guay et al., 2019; Pant, 2015; Silvis et al., 2019), the impact of the library website’s word choice on user experience (Gillis, 2017), and testing of responsive websites (Tidal, 2017).

Less frequently, others have focused on usability testing of LibGuides, including the differences between two versions of LibGuides (Bowen et al., 2018), best practices for creating subject guides (Chan et al., 2019; Sonsteby & DeJonghe, 2013), students’ potential confusion regarding the purpose of LibGuides and their relationship to the library website (Conrad & Stevens, 2019), and recommendations for structuring a guide (Thorngate & Hoden, 2017).

The literature less commonly describes usability testing of library tutorials and other learning objects, even though these are objects whose design and use the library is likely to control. When it comes to usability testing of library tutorials, differences emerge about when to conduct it: during the development process, known as formative testing, or after the product is finished, also called summative testing (Barnum, 2011). While some authors described usability testing tutorials students were already using (Bury & Oud, 2005; Bussmann & Plovnick, 2013), others described conducting usability testing as part of the tutorial design process (Armstrong & Georgas, 2006; Held & Gil-Trejo, 2016), with some noting that usability testing is a best practice of tutorial design (Blummer & Kritskaya, 2009; Bowles-Terry et al., 2010; Mestre, 2012).

Beyond conducting usability testing on library tutorials, some have used information gained through usability testing to improve their teaching. Baird and Soares (2018) described using insights from usability testing of the library website—which provides detailed, ongoing information about how users navigate the website—to inform their teaching. Mitchell and West (2017) showed how librarians can use usability data to improve the library website and to inform librarians’ teaching; the data allows web developers to make informed decisions about web design or composition and provides instruction librarians with data on how students use the library website, which allows them to better anticipate student needs and questions.

Outside of library literature, usability testing as part of tutorial development seems to be more common. Several studies described usability testing in e-learning for medical education (Davids et al., 2014; Doubleday et al., 2011; Gould et al., 2008; Sandars & Lafferty, 2010), science education (Crowther et al., 2004; Kramer et al., 2018), and employee training (Chang, 2011). Others described testing the usability of a learning management system (Alelaiwi & Hossain, 2014; Junus et al., 2015; Laurent et al., 2018; Mabila et al., 2014). Usability rubrics for evaluating learning objects appear in the literature as well (Akpinar, 2008; Davids et al., 2014; Leacock & Nesbit, 2007).

Our Approach

While there have been some studies on usability testing of learning objects both inside and outside of libraries, these studies have not included learning experiences and materials beyond digital learning objects. In the case studies below, we fill a gap in the literature by describing usability testing not only of learning objects but also of learning experiences and teaching materials, three approaches to teaching that apply across a library instruction program. Case Study 1 describes usability testing of digital learning objects, Case Study 2 describes testing a learning experience, and Case Study 3 describes testing print materials. Each case study includes overview, methods, findings, results/improvements, and lessons learned sections. Through these studies, we make a case for incorporating usability testing into many aspects of teaching, arguing that testing allows us to create more effective learning experiences by inviting users to share their voice at various stages in the process and by giving us insight into our users and their needs that we would not gain otherwise.

Case Study 1: Digital Learning Objects

Overview

Digital learning objects such as videos and tutorials present unique usability challenges. In some of their most common applications, creators are likely never to see users interacting with the objects, particularly when the objects are designed to replace in-person teaching or provide asynchronous instruction. It is also likely that few analytics will be available to shed light on users’ behavior. Even if an instructor monitors a learning object in the learning management system, it is difficult to know whether usability problems are negatively affecting students’ performance in the module, and feedback mechanisms like surveys cannot provide the same kind of in-depth feedback that usability testing can. An additional usability challenge for learning objects is the difficulty of identifying all the users. Learning objects are designed to be reused, and object creators cannot be sure who future users will be or what their precise needs will be (Sicilia & Garcia, 2003). While some issues of usability will be universal, others may be unique to groups of users who differ from the audience the object was originally designed for.

Methods

Our team was creating learning objects before we ever began a usability testing program within our library, but we did not begin regularly testing learning objects until our usability testing program had been in place for almost eighteen months. Our first learning object usability test, of an information literacy Canvas module about information types designed to be embedded in first-year writing classes, was eye-opening, to say the least. We chose to test it because it had been in use for over a year, and we had noticed that students seemed to be rushing through the quizzes embedded in the module and making few attempts to select correct answers. Overall, the module seemed to be an ineffective replacement for an in-person library instruction session.

We designed a simple test that used a think-aloud protocol in which we asked participants to work through the module, and then we asked follow-up questions about their experience. Two library faculty facilitated the tests with four student participants. We used a script for consistency among tests and recorded the participants’ screens and voices.

Findings

Usability testing data revealed deep flaws not only in the structure of the Canvas module but also in an embedded Storyline tutorial that made up a significant portion of the content. While the participants were familiar with using Canvas, the module gave them insufficient context about what to expect, and they were able to move to the quiz without viewing the embedded tutorial. The Storyline tutorial launched automatically when students opened the first page of the module, causing participants to miss its beginning while they were reading the directions above it. It also lacked clear directions about how students should proceed and when they could use the Previous and Next buttons to navigate.

A portion of the tutorial asked students to hover their mouse over example articles to explore each article’s author, purpose, date, and audience, but the directions were unclear. Some students expressed surprise when new content appeared in response to mouse movement, while others simply moved their mouse around the tutorial window until they had revealed all the new content and could proceed to the next slide.

Additionally, the tutorial did not achieve its intended goal; the students generally thought that the content of the tutorial poorly prepared them to take the quiz. These pedagogical issues were exacerbated by the usability problems. These flaws were serious enough that we immediately made the Storyline tutorial unavailable in our learning object repository (http://odyssey.lib.vt.edu/) and stopped promoting the Canvas module to first-year writing instructors.

Part of our surprise at having distributed such an ineffective tutorial and module stemmed from the fact that they were not a thrown-together effort; they represented the joint labor of multiple librarians and an instructional designer, and drafts had undergone multiple rounds of edits and feedback from team members. We were not new to creating learning objects, having created dozens in the preceding years. Yet in all our drafting and feedback from colleagues, we had not sought feedback from our users, leading to widespread use of these ineffective instructional modules. Watching students interact with the content immediately revealed the module’s usability problems in a way that previous feedback had been unable to show.

Results/Improvements

We chose to pull the module and interactive tutorial from use rather than improve them, focusing instead on creating replacement content; even so, the testing experience led to changes in our content development workflow. After our summative test of a learning object already in use revealed usability issues that should have been corrected before the module was implemented, we realized that a formative testing approach during development was a more effective way to create high-quality learning objects. Consequently, over the next year we focused our usability testing program on drafts of learning objects. Our tests included the app layout and video content of our library’s self-paced iPad tour, a LibGuide for a first-year engineering program, and an interactive American Psychological Association (APA) style tutorial (http://odyssey.lib.vt.edu/s/home/item/216). While the LibGuide had been in use for several years and had been created by a previous liaison to the program, the iPad tour and the interactive tutorial were both in development at the time they were tested, and creators made changes to both items based on usability testing data before librarians shared them with students.

Digital Learning Objects: Lessons Learned

Test During Development

The experience of testing the APA tutorial during the development phase provides a useful contrast to the testing experience we described above; it emphasized for us the importance of testing during development. While it would be ideal to test all learning objects prior to their distribution, usability testing as part of the development stage is crucial for more complex learning objects like interactive tutorials and learning management system modules. The APA tutorial, which asks users to explore APA reference list citations by clicking on various parts of example citations, was complex from a development perspective, with many elements on each slide and possibilities for users to opt in to audio narration. The tutorial link is included in the appendix.

This complexity, with its many opportunities for usability problems, made the item a high priority to test during the development stage. We tested the draft tutorial with three participants, asking them to think aloud as they moved through the tutorial, and then we asked three questions about the purpose of the tutorial, their experience with the ability to toggle the audio on/off, and any navigation problems they encountered. As in the previous test, we used a script for consistency between tests and recorded participants’ screens and audio. Their feedback led to adjustments in the navigation of the tutorial, clearer instructions, and changes in the display of the tutorial interface. These adjustments are reflected in the final version of the tutorial that creators eventually shared in our learning object repository.

Small Changes Can Have a Big Impact

In testing the APA tutorial, we were reminded afresh that even seemingly small elements of a learning object can greatly affect the user’s experience. For instance, while the object’s creator had assumed that users would be ready to begin the tutorial when it opened, users expressed a strong preference for a tutorial that did not start playing automatically; multiple participants commented that they felt flustered when the tutorial began on its own and, as a result, missed important instructions.

While one test of one tutorial is not necessarily enough evidence for us to permanently change the way we structure similar learning objects, it does provide us with feedback that informs how we approach new learning objects. The more we test, the more we expand our knowledge of what users want and need. Having this knowledge does not remove the requirement to seek user feedback, but it helps us to create objects that are more likely to be usable from the start. Subsequent usability testing also allows us to test the assumptions informed by previous tests.

Case Study 2: Learning Experiences (Workshops)

Overview

Every time we teach, we learn something new, whether we’re teaching a workshop, a library instruction session, or a semester-long course. We may learn that an activity works exactly as we planned or that it doesn’t; we may learn that our strategy for explaining a concept or demonstrating a task is clear or that it doesn’t make sense to our learners. Instructors get this kind of informal, anecdotal feedback from every interaction, but we advocate seeking out such feedback strategically. By formally and informally evaluating the learning experience as a whole—and not focusing solely on assessment of student learning—the instructor can revise and adapt based on learners’ feedback. This may start by simply asking learners to reflect at different points throughout the workshop and noting those responses, but it could develop into running a pilot version of your session with a test audience to center the feedback process during development.

Methods

We informally responded to and learned from user feedback before we decided to conduct formal usability testing on a learning experience. Our first opportunity to explore this approach came during preparation for a conference presentation, and we decided to try a workshop usability test.

As we would for any other type of test, we first identified the audience. The final presentation was going to be a 50-minute participatory workshop offered at the LOEX 2019 conference, so the audience would be instruction librarians and others interested in information literacy. With this in mind, we took a dual approach to recruitment. We identified several individuals who would be able to offer feedback on both the content and the experience and invited them directly. Then, we shared the event more broadly with our libraries’ faculty and staff, inviting anyone interested in the topic to attend. Our hope was that this would allow for various levels of feedback and more accurately represent the diverse experience of participants at the conference. We had five participants in this usability test, matching the number of participants we anticipated in each small group during the actual workshop session. With this plan in place for our participants, we worked to structure the test session.

Often, usability tests are only able to focus on a subset of the learning experience—perhaps one activity or one handout—but we wanted this test to encompass the full experience as authentically as possible. This was important because we were presenting the theoretical underpinnings of our topic as well as the practical details and asking the attendees to participate in a multi-step activity within the span of 50 minutes. To accomplish these goals, we planned buffer time before and after the delivery of the workshop, resulting in a testing session lasting about 75 minutes.

These portions before and after the workshop looked much more like our standard user testing process: we greeted the participants and gave them an overview of what we would be doing during the session before transitioning to the workshop itself, and, at the end, we prompted the participants to debrief with us about what worked, what didn’t, and what they still had questions about. Throughout this feedback collection, we asked the participants to focus on their response to the learning experience. As we’ve discussed in our previous examples, we did not take steps to assess the participants as learners; instead, we asked them to be active parties in evaluating the effectiveness of the workshop and reflecting on their experiences as users. Having participants with more and less experience in the subject matter also offered helpful insights for that evaluation.

Findings

Through the testing experience, we received feedback on our activities, transitions, and overall design. As is common in a usability test, the things we assumed participants would focus on were not necessarily the things they had feedback on. Our workshop involved two activities. In the first, a group conducted a brief usability test of a LibGuide we had created, using a script we provided. While we thought we had given sufficient directions, during the test it became clear that we had not offered clear guidance on the roles participants were to take, so we lost valuable time while they figured out what they needed to do.

In the second activity, participants were to watch a video of a sample usability test of the LibGuide in order to practice data collection. We had assumed that watching the video as a large group, with captions turned on, would work; instead, test participants had trouble hearing it and wanted to experience it in multiple modalities, including on their own screens. Finally, participants also wanted more support for how to begin conducting usability tests themselves.

Results/Improvements

The feedback from participants led us to make several changes to the structure of the workshop. While the content remained basically the same and participants in the test felt that they learned about usability testing, we worked to clarify those elements that affected the participants’ experience of the content: its usability, in other words. We clarified instructions and gave participants a set of three roles that they needed to fill when they conducted their own test. We also created a shortened URL for the video, asked participants to watch it on their own screens rather than as a large group, and provided a transcript of the video for easy referral when they were working on their data analysis. Finally, we added additional resources to the handout, so it contained not only the directions for the session but also resources to use in the future. The resources are included in the appendix. The workshop as we presented it at the conference went smoothly within our allotted 50 minutes, and it would not have gone so smoothly without the feedback we gained from usability testing.

Learning Experiences: Lessons Learned

What Benefits Most From Testing

Of course, this kind of work is not without challenges. We cannot teach every workshop or instruction session twice to receive feedback and revise each time, but we can use some of these approaches to solicit key feedback from learners and apply what we’ve learned to similar teaching scenarios in the future. As with testing learning objects, where more complicated objects are most in need of usability testing, we can prioritize testing complex lessons and activities.

There may be components of a workshop or instruction session, such as active learning activities, that we use frequently with minor adaptations. Such activities are ideal candidates for user testing. For example, if we frequently have learners break into small groups and collaboratively draft research statements, but the topic varies from one class to the next, then we might focus on getting user feedback on our instructions and perhaps instructor feedback on the transitions into and out of the activity. By improving this one activity, we can affect many learners, because we can apply the feedback we’ve collected to other iterations of the activity as well as to our strategies for transitioning into and out of any activity.

Benefits for Those Beyond the Usability Testing Team

Just as with testing learning objects, testing learning experiences like workshops allows us to gain a greater understanding of the needs and preferences of various audiences without testing every instance. By foregrounding the feedback, we can invite learners into the knowledge-creation process, helping us to better address their needs and helping them to articulate those needs. Asking for this formal feedback also helps instructors avoid some of the perils of assumptions. Whether working with a group of K–12 students, first-year undergraduates, or our peers, it is important to remember that our users are not ourselves.

This approach of strategically and actively pursuing feedback informs the way we teach, but it also reminds us to continue asking questions. In practice, this kind of feedback can be helpful for an entire department or organization. By sharing what we learn about our own teaching with each other, we can offer learners an instruction experience that is informed by many experiences and more consistent across the organization. We often reuse and adapt each other’s print materials—such as handouts or slides—for our classes; why not adapt and integrate lessons learned from each other’s instruction to make learners’ experiences that much better?

Case Study 3: Print Materials

Overview

Print materials cover a wide scope, including marketing materials, informational handouts, and activity-based worksheets, and they can be just as important to test as digital learning objects and learning experiences. Multiple factors can affect their success, including accessibility, length, design, clarity, organization, navigation, and scope. The priority of these factors changes from object to object, and the factors can work against one another; for instance, accessibility needs can conflict with design considerations. Making print materials central to a learning experience adds even more complexity. For example, a handout that shares emails, links, and resources could be tested, but it is even more important to test a handout that includes interactive elements like writing, drawing, or other activities. Longer materials like booklets or packets are also important to test, as they can become unwieldy or confusing because of a high number of elements and the manner in which those elements are combined.

In 2019, our group tested a sixteen-page packet (http://odyssey.lib.vt.edu/s/home/item/221) about education-based virtual reality spaces. Creators designed the packet to address common questions that arise during the development of such spaces in K–12 environments and intended it for local K–12 teachers, technologists, and administrators. The complexity of the topic made it essential that we test the object before making it publicly available.

Methods

Our team invited local educators for an hour-long feedback session to review the materials and assess their usability. While the session was framed more like a focus group than a true usability test, we focused our questions on the usefulness and usability of the printed content. Four educators, from different educational settings and subject areas, responded to our open call. They included some who had worked closely with a colleague from our library to build a similar virtual reality space and others who were interested in the technology but did not have personal experience with it. A single moderator ran the session, assisted by team members who answered additional questions, facilitated activities, and took notes. With the participants’ permission we recorded audio of the discussion, and we took handwritten notes to capture visual cues and to serve as a backup in case of technical difficulties.

One of the challenges of focus groups is keeping conversations on track. While the group should feed off one another’s ideas, moderators should be prepared to redirect the conversation. To combat unproductive conversation, we provided a general discussion outline to participants and displayed discussion questions on large screens in the room throughout the activity. We asked participants to engage with and review the packet and give general feedback on clarity, scope, readability, and design. The moderator then gave the focus group more context about the document, including its conception and ongoing projects to create additional supplementary materials, and asked the group for more feedback on specific sections as well as descriptions of how they would imagine using or interacting with the packet in their own educational settings.

Findings

During the session, participants gave notes on many areas of design and use, including improving organization, tweaking the continuity of certain topics, adding photos, fixing copy issues like incorrect titles or jargon, and addressing content gaps, as well as ideas for future supplemental materials. There was also helpful feedback regarding visualizations in the packet, including the clarity of icon meanings, the use of real-world photos, and the value of space visualizations. For example, participants were confused by the icon for “nausea” and, in the process of suggesting alternative visualizations, identified “disorientation” as a preferable term to cover both the physical and mental responses to virtual reality experiences. This feedback helped us not only avoid a potentially confusing icon but also improve the learning material as a whole.

Participants also voiced a preference for more and larger photos of real-world virtual reality set-ups. While they approved of the graphic design visualizations of virtual reality spaces, the photos received a more positive response because they made it easier for learners to imagine designing virtual reality experiences in their own spaces. One common branding element used at our university, a dotted line, also caused some confusion at certain points because participants assumed it was encouraging them to cut or tear pages out of the packet.

Overall, the feedback was extremely positive regarding the quality of the design, content, and curation. Participants said the packet gave them a better understanding of the processes, technology, and space requirements of virtual reality; this was true even for participants who had little to no virtual reality experience before the test.

Results/Improvements

Designers improved the second version of the packet based on the feedback from participants. Rather than funneling efforts into more in-depth visual design, they increased the number of photos in the packet. This was a fortunate improvement because photos take less time to develop and create than visualizations, and, in this case, photos also improved the learning experience. Designers replaced confusing icons and branding elements, and they moved activity-based handouts to the end of the packet. They also added graph paper and blank pages to make the packet more useful as a working document. Improvements to the visual hierarchy mean that “action item” areas of the workbook are now more visible, so that participants can easily scan for goals and takeaways. Finally, designers fixed copyediting and continuity errors. Future supplementary materials and packet revisions will incorporate suggestions regarding the content scope and additional materials.

Print Materials: Lessons Learned

User Feedback vs. Peer Feedback

This usability review session yielded lessons regarding both group administration and print material design. First, it allowed us to continue refining our process for deciding what content to condense and what to expand during development. We have found it most useful to ask end users to tell us what is missing and then work with peers and collaborators to determine what content should be removed. In our experience, end users had more ideas to add and were less likely to cut something, while peers and collaborators tend to have the institutional context and experience to know more precisely what to trim. This use case shows how, by starting relatively small and speaking with end users in the middle of the process, we were able to gain valuable insight into which areas of content to expand.

This experience also confirmed the importance of outlining specific questions for such a session. End users often have more experience with the content area than with graphic design or user experience, which can lead discussions to center on content topics that are important but not always the goal of the test. The question structure we prepared helped guide the conversation toward specific design and user experience questions without stifling feedback on other areas. This underscored for us that specificity and intentionality are vital in structuring a usability review, wording questions, and facilitating discussion.

Our Preferences vs. Users’ Preferences

We also learned that less is not always more when it comes to visualizations. Our initial aim was to use icons, graphic design elements, and illustrations to create imagined and minimalist versions of information and spaces. However, our participants made clear that in learning to set up real-world spaces with physical technology and people, they preferred photo-based visuals. This information did not necessarily negate the previous work, but we used it to greatly improve the visual quality and usefulness of the packet in its second iteration.

Lastly, we learned that users felt trepidation about drawing in a booklet that appeared professionally produced, as they feared “messing it up.” This knowledge has been helpful in the creation of print materials for multiple subsequent projects. For example, on another packet project in parallel development, we ended up moving all activity-based sheets to the end and creating cut lines to show that those pages could be removed from the packet. We also decreased the fidelity of some workshop handouts for our teaching teams, making them black and white with only basic shapes and limited sections so that students would feel more inclined to draw, write, and engage with the print materials as their own learning objects rather than as pristine print artifacts.

Discussion

In evaluating the usability of learning objects, experiences, and print resources, some lessons have emerged that have affected our usability testing program overall. First, we have learned that testing some objects is better than testing none at all. Our tests have led to valuable confirmations and improvements that not only increase the quality of one object but add value to future projects and our entire creative and pedagogical process. This insight has proved true not only in testing traditional objects like digital interfaces, but for all the objects that make up our learning and teaching experiences.

We have also found that while ideal usability tests would be done with participants from the object’s intended audience, feedback from any users is better than no feedback at all, and feedback from users outside of the intended audience can help identify some of the potential challenges of making an object more broadly usable. Recognizing the limitations of time and availability of our target audience, we keep our number of participants small and welcome feedback from a wide range of people, including undergraduate and graduate students and faculty from within and beyond the library. Faculty can give valuable feedback even though they are most likely not the target audience for the objects; they approach an object with fresh eyes and an unfamiliarity that makes them sensitive to usability concerns that those involved in the object’s creation are too close to the project to notice. Their familiarity with student needs also provides a helpful perspective that we may not have. Overall, testing allows us to engage directly with users, which grants us actionable feedback that we, as the creators, cannot always provide.

We’ve also learned that testing resources while they are in development is far more helpful than testing objects that are already in use. Testing early not only improves the quality of the content overall but also saves valuable time in the creative process. While follow-up testing after an object has been in use is helpful, particularly as the needs and preferences of users change, testing something for the first time after it is already in use may reveal issues serious enough that librarians need to remove the object from distribution, causing disruption if it is widely used in classes or programs at the institution. In contrast, we can build usability testing and subsequent editing into the object-creation workflow and timeline, so that an object begins life as usable as possible.

Ideally, we would test every object with multiple users before it is published. However, we have not yet been able to reach that goal, and it likely wouldn’t be feasible in many contexts; all members of our usability team have additional job responsibilities. We have prioritized testing complex tutorials, handouts, and experiences. For other objects, we have collected informal feedback from one to two student employees rather than conduct a full usability test with four to six participants.

We have also learned how important it is that our program continues to develop and grow. In light of the transition to remote work and learning in March 2020, we have reconsidered the possibilities for online usability testing, since our program traditionally has relied on in-person testing in the library, with coffee and snacks provided in exchange for participation. While we have conducted some virtual tests, we are still exploring options for recruitment and both moderated and unmoderated remote testing. We hope that exploring these options will help us in the future as online learning continues to grow.

We have also considered how we can incorporate testing for accessibility into our usability testing program. While we consider accessibility when creating objects and draw on the knowledge we have gained through professional development to design accessible objects, we are exploring how we might partner with people who use assistive technology, knowing from experience that their feedback will be incredibly valuable and will give us greater insight into the needs of all users.

Conclusion

While conducting usability testing of library websites and LibGuides is an important part of improving UX, our experience has convinced us that testing a variety of learning objects and experiences needs to become an essential practice as well. Learning objects and experiences are almost always more complex than we think, and issues that may be invisible or minor to the creators of a learning experience can render it unusable to others. Furthermore, it can be difficult, if not impossible, to view our objects or experiences from the perspective of our learners or users. Because these types of learning materials are less commonly tested, creators can be operating from uninformed assumptions about what users need. However, usability testing allows us to engage with our users and create better, more usable products for them. Creating certain experiences and objects may also be new to us in practice and genre, but usability testing provides an evidence-based way to build better objects and experiences from the outset. By testing, we are able to see both the big-picture issues as well as some of the small details that might derail an experience.

Adding another level of work to the already time-consuming teaching process is daunting. However, we have found that formative testing helps to limit re-work on current and future projects, and we have found that the lessons learned during testing are often not object-specific but can be applied to other projects as well. This transferability bolsters the UX, graphic design, and pedagogical skills of our whole team as lessons are shared and incorporated across the board. We also find that testing a representative object, such as one video from a longer series, is much better than testing nothing at all, and what we learn from the testing is worth the time invested.

This work supports our goal of making learning objects and experiences as usable for as many users as possible and for as long as possible. The field needs further research about the usability testing of learning objects and experiences in libraries, particularly beyond that of library websites and guides. Researchers should also explore the place of usability testing in the design process and its connection with student learning goals. We hope that usability testing will become a regular part of library learning object design and instruction and that, with the help of our patrons, libraries can continue to make user and learner experiences in instructional contexts better.

References

Akpinar, Y. (2008). Validation of a learning object review instrument: Relationship between ratings of learning objects and actual learning outcomes. Interdisciplinary Journal of E-Skills and Lifelong Learning, 4, 291–302. https://doi.org/10.28945/380

Alelaiwi, A., & Hossain, M. S. (2014). Evaluating and testing user interfaces for engineering education tools: Usability testing. International Journal of Engineering Education, 30(3), 603–609.

Armstrong, A., & Georgas, H. (2006). Using interactive technology to teach information literacy concepts to undergraduate students. Reference Services Review, 34(4), 491–497. https://doi.org/10.1108/00907320610716396

Astin, A. W., & Antonio, A. L. (2012). Assessment for excellence: The philosophy and practice of assessment and evaluation in higher education. Rowman & Littlefield Publishers.

Baird, C., & Soares, T. (2018). A method of improving library information literacy teaching with usability testing data. Weave: Journal of Library User Experience, 1(8). https://doi.org/10.3998/weave.12535642.0001.802

Barnum, C. M. (2011). Usability testing essentials: Ready, set…test! Morgan Kaufmann.

Becker, D. A., & Yannotta, L. (2013). Modeling a library web site redesign process: Developing a user-centered web site through usability testing. Information Technology and Libraries, 32(1), 6–22. https://doi.org/10.6017/ital.v32i1.2311

Blummer, B. A., & Kritskaya, O. (2009). Best practices for creating an online tutorial: A literature review. Journal of Web Librarianship, 3(3), 199–216. https://doi.org/10.1080/19322900903050799

Bowen, A., Ellis, J., & Chaparro, B. (2018). Long nav or short nav?: Student responses to two different navigational interface designs in LibGuides version 2. The Journal of Academic Librarianship, 44(3), 391–403. https://doi.org/10.1016/j.acalib.2018.03.002

Bowles-Terry, M., Hensley, M., & Hinchliffe, L. J. (2010). Best practices for online video tutorials: A study of student preferences and understanding. Communications in Information Literacy, 4(1), 17–28. https://doi.org/10.15760/comminfolit.2010.4.1.86

Bury, S., & Oud, J. (2005). Usability testing of an online information literacy tutorial. Reference Services Review, 33(1), 54–65. https://doi.org/10.1108/00907320510581388

Bussmann, J. D., & Plovnick, C. E. (2013). Review, revise, and (re)release: Updating an information literacy tutorial to embed a science information life cycle. Issues in Science and Technology Librarianship, 74. https://doi.org/10.5062/F4F769JQ

Chan, C., Gu, J., & Lei, C. (2019). Redesigning subject guides with usability testing: A case study. Journal of Web Librarianship, 13(3), 260–279. https://doi.org/10.1080/19322909.2019.1638337

Chang, C. (2011). Usability testing for e-learning material for new employee training: A design-based research approach. British Journal of Educational Technology, 42(6), E125–E130. https://doi.org/10.1111/j.1467-8535.2011.01216.x

Conrad, S., & Stevens, C. (2019). “Am I on the library website?”: A LibGuides usability study. Information Technology and Libraries, 38(3), 49–81. https://doi.org/10.6017/ital.v38i3.10977

Crowther, M. S., Keller, C. C., & Waddoups, G. L. (2004). Improving the quality and effectiveness of computer-mediated instruction through usability evaluations. British Journal of Educational Technology, 35(3), 289–303. https://doi.org/10.1111/j.0007-1013.2004.00390.x

Davids, M. R., Chikte, U., Grimmer-Somers, K., & Halperin, M. L. (2014). Usability testing of a multimedia e-learning resource for electrolyte and acid-base disorders. British Journal of Educational Technology, 45(2), 367–381. https://doi.org/10.1111/bjet.12042

Dominguez, G., Hammill, S. J., & Brillat, A. I. (2015). Toward a usable academic library web site: A case study of tried and tested usability practices. Journal of Web Librarianship, 9(2–3), 99–120. https://doi.org/10.1080/19322909.2015.1076710

Doubleday, E. G., O’Loughlin, V. D., & Doubleday, A. F. (2011). The virtual anatomy laboratory: Usability testing to improve an online learning resource for anatomy education. Anatomical Sciences Education, 4(6), 318–326. https://doi.org/10.1002/ase.252

Farrell, S. (2017, February 12). UX research cheat sheet. Nielsen Norman Group. https://www.nngroup.com/articles/ux-research-cheat-sheet/

Gillis, R. (2017). “Watch your language!”: Word choice in library website usability. Partnership: The Canadian Journal of Library and Information Practice and Research, 12(1). https://doi.org/10.21083/partnership.v12i1.3918

Gould, D. J., Terrell, M. A., & Fleming, J. (2008). A usability study of users’ perceptions toward a multimedia computer-assisted learning tool for neuroanatomy. Anatomical Sciences Education, 1(4), 175–183. https://doi.org/10.1002/ase.36

Guay, S., Rudin, L., & Reynolds, S. (2019). Testing, testing: A usability case study at University of Toronto Scarborough Library. Library Management, 40(1/2), 88–97. https://doi.org/10.1108/LM-10-2017-0107

Held, T., & Gil-Trejo, L. (2016). Students weigh in: Usability test of online library tutorials. Internet Reference Services Quarterly, 21(1–2), 1–21. https://doi.org/10.1080/10875301.2016.1164786

Junus, I. S., Santoso, H. B., Isal, R. Y. K., & Utomo, A. Y. (2015). Usability evaluation of the student centered e-learning environment. International Review of Research in Open and Distributed Learning, 16(4), 62–82. https://doi.org/10.19173/irrodl.v16i4.2175

Kramer, M., Olson, D., & Walker, J. D. (2018). Design and assessment of online, interactive tutorials that teach science process skills. CBE—Life Sciences Education, 17(2). https://doi.org/10.1187/cbe.17-06-0109

Krug, S. (2014). Don’t make me think, revisited: A common sense approach to web usability (3rd ed.). New Riders.

Laurent, X., Fresen, J., & Burholt, S. (2018). Usability methodology and testing for a virtual learning environment. International Journal on E-Learning, 17(3), 341–375.

Leacock, T. L., & Nesbit, J. C. (2007). A framework for evaluating the quality of multimedia learning resources. Journal of Educational Technology & Society, 10(2), 44–59.

Mabila, J., Gelderblom, H., & Ssemugabi, S. (2014). Using eye tracking to investigate first year students’ digital proficiency and their use of a learning management system in an open distance environment. African Journal of Research in Mathematics, Science and Technology Education, 18(2), 151–163. https://doi.org/10.1080/10288457.2014.928449

Mestre, L. S. (2012). Student preference for tutorial design: A usability study. Reference Services Review, 40(2), 258–276. https://doi.org/10.1108/00907321211228318

Miller, J. (2019). The design cycle and a mixed methods approach for improving usability: A case study. Journal of Web Librarianship, 13(3), 203–229. https://doi.org/10.1080/19322909.2019.1600451

Mitchell, E., & West, B. (2017). Collecting and applying usability data from distance learners. Journal of Library & Information Services in Distance Learning, 11(1–2), 1–12. https://doi.org/10.1080/1533290X.2016.1223963

Overduin, T. (2019). “Like a robot”: Designing library websites for new and returning users. Journal of Web Librarianship, 13(2), 112–126. https://doi.org/10.1080/19322909.2019.1593912

Paladino, E. B., Klentzin, J. C., & Mills, C. P. (2017). Card sorting in an online environment: Key to involving online-only student population in usability testing of an academic library web site? Journal of Library & Information Services in Distance Learning, 11(1–2), 37–49. https://doi.org/10.1080/1533290X.2016.1223967

Pant, A. (2015). Usability evaluation of an academic library website: Experience with the Central Science Library, University of Delhi. The Electronic Library, 33(5), 896–915. https://doi.org/10.1108/EL-04-2014-0067

Sandars, J., & Lafferty, N. (2010). Twelve tips on usability testing to develop effective e-learning in medical education. Medical Teacher, 32(12), 956–960. https://doi.org/10.3109/0142159X.2010.507709

Sicilia, M.-A., & Garcia, E. (2003). On the concepts of usability and reusability of learning objects. The International Review of Research in Open and Distributed Learning, 4(2). https://doi.org/10.19173/irrodl.v4i2.155

Silvis, I. M., Bothma, T. J. D., & de Beer, K. J. W. (2019). Evaluating the usability of the information architecture of academic library websites. Library Hi Tech, 37(3), 566–590. https://doi.org/10.1108/LHT-07-2017-0151

Sonsteby, A., & DeJonghe, J. (2013). Usability testing, user-centered design, and LibGuides subject guides: A case study. Journal of Web Librarianship, 7(1), 83–94. https://doi.org/10.1080/19322909.2013.747366

Thorngate, S., & Hoden, A. (2017). Exploratory usability testing of user interface options in LibGuides 2. College & Research Libraries, 78(6), 844–861. https://doi.org/10.5860/crl.78.6.844

Tidal, J. (2017). One site to rule them all, redux: The second round of usability testing of a responsively designed web site. Journal of Web Librarianship, 11(1), 16–34. https://doi.org/10.1080/19322909.2016.1243458

Young, S. W. H., Chao, Z., & Chandler, A. (2020). User experience methods and maturity in academic libraries. Information Technology and Libraries, 39(1), 1–31. https://doi.org/10.6017/ital.v39i1.11787

Appendix

“Examining APA Citations”: http://odyssey.lib.vt.edu/s/home/item/216

Odyssey Learning Object Repository: http://odyssey.lib.vt.edu/

Usability Testing Resources: https://bit.ly/3jgHLGz

  • This folder contains resources provided to participants in the LOEX workshop described in this paper, including workshop activities, template usability testing scripts, and template usability testing reports.

“Planning for Virtual Reality: An Introductory Guide”: http://odyssey.lib.vt.edu/s/home/item/221