Introduction
If you’ve never conducted user testing before, it can seem like you need to wait until all the pieces fall into place before you begin. While you do need to have a well-developed plan, the truth is that anyone can conduct valuable user testing without having everything figured out. In 2023, three research and instruction librarians at San Francisco State University—Lizzy, Hesper, and Zia—implemented a user testing program based on the User Experience (UX) Café model to troubleshoot and improve the J. Paul Leonard Library’s new website.
Our program involved a substantial amount of planning, but in many ways we were also building the plane while flying it. We learned how to assemble a functional team, advocate for administrative support, develop a plan, set up a space, conduct tests, deal with challenges, analyze and share results, and reflect on and change our practice. It was by no means a smooth ride the entire time, but ultimately we were successful in making substantial, evidence-based changes to our website. More importantly, we gained valuable insights about our users and became better UX researchers. The biggest keys to our success weren’t having a perfect plan or limitless resources at our disposal but rather collaborating as a team and having a growth mindset. With that in mind, what follows is information about our methods and some recommendations from our experience to help you get off to a good start.
Background
Before sharing our recommendations, we’ll explain a little about our process and program. We started by identifying the user testing method that would work best for our situation. Academic librarians are all familiar with the need to do more with less. Various models of low-cost, low-effort—“lean”—user testing and usability testing have existed for decades, and a number of librarians have written helpful articles about adapting those practices for academic environments.
Nuccilli et al. (2018) wrote about “guerilla” usability testing at Wayne State University, where they set up a testing station in a busy, central location of the library for one hour per week for two semesters. They recruited participants for short, scripted tests that took 5–10 minutes. They noted that this model “can be successful with minimal planning, minimal staffing, minimal equipment, and minimal time” (para. 6).
Systematizing the guerilla model, Chao (2019) developed the UX Café model of user testing at Penn State University. This model was intended to be a “cost-effective, agile, and sustainable UX study framework” in which passersby participated in short user tests in exchange for coffee and snacks (para. 4). UX Café quickly caught on in academic libraries due to the “intentional, systematic,” and consistent nature of the framework (Blakiston, 2021, para. 5). Librarians have since adapted it for use at University of Arizona Libraries, University of Houston Libraries, North Carolina State University Libraries, and now San Francisco State University Library (Blakiston, 2021; NC State University Libraries, n.d.)!
Our UX Café was similar to others that came before it. Since we were dealing with a new website and a substantially different homepage, we knew we needed immediate feedback to catch problems and oversights. We devised testing scenarios that addressed discrete parts of the website and involved short tasks or questions. During some tests, we encouraged users to think aloud, a well-established user research method in which participants say what they’re thinking as they complete a task (Krug, 2010).
Our setup was a table near the library entrance, staffed by three librarians who recruited five participants per session for 5–10 minutes of testing each in exchange for a snack and a drink. Sessions ran 1–1.5 hours, once per week for two semesters, skipping weeks during breaks and finals. Down the road, longer user testing sessions could give us more insight into our website and systems as a whole, but UX Café worked well for our initial goals. There’s a lot you can learn in 5–10 minutes, and we highly recommend that librarians interested in trying user testing use this model as a starting point.
Top Seven Recommendations
1. Get Administrative Approval in Advance
Even for a small user testing program, it’s important to start planning well in advance. It took us a little over a full semester to prepare for our UX Café. Lizzy developed a detailed proposal in August 2022 and delivered a pitch to library administration at the end of that month. She argued that a small budget, simple setup, and minimal time commitment would produce outsize results for student success. Happily, our library administrators were supportive and agreed to a modest budget. Shortly after the pitch, Lizzy submitted the user testing project for Institutional Review Board approval. The board determined that the project didn’t qualify as human subjects research and was solely for program improvement, so approval wasn’t needed. This may or may not be the case for other user testing programs at academic libraries, which is another reason to start planning early. We spent the rest of the fall semester completing the website migration, reviving the dormant User Interfaces Working Group, and recruiting collaborators.
2. Recruit a Cross-Library Team
The working group was re-established in January 2023 for the purpose of facilitating user testing and implementing changes based on findings. The online learning librarian, Lizzy, was chair, and the other members were three research and instruction librarians (including Hesper and Zia), three members of Library IT, the library’s graphic designer, and the new discovery and systems librarian. User testing teams absolutely do not need to be this large, but it is important to bring together stakeholders from different units.
The group purposefully included a cross-section of library faculty and staff, each with different areas of expertise and experience. Research and instruction librarians work most closely with users, Library IT has technical knowledge, the graphic designer understands web design, and the discovery and systems librarian knows how the website works in conjunction with other library systems.
Only a small subset of the group (Zia, Hesper, and Lizzy) actually conducted the user testing, with other members substituting when one of the core team was out. The other members attended meetings to identify priorities, stay informed on testing updates, provide input on decisions or roadblocks, and help turn findings into real changes. Not every team member needs to play an equal part, but it is beneficial to get input and perspectives from a variety of library employees.
3. Define Roles and Responsibilities Based on Preferences and Strengths
At the start of testing, we experimented with different roles in order to figure out the responsibilities of each one and our own preferences. We settled on recruiter, notetaker, and facilitator. After our early UX sessions, we discovered that Hesper liked to recruit and wasn’t shy about getting attention from headphone-clad students, Zia was good at capturing detailed notes, and Lizzy enjoyed talking with students while running the tests.
Our core group of three also divvied up duties to prepare for our weekly practice. Zia took the lead on finalizing the script and creating note-taking forms. Lizzy managed the purchasing, delivery, and reimbursement paperwork for the student incentives (i.e., snacks). Occasional absences gave us opportunities to cover for each other and to invite other colleagues in the library to join us, training them on one of the three roles and orienting them so they could suggest new tests for future weeks.
4. Schedule in Advance
It was helpful for us to have recurring times for user testing, reserving hours on our calendars far in advance. At first, we held our UX sessions at 10 a.m. every Tuesday, which made it easy to remember to prepare and meet but meant we were approaching the same library users week after week. The second semester, we began alternating between Tuesday and Wednesday mornings in order to reach a more mixed group of students. It also helped us to reserve time at the beginning of each week for a planning meeting, where we chose a focus for the week’s test, brainstormed questions, and figured out logistics for running the test.
Scheduling more time than we needed—1.5 hours when we usually finished much sooner—allowed us to stay calm and not worry if recruitment was slow on a particular day. After each tester finished, we had time to check in briefly with each other to share observations and tighten up our process. Once we reached five participants, we checked in again to decide whether we wanted more participants that day, depending on the time available and the task. When finished, we discussed our observations to capture our impressions right away and start planning how we might approach related testing the following week. We also backed up any data that same day.
Some tests needed more processing, so one or more of us would volunteer to analyze the data later in the week. For example, when we tested users on our room reservation system, we needed to time how long it took them to complete each task. We each rewatched a set of recordings and made a table of the tasks and times. Although you likely won’t need to process data every week, it’s useful to schedule time to do it soon after the test, while you still remember what happened.
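If you end up with timing data like that, even a small script can save some tedium. Below is a minimal sketch in Python of how per-task completion times might be tabulated once they’ve been logged from the recordings; the task names, participant labels, and durations are hypothetical, purely for illustration.

```python
# Minimal sketch: summarize task-completion times logged while
# rewatching recordings. All names and numbers are hypothetical.
from statistics import mean, median

# (participant, task, seconds to complete)
observations = [
    ("P1", "find a room", 95),
    ("P2", "find a room", 140),
    ("P3", "find a room", 80),
    ("P1", "book the room", 210),
    ("P2", "book the room", 185),
    ("P3", "book the room", 250),
]

# Group durations by task.
by_task: dict[str, list[int]] = {}
for _participant, task, seconds in observations:
    by_task.setdefault(task, []).append(seconds)

# Print a simple summary table of the kind we built by hand.
print(f"{'Task':<15}{'n':>3}{'Mean (s)':>10}{'Median (s)':>12}")
for task, times in by_task.items():
    print(f"{task:<15}{len(times):>3}{mean(times):>10.0f}{median(times):>12.0f}")
```

A spreadsheet works just as well; the point is simply to get the times into one structured place soon after testing, whatever the tool.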
5. Be Ready to Repeat Easily
With lean UX and our small budget in mind, it was important to work with the resources we already had and make our process easy to repeat week to week. We enabled this in a few ways:
Keep supplies together: We kept all our supplies (snacks, drinks, consent forms, pens, hand sanitizer, etc.) together in a tote bag so it would be easy to take downstairs to the library entrance.
Make checklists: We created procedural checklists for prep, day-of, and post-test periods so we wouldn’t forget anything.
Write scripts: We had a script for the facilitator to read when introducing UX Café and that week’s test. The first few weeks of testing were iterative: balancing brevity with clarity, it took us 3–4 weeks to draft an introductory script we were happy with.
Use templates: Before running each test, we created a Google Form for the notetaker to use. It followed the same structure as the test, so it was easy for the notetaker to follow along and keep notes organized by question.
Have a dedicated laptop: Our library lends laptops to students, so Lizzy borrowed one to maintain as a neutral machine, letting students complete tests without browser bookmarks, desktop files, or other customizations. She made sure the laptop was charged and loaded with the day’s test and that any recorded sessions were backed up to Box afterward.
Use shelf-stable snacks as incentives: Lizzy bought nut bars, granola bars, canned cold brew, and bottled iced tea in bulk. Though donuts and hot coffee might seem more appealing, having shelf-stable options was much easier on us and our budget, since we didn’t have to go shopping each week, could reuse leftovers, and could set up quickly. Luckily, students didn’t seem to mind and were often drawn toward our table by the bottles of green tea, which many liked saving for later.
6. Have a Growth Mindset and Sense of Humor
Our UX Café did not go perfectly—far from it! Challenges included adjusting to absences among our core team, managing community members who stopped at our table wanting to chat, struggling to recruit participants, finding our UX Café sign missing more than once, and hearing on our audio recordings the surprisingly loud screech of shoes crossing our echo-prone lobby.
There were also times when human error tripped us up. At first we didn’t use incognito mode for tests, so searches that prior users had run popped up as suggestions. Sometimes we hadn’t prepared enough and were surprised by something on our own website (e.g., we weren’t completely familiar with the process for new users to reserve rooms because the process looked different for library staff). Occasionally the facilitator forgot to start the screen recording or veered off script when something unexpected happened. At times it was hard for the notetaker to hear and record everything that happened.
Whenever challenges came up, no matter what caused them, we saw them as learning opportunities. Instead of beating ourselves up or feeling like our work wasn’t worthwhile if it wasn’t perfect, we had a good laugh and reminded ourselves that we always got something of value out of every session. What we learned may not have been what we’d expected, but these lessons helped us move forward nonetheless.
7. Keep at It
Sometimes you’ll see quick results, sometimes not. That can be frustrating! Just like we had to be flexible with the user testing itself, we also had to be flexible with effecting change. Though we were able to make a number of improvements to the library’s online presence, we are still working to improve the way we communicate our results to the library faculty and staff more generally. For example, despite rewriting the instructions for reserving a room to explicitly address known pain points, we realized months later that many librarians still weren’t aware of those pain points or the new instructions. When there are many stakeholders, some stretched quite thin, it’s important to be patient. However, it’s also important to honor your obligation to provide users with the best experience possible. Be understanding of everyone’s time and obligations, but don’t let your hard work go to waste if you can help it.
If you’re interested, our materials, examples, and photos are in this Public Access folder.
References
Blakiston, R. (2021, September 27). User research can be fast, frequent, and frugal. Here’s how. uxEd. https://medium.com/ux-ed/user-research-can-be-fast-frequent-and-frugal-heres-how-eb292a976b1c
Chao, Z. (2019). Rethinking user experience studies in libraries: The story of UX Café. Weave: Journal of Library User Experience, 2(2). https://doi.org/10.3998/weave.12535642.0002.203
Krug, S. (2010). Rocket surgery made easy: The do-it-yourself guide to finding and fixing usability problems. New Riders.
NC State University Libraries. (n.d.). Tiny Café. https://www.lib.ncsu.edu/tiny-cafe
Nuccilli, M., Polak, E., & Binno, A. (2018). Start with an hour a week: Enhancing usability at Wayne State University Libraries. Weave: Journal of Library User Experience, 1(8). https://doi.org/10.3998/weave.12535642.0001.803