Usability testing has long been used to identify website issues that are problematic for users. By having users complete assigned tasks, website designers and other stakeholders can view how their target audience uses their creations; this type of testing helps uncover areas of concern that didn’t arise during the initial planning or development of the website.

Librarians can hone and improve their usability testing facilitation skills with regular practice. This set of best practices for usability testing will benefit librarians who are new to usability testing, librarians who have been tasked with performing usability testing, and seasoned usability testing facilitators looking to get back to basics and provide evidence for why they do what they do.

To begin development of these best practices, I sent an email to the Code4Lib listserv asking for interested participants to complete a form to set up an interview with me. The Code4Lib listserv “isn’t entirely about code or libraries. It is a volunteer-driven collective of hackers, designers, architects, curators, catalogers, artists, and instigators from around the world, who largely work for and with libraries, archives, and museums on technology ‘stuff’” (Code4Lib, n.d.). It is a rich group of experts who have years of experience in all things technology in libraries and a supportive community of people with the desire to advance the information profession.

Several of the user experience (UX) librarians I interviewed remarked on relationship building as perhaps the most important part of usability testing. Developing and maintaining good relationships with various constituencies—students, student clubs, administrators, colleagues, librarians in other departments, faculty, etc.—helps eliminate some of the struggles that often arise when conducting usability testing. Maintaining good relationships will also support recruitment of testers, help identify users who may be reluctant to reach out and participate, and aid in the implementation stage. Related to this, several of the librarians mentioned how important it is to close the loop on usability testing. People want to know the results! What did you discover? What changes did you make? When will the new interface go live?

Usability testing is primarily a discovery tool, so it is important for test designers to be removed from the service design as much as possible so they can objectively discover flaws or issues in a tool. When a service is tested near completion, it is often easier to ignore a repeated issue and move ahead with the original plan. Leading subjects and faking results defeat the entire point of usability testing.

Finally, usability testing is a humanistic practice. The aim of making a service, whether a web interface or physical space, more user-centric should always focus on the specific people who will be using these services. Often, simple problems are solved when they are seen from a different perspective, and designers who are intimately familiar with the product do not have this perspective. Usability testing can be humbling, it can be frustrating, and it can lead to more time spent on a project that was close to completion. However, the end result will be something the target audience can actually use, hopefully saving time in the long run and providing a positive user experience.

Interviews with UX Librarians

The first part of my sabbatical project consisted of interviewing UX librarians. I conducted nine interviews with academic library workers who have UX or usability in their job descriptions. These librarians are from across the United States, come from different types of academic institutions, and are gender diverse.

I asked all interviewees five questions:

  1. Can you tell me a little bit about what you do as a UX librarian? The holistic picture.

  2. How does usability testing fit into your work? I’m primarily interested in this from a website perspective, but if you do usability testing of other things, I’d be interested in hearing about that as well.

  3. I’m building a set of best practices. There are best practices out there, some are out of date, and some are very broad. What do you believe are essential best practices for usability testing of academic library websites?

  4. What should librarians who are new to usability testing—or those who may find themselves doing the work without having previous experience—know about the process?

  5. Is there anything else you would like to add about usability testing? In your own library? Information you’d like to impart to others? Thoughts? Final words?

The first few questions were intended to break the ice and begin the conversation. Several questions had some overlap in the types of answers that the librarian interviewees provided, but the five questions led to fruitful conversations that helped shape and define these best practices.

Main Themes in the Best Practices

Even though the main purpose of usability testing is technological—to uncover issues in a design—ultimately, you are working with humans, who are notoriously messy, uncertain, contradictory, strong-willed, unreliable, and different from one another.

Most of the librarians I interviewed stressed that relationship building and connections with user groups can address many of the upfront challenges of usability testing. Building a connection with students demonstrates that the library has their best interests in mind. When students—or whoever your main user group is—are treated as co-creators or co-designers through usability testing, they develop a sense of ownership, which may boost use and encourage others to use your services. Users have their own expertise. One librarian bravely stated, “Maybe we [librarians] don’t actually know best!” Thus, it is important to establish and maintain these relationships outside of periods of usability testing, both for authenticity and so that it doesn’t seem like the library only turns to certain student groups when it needs something. Failing to maintain and foster these relationships directly contradicts the intention of valuing ongoing user input and demonstrating that users are partners in library services.

Conducting usability testing from beginning to end requires large amounts of organization and planning—but the beauty of usability testing is that it allows for some spontaneity and quick action once the groundwork is in place.

I did not address accessibility outright throughout the best practices, but each librarian I interviewed mentioned accessibility as inherent to usability testing and not something separate from the process. Accessible websites are essential, and access to the web is a basic human right. Designing for accessibility is the right thing to do. Tim Berners-Lee said, “The power of the Web is in its universality. Access by everyone regardless of disability is an essential aspect” (W3C, n.d.).

Above all, conducting usability testing is a practice. Much like yoga, playing an instrument or sport, or any other activity where an individual persists with an aim to improve, usability testing only gets better with increased sessions and time spent doing the work.

The best practices are divided into three sections: 1) planning and implementing, 2) conducting the test, and 3) following up and concluding testing. When possible, each best practice has supporting evidence from the literature and is written to be practical and quickly integrated into anyone’s work in usability testing.

Best Practices for Planning and Implementing

Find a Good Collaborator (or Several)

The entire testing process should be as collaborative as possible. Collaborators may shift over time, and collaboration is a great way to get buy-in from others in the library or even across campus. Diversity on the usability testing team is as important as on any other team. The librarians I interviewed represented institutions of multiple sizes: some had entire UX departments, and some were the only UX person in the entire library. The consensus from the interviews is that usability testing should not be conducted by one person, but it is okay if that is the only way it can be done. Getting others to participate can foster cross-departmental collaboration and give people the opportunity to interact directly with users when that is not typically part of their job. Allowing colleagues to facilitate tests and participate in planning gives them a sense of ownership and familiarizes them with the various methods used in the work.

Additionally, getting others involved in designing and facilitating removes those who are most directly involved with the design from the main role. A few of the librarians mentioned that it is often difficult for designers to hear criticism, and some may only hear what they want to hear instead of what the actual problems are. Designers who facilitate tests and are then tasked with writing reports sometimes select evidence either to support their own aims for the redesign or to confirm something they thought they already knew. A good collaborator—or multiple collaborators—can maintain objectivity throughout the entire process.

Research supporting this best practice:

Test Frequently

As you develop the test, the script, and the other materials, work toward making usability testing routine. An example of regularity, with the expectation that usability testing is part of the weekly schedule, could be something like two days per week for two hours a day, or just conducting two tests per week. This level of frequency may be unrealistic for some libraries, but each library can tailor the frequency to its own needs. One librarian suggested aiming for one test a month, but if you end up testing three or four times a year, that is perfectly fine. The aim is to have transparency around the testing, which communicates to students and other library users that this is a positive interaction with the library and that staff are seeking real and valuable feedback from users. If possible, the tests should be iterative: you test one thing, then retest it for the most accurate results.

Get Institutional Review Board Approval or Clearance

Institutional review board approval or clearance should be in place before you start recruiting test subjects or conducting usability interviews. One librarian mentioned that since usability testing differs from traditional research, their library UX department has a standing agreement with their institutional review board, so they don’t have to go through the approval process each time they begin testing.

Decide on the Scope of the Test

Do usability testing before a new service goes live and again shortly after implementation. The impetus could be as simple as “I wonder if this page works?” Maybe users seem confused when submitting certain forms on your website, such as title requests or interlibrary loans. Why is that the case? Maybe there are plans to redesign a certain part of your website. Usability testing will help determine how you might redesign it. These are ideas that plant the seeds for what to study. One librarian I interviewed stressed that any opportunity to evaluate something that isn’t fully developed is valuable. Keeping data on issues that pop up throughout the year can help inform usability testing, and any department—circulation, instruction, acquisitions, etc.—can play a role in identifying what would benefit from testing.

Usability testing does not often mean testing the whole website. Sometimes you are testing one very small part of the website, perhaps a filter on the catalog, the discovery layer for the digital library, a highly used part of your LibGuides, or other specific parts of the website. One librarian cited usability testing where the site-wide icons were the target of the test. Scope creep can be a real issue with usability testing, so it is essential to clearly identify the scope of the test and to adhere to it. It is important not to do usability testing just for the sake of doing usability testing, or testing for the wrong reasons, like to prove a point or to back up something with evidence. It should go without saying that usability test results should not be faked. Usability testing is ultimately a discovery process.

Research supporting this best practice:

Write the Test Tasks with Care

Multiple librarians mentioned some general guidelines about writing the test tasks:

  • Only test one thing at a time.

  • Be careful about how things are worded, and eliminate jargon. If certain words are used, users will look for them. For example, instead of using the term “interlibrary loan” on your test, ask users what they might do to request a book that the library does not have in its collection.

  • If possible, ask your subject matter experts to review the tasks for accuracy, conciseness, and appropriate use of discipline-specific terminology.

  • Do not write the test to be more intuitive than the interface being tested.

  • Understand that testers will likely not use the features you want them to use. An example of this happening is when a user is asked to solve something and instead of using a library tool, they open a new tab and use a search engine. Try to design your test so users do not seek such external tools, but understand that it will likely happen anyway.

Keep the test as simple and iterative as possible. The goal is not the “perfect test”; you can edit and change things as needed for the next round.

Research supporting this best practice:

Complete Prep Work Ahead of Time

A consistent answer across the interviews was to do as much prep work as possible, especially the written parts of testing. These could include templates for recruitment emails, scheduling correspondence, confirmations, reminders, and thank-you messages to test participants. When this administrative work is completed in advance, or when you can use templates to complete it, you will save time, maintain consistency from one tester to the next, and ensure that anyone on your team can jump in as needed.
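As one way to keep such templates reusable, the sketch below fills a reminder email from a template using Python's standard `string.Template`. The message wording, field names, and participant details are illustrative assumptions, not drawn from the interviews.

```python
from string import Template

# Hypothetical reminder template; wording and fields are illustrative.
REMINDER = Template(
    "Hi $name,\n\n"
    "This is a reminder that your usability testing session is scheduled for\n"
    "$date at $time in $location. The session should take about $minutes minutes.\n\n"
    "Thank you for helping us improve the library website!\n"
)

# Each participant's details would normally come from your scheduling sheet.
participants = [
    {"name": "Jordan", "date": "March 4", "time": "2:00 p.m.",
     "location": "Library Room 204", "minutes": "30"},
]

for p in participants:
    print(REMINDER.substitute(p))
```

Because the template lives in one place, every tester receives a consistent message, and any team member can send the correspondence.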

Identify User Groups to Recruit

Dividing your test into categories based on types of users will save time and will aid in recruitment to ensure that you get the best possible results. Examples of user groups could be undergraduate students, graduate students, faculty, discipline-specific faculty or students, community users, or staff and administrators. Try to recruit five to eight testers from each category, and plan to spend 20–45 minutes conducting usability testing with each. It is okay if you cannot identify that many people from each user group or test with multiple audiences. Nielsen (2012) stated that five is an ideal number of users to test with.

Recruit a Diverse Group of Participants

Most of the librarians I interviewed cited difficulties recruiting participants. Some suggested using social media, student workers, or student library advocacy or advisory groups. Student library advocacy or advisory groups serve as sort of a standing focus group for some of the institutions represented in my interviews. However, one librarian mentioned that it might be best to avoid using library student workers or advisory groups because they are already familiar with the library and may have a skewed perspective.

Others asked teaching librarians to mention the usability tests in their classes and provided contact information for those interested in participating. Another suggestion was to send an email blast to the entire student body asking if they want to participate in usability testing, with a simple yes/no form that would allow library staff to recruit from a pool of affirmative answers in the future. Other librarians used the “library lobby” method, where test conductors set up a station in a busy location to recruit and conduct tests spontaneously.
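The email-blast idea above implies keeping a pool of willing participants. As a minimal sketch, assuming the yes/no form exports rows with hypothetical `email` and `willing` columns, the affirmative responses could be filtered into a recruitment list:

```python
import csv
import io

# Hypothetical form export; in practice this would be read from a CSV file.
form_export = io.StringIO(
    "email,willing\n"
    "a@campus.edu,yes\n"
    "b@campus.edu,no\n"
    "c@campus.edu,yes\n"
)

# Keep only people who answered "yes" to future usability testing.
pool = [row["email"] for row in csv.DictReader(form_export)
        if row["willing"].strip().lower() == "yes"]

print(pool)  # the recruitment pool to draw from later
```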

Unfortunately, these methods do not ensure a truly diverse representation of user groups in terms of characteristics like age, use of accessibility tools, and background. Students from underrepresented groups, those with disabilities, and others may not feel comfortable stepping forward and participating in such an open forum. Ideally, you would try to include subjects from these groups because, for example, if you do not know how a user with a screen reader interacts with your website, you are not getting the complete picture.

The part of the website being tested may influence who you recruit. One interviewee provided an example that had to do with discipline-specific testing, such as biology versus humanities, and various levels of domain knowledge. You may be interested in how science majors interact with certain parts of your site, so recruiting humanities majors or early first-year students may not yield the most accurate results for this level of specificity.

Scheduling around the academic calendar can be an added challenge for usability testing in academic libraries. There are times during the year when there are fewer students and faculty on campus, times when these groups are too busy to participate in usability testing, and times when campus is closed. Advance planning of when to conduct the tests will ensure that you get the most and best participation.

Several of the librarians I interviewed shared that organizing and corresponding with testers was one of the most time-consuming parts of usability testing when running tests on a schedule, as opposed to library-lobby or open-room testing. The best practice of using templates helps with correspondence, but back and forth is inevitable, as are no-show testers who leave gaps in your recruitment efforts.

Research supporting this best practice:

Define Roles for Those Conducting the Tests

Ideally, the usability testing team will have multiple people conducting the test, so you can include a notetaker/timekeeper who is separate from both the test conductor and the observer. However, the notetaker/timekeeper can also be the observer, or, if time is an element of the test, the team can simply designate a separate timekeeper.

Have a premade form for the notetaker. If you have ever served on a hiring committee, you have probably had a sheet with the questions on it and space to take notes on the candidate’s answers. That is exactly the kind of form this best practice suggests.

Conduct a Dress Rehearsal / Dry Run of the Test

This will help identify things you may have forgotten and address issues that may pop up. It will also take the burden off your first real test subject and allow that individual to have the same experience everyone else has.

Keep Implementation Low-Tech, Simple, and Fast

A successful option for usability testing is to go low-tech, hold an open forum or lobby test, and run sessions over the course of a few weeks. Some libraries simply do not have the luxury of one-on-one formal, coordinated, recorded, facilitated user tests, and that is okay. Some libraries need to be results-driven; low-tech, simple, and fast will still get results. Planning for this style of testing is much less involved; all you need is a task list or questions to evaluate at the end. The downside is that it does not reach the broadest spectrum of users, does not specifically aim for inclusion of underrepresented groups or users with disabilities, and may not reach participants who don’t use the library. However, you’ll still get something to work with.

Research supporting this best practice:

Keep a Checklist

Your checklist can be something you start with during planning, maintain throughout the design process, and keep with you as you begin testing to ensure that each tester has the same experience. The checklist could be as simple as a handwritten paper or a formal typed document. The most important thing about the checklist is that it will help to keep testing moving along and avoid the need to regress at any point.

Research supporting this best practice:

Best Practices for Conducting the Test

Don’t Forget Your Checklist

The checklist you used during planning can also be part of the actual test. The checklist contains all of the little details for your test, including things like a pencil, printed directions, a timer, and an informed consent form if required.

Print Copies of Test, Task List, and Questions

Ensure that you have printed copies of the tasks or questions for participants. This avoids the need to repeat instructions and reduces cognitive load.

Allow Participants to Use Their Own Devices

Try to recruit users who rely on adaptive services, and let them take the test using the tools they need to navigate your site. This will also ensure that you test on mobile devices and not just laptops or desktop computers.

Allow Participants to Fail

Several of the librarians I interviewed mentioned objectivity as essential to usability testing. This can be difficult, as librarians generally want to help people complete a task or succeed at something. Librarians can be uncomfortable watching people struggle. One librarian said, “You have to leave people with the impression that the library is there to help them, even though the point of usability testing is to discover things that they cannot accomplish successfully on their own in order to discover problems with the website.” Of course, you want to find ways to redirect people without showing them what to do.

Research supporting this best practice:

Use Your Facilitation and Interview Skills

Conducting usability testing can help foster facilitation and interview skills. Usability testing gives you the opportunity to learn how to make people feel comfortable while completing challenging tasks. You will be asking questions to get testers to elaborate on why they feel a certain way or why they made a certain choice. Ask them for details about their experience without leading them down a certain path. Some participants are not going to want to talk, so the facilitator must find ways to prompt them to share what they are thinking throughout testing.

Getting comfortable with silence is an important element of usability testing, and most people are uncomfortable with it. Your testers will pause and hesitate. Krug (n.d.) offers a fantastic document on his website called “Things a Therapist Would Say,” which contains phrases that a facilitator can use to redirect users, help them without really helping them, and fill an uncomfortable silence. The document also contains phrases to redirect chatty testers who may spend too much time talking and want to share their opinions about how the library could improve its website or services.

Research supporting this best practice:

Compensate All Participants

Compensation is essential, and there are many ways to offer it that are not monetary. You could provide snacks or food, but it is important to also offer a non-food item for those with dietary restrictions or other issues. One librarian shared that their library offered a meal delivery service coupon and a USB battery pack to their testers. Other ideas may be a candy bar, library promotional items, pencils or pens, and tote bags. A little can go a long way, and compensation demonstrates appreciation for the person’s time spent helping to improve the library website.

Best Practices for Following Up and Concluding Testing

So far, these best practices have focused on planning and conducting the tests. Much of the literature omits what to do with the results of these tests beyond implementing changes to whatever is being tested. The goal is to implement these changes, but there is no comprehensive answer for how to share the results and close the loop on usability testing.

Several of the librarians stressed the importance of keeping lines of communication open, which ties back to the best practice of relationship building and maintenance. This includes all stakeholders and major players in the testing, from vendors to other internal and external parties.

Use Report Templates

An earlier best practice recommends templates for communication with participants, but it also saves time and effort to have report templates for displaying data and communicating results to your stakeholders. This gives your usability practice a consistent format for sharing your process and discoveries. Some of the librarians I interviewed explained that reporting can also be a collaborative process. Different people can write different parts of the report, which continues the momentum of keeping everyone involved throughout the process.
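On the data side of a report template, a small sketch can show the kind of summary such a template might hold. Here I assume, purely for illustration, that notes were recorded as per-task pass/fail outcomes for each participant; the task names are hypothetical.

```python
# Hypothetical per-participant results: task name -> completed (True/False).
results = [
    {"find_ill_form": True,  "renew_book": True,  "locate_libguide": False},
    {"find_ill_form": False, "renew_book": True,  "locate_libguide": False},
    {"find_ill_form": True,  "renew_book": True,  "locate_libguide": True},
]

# Completion rate per task across all participants.
tasks = results[0].keys()
rates = {task: sum(r[task] for r in results) / len(results) for task in tasks}

# List the weakest tasks first, since those are the candidates for redesign.
for task, rate in sorted(rates.items(), key=lambda kv: kv[1]):
    print(f"{task}: {rate:.0%} completion")
```

A consistent summary like this, dropped into the same report template each round, lets stakeholders compare results across testing cycles at a glance.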

Research supporting this best practice and examples of successful implementation of report templates:

Share Results with All Stakeholders, Including Test Participants

Issues raised as a result of usability testing ultimately need to be addressed with the departments responsible for making changes. It can be difficult to raise design issues with the people who created the design, and there can be differing opinions on how to implement the change(s) needed to resolve issues discovered in the testing. Some stakeholders do not have the same expertise as the design team or even the testing team. Leaving ego behind can be challenging for some folks, but will serve everyone better in the long run. It can mean a lot to incorporate stakeholder and user feedback into the reports, even if the findings ultimately aren’t implemented.

Users who participated in testing may be interested to know what changes or redesigns resulted from it, so sharing findings with your participants is another important part of closing the loop. This gives participants an opportunity to respond and ask questions. Reaching back out to testers with results demonstrates that the library cares about their participation throughout the entire process. Presenting results periodically over the course of several months is another strategy to maintain interest and continue to demonstrate the value of this type of work to your stakeholders.

Be Strategic in Planning Changes

One of the librarians I interviewed explained that it is helpful to think of your users as excellent refiners. They are likely not designers, and the redesign team doesn’t have to take every suggestion that arose during testing; however, it can be helpful to see what users think are potential design issues. Common problems will rise to the top, but bearing in mind that individual users will have their own idiosyncratic issues will help keep the number of needed changes to a reasonable level. You can always fix an issue that a single user raises during testing, but you probably do not need to spend a great deal of time worrying about that issue if it’s not widespread or a quick fix.

Sometimes the solution to an issue is to raise it with a vendor, if it’s not something you can change locally. One of the librarians I interviewed offered increased communication between libraries and vendors on issues of usability as an opportunity for further collaboration. Collaboration among institutions on issues of usability in relation to vendor tools is another opportunity for addressing common problems. This librarian cited an example of creating a multi-institution white paper to send to a vendor about ongoing usability and design issues that several library customers were experiencing.

One issue specific to academic libraries is working around the academic calendar. This challenge was mentioned regarding recruiting participants, but it also affects when to implement design changes. With enterprise and other non-academic websites, you can make changes at any time during the year. Within academia, it is essential that the website team decide which changes they can make during the semester and which to make between semesters. Make major changes over the winter or summer breaks to minimize effects on current students: any interface change temporarily increases cognitive load, even if it leads to a better design. Incremental changes are acceptable throughout the semester, but when redesigns must wait for winter or summer break, the entire process slows down. While the aim of a change is to decrease cognitive load and make the interface more usable, the temporary discomfort it causes users should be considered when implementing any changes.

A conclusion from several interviews is that users will always complain. You will never satisfy users 100 percent of the time. The design team needs to be aware of and sympathetic to this fact of life. They must be willing to listen to complaints but remain firm in the decision to make or not make research-informed changes resulting from usability testing.

Do-It-Yourself Usability Testing Toolkit

The final component of my sabbatical project was the creation of a do-it-yourself usability testing toolkit. The toolkit is based on the best practices above, and it consists of content that librarians can use to quickly conduct usability testing from start to finish at their own institutions. The main sections of the toolkit are:

  • Correspondence & Report Templates

  • Scripts

  • Task Lists

  • User Profiles

  • Materials & Conducting the Test

  • Usability Best Practices

  • Additional Resources


Usability testing should be done frequently and as an ongoing process, but how frequently depends on each library’s specific situation regarding staffing and test subjects.

Issues that come up via usability testing aren’t always design-based, but may be based in content. Library leadership should be prepared to provide training and recommendations for writing better web content, which is staff time well spent.

One of the great philosophical questions relating to designing for libraries is, “Do you build a better interface, or do you build a better user?” Designing library websites and search interfaces is notoriously difficult. The interface can’t be so basic that the user cannot actually succeed at their tasks.

Usability testing is not the only way to develop a user-friendly and user-informed interface. Site visits, participatory workshops, and other methods can be just as useful and offer different perspectives. Something as simple as sitting down with users and interviewing them, or having informal—but structured—conversations about user habits, can be helpful. Varying your testing methods can keep the practice fresh and may elicit results that would not have appeared during traditional usability testing. These alternate strategies may also appeal to users who would be unlikely to participate in traditional usability testing.

At the end of the day, usability testing should be fun! It’s a combination of different skills: interviewing, analysis, technology, web design, graphic design, observation, communication, and relationship building. There is something for everyone in usability testing work, and it can be a great way to step out of your normal library role and try something that will ultimately improve your services and demonstrate your library’s value to library stakeholders and users.


Azadbakht, E., Blair, J., & Jones, L. (2017). Everyone’s invited: A website usability study involving multiple library stakeholders. Information Technology and Libraries, 36(4), 34–45.

Benjes, C., & Brown, J. F. (2000). Test, revise, retest: Usability testing and library web sites. Internet Reference Services Quarterly, 5(4), 37–54.

Code4Lib. (n.d.). About.

Conrad, S., & Stevens, C. (2019). “Am I on the library website?”: A LibGuides usability study. Information Technology and Libraries, 38(3), 49–81.

Dominguez, G., Hammill, S. J., & Brillat, A. I. (2015). Toward a usable academic library web site: A case study of tried and tested usability practices. Journal of Web Librarianship, 9(2–3), 99–120.

Eaton, M., & Argüelles, C. (2019). Usability study for a community college library website: A methodology for large-scale data gathering. Community & Junior College Libraries, 23(3–4), 99–113.

Farrell, S. (2016, May 22). Open-ended vs. closed-ended questions in user research. Nielsen Norman Group.

Gallavin, G. (2014). System Usability Scale (SUS): Improving products since 1986. Retrieved March 22, 2023.

Gillis, R. (2017). “Watch your language!”: Word choice in library website usability. Partnership: The Canadian Journal of Library and Information Practice and Research, 12(1).

Graves, S., & Ruppel, M. (2007). Usability testing and instruction librarians: A perfect pair. Internet Reference Services Quarterly, 11(4), 99–116.

Hyams, R. (2020). Tending to an overgrown garden: Weeding and rebuilding a LibGuides v2 system. Information Technology and Libraries, 39(4).

Klug, B. (2017). An overview of the System Usability Scale in library website and system usability testing. Weave: Journal of Library User Experience, 1(6).

Krug, S. (n.d.). Downloads. Steve Krug.

Moran, K. (2019, December 1). Usability testing 101. Nielsen Norman Group.

Nielsen, J. (2003, June 1). Usability for $200. Nielsen Norman Group.

Nielsen, J. (2012, January 3). Usability 101: Introduction to usability. Nielsen Norman Group.

North Carolina State University Libraries. (n.d.). User research projects.

Overduin, T. (2019). “Like a Robot”: Designing library websites for new and returning users. Journal of Web Librarianship, 13(2), 112–126.

Swanson, T. A., Hayes, T., Kolan, J., Hand, K., & Miller, S. (2017). Guiding choices: Implementing a library website usability study. Reference Services Review, 45(3), 359–367.

Texas Tech University Libraries. (n.d.). User research.

University of Miami. (2017, February 20). User-testing: Card-sorting & one-on-one tests. UM Libraries Website Redesign.

Vaughn, D., & Callicott, B. (2003). Broccoli librarianship and Google-bred patrons, or what’s wrong with usability testing? College & Undergraduate Libraries, 10(2), 1–18.

W3C. (n.d.). Accessibility.