Introduction
St. Mary’s College of Maryland (SMCM) has an enrollment of approximately 1,500 students. Because it is a rural school, the majority of students live on campus. At the time of this study, the college did not offer any 100 percent online courses, and drop-in research and reference assistance was available 24 of the 100 hours the building was open. Librarians staffed the Reference Desk Monday through Thursday from 10 a.m. to 12 p.m. and 1 p.m. to 5 p.m. These hours provided in-person research and reference assistance at the busiest times while also accommodating the librarians’ other job duties, primarily information literacy instruction.
While students still use the drop-in reference hours or schedule research consultations with librarians, some students prefer to interact with the library completely online, through article databases and electronic interlibrary loan, and may have no need or desire to physically enter the building. Considering these factors, directing students to robust online, asynchronous help materials is an important aspect of present and future public services. Since the completion of this study, the COVID-19 pandemic has further highlighted the importance of strong online help materials.
SMCM’s Hilda C. Landers Library began using Springshare’s LibAnswers platform in July 2014. In 2016, the library migrated to LibAnswers version 2, at which time co-author Amanda VerMeulen, as the library Springshare administrator, customized the generic design to match the look and feel of the college’s website.
In Spring 2018, we performed a user experience (UX) research study of the library’s LibAnswers FAQ homepage. We were interested in whether the customized LibAnswers FAQ homepage design was meeting the needs of SMCM students. Despite a large body of literature focused on LibGuides usability testing and best practices, at the time of the study, there was a dearth of literature on usability testing of LibAnswers knowledge bases and FAQ pages. Though there have been a few presentations and reports focused on LibAnswers’ usability in the intervening years, the literature is still limited. As such, this study not only tested participants’ ability to complete task-based scenarios using the customized LibAnswers interface, but it also explored designing a testing protocol to study the LibAnswers FAQ platform.
Literature Review
As library and information science professionals have embraced UX and user testing, much of the research related to specific products revolves around LibGuides. In the past decade, there have been a number of studies testing everything from navigation to content in LibGuides.
Many UX studies of LibGuides explore general student guide use and preferences that result in best practices documents for guide creation (Almeida & Tidal, 2017; Barker & Hoffman, 2021; Conerton & Goldenstein, 2017; Sonsteby & DeJonghe, 2013). Others have studied specific aspects of LibGuides, such as guide navigation (Bowen et al., 2018; Chan et al., 2019) or page interface options like the number of columns (Thorngate & Hoden, 2017). A few have performed research in preparation to migrate from version 1 to version 2 (Conrad & Stevens, 2019; Quintel, 2016). Kahili-Heede (2020) researched the effectiveness of LibGuides as the library website. Despite the plethora and variety of studies performed on LibGuides, there is little UX literature regarding another common Springshare product, the LibAnswers knowledge base/FAQ module.
FAQs, as we know them today, were developed for online mailing lists and Usenet groups in the 1980s to cut down on unnecessary traffic in the groups from repeated questions (Kehoe, 1992). Now, FAQs are considered essential for any business, organization, or institution, not only to address questions but to reduce anxiety and empower users—whether they be customers, employees, or students (Kumar, 2022).
Research from the commercial sector suggests that users prefer to first answer questions with self-service sources, such as FAQs and knowledge bases, before contacting a person for help. Zendesk, a major software-as-a-service company providing customer support and communication tools, including knowledge bases and FAQs, reports that 63 percent of users first search a company’s online resources to answer their questions, 69 percent of users “want to resolve as many issues as possible on their own” (Zendesk Customer Experience Trends Report 2020, 2020, p. 13), and 91 percent of users would use a knowledge base that meets their needs (Self-Service: Do Customers Want to Help Themselves?, 2013).
In online education settings, recent research suggests that students in a massive open online course preferred a static FAQ webpage to a dynamic chatbot providing the same information, as language barriers reduced the chatbot’s ability to retrieve relevant information (Han & Lee, 2022). In general, online learners seem to prefer asynchronous options to learn at their own pace (McDonald, 2016). While it is hard to quantify exactly how many libraries, educational institutions, government agencies, non-profits, and commercial companies employ FAQ pages, a casual glance around the internet shows that FAQs and knowledge bases are ubiquitous, and not having FAQs is considered unusual enough to warrant an explanation (Richards, 2013).
Even in our search-engine-driven world, experts argue that well-designed FAQ pages are still relevant to users (S. Farrell, 2014; Roberts, 2018). Most built-in site search functions are not as adept as Google at parsing natural language searches, so many users cannot quickly find information with a site search, since they are searching with their own terms rather than the language of the site’s creator (S. Farrell, 2014). Many usability studies, including Gillis’s (2017) study of word choice in website usability, have found that users do not understand the jargon that often finds its way onto public-facing websites. Beyond the language disconnect, most users are not adept at constructing search queries to retrieve the information they need (S. Farrell, 2014).
In theory, a well-curated and easy-to-use FAQ can alleviate some of these issues. As an online tool, FAQs can support virtual reference (Labrake, 2019) and meet users’ need to access information services outside of regular service hours (Jones et al., 2009). In their study of the usability of company websites, Cappel and Huang (2007) used the existence of an FAQ page as a metric when evaluating websites.
Much of the literature about LibAnswers focuses on implementation and best practices (Archambault & Simon, 2012; B. Farrell & Leousis, 2017; Shepherd & Korber, 2014; Stevens, 2013; Tay, 2013). In some instances, the implementation of a LibAnswers FAQ page was a recommendation or result of usability testing on a library website as a whole (Tobias, 2017) or in response to the need for more online services (Wilairat et al., 2021). When user testing was carried out in LibAnswers, it usually focused on the chat component (Imler et al., 2016). Other related research used data generated by LibAnswers, such as chat transcripts or reference statistics, to improve other services, such as the library website (Wimer, 2017), or to provide a general picture of user needs (Marrall & Western Libraries Usability & Design Working Group, 2016).
In the process of developing our study and writing this paper, we came across three sources that studied LibAnswers for the purpose of improving the FAQ component, two of which were published after we undertook our testing in 2018. Drummond (2019) analyzed LibAnswers’ Query Analyzer data to develop FAQ guidelines for content creators and create buttons leading to individual FAQs based on top search keywords. Crescenzi et al. (2011) examined Query Analyzer data to determine how often users were connected to stored questions and then ran usability testing to identify specific issues. UX @ Harvard Library (n.d.) performed a comparison study of other libraries’ LibAnswers FAQ pages and surveyed user preferences rather than conducting usability testing. Our study fits within this growing body of LibAnswers FAQ literature.
Despite using different methods, we found pain points similar to those identified by Crescenzi et al. and UX @ Harvard Library related to the purpose and use of “Topics” and the naming of the service, details of which we discuss later in this paper. The UX @ Harvard project summary provides an alternative testing option for libraries that may feel more comfortable with survey-style testing than usability testing. Crescenzi et al. provide inspiration for additional testing by using Query Analyzer data to inform usability testing scenarios and by gathering student feedback on the question submission form. The details of our testing protocols and the example artifacts included in this paper add value by providing others with the tools to easily develop their own LibAnswers FAQ usability testing.
Methods
To understand how students use the library’s LibAnswers FAQ homepage, what they find helpful, and what pain points limit their use of the FAQs and knowledge base, we conducted two phases of testing: five second testing and usability testing.
Five Second Testing
The method for our first test was inspired by the finding that users develop an impression about a web page in as little as 50 milliseconds (Lindgaard et al., 2006). To study our users’ first impressions, we used the five second test, a method designed to capture exactly that (An Introduction to Five Second Testing, n.d.). We conducted five second testing on a drop-in basis in front of the entrance of the primary student dining hall on campus. Our twenty-one participants were a convenience sample of current SMCM students recruited through various methods: advertising through the All Student listserv, an announcement on the Library website, flyers posted around campus, and approaching students as they passed the dining hall entrance during the testing. Participation was voluntary, and the only criterion for participation was status as a current SMCM student.
After signing the consent form, we provided participants with a paper packet that included a brief questionnaire to gather basic demographics and information on average use of the library’s LibAnswers FAQs, an instruction sheet, a printout of the library’s LibAnswers FAQ homepage, and a response sheet. The instruction sheet asked participants to view the printout of the LibAnswers FAQ homepage for five seconds (see fig. 1), then flip to the final page of the packet—the response sheet—and answer the questions. The response sheet asked participants to recall the overall purpose of the page and as much information as they remembered seeing on the page. We instructed participants to return the testing packet to us when finished.
Due to the drop-in nature of this testing and the fact that there were only two of us conducting the research, after participants returned their testing packet, we handed them a written debriefing paragraph in lieu of doing a verbal debrief. Participants received incentives in the form of library buttons and a raffle entry form for a chance to win a library-branded water bottle. The raffle entry form included the option to be contacted about participating in usability testing at a future date.
We identified all testing materials with an alphanumeric code for cross-referencing to ensure participant anonymity. We did not code consent forms and raffle entries for cross-referencing, and we kept them separate from any study material.
Usability Testing
Usability testing involves a series of scenario-based tasks using a think-aloud protocol (Nielsen, 2012). We designed our tasks to reflect information the LibAnswers FAQs are intended to help students find. We provided participants with a series of short tasks to complete using the LibAnswers FAQ homepage, and we asked them to describe aloud their experiences while completing the tasks.
The conventional wisdom for usability testing is that you only need to test five participants, as after five you tend to see the same successes and pain points and thereby get diminishing returns per subsequent test (Krug, 2010). As our goal with this study was to identify recurring issues that could be used to inform a LibAnswers FAQ homepage redesign to improve overall usability, we chose to follow the typical usability research approach.
We recruited a convenience sample of six current SMCM students through the All Student listserv, an announcement on the Library website, flyers posted around campus, and follow-up emails to five second testing participants who indicated interest in usability testing on their raffle entry form. This gave us the standard five participants plus one participant in case someone dropped out. In the end, all six participants showed up for testing. Of our six participants, five lived on campus and one was a commuter. Two of our six participants were transfer students. Among all participants, time spent as an SMCM student ranged from two to eight semesters.
Testing occurred in the college’s Usability Testing Lab to reduce ambient noise. Further measures to ensure a quiet testing space included using wooden chairs to reduce distracting noise produced by chair movement. Another benefit of using the Usability Testing Lab was using the TechSmith Morae usability software licensed for that lab. While Morae has many features, we used the usability testing options of screen recording, designating tasks to identify sections of the recordings easily, annotating recordings at specific timestamps, and creating and analyzing pre- and post-test surveys. Morae is by no means necessary to conduct computer-based testing, but it can streamline analysis of test recordings.
Initially, we planned to use a method inspired by Krug (2010) in which one investigator acts as facilitator and a second investigator acts as observer. Ideally in this scenario, both facilitator and observer are present in the room during testing sessions with the observer making notes about participant behavior to supplement the data captured by the usability software. However, due to an unexpected illness that sidelined one of us during the scheduled testing sessions, we conducted the majority of testing sessions with only a facilitator.
The usability testing setup consisted of a desktop personal computer connected to a laptop, both installed with Morae. The participant completed the tasks on the desktop computer, while the facilitator administered pre- and post-test survey questions and controlled the recording in Morae from the laptop to avoid distracting the participant. When moving between usability tasks, the facilitator selected each task in Morae and stopped and started the recording to help with data analysis. The facilitator sat to the side of the participant, angled so as not to crowd them and so the laptop screen was not visible during testing. As facilitator and observer, we put away our phones and requested participants do the same.
After providing consent, participants completed a brief demographic questionnaire as well as a pre-test survey, followed by an icebreaker task—a library homepage tour. We designed this task to help participants get comfortable with the think-aloud protocol, in which participants are encouraged to expose their internal thought processes by talking as they complete tasks and interact with the website (Krug, 2010). The facilitator encouraged participants to think aloud, and the observer—when present—took notes on details not captured by screen or audio recording, such as facial expressions, body language, or other contextual details, while documenting which tasks the notes referred to. Following the series of tasks, participants completed a post-test survey. At the conclusion of the testing session, we debriefed each participant and gave them a library-branded water bottle as an incentive.
To test the library’s LibAnswers FAQ homepage, we created four tasks based on common questions to test participants’ abilities to find needed information in the knowledge base. For all tasks, participants received a card with the task wording printed on it while the facilitator read the task aloud. We did not control how participants looked for the information; they could browse by question or topic, or use the search function. See Table 1 for the individual tasks and their wording. We did not share the task names with the participants in print or verbally; as researchers, we used them only to refer to the tasks internally. At the time of the study, the library’s LibAnswers FAQ page was branded Ask Us.
Table 1. Usability testing tasks and their wording

Task name | Task wording
--- | ---
Liaison contact | You need help finding statistics for an economics paper you’re writing. You wonder if there is an economics librarian who you could make an appointment with. Use the Ask Us to find out.
View previous St. Mary’s Projects (SMPs) | Your SMP advisor suggested checking out some past SMPs for ideas. Use the Ask Us to find out how you can access previous students’ SMPs.
Summer hours | You’re planning on staying in the area this summer and want to know when the library will be open. Use the Ask Us to find out what the SMCM Library’s summer hours are.
Printing | Arg, GoPrint deducted your print money but your document never actually printed! Use the Ask Us to find out if you can get your money back.
To protect participant anonymity, we assigned screen and audio recordings and questionnaires an alphanumeric code for cross-referencing. We did not code consent forms for cross-referencing, and we kept them separate from any study materials. We redacted any identifying information that came up during the testing from audio recordings and written transcripts.
Results
Five Second Testing
After completing the five second testing, we transcribed the responses into a spreadsheet and evaluated the answers to the first question, “What do you think the overall purpose of this webpage is?” to determine whether participants did in fact understand the overall purpose of the page. When reviewing the responses to the second question, “What information is available on this webpage? Please list everything you remember seeing on the page,” we identified a few recurring themes, such as whether participants noticed the search bar, the topic links, the option to submit a new question, and the Library’s contact information. Results of the five second testing reflected the open-ended nature of the questions, with responses showing great variation in length, verbosity, specificity, and effort.
When asked about the overall purpose of the page, students generally understood that the library’s LibAnswers FAQ homepage is there to provide answers to common questions, with fifteen out of twenty-one responses mentioning some variation on “questions,” “Q&A,” or “FAQs.” Of the other six responses, five thought the purpose was to talk about the resources the library has to offer. Only one respondent did not seem to understand the question, responding simply, “Kinda bland.” When asked to list everything they remembered seeing on the page, ten respondents identified the field for question submission and three identified topic-level shortcuts and specific FAQs; only two out of twenty-one identified the search bar and Library contact information. This exemplifies one of the major contrasts between participant responses: generality versus specificity. Some participants described general ideas of the page, whereas others identified specific elements.
We can only speculate about whether the lack of specifics indicated the information was not encoded or retained by participants, or if it was due to the wording of the question. For example, it is possible that participants who reported general ideas latched onto the first portion of the question “What information is available on this webpage?” whereas participants who reported specific aspects latched onto the second portion of the question “Please list everything you remember seeing on the page.”
While we had hoped students would observe and remember major navigational aspects of the page such as the search bar and topic level shortcuts, that was not the case. This illustrates important lessons we learned as first-time testers: one, there is not always a direct translation from methods in the literature to your specific context; two, our expectations for what we hoped to see in the results may have colored our feelings about the results we did receive; and three, the test design itself may be flawed. The questions we presented to participants could have been more concise, leaving less room for additional interpretations. After some reflection, perhaps a more straightforward question would have been, “What is the purpose of this page?”
Usability Testing
After we completed the usability test with the six participants, we reviewed the screen and audio recordings individually, noting participant statements and behavior we found interesting or important, such as whether they searched or browsed, which search terms they employed, and anything they said in response to our prompting to think aloud. From these detailed notes we—again individually—identified one to five major takeaways for each participant testing session, focusing on pain points the participants experienced during the tests. Once we completed our individual review of the test recordings, we compiled our major takeaways into a list of the top ten issues identified during the tests, ranked by how often they occurred. From this top ten list, we identified our top two takeaways, both of which concerned navigability related to language and naming conventions.
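For readers who would prefer to script this compilation step rather than rank issues by hand as we did, the following is a minimal, hypothetical Python sketch of tallying issue frequency across sessions; the participant codes and issue labels are placeholders, not our actual coding scheme.

```python
from collections import Counter

# Hypothetical per-participant takeaway lists keyed by anonymized codes;
# the issue labels are illustrative only, not our actual coding scheme.
takeaways = {
    "P1": ["search keywords not mirrored in FAQ titles", "unclear Topic labels"],
    "P2": ["search keywords not mirrored in FAQ titles", "service name confusion"],
    "P3": ["service name confusion", "unclear Topic labels"],
    # ...remaining participants
}

# Tally how often each issue appears across sessions, then rank them.
counts = Counter(issue for session in takeaways.values() for issue in session)
for issue, count in counts.most_common(10):
    print(f"{count}x {issue}")
```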
With regard to navigability, to our surprise, almost all participants completed every task using the search box. Provided there was no language disconnect between users’ search terms and language used in the specific FAQs, this was generally a successful strategy. However, this strategy was less successful if participants expected to find their specific search keywords reflected exactly in the FAQ results. If participants did not see these keywords replicated in the results, they resorted to trial and error, at times resulting in erratic, seemingly unfocused or random search behavior. This behavior resulted in two of the six participants failing to complete the Liaison Contact task.
We attribute this to the participants latching onto specific keywords from the task and using those to search the knowledge base. For example, in the Liaison Contact task, both participants initially searched for “economics librarian,” which was the example in the task prompt. The top search result for these keywords was the FAQ “How can I set up a meeting for research?” which contained the relevant information on setting up an appointment with a subject liaison. However, as the FAQ title did not match the specific keywords “economics librarian,” these two participants expressed hesitation and ignored this FAQ. The participants then resorted to scanning through the topic links at the bottom of the LibAnswers FAQ homepage, first selecting Research Help then Library Services then Faculty before ultimately giving up without retrieving the task-relevant information.
Separate from the actual usability tasks but again related to language, three of the six participants were unsure of how to navigate from the library homepage to the library’s LibAnswers FAQ homepage at the very beginning of the test. We attribute this to the inconsistency of naming conventions for the service across the organization’s web platforms at the time. Specifically, on the library homepage and the overall library website navigation bar, the LibAnswers FAQ service was referred to as “Ask a Librarian.” In contrast, on the LibAnswers FAQ homepage itself, the service was titled “Ask Us.” These naming conventions led to additional misunderstanding of the purpose of the page. For example, one participant said that they thought the “Ask a Librarian” link led to a chat feature for immediate, direct contact with a librarian, a service that the library did not provide at that time. To resolve this, the service was renamed “FAQs” on both the library and LibAnswers FAQ homepages.
Limitations
As Hungerford et al. (2010) pointed out, there are limitations implicit in the nature of formal usability testing: the testing environment and presence of the experimenter(s) can influence the behaviors of a participant. Beyond these acknowledged limitations of usability testing, several limitations specific to this study exist. The first was the novelty of this type of study for the library and our status, at that time, as novice usability researchers. Due to our own inexperience, we followed conventional UX research protocols and mirrored the wording of model studies, even though we were unsure of how this would translate to the particular context of this study. We expand on this in the Reflections section.
The second limitation for this study was the potential for priming. In usability testing, performance may increase as users become more familiar with a specific product or with the way tasks are designed, or as users latch onto the specific keywords used in the testing protocols. With any type of task-based testing, you have to come up with scenarios based on what you wish to test, making priming difficult to avoid (Budiu, 2016). Priming could explain why, during the five second testing, some participants focused on general ideas while others focused on specifics, or why participants latched on to finding specific keywords in our Liaison Contact usability testing task. Though priming may have affected our study, we do not think it detracts from our findings, as they align with other available UX research.
The third limitation for this study was the ability of participants to fully engage with the tasks. This is especially evident in the five second testing responses, which showed much greater variety in length and participant engagement than the usability testing. This could be attributable to the fact that five second testing occurred outside of the dining hall before standard meal times. As researchers, we thought this would be an ideal time to recruit as many participants as possible; however, some participants may have provided responses in a more rushed manner than they would have in another setting.
Recommendations
Our recommendations for libraries interested in improving the usability of their LibAnswers FAQs without having to perform their own usability testing primarily deal with clearly naming the service and using user-centric language in the FAQs and Topics.
Naming the Service
First, when it comes to naming the service, users in both our study and the UX @ Harvard Library (n.d.) study expected names with “Ask” in them—such as “Ask Us” or “Ask a Librarian”—to immediately connect them to chat, not an FAQ site. As FAQ pages are still widely used across the internet, we feel that users understand what it means and what to expect when they see a link to “FAQs” (see fig. 2).
Using Expected Language
Second, we found that participants were looking for FAQs that matched the words they used in their search. Participants had the most success when the FAQs had the expected language, such as the FAQ “What are the library hours during the summer?” for the Summer Hours task, versus more esoteric language, such as the FAQ “How can I set up a meeting for research?” for the Liaison Contact task.
Describing Topics
Third, topics should be more descriptive and their purpose more clear. For example, in the Liaison Contact task, one participant browsed under the Faculty topic, equating it with information about library faculty and not information for teaching faculty. An easy solution here would be to make the topic more descriptive by calling it Information for Faculty (see fig. 2).
However, which words and phrases will be most useful to users may be hard to determine without user input, so we suggest focusing energy on card sorting or other research methods to uncover how users would categorize FAQs. Whether or not that is feasible, we suggest mapping FAQs to all topics the questions relate to, so they have a greater likelihood of showing up where users are looking.
Additionally, in both our study and the UX @ Harvard (n.d.) study, participants who used the topics did not fully understand their purpose. UX @ Harvard recommends wording like “FAQ Topics” and “Question Search Topics”; we suggest “FAQ Categories” as another possibility (see fig. 2).
Conducting Usability Testing: Recommendations for Researchers
For other researchers interested in performing usability testing to improve their own LibAnswers FAQ pages, we can offer some recommendations. First and foremost, a specialized lab or software such as Morae is not required to run a usability test. We used the lab and Morae software because it was freely available to us on our campus—and we wanted to try it—but in truth, we would have been able to perform every aspect of the usability testing without it.
According to Krug (2010), all that is needed to run a usability test is a computer with internet access, screen recording software, a mouse, and a microphone, plus an external monitor, keyboard, and mouse, which let the facilitator set up tasks without reaching into the participant’s space. You can create this basic setup with equipment and software already available in most libraries. Using a specialized setup and software does offer more features, but this also increases the learning curve.
The second recommendation is to select methods wisely and consider streamlining the number of methods you use. Usability testing is ultimately about making improvements to your products, services, or interfaces for your users, based on the problems they are experiencing. While takeaways from others’ usability testing can be informative—and we hope you find ours informative—results are not intended to be widely applicable outside of the context in which they were gathered.
With that in mind, usability studies do not have to be complexly designed with a multi-method approach. One quick round of testing, using one method, will generate results that will provide insight, even if that insight is how to do it better next time. As Krug (2010) puts it, “Doing this kind of testing is enormously valuable if you do it, and people don’t do it because they have the impression that it’s more complicated than it needs to be” (p. 11).
The methods we selected for our study were the ones that we felt would best help us uncover what we wanted to know based on the literature we were reading at the time. We also wanted to try them and gain the experience of running these tests, especially the usability testing. There are myriad methods for UX research, and the ones we present are by no means the only ones. It is important to select methods based on your own research objectives. One resource to get started is a handy chart from Anderson (2020) published on the UX Collective blog.
Reflections
As we mentioned already, at the time of this study we considered ourselves novice usability researchers. While this was not the first experience with research design and implementation for either of us, it was our first time designing a research study for this context using these methods. Since running this study in 2018, we have both gone on to design and implement research studies using different methodologies, both solo and with other research partners. We bring this up not to absolve ourselves of any mistakes, but to provide context for our study and—hopefully—break down barriers to first-time researchers, whether it’s your first time designing and implementing a study, the first time studying a certain product or population, or the first time using certain methods.
Much of the research literature glosses over mistakes or failures. Our five second testing did not provide the clear results we were hoping for, and we could have elected to omit them entirely from this paper. However, there is benefit in showing when things don’t work as expected and identifying where the issue occurred to iterate and improve for the future. The heart of research is learning, and while our expressed goal as researchers is to learn about our subjects, we also learn about ourselves. Those lessons, all too often, are left out of the reporting.
Additionally, as librarians, it’s important to recognize that we are capable of designing and implementing research, including UX research, even if it is not our only or primary duty. At the time of this study we were an undergraduate student and early career librarian. We undertook this study to try something new and learn along the way. Hopefully this encourages other librarians, library professionals, and students reading this paper to just try it. Experience is the best teacher, and even the most expert researchers were once beginners. There is always knowledge to be gained, even if it is only how to do it better next time.
Conclusion
We set out to determine whether the library’s customized LibAnswers FAQ homepage design was meeting the needs of SMCM students and what pain points they encountered, using both five second testing and usability testing. We recruited twenty-one participants for five second testing and six for usability testing; participants were a mix of residential and commuter students. While results from the five second testing showed that students generally understood the purpose of the page on first impression, participant performance and feedback during the usability testing highlighted pain points that had more to do with the wording of the FAQs and how they were interpreted in search results than with the actual design of the homepage.
Although we did not restrict participants from using the search, we were surprised that almost all participants defaulted to using the search box. This is a bit at odds with the idea that FAQs have value due to users’ limited search skills (S. Farrell, 2014). As the trend for FAQs shifts from highly curated lists of limited questions to comprehensive, self-service knowledge bases, this specific value may become less and less relevant.
As an information service, FAQ knowledge bases continue to provide critical support to users, whether they access them in lieu of help from a library professional because it is outside of regular service hours or because they prefer a self-help option. Our study suggests that clear language is the key to making them useful and usable.
Appendix
Five Second Testing Protocols
[P. 1] Testing Instructions
On the next page, you have a paper version of the SMCM Library Ask Us homepage. Take approximately five seconds to view the homepage. Once you’ve viewed the homepage, flip to the final page to complete the task.
[P. 2] Image of LibAnswers FAQ Homepage (see fig. 1)
[P. 3] Task Questions
Now that you’ve viewed the Ask Us homepage, please answer the following questions:
What do you think the overall purpose of this webpage is?
What information is available on this webpage? Please list everything you remember seeing on the page.
Icebreaker & Homepage Tour Script
(Adapted from Krug (2010) and Hungerford et al. (2010) procedures)
Hi! Thank you for taking part in our research study. My name is __________, and I’m going to be walking you through this session today. If I could ask you at this time to silence and put away any electronic devices during the session. If you have a snack, please set that aside until the session is over. [Observer asks participant to put snack on different table.]
Before we begin, I have some information to share with you, and I’m going to read it to make sure I cover everything. Your participation today will help the Library make student-centered, informed decisions on how to best update the websites under study. Decisions made as a result of the study will be publicized on the Library website and through All Student emails. If you are interested in discussing the research further, please contact [researcher email and phone number].
We’ve asked you to come in today to help us evaluate the Library’s Ask Us web resources. We want to see if the sites work as intended, and we will be asking you to perform several tasks and will observe you while you do them. The session should last about an hour.
First, I want to make it clear that we are testing the site, not you. You can’t do anything wrong here. In fact, this is probably the one place today where you don’t have to worry about making any mistakes.
As we go through the tasks, I’m going to ask you, as much as possible, to try to think out loud: say what you’re looking at, what you’re trying to do, and what you’re thinking. We really want to hear what you honestly think, so please do not worry about hurting our feelings. We’re trying to get as much honest information as possible so we can improve the webpages. Your thoughts and input will be a big help to us.
If you have any questions as we go along, please feel free to ask them. I may not be able to answer them right away, since we’re interested in what people do when they don’t have someone sitting next to them who can help, but I will try to answer any questions you may still have when we are finished. Also, if you need a break at any point, please just let me know.
One last thing: we would like to record what happens on the screen and through the microphone. We want to record this session so we can later review what happened and take better notes. We don’t use our recordings for purposes other than helping us learn about the site, and there will be no identifying information in any of the recordings, so your participation will be anonymous.
If you would, I’m going to ask you to sign a simple consent form that just says that you are willing to take part in this study. After that if you could fill out a brief demographic questionnaire and short survey on how you use the Ask Us. After the test there will be another very short follow-up survey.
(Participant signs consent form)
(Give participant demo questionnaire and pre-test survey)
(Collect and move into homepage tour)
Just to get started, let’s take 5 minutes to take a look around this webpage and practice thinking aloud. Here are some ideas of things to let me know:
What is your overall impression of this page?
What do you think the overall purpose of the page is?
Where do you think you can navigate to from this page?
What aspects of this page do you notice first?
What aspects of this page do you find confusing?
If you could change anything about this page, what would it be?
Is there anything you find most helpful on this page?
Try to think aloud as much as possible; I might also ask you what you are thinking occasionally. And again, we want as honest feedback as possible. Nothing you say is wrong or will hurt our feelings in any way!
(Encourage participant to think aloud)
(After 5 minutes) Ok great! Now I’m going to present you with a series of specific tasks. I’m going to read each one out loud and give you a printed copy. Again, as much as possible, it will help us if you can try to think aloud as you go along through these tests.
Let’s get started!
(See Ask Us testing scenarios to continue)
Usability Testing Tasks (Facilitator Copy)
NOTE: Participants received a printed copy with only the BOLDED scenario prompt
Task 1: Liaison librarian contact (Facilitator - read the BOLDED text. For this section it is OK if participant uses search box)
(FACILITATOR READ:) You need help finding statistics for an economics paper you’re writing. You wonder if there is an Economics librarian who you could make an appointment with. Use the Ask Us to find out.
(Prompt participant to navigate back to LibAnswers homepage)
Task 2: View previous St. Mary’s Projects (Facilitator - read the BOLDED text. For this section it is OK if participant uses search box)
(FACILITATOR READ:) Your SMP advisor suggested checking out some past SMPs for ideas. Use the Ask Us to find out how you can access previous students’ SMPs.
(Prompt participant to navigate back to LibAnswers homepage)
Task 3: Summer hours (Facilitator - read the BOLDED text. For this section it is OK if participant uses search box)
(FACILITATOR READ:) You’re planning on staying in the area this summer and want to know when the library will be open. Use the Ask Us to find out what the SMCM Library’s summer hours are.
(Prompt participant to navigate back to LibAnswers homepage)
Task 4: Printing (Facilitator - read the BOLDED text. For this section it is OK if participant uses search box)
(FACILITATOR READ:) Arg, GoPrint deducted your print money but your document never actually printed! Use the Ask Us to find out if you can get your money back.
(Prompt participant to navigate back to LibAnswers homepage)
References
Almeida, N., & Tidal, J. (2017). Mixed methods not mixed messages: Improving LibGuides with student usability data. Evidence Based Library and Information Practice, 12(4), 62–77. https://doi.org/10.18438/B8CD4T
Anderson, N. (2020, February 17). Which UX research methodology should you use? [Chart]. UX Collective. https://uxdesign.cc/which-ux-research-methodology-should-you-use-chart-included-fd85dd2cd4bd
Archambault, S. G., & Simon, K. (2012, June 21). Apples and oranges: Lessons from a usability study of two library FAQ websites [Conference presentation]. 18th Annual Reference Research Forum (ALA), Anaheim, CA, United States. https://www.slideshare.net/susangar/apples-and-oranges-13411083
Barker, A. E. G., & Hoffman, A. T. (2021). Student-centered design: Creating LibGuides students can actually use. College and Research Libraries, 82(1), 75–91. https://doi.org/10.5860/crl.82.1.75
Bowen, A., Ellis, J., & Chaparro, B. (2018). Long nav or short nav?: Student responses to two different navigational interface designs in LibGuides version 2. The Journal of Academic Librarianship, 44(3), 391–403. https://doi.org/10.1016/j.acalib.2018.03.002
Budiu, R. (2016, January 24). Priming and user interfaces. Nielsen Norman Group. https://www.nngroup.com/articles/priming/
Cappel, J. J., & Huang, Z. (2007). A usability analysis of company websites. Journal of Computer Information Systems, 48(1), 117–123. https://doi.org/10.1080/08874417.2007.11646000
Chan, C., Gu, J., & Lei, C. (2019). Redesigning subject guides with usability testing: A case study. Journal of Web Librarianship, 13(3), 260–279. https://doi.org/10.1080/19322909.2019.1638337
Conerton, K., & Goldenstein, C. (2017). Making LibGuides work: Student interviews and usability tests. Internet Reference Services Quarterly, 22(1), 43–54. https://doi.org/10.1080/10875301.2017.1290002
Conrad, S., & Stevens, C. (2019). “Am I on the library website?” A LibGuides usability study. Information Technology and Libraries, 38(3), 49–81. https://doi.org/10.6017/ital.v38i3.10977
Crescenzi, A., McGraw, K. A., & Main, L. (2011, January 1). LibAnswers usability: Rethinking online reference. Medical Library Association Annual Meeting, Minneapolis, MN, United States. https://doi.org/10.17615/3r5k-8d46
Drummond, D. M. (2019). Creating library FAQ guidelines using Query Spy. In Baudino, F., Johnson, C., Meneely, B., & Young, N. (Eds.), 19th Annual Brick & Click Proceedings (pp. 113–118). https://dspace.lib.miamioh.edu/xmlui/bitstream/handle/2374.MIA/6600/e-proceedings.pdf?sequence=1&isAllowed=y#page=119
Farrell, B., & Leousis, K. (2017). Integrated reference à la carte: Evaluating, selecting, and implementing the best features for your library. Journal of Library Administration, 57(5), 548–562. https://doi.org/10.1080/01930826.2017.1326727
Farrell, S. (2014, December 21). FAQs still deliver great value. Nielsen Norman Group. https://www.nngroup.com/articles/faqs-deliver-value/
Gillis, R. (2017). “Watch your language!”: Word choice in library website usability. Partnership: The Canadian Journal of Library and Information Practice and Research, 12(1). https://doi.org/10.21083/partnership.v12i1.3918
Han, S., & Lee, M. K. (2022). FAQ chatbot and inclusive learning in massive open online courses. Computers & Education, 179, 104395. https://doi.org/10.1016/j.compedu.2021.104395
Hungerford, R., Ray, L., Tawatao, C., & Ward, J. L. (2010). LibGuides usability testing: Customizing a product to work for your users. https://digital.lib.washington.edu/researchworks/bitstream/handle/1773/17101/UWLibGuidesUsability-2010LAC-tawatao.pdf?sequence=2&isAllowed=y
Imler, B. B., Garcia, K. R., & Clements, N. (2016). Are reference pop-up widgets welcome or annoying? A usability study. Reference Services Review, 44(3), 282–291. https://doi.org/10.1108/RSR-11-2015-0049
An introduction to five second testing. (n.d.). UsabilityHub. Retrieved June 10, 2019, from https://usabilityhub.com/guides/five-second-testing
Jones, S., Kayongo, J., & Scofield, J. (2009). Ask us anytime: Creating a searchable FAQ using email and chat reference transcripts. Internet Reference Services Quarterly, 14(3–4), 67–81. https://doi.org/10.1080/10875300903256555
Kahili-Heede, M. (2020). Health sciences library website using LibGuides: A usability study [Unpublished master’s project]. University of Hawaiʻi at Mānoa. https://scholarspace.manoa.hawaii.edu/items/3e7c6127-84b8-4e33-860c-c14b8a428a11
Kehoe, B. P. (1992, January). Frequently asked questions. Zen and the Art of the Internet: A Beginner’s Guide to the Internet. https://legacy.cs.indiana.edu/docproject/zen/zen-1.0_6.html#SEC58
Krug, S. (2010). Rocket surgery made easy: The do-it-yourself guide to finding and fixing usability problems. New Riders.
Kumar, B. (2022, July 30). How to build a FAQ page: Examples and FAQ templates to inspire you. Shopify Blog. https://www.shopify.com/blog/120928069-how-to-create-faq-page
Labrake, M. (2019). Getting your FAQs straight: How to make your knowledgebase power virtual reference. Computers in Libraries, 39(8), 14–19.
Lindgaard, G., Fernandes, G., Dudek, C., & Brown, J. (2006). Attention web designers: You have 50 milliseconds to make a good first impression! Behaviour & Information Technology, 25(2), 115–126. https://doi.org/10.1080/01449290500330448
Marrall, R. M., & Western Libraries Usability & Design Working Group. (2016). LibAnswers and user patterns: Executive summary (No. 1; Usability & Design Working Group Documents). https://cedar.wwu.edu/library_udwgdocs/1
McDonald, D. (2016, October 6). Asynchronous vs. synchronous communication in the online classroom. Wiley University Services. https://ctl.wiley.com/asynchronous-vs-synchronous-communication-in-the-online-classroom/
Nielsen, J. (2012, January 15). Thinking aloud: The #1 usability tool. Nielsen Norman Group. https://www.nngroup.com/articles/thinking-aloud-the-1-usability-tool/
Quintel, D. F. (2016). LibGuides and usability: What our users want. Computers in Libraries, 36(1), 4–8.
Richards, S. (2013, July 25). FAQs: Why we don’t have them. Government Digital Service. https://gds.blog.gov.uk/2013/07/25/faqs-why-we-dont-have-them/
Roberts, C. (2018, October 11). The FAQ as advice column. A List Apart. https://alistapart.com/article/the-faq-as-advice-column
Self-service: Do customers want to help themselves? (2013, January 10). Zendesk Blog. https://www.zendesk.com/blog/searching-for-self-service/
Shepherd, J., & Korber, I. (2014). How do I search for a book?: Implementing LibAnswers at your library. College & Research Libraries News, 75(4), 197–214. https://crln.acrl.org/index.php/crlnews/article/view/9106/10000
Sonsteby, A., & DeJonghe, J. (2013). Usability testing, user-centered design, and LibGuides subject guides: A case study. Journal of Web Librarianship, 7(1), 83–94. https://doi.org/10.1080/19322909.2013.747366
Stevens, C. R. (2013). Reference reviewed and re-envisioned: Revamping librarian and desk-centric services with LibStARs and LibAnswers. The Journal of Academic Librarianship, 39(2), 202–214. https://doi.org/10.1016/j.acalib.2012.11.006
Tay, A. (2013). Helping users help themselves: Maximizing LibAnswers usage. In Dobbs, A. W., Sittler, R. L., & Cook, D. (Eds.), Using LibGuides to enhance library services: a LITA guide (pp. 175–190). ALA TechSource.
Thorngate, S., & Hoden, A. (2017). Exploratory usability testing of user interface options in LibGuides 2. College and Research Libraries, 78(6), 844–861. https://doi.org/10.5860/crl.78.6.844
Tobias, C. (2017). A case of TMI (too much information): Improving the usability of the library’s website through the implementation of LibAnswers and the A–Z database list (LibGuides v2). Journal of Library & Information Services in Distance Learning, 11(1–2), 175–182. https://doi.org/10.1080/1533290X.2016.1229430
UX @ Harvard Library. (n.d.). LibAnswers usability testing. Harvard Wiki. https://wiki.harvard.edu/confluence/display/UHL/LibAnswers+Usability+Testing
Wilairat, S., Svoboda, E., & Piper, C. (2021). Practical changes in reference services: A case study. Medical Reference Services Quarterly, 40(2), 151–167. https://doi.org/10.1080/02763869.2021.1912567
Wimer, K. E. (2017). Decoding virtual reference: Using chat transcripts to guide usability testing and improve web design [Unpublished capstone project]. University of Denver. https://digitalcommons.du.edu/lis_capstone/3
Zendesk customer experience trends report 2020. (2020). Zendesk. https://d1eipm3vz40hy0.cloudfront.net/pdf/cxtrends/cx-trends-2020-full-report.pdf