<?xml version="1.0" encoding="utf-8"?>
<article xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="JATS-journalpublishing1-mathml3.xsd" dtd-version="1.2" article-type="Peer Reviewed">
<front>
<journal-meta>
<journal-id journal-id-type="publisher">weaveux</journal-id>
<journal-title-group>
<journal-title>Weave (WEAVE)</journal-title>
</journal-title-group>
<issn pub-type="epub"></issn>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">6099</article-id>
<article-id pub-id-type="manuscript">optimizing-library-website-usability-sy-edits-final.docx</article-id>
<article-id pub-id-type="doi">10.3998/weaveux.6099</article-id>
<title-group>
<article-title>Optimizing Website Usability Testing: Remote vs. Traditional</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Salvesen</surname>
<given-names>Linda</given-names>
</name>
<aff id="aff1"><institution>WILLIAM PATERSON UNIVERSITY OF NEW JERSEY</institution></aff>
<email>salvesenl@wpunj.edu</email>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Jeitner</surname>
<given-names>Eric</given-names>
</name>
<email>eric.jeitner@stockton.edu</email>
<aff id="aff2"><institution>STOCKTON UNIVERSITY</institution></aff>
</contrib>
</contrib-group>
<pub-date><day>21</day><month>4</month><year>2026</year></pub-date>
<volume>9</volume><issue>1</issue>
<issue-title></issue-title>
<fpage></fpage>
<lpage></lpage>
<history>
<date date-type="received"><day></day><month></month><year></year></date>
<date date-type="rev-recd"><day></day><month></month><year></year></date>
<date date-type="accepted"><day>24</day><month>11</month><year>2025</year></date>
</history>
<permissions>
<copyright-statement></copyright-statement>
<copyright-year></copyright-year>
<license><license-p>CC BY 4.0</license-p></license>
</permissions>
<abstract id="ABS1"><p id="P1">Regular usability testing in academic libraries is essential to ensure that users can effectively navigate and use the resources available. With the shift toward online resources and the rise of remote and hybrid learning, the modality of these studies must evolve accordingly. Additionally, the growing use of mobile devices to access library resources and services highlights the need for more mobile-specific usability testing. To address these gaps, we compare our experiences conducting usability studies remotely with adapted in-person testing. We outline specific methods and steps for usability practitioners to plan and execute usability testing, regardless of the method. While remote testing proved less effective than traditional methods in certain areas, its lower resource demands make it a viable and scalable alternative when researchers apply it with careful planning and design.</p></abstract>
<kwd-group>
<kwd></kwd>
</kwd-group>
<funding-group/>
<counts>
<fig-count count="0"/>
</counts>
<custom-meta-group>
<custom-meta id="competing-interest">
<meta-name></meta-name>
<meta-value></meta-value>
</custom-meta>
</custom-meta-group>
</article-meta>
</front>
<body>
<sec id="S1"><title>Introduction</title>
<p>For as long as academic libraries have had an online presence, they have relied on usability studies to uncover problems with their websites, discovery layers, research guides, and other services in order to provide a better user experience (UX) for their populations. As the technology facilitating modern workplaces evolved, so did its application in usability testing. Previously expensive software with specialized functionality has made its way into professional arenas through clever adoption and affordable, cloud-based apps designed for ease of use. By the late 2010s, internet-based video conferencing was a boon to those interested in exploring remote options for testing. By 2020, these opportunities would prove ideal for academic librarians who were beginning to experience new hurdles in conducting usability research.</p>
<p>Since the start of the COVID-19 pandemic in 2020, many more higher education institutions offer both remote and hybrid options for their course modalities and service venues (<xref ref-type="bibr" rid="R29">van Wyk, 2023</xref>). At William Paterson University of New Jersey (WPUNJ), fully online students rose from 4 percent of the population in the fall of 2020 to 40 percent of the population in the fall of 2024. Additionally, colleges and universities have seen an increasing reliance on mobile technologies to access library content. In response to these large shifts in both usage and modalities, virtual usability testing is becoming an essential tool for academic libraries to evaluate their services. However, while remote testing is a viable solution in this new environment, there are both advantages and drawbacks to explore.</p>
<p>While virtual usability testing is starting to be discussed more often in the library literature, it has for some time been widely used in a variety of similar industries. One of the primary benefits of virtual usability studies is that they are more cost-effective, requiring neither a lab space nor any travel for in-person meetings. Removing the need for a physical study space also makes it easier to recruit a more diverse group of participants. Virtual usability studies are also highly flexible and customizable to meet the needs of specific projects or products. For example, some studies may focus on UX and design, while others may examine the functionality and performance of a website. By tailoring the study to the specific needs of the project, researchers can obtain targeted feedback to inform decisions about website development and design.</p>
<p>Although there are many benefits to virtual usability testing, both research participants and research administrators have reported several challenges. Being distracted while participating in the study in a familiar environment is the most common issue participants reported. Administrators, on the other hand, cited under-reporting of issues and missing non-verbal cues&#x2014;both of which can affect the depth and accuracy of usability findings. We will further explore these and other issues throughout this article. </p>
<p>We provide insight into our experiences conducting remote and in-person testing across institutions. This collaboration originated from the New Jersey Academic Libraries Network, a collective of five academic institutions that implemented the same integrated library system and previously maintained a shared lending network. Other universities in the network are New Jersey Institute of Technology, The College of New Jersey, and Rowan University. Implementing an integrated library system as a group has become a common practice, in part because it creates additional opportunities for partnerships among the participating institutions. We were also members of the network&#x2019;s Discovery/UX Group. Building upon this foundation, we combined our efforts while exploring the burgeoning area of remote usability testing in libraries.</p>
<p>Since we each conducted usability studies at our respective libraries despite the challenges of pandemic conditions, we detail our experiences so other academic librarians can benefit from our work and implement similar studies in their libraries. By leveraging the insights gained from these studies, usability researchers can ensure that their library websites are intuitive, efficient, and enjoyable to use, which can lead to increased user satisfaction and foster institutional loyalty&#x2014;a growing priority in higher education.</p>
</sec>
<sec id="S2"><title>Literature Review</title>
<p>To chart the development of modern usability testing in libraries, we examined literature stemming from both the librarian profession and the world of commercial design practice. These sources provided interesting insight on the early history of remote testing, its overall practical advantages, the administrator roles involved in running them, the fundamental similarities and differences between traditional and remote testing, and the common technology and drawbacks associated with this approach.</p>
<p>In conducting our review of the literature, one thing was apparent: conducting usability testing with remote participants is not new. Although conducting virtual usability studies in libraries has become more commonplace, primarily in response to the COVID-19 pandemic and the quick move to virtual services starting in 2020, earlier studies looking at virtual usability in similar populations do exist in the literature. For example, researchers in the 1990s explored remote usability testing as a cost-effective method for testing users in a more natural setting (<xref ref-type="bibr" rid="R5">Castillo et al., 1998</xref>; <xref ref-type="bibr" rid="R12">Hartson et al., 1996</xref>). Not long after, a study looking at information systems acknowledged long-standing concerns over the logistics of managing an in-person study with large numbers of participants and found remote, non-synchronous solutions to be preferable (<xref ref-type="bibr" rid="R14">Hong et al., 2001</xref>). </p>
<p>The first studies on remote usability in libraries appeared in the library literature a few years later. <xref ref-type="bibr" rid="R24">Thompson et al. (2004)</xref> focused on the potential of different software tools to assist in remote observation, and <xref ref-type="bibr" rid="R25">Thomsett-Scott (2008)</xref> combined the need for libraries to evaluate their web presence with the growing trend of remote usability. These researchers found that it is important for libraries to actively endorse usability as a central element of organizational culture. They can accomplish this through the creation of standing working groups focused on UX and accessibility concerns, making an effort to ground universal design within library operations (<xref ref-type="bibr" rid="R3">Baird &amp; Soares, 2018</xref>; <xref ref-type="bibr" rid="R10">Godfrey, 2015</xref>). The commitment to regular, repetitive, iterative usability testing is also important for libraries (<xref ref-type="bibr" rid="R9">DeStasio &amp; Jeitner, 2024</xref>; <xref ref-type="bibr" rid="R21">Nuccilli et al., 2018</xref>).</p>
<p>The remote testing medium offers many benefits to both administrators and participants, and perhaps the most evident of these is time and cost efficiency for those conducting the study. Remote usability testing allowed the administrators to spend less money and time without compromising the number of participants (<xref ref-type="bibr" rid="R2">Alharbi &amp; Mayhew, 2015</xref>; <xref ref-type="bibr" rid="R13">Hill et al., 2021</xref>; <xref ref-type="bibr" rid="R22">Relawati et al., 2022</xref>; <xref ref-type="bibr" rid="R24">Thompson et al., 2004</xref>; <xref ref-type="bibr" rid="R25">Thomsett-Scott, 2008</xref>). Multiple studies also argued that, by placing the participants in a familiar environment, they were more likely to feel comfortable and experience less unease in attempting to complete study tasks (<xref ref-type="bibr" rid="R2">Alharbi &amp; Mayhew, 2015</xref>; <xref ref-type="bibr" rid="R13">Hill et al., 2021</xref>; <xref ref-type="bibr" rid="R25">Thomsett-Scott, 2008</xref>). </p>
<p>Also, participant recruitment was easier if volunteers could take part remotely (<xref ref-type="bibr" rid="R2">Alharbi &amp; Mayhew, 2015</xref>; <xref ref-type="bibr" rid="R13">Hill et al., 2021</xref>; <xref ref-type="bibr" rid="R25">Thomsett-Scott, 2008</xref>). When testing occurred on a mobile platform, studies also benefitted from their similarity to real-world usage; for example, a participant might test an app on the same mobile device they would use in everyday practice (<xref ref-type="bibr" rid="R4">Campbell &amp; Monkman, 2021</xref>).</p>
<p>The role of a testing administrator can vary greatly depending on the type of test being conducted and the conditions under which it is run. In instances of synchronous remote testing, an administrator plays a hands-on role by observing the participants during their sessions and recording notes based on those observations (<xref ref-type="bibr" rid="R1">Alhadreti, 2021</xref>; <xref ref-type="bibr" rid="R7">Currie et al., 2022</xref>; <xref ref-type="bibr" rid="R19">Marzec &amp; Piotrowski, 2023</xref>; <xref ref-type="bibr" rid="R24">Thompson et al., 2004</xref>). Indeed, <xref ref-type="bibr" rid="R25">Thomsett-Scott (2008)</xref> argued that the involvement of a testing administrator is extremely beneficial, yielding useful insight and feedback through their direct observation. In contrast, during asynchronous studies, there is essentially no administrator involvement between recruitment of participants and analysis of the data (<xref ref-type="bibr" rid="R2">Alharbi &amp; Mayhew, 2015</xref>; <xref ref-type="bibr" rid="R22">Relawati et al., 2022</xref>; <xref ref-type="bibr" rid="R30">Yi et al., 2022</xref>).</p>
<p>In terms of process and procedure, remote usability studies share many similarities with traditional, in-person studies in libraries. For example, most remote studies also use task-completion as their primary method of assessing usability (<xref ref-type="bibr" rid="R1">Alhadreti, 2021</xref>; <xref ref-type="bibr" rid="R2">Alharbi &amp; Mayhew, 2015</xref>; <xref ref-type="bibr" rid="R3">Baird &amp; Soares, 2018</xref>; <xref ref-type="bibr" rid="R4">Campbell &amp; Monkman, 2021</xref>; <xref ref-type="bibr" rid="R19">Marzec &amp; Piotrowski, 2023</xref>; <xref ref-type="bibr" rid="R22">Relawati et al., 2022</xref>; <xref ref-type="bibr" rid="R24">Thompson et al., 2004</xref>; <xref ref-type="bibr" rid="R30">Yi et al., 2022</xref>). Also, both traditional and remote usability studies frequently employ a think-aloud protocol, in which participants are encouraged to talk out loud as they complete study tasks, sharing their thoughts, feelings, and frustrations in real time (<xref ref-type="bibr" rid="R1">Alhadreti, 2021</xref>; <xref ref-type="bibr" rid="R3">Baird &amp; Soares, 2018</xref>; <xref ref-type="bibr" rid="R11">Haggerty, 2019</xref>; <xref ref-type="bibr" rid="R19">Marzec &amp; Piotrowski, 2023</xref>; <xref ref-type="bibr" rid="R25">Thomsett-Scott, 2008</xref>; <xref ref-type="bibr" rid="R24">Thompson et al., 2004</xref>). </p>
<p>As in best practices for traditional usability studies, remote usability research emphasizes careful procedural design. <xref ref-type="bibr" rid="R25">Thomsett-Scott (2008)</xref> stressed the importance of beta testing to refine tasks and prevent technical failures. <xref ref-type="bibr" rid="R2">Alharbi and Mayhew (2015)</xref> further strengthened study control by using a separate version of the library website for comparison and providing participants with a dedicated portal for instructions and materials.</p>
<p>Usability researchers conducting remote studies also employed many of the same recruitment methods used in traditional studies, such as email (<xref ref-type="bibr" rid="R2">Alharbi &amp; Mayhew, 2015</xref>; <xref ref-type="bibr" rid="R11">Haggerty, 2019</xref>; <xref ref-type="bibr" rid="R24">Thompson et al., 2004</xref>), social media posts and on-campus flyers (<xref ref-type="bibr" rid="R2">Alharbi &amp; Mayhew, 2015</xref>; <xref ref-type="bibr" rid="R17">Kliewer et al., 2016</xref>), and student listservs (<xref ref-type="bibr" rid="R15">Jacobs et al., 2020</xref>). Most studies relied primarily on undergraduate students as participants, although some also included graduate students (<xref ref-type="bibr" rid="R2">Alharbi &amp; Mayhew, 2015</xref>; <xref ref-type="bibr" rid="R19">Marzec &amp; Piotrowski, 2023</xref>; <xref ref-type="bibr" rid="R30">Yi et al., 2022</xref>). <xref ref-type="bibr" rid="R23">Sexton (2022)</xref> cautioned against recruiting student workers as a convenience sample, noting that such practices may introduce ethical concerns and compromise the validity of findings when the intended population is the broader student body.</p>
<p>Remote usability tests also share certain similarities of practice with traditional tests where post-test assessments are concerned. Multiple studies asked participants to self-assess their experience using a five-point Likert scale (<xref ref-type="bibr" rid="R2">Alharbi &amp; Mayhew, 2015</xref>; <xref ref-type="bibr" rid="R19">Marzec &amp; Piotrowski, 2023</xref>; <xref ref-type="bibr" rid="R22">Relawati et al., 2022</xref>; <xref ref-type="bibr" rid="R24">Thompson et al., 2004</xref>; <xref ref-type="bibr" rid="R30">Yi et al., 2022</xref>). Still others employed qualitative coding to analyze the think-aloud and other recorded portions of the sessions (<xref ref-type="bibr" rid="R1">Alhadreti, 2021</xref>; <xref ref-type="bibr" rid="R3">Baird &amp; Soares, 2018</xref>; <xref ref-type="bibr" rid="R4">Campbell &amp; Monkman, 2021</xref>; <xref ref-type="bibr" rid="R19">Marzec &amp; Piotrowski, 2023</xref>). Further, <xref ref-type="bibr" rid="R2">Alharbi and Mayhew (2015)</xref> reported employing follow-up questions to gauge participants&#x2019; distractibility, response to interruptions, internal mood, and website familiarity.</p>
<p>Among instances of remote testing, there are outlying studies that employed methods unlike many others. <xref ref-type="bibr" rid="R14">Hong et al. (2001)</xref> conducted remote testing by studying data of real-time activity on websites via their server logs. They conducted no formal testing in the typical sense outside of reviewing the logs and extrapolating from that information. Yet another study used the novel approach of testing by asking the participant to instruct the testing administrator via synchronous screen sharing (<xref ref-type="bibr" rid="R7">Currie et al., 2022</xref>). The remote participant gave instructions to the administrator, who carried out tasks using real equipment in a lab environment based on those orders, which researchers subsequently assessed for accuracy. <xref ref-type="bibr" rid="R16">Kaiser et al. (2022)</xref> developed their own &#x201C;low-cost mobile usability lab&#x201D; (p. 234) called POPKit, which they sent to study participants and used to conduct the asynchronous testing sessions remotely. The POPKit was &#x201C;mobile&#x201D; both in the sense that it could be set up and used anywhere, and in that it used the participant&#x2019;s mobile phone for recording purposes.</p>
<p>Researchers also must give careful consideration to the software and equipment used in remote studies. Remote usability tests often employ specific platforms and applications, including Loop11 usability research platform (<xref ref-type="bibr" rid="R2">Alharbi &amp; Mayhew, 2015</xref>); Microsoft NetMeeting video conferencing software (<xref ref-type="bibr" rid="R24">Thompson et al., 2004</xref>); Microsoft Teams video conferencing/messaging software (<xref ref-type="bibr" rid="R19">Marzec &amp; Piotrowski, 2023</xref>); NVivo data analysis software (<xref ref-type="bibr" rid="R3">Baird &amp; Soares, 2018</xref>); oTranscribe transcription assistance browser plugin (<xref ref-type="bibr" rid="R3">Baird &amp; Soares, 2018</xref>); POPKit mobile usability lab kit (<xref ref-type="bibr" rid="R16">Kaiser et al., 2022</xref>); sound recording software (<xref ref-type="bibr" rid="R24">Thompson et al., 2004</xref>); and Snag-It logging software (<xref ref-type="bibr" rid="R24">Thompson et al., 2004</xref>). According to <xref ref-type="bibr" rid="R25">Thomsett-Scott (2008)</xref>, screen sharing software is absolutely necessary for synchronous remote testing.</p>
<p>When it comes to remote usability testing, the benefits are not without costs; drawbacks do exist for this approach. For example, only remote study participants reported that instances of distraction were an issue during study sessions (<xref ref-type="bibr" rid="R2">Alharbi &amp; Mayhew, 2015</xref>). It is also possible that by excluding opportunities for shared-space observation&#x2014;or, in the case of asynchronous remote testing, any real-time, face-to-virtual-face observation at all&#x2014;remote testing administrators could miss common, non-verbal cues from participants (<xref ref-type="bibr" rid="R25">Thomsett-Scott, 2008</xref>, p. 525). Additionally, there were significant differences in times-to-completion for remote participants versus in-person participants; remote participants took longer to complete tasks on average (<xref ref-type="bibr" rid="R2">Alharbi &amp; Mayhew, 2015</xref>; <xref ref-type="bibr" rid="R24">Thompson et al., 2004</xref>). </p>
<p>Finally, although remote testing participants tended to identify a higher number of usability problems than traditional participants, they also generally underreported the severity of those problems in assigned ratings (<xref ref-type="bibr" rid="R2">Alharbi &amp; Mayhew, 2015</xref>; <xref ref-type="bibr" rid="R24">Thompson et al., 2004</xref>). Overall, in spite of any minor drawbacks, researchers often described remote studies as just as effective in assessing usability as traditional, in-person studies (<xref ref-type="bibr" rid="R2">Alharbi &amp; Mayhew, 2015</xref>; <xref ref-type="bibr" rid="R24">Thompson et al., 2004</xref>).</p>
<p>A final area to consider is the burgeoning field of artificial intelligence. Librarians are actively incorporating these tools in many areas, including enhancing library website usability testing. In this area, artificial intelligence tools are used to develop user personas, map user journeys, create study tasks, define and test navigation, and analyze user feedback (<xref ref-type="bibr" rid="R6">Clapp, 2024</xref>). Although we did not use such tools during our usability testing, we explored their potential throughout the writing of this article. These technologies are evolving rapidly, and while there are serious ethical concerns, they will likely play a significant role in future usability testing efforts.</p>
</sec>
<sec id="S3"><title>Case Studies</title>
<p>To further explore both the advantages and drawbacks of this process, we consider the remote usability testing the first author, Linda Salvesen, conducted at the Cheng Library of WPUNJ, a mid-sized public university in New Jersey; we will refer to this study as US1. We will compare this study with a usability study the second author, Eric Jeitner, conducted at the Richard E. Bjork Library of Stockton University, another mid-sized New Jersey public university library; we will refer to this study as US2. </p>
<p>Both studies occurred during the pandemic. However, unlike at WPUNJ, the Stockton University researchers conducted their usability study under traditional, in-person conditions while maintaining masking and social distancing. The two usability tests occurred independently of each other and were not designed or carried out with any intention of comparison. However, the methodologies and procedures used in both tests were similar enough to allow for a straightforward comparison, providing valuable insights into the strengths and weaknesses of different usability testing formats.</p>
<sec id="S4"><title>Methods</title>
<sec id="S5"><title>User Study 1</title>
<p>This section of the methods is written from the perspective of the first author, who administered US1, the remote usability study.</p>
<p>Conducted in the 2021&#x2013;2022 academic year, during the later days of the COVID-19 pandemic, US1 looked at the usability of the Alma library services platform&#x2019;s discovery tool, Primo VE (<ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="uri" xlink:type="simple" xlink:href="http://primo.wpunj.edu">primo.wpunj.edu</ext-link>). This study was fully remote, and, while various colleagues assisted with portions of the study, such as posting the flyer or promoting via word-of-mouth, I managed the project as a single librarian, carefully considering and comparing available options for conducting usability testing under pandemic conditions. </p>
<p>Following typical study protocol, I applied for and received institutional review board approval before the usability study began. This ensured the protection of participants&#x2019; privacy and the confidentiality of their data, including the anonymization of all data and recordings. </p>
<p>I distributed a QR code linking to a screening survey form, posting it throughout the library and on social media and sending it to students in campus announcements. The form recorded the volunteer&#x2019;s full name, university email address, the frequency with which they used Primo VE on a weekly basis expressed on a Likert scale, and whether they had received virtual and/or in-person library instruction in their university classes. Finally, the form collected date and time availability for the usability testing sessions. I used this information to determine other demographic data about the volunteers, including their university status and major/subject area. Since participants should be as close to a representative sample as possible, I screened volunteers for a range of years and majors and scheduled the selected participants for virtual usability testing sessions.</p>
<p>I used Zoom to conduct the real-time portion of US1. The platform&#x2019;s screen capture functionality recorded both audio and video from the remote participants. During the virtual usability sessions, the primary mode of assessment was the think-aloud protocol paired with task-completion exercises. I gave participants a list of scenarios that required them to interact with the library&#x2019;s discovery tool. I took care in wording tasks to avoid leading the participants&#x2019; thinking or subtly directing them toward successful task completion. The tasks themselves mimicked common activities performed by users of the discovery tool. Outside of the tasks, I asked participants about their library instruction history and provided an opportunity for them to ask debriefing questions pertaining to the tasks. </p>
<p>There were eight total virtual usability sessions for US1 in February and March 2022, based on suggested best practices by <xref ref-type="bibr" rid="R20">Nielsen (2000)</xref>, who stated that only five usability test participants per round are required to get actionable results. I planned multiple rounds to facilitate iterative usability testing, allowing for adjustments and improvements between sessions. Each participant had the option to use a computer or mobile device of their choosing. They were also free to use their browser of choice. During the sessions, I asked participants to turn their cameras off before recording began to protect privacy.</p>
<p>Across all US1 sessions, no user participated more than once to avoid repeated testing, which is a &#x201C;threat to internal validity&#x201D; (<xref ref-type="bibr" rid="R26">Treadwell &amp; Davis, 2019</xref>, p. 394). I referenced recordings of the testing sessions when determining the statistical categories of task success, timing, and number of clicks. </p>
<p>For the task-completion portion of the testing sessions in US1, participants shared their screens and started on the homepage of the university&#x2019;s discovery service, Primo VE. They completed three to five scenarios.</p>
</sec>
<sec id="S6"><title>User Study 2</title>
<p>This section of the methods is written from the perspective of the second author, who administered US2, the traditional usability study. </p>
<p>Conducted in the 2021&#x2013;2022 academic year, US2 focused on testing the library&#x2019;s website, not its discovery platform. This study was traditional in design, conducted in-person using a library conference room. A library intern and I jointly administered US2, and a student assistant performed some data processing. The choice of researchers was a practical consideration: I was responsible for assessment at Stockton University and the intern was interested in taking part in an assessment project.</p>
<p>We gained institutional review board approval for the study, as we intended to publish the findings. As in US1, we anonymized all participant data after collection. </p>
<p>Recruitment for US2 happened through word-of-mouth advertising. I worked with the student assistant to identify eight undergraduate students who were willing to be study volunteers. Acceptance was based on willingness to participate; there was no vetting process beyond that. Participating volunteers received a voucher worth eight dollars redeemable at the campus food court.</p>
<p>US2 sessions happened in a library conference room and used a table, three chairs, and a laptop. We asked each volunteer to complete ten tasks within the library website. The library intern acted as interviewer, reading a script to each volunteer and introducing the study tasks. I served as notetaker, observing the volunteer as they attempted each task and recording handwritten notes. Zoom recorded a screen capture of all desktop mouse movement and any spoken audio from the volunteer. To maintain privacy, we disabled the laptop camera and encouraged volunteers to voice any thoughts, concerns, or frustrations as they worked. The tasks were aligned with high-engagement points of the website, such as building hours, the chat help interface, and subject research guides. </p>
<p>In total, US2 consisted of eight sessions held across two weeks during the fall semester. Following the sessions, we analyzed the data extracted from the recordings. Much like US1, we used the screen-capture recordings to determine task success, time to completion, and number of clicks involved in each task. We wrote a report that included an extensive list of recommended changes to website elements based on task completion difficulty and presented it to the library&#x2019;s technology team.</p>
<p>A second series of eight sessions with new volunteers occurred later during the fall 2022 semester. This was meant to act as a post-assessment method, testing whether the changes to the website improved usability for completing the tasks. </p>
</sec>
</sec>
<sec id="S7"><title>Data Analysis</title>
<p>In both US1 and US2, we tracked the following statistical categories for each task: success/failure, time to completion, and number of clicks. We kept notes for each session, including steps taken to complete each task, important processes we observed, and comments from participants. In US1, the first author also considered whether prior experience with Primo and information literacy sessions predicted participants&#x2019; successful task completion. </p>
<p>In both studies, we anonymized the data by assigning a number to each participant and storing the consent forms and numbers separately. In US1, all study data was stored in WPUNJ&#x2019;s Microsoft OneDrive; in US2, it was stored in Stockton University&#x2019;s Google Drive. We made these choices because each university already subscribed to the respective service, and because cloud storage allowed us to share the anonymized session data between ourselves. We deleted all recordings after 365 days, per each university&#x2019;s policy.</p>
<p>Evaluating the statistical categories helped pinpoint where our patrons were most often running into issues. In US1, the first author distributed a document providing recommendations based on the usability testing data at an internal library meeting and provided it to user education, discovery, and website maintenance teams. The internal library committee decided, as a group, which of the study&#x2019;s findings they could implement.</p>
<p>In US2, the second author created and shared a similar report of findings and recommendations with the library&#x2019;s technology team. This internal committee reviewed and approved all recommendations and oversaw updates to the website according to the recommendations.</p>
</sec>
</sec>
<sec id="S8"><title>Considerations for Practice</title>
<p>Librarians who are looking to implement usability testing, whether virtual or traditional, should know that success depends on thoughtful preparation. Testing the usability of a library website in any format is crucial to ensure that users can effectively navigate and use the resources available. Once you make the decision to conduct usability testing, the next step is to determine which modality best suits your goals. Each approach offers unique advantages and challenges, and understanding these will help you more effectively plan and execute your studies. See Table 1 for a summary of these considerations.</p>
<table-wrap id="T1" position="float">
<label>Table 1.</label><caption><p>Considerations for practice. For each topic, we offer key considerations for the in-person and remote modalities.</p></caption>
<table frame="hsides" rules="groups">
<colgroup>
<col align="left" valign="middle" />
<col align="left" valign="middle" />
<col align="left" valign="middle" />
</colgroup>
<thead>
<tr>
<th align="left" valign="top"><p>Topic</p></th>
<th align="left" valign="top"><p>In-Person Testing</p></th>
<th align="left" valign="top"><p>Remote Testing</p></th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top"><p>Recruitment</p></td>
<td align="left" valign="top"><p>May limit participation to those who can travel, potentially narrowing sample diversity</p></td>
<td align="left" valign="top"><p>May broaden participation by removing geographic and scheduling barriers, leading to greater sample diversity</p></td></tr>
<tr>
<td align="left" valign="top"><p>Setting</p></td>
<td align="left" valign="top"><p>Allows for richer observation data (e.g., body language, facial expressions), which can be invaluable for interpreting user frustrations or confusion</p></td>
<td align="left" valign="top"><p>Relies on digital platforms that streamline data collection and analysis but require careful attention to privacy</p></td></tr>
<tr>
<td align="left" valign="top"><p>Costs</p></td>
<td align="left" valign="top"><p>Involves higher costs due to physical space, additional staff, and onsite logistics; incentives required</p></td>
<td align="left" valign="top"><p>Involves lower costs due to reduced staffing and no facility use; flexible setup from any location; incentives required</p></td></tr>
<tr>
<td align="left" valign="top"><p>Configuring Screen Sharing and Recording</p></td>
<td align="left" valign="top"><p>Works best in controlled environments using specialized equipment or close observation to isolate usability issues</p></td>
<td align="left" valign="top"><p>Works well for observing participants in naturalistic settings, yielding more authentic behavior and reducing participant burden</p></td></tr>
<tr>
<td align="left" valign="top"><p>Observation and Note Taking</p></td>
<td align="left" valign="top"><p>Requires structured note-taking during live sessions to accurately capture nonverbal cues and user behaviors</p></td>
<td align="left" valign="top"><p>Allows for session recording and automated transcripts, which support detailed review but raise privacy considerations</p></td>
</tr>
</tbody>
</table>
</table-wrap>
<sec id="S9"><title>Recruitment</title>
<p>Recruitment proved to be the most significant challenge in US1, likely due to the largely remote nature of campus operations at the time. While the first author had some initial success with traditional methods of recruitment, such as sending out official university announcements, as the study progressed, it became difficult to solicit enough volunteers to achieve the ideal demographic range of students. Recruitment picked up again after the first author added five-dollar Amazon gift card incentives to the study; these were reflected in an addendum to the institutional review board application. </p>
<p>More broadly, recruitment methods and study modality can shape the diversity of the participant pool. In-person testing may limit participation to those who are able to travel to a designated location at a specific time, potentially narrowing the range of student experiences represented. By contrast, remote testing can remove geographic and scheduling barriers, making participation more accessible to students with work, caregiving, or commuting constraints. However, as US1 demonstrated, simply offering a remote option does not guarantee broad participation; additional outreach strategies and incentives may still be necessary.</p>
<p>Outside of incentives, the biggest drivers of participation in both studies were personal connections to the library and spreading the call for volunteers via word-of-mouth. In light of this, we would recommend first spreading word among everyone working in the library, including student workers, to assist with word-of-mouth recruitment. However, be careful not to rely on library workers as participants themselves, since their familiarity with the website makes them unrepresentative of typical users.</p>
<p>Regarding participant status, while US1 was open to all students, the volunteers were mostly undergraduates. Researchers should explore additional methods for recruiting graduate students so that this user group is properly represented in usability testing efforts.</p>
</sec>
<sec id="S10"><title>Setting</title>
<p>The fully remote usability testing of US1 yielded many benefits, including streamlining the scheduling and execution of sessions. While other librarians assisted with areas like recruitment, only one librarian was needed to conduct the testing. Conversely, US2 required two researchers in its traditional, in-person method: one interviewer and one notetaker. In addition to staffing differences, the physical setting of US2 allowed for richer observational data, including body language and facial expressions that can help researchers interpret moments of confusion or frustration.</p>
<p>Additionally, by &#x201C;meeting students where they are,&#x201D; remote usability testing carries the added benefit of &#x201C;ecological validity.&#x201D; By this, we mean that the physical environment of the study setting&#x2014;their home, office, study space, etc.&#x2014;matches the space in which users usually perform research. In contrast, researchers conducting traditional usability studies typically ask participants to travel to testing labs or other controlled environments&#x2014;settings that differ significantly from the familiar spaces where they normally use online library tools and services. Though US2 used a library conference room as its testing space, the study was complicated by the need for social distancing measures to address pandemic safety concerns. At the same time, the digital platforms used in US1 streamlined recording and data capture, simplifying later analysis, though they required careful attention to privacy and recording protocols.</p>
<p>While remote participants have the benefit of not needing to travel, the primary downside of the home testing environment is a higher probability of user distraction. The first author asked participants in US1 to minimize disruptions where possible, but compliance could not be guaranteed. US2 sessions occurred in a closed environment, isolated from visual and auditory distractions that might affect the participants. For remote studies, we recommend communicating the importance of minimizing disruptions both in advance of and at the beginning of each session. </p>
</sec>
<sec id="S11"><title>Costs</title>
<p>The financial implications of modality were notable in both studies. Remote testing substantially reduced personnel and space-related expenses: US1 sessions conducted from an office or home setup eliminated the need to reserve physical rooms or coordinate additional staffing for on-site observation. In contrast, US2 required a dedicated library space, which carried hidden costs, including space reservation, equipment setup, and, in this case, safety considerations (e.g., social distancing). Additionally, using two people&#x2014;interviewer and notetaker&#x2014;for US2&#x2019;s in-person sessions increased labor costs. These factors underscore how traditional, in-person usability testing often involves higher logistical and staffing costs.</p>
<p>As noted earlier, participant incentives also played a key role in recruitment for both studies. Although initial recruitment efforts were successful without incentives, additional funding was eventually required to sustain participation. This highlights a recurring cost consideration for practitioners planning usability studies regardless of modality. </p>
</sec>
<sec id="S12"><title>Configuring Screen Sharing and Recording</title>
<p>In both usability studies, only the screen-sharing portions were recorded, ensuring that we could reference sessions when compiling data. An important factor for usability studies is maintaining participant privacy. During US1, because Zoom displays the participant&#x2019;s full name, it was important to first &#x201C;rename&#x201D; them using either initials or, ideally, a participant number. This ensured we were able to track participants without violating privacy standards. US2 used a library laptop and staff Zoom credentials during all sessions, ensuring privacy. If you are conducting a usability study remotely, you can further protect privacy by asking participants to turn off their cameras before you begin recording. For reference, the last line of the interviewer&#x2019;s script before they began to record the session was &#x201C;Once you turn your camera off and share your screen with me, I&#x2019;ll start recording.&#x201D;</p>
<p>Beyond privacy considerations, the configuration of screen sharing and recording also shaped the type of data collected in US1 and US2. US2, conducted in person, benefited from a controlled environment and closer observation of participants&#x2019; interactions, allowing usability issues to be more deliberately isolated. In contrast, US1, conducted remotely, allowed participants to engage with the website in their natural environments, which may have yielded more authentic behaviors and reduced participant burden. While this naturalistic context introduced greater variability, it also offered insight into how users navigate library websites under real-world conditions.</p>
</sec>
<sec id="S13"><title>Observation and Note Taking</title>
<p>This is one area where the testing modalities differ greatly. Since participants in US1 were instructed to turn their cameras off before the recorded testing portion began, the first author could not observe them during the testing sessions. This is one challenge that could not be overcome. As a result, note-taking in US1 relied heavily on screen activity and verbal comments rather than nonverbal cues. Direct observation was also difficult in US2, although the underlying reason differed. Masking and social distancing made it more difficult to observe small changes in facial expressions and, as a result, to guarantee that administrator observations were accurate. In traditional in-person usability studies, structured note-taking during live sessions is typically used to capture body language, hesitation, and other behavioral indicators that supplement screen-based findings; however, pandemic precautions limited the extent to which these cues could be accurately interpreted in US2.</p>
<p>This was a major limitation in both studies, since usability researchers typically rely on nonverbal communication to supplement analysis. However, revisiting the recordings of sessions for additional analysis enhanced screen observation and note-taking. In the case of US1, session recordings supported detailed review of participant navigation patterns, though these tools required careful attention to informed consent and data security. Finally, always ensure informed consent, data security, and accessibility, regardless of modality.</p>
</sec>
</sec>
<sec id="S14"><title>Next Steps</title>
<p>Much work remains to identify the ideal recruitment procedures for virtual usability studies in academic libraries. For both of our studies, incentives were vital&#x2014;although that may not be practical at many institutions, especially if testing is done regularly. Successful alternatives have appeared in the literature, such as the concept of using &#x201C;guerilla usability testing&#x201D; (<xref ref-type="bibr" rid="R8">Davis &amp; Song, 2020</xref>) or creating a library student advisory board and recruiting participants through it (<xref ref-type="bibr" rid="R17">Kliewer et al., 2016</xref>). <xref ref-type="bibr" rid="R28">Valentine and West (2016)</xref> tested library workers during the summer due to ease of recruitment, although relying only on library workers is generally discouraged. Finally, although the first author attempted to recruit graduate students in US1, she was largely unsuccessful. More work therefore needs to be done on recruiting graduate students for usability studies, regardless of modality.</p>
<p>Because we derived great value from our cross-institutional collaboration, we would like to stress the importance of connecting with your peers. The New Jersey Academic Libraries Network provided us with a means of sharing tips, tricks, and best practices, which was especially vital during the isolating times of the COVID-19 pandemic. Through this network, we also connected with other librarians doing great work in this space, including the creator of the recently launched DIY Usability Testing Toolkit for academic librarians (<xref ref-type="bibr" rid="R27">Valenti, 2024</xref>). We aim to continue fostering connections within the academic library community so we can exchange innovative solutions for our challenges.</p>
<p>As remote and hybrid learning continue to expand, virtual usability testing remains a vital method for academic libraries to engage with their increasingly distributed patrons. While this approach offers efficiency and accessibility, traditional in-person testing still provides unique insights that virtual methods may not fully capture. As <xref ref-type="bibr" rid="R18">Krug (2010)</xref> notes, &#x201C;remote testing gives you about 80% of the benefits of a live test for about 70% of the effort&#x201D; (p. 136). Therefore, rather than viewing these methods as mutually exclusive, academic libraries&#x2014;especially those at institutions shifting toward online environments&#x2014;should adopt a hybrid approach that leverages the strengths of both virtual and traditional usability testing. This ensures a more comprehensive understanding of our diverse user needs. </p>
</sec>
<sec id="S15"><title>Conclusion</title>
<p>Overall, we each gained valuable insights from our respective studies. After sharing these with each other, we formally presented our findings to our respective internal user education, discovery, and website maintenance teams.</p>
<p>We remain committed to fostering a culture of usability testing within our libraries, allowing us to take a more proactive approach to regularly evaluating our web presences through iterative design. This ongoing commitment not only strengthens our digital services but also reinforces the value of user-centered design in academic library environments.</p>
<p>Finally, we hope this article supports other academic librarians in making informed decisions about conducting their own usability testing. By considering the two modalities we have explored&#x2014;along with their respective conditions, requirements, strengths, and shortcomings&#x2014;librarians can use this comparative framework to guide their own usability studies with greater clarity and confidence.</p>
</sec>
</body>
<back>
<ref-list><title>References</title>
<ref id="R1"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Alhadreti</surname>, <given-names>O.</given-names></string-name></person-group> (<year>2021</year>). <article-title>Comparing two methods of usability testing in Saudi Arabia: Concurrent think-aloud vs. co-discovery</article-title>. <source>International Journal of Human-Computer Interaction</source>, <volume>37</volume>(<issue>2</issue>), <fpage>118</fpage>&#x2013;<lpage>130</lpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:type="simple" xlink:href="https://doi.org/10.1080/10447318.2020.1809152">https://doi.org/10.1080/10447318.2020.1809152</ext-link></mixed-citation></ref>
<ref id="R2"><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Alharbi</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Mayhew</surname>, <given-names>P.</given-names></string-name></person-group> (<year>2015</year>). <chapter-title>Users&#x2019; performance in lab and non-lab environments through online usability testing: A case of evaluating the usability of digital academic libraries&#x0027; websites</chapter-title>. In <source>2015 Science and Information Conference (SAI)</source> (pp. <fpage>151</fpage>&#x2013;<lpage>161</lpage>). <publisher-name>IEEE</publisher-name>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:type="simple" xlink:href="https://doi.org/10.1109/SAI.2015.7237139">https://doi.org/10.1109/SAI.2015.7237139</ext-link></mixed-citation></ref>
<ref id="R3"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Baird</surname>, <given-names>C.</given-names></string-name>, &amp; <string-name><surname>Soares</surname>, <given-names>T.</given-names></string-name></person-group> (<year>2018</year>). <article-title>A method of improving library information literacy teaching with usability testing data</article-title>. <source>Weave: Journal of Library User Experience</source>, <volume>1</volume>(<issue>8</issue>). <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:type="simple" xlink:href="https://doi.org/10.3998/weave.12535642.0001.802">https://doi.org/10.3998/weave.12535642.0001.802</ext-link></mixed-citation></ref>
<ref id="R4"><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Campbell</surname>, <given-names>J. L.</given-names></string-name>, &amp; <string-name><surname>Monkman</surname>, <given-names>H.</given-names></string-name></person-group> (<year>2021</year>). <chapter-title>The application of a novel, context specific, remote, usability assessment tool to conduct a pre-redesign and post-redesign usability comparison of a telemedicine website</chapter-title>. In <person-group person-group-type="editor"><string-name><given-names>J.</given-names> <surname>Mantas</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Stoicu-Tivadar</surname></string-name>, <string-name><given-names>C.</given-names> <surname>Chronaki</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Hasman</surname></string-name>, <string-name><given-names>P.</given-names> <surname>Weber</surname></string-name>, <string-name><given-names>P.</given-names> <surname>Gallos</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Cri&#x015F;an-Vida</surname></string-name>, <string-name><given-names>E.</given-names> <surname>Zoulias</surname></string-name>, &amp; <string-name><given-names>O. S.</given-names> <surname>Chirila</surname></string-name></person-group> (Eds.), <source>Public Health and Informatics</source> (pp. <fpage>911</fpage>&#x2013;<lpage>915</lpage>). <publisher-name>IOS Press</publisher-name>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:type="simple" xlink:href="https://doi.org/10.3233/SHTI210311">https://doi.org/10.3233/SHTI210311</ext-link></mixed-citation></ref>
<ref id="R5"><mixed-citation publication-type="other"><person-group person-group-type="author"><string-name><surname>Castillo</surname>, <given-names>J. C.</given-names></string-name>, <string-name><surname>Hartson</surname>, <given-names>H.R.</given-names></string-name>, &amp; <string-name><surname>Hix</surname>, <given-names>D.</given-names></string-name></person-group> (<year>1998</year>). <article-title>Remote usability evaluation: Can users report their own critical incidents?</article-title> <source>CHI &#x2019;98: CHI 98 Conference Summary on Human Factors in Computing</source> (pp. <fpage>253</fpage>&#x2013;<lpage>254</lpage>). <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:type="simple" xlink:href="https://doi.org/10.1145/286498.286736">https://doi.org/10.1145/286498.286736</ext-link></mixed-citation></ref>
<ref id="R6"><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Clapp</surname>, <given-names>S.</given-names></string-name></person-group> (<year>2024</year>, <month>February</month> <day>29</day>). <source>Improving library website usability with AI [Mini-conference presentation]</source>. <chapter-title>Library 2.0 AI and Libraries: Applications, Implications, and Possibilities</chapter-title>, <publisher-name>United States</publisher-name>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="uri" xlink:type="simple" xlink:href="https://www.library20.com/ai-libraries-cfp/improving-library-website-usability-with-ai">https://www.library20.com/ai-libraries-cfp/improving-library-website-usability-with-ai</ext-link></mixed-citation></ref>
<ref id="R7"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Currie</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Harvey</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Bond</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Magee</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name><surname>Finlay</surname>, <given-names>D.</given-names></string-name></person-group> (<year>2022</year>). <article-title>Remote synchronous usability testing of public access defibrillators during social distancing in a pandemic</article-title>. <source>Scientific Reports</source>, <volume>12</volume>, <fpage>14575</fpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="uri" xlink:type="simple" xlink:href="https://www.nature.com/articles/s41598-022-18873-7">https://www.nature.com/articles/s41598-022-18873-7</ext-link></mixed-citation></ref>
<ref id="R8"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Davis</surname>, <given-names>R. C.</given-names></string-name>, &amp; <string-name><surname>Song</surname>, <given-names>X.</given-names></string-name></person-group> (<year>2020</year>). <article-title>Uncovering the mystery of how users find and use ebooks through guerilla usability testing</article-title>. <source>Serials Review</source>, <volume>46</volume>(<issue>3</issue>), <fpage>193</fpage>&#x2013;<lpage>201</lpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:type="simple" xlink:href="https://doi.org/10.1080/00987913.2020.1806648">https://doi.org/10.1080/00987913.2020.1806648</ext-link></mixed-citation></ref>
<ref id="R9"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>DeStasio</surname>, <given-names>J. G.</given-names></string-name>, &amp; <string-name><surname>Jeitner</surname>, <given-names>E.</given-names></string-name></person-group> (<year>2024</year>). <article-title>Revise, redUX, re-cycle: Iterative website usability studies in an assessment cycle</article-title>. <source>Performance Measurement and Metrics</source>, <volume>25</volume>(<issue>1</issue>), <fpage>43</fpage>&#x2013;<lpage>66</lpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:type="simple" xlink:href="https://doi.org/10.1108/PMM-08-2023-0025">https://doi.org/10.1108/PMM-08-2023-0025</ext-link></mixed-citation></ref>
<ref id="R10"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Godfrey</surname>, <given-names>K.</given-names></string-name></person-group> (<year>2015</year>). <article-title>Creating a culture of usability</article-title>. <source>Weave: Journal of Library User Experience</source>, <volume>1</volume>(<issue>3</issue>). <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:type="simple" xlink:href="https://doi.org/10.3998/weave.12535642.0001.301">https://doi.org/10.3998/weave.12535642.0001.301</ext-link></mixed-citation></ref>
<ref id="R11"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Haggerty</surname>, <given-names>K.C.</given-names></string-name>, &amp; <string-name><surname>Scott</surname>, <given-names>R.E.</given-names></string-name></person-group> (<year>2019</year>). <article-title>Do, or do not, make them think? A usability study of an academic library search box</article-title>. <source>Journal of Web Librarianship</source>, <volume>13</volume>(<issue>4</issue>), <fpage>296</fpage>&#x2013;<lpage>310</lpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:type="simple" xlink:href="https://doi.org/10.1080/19322909.2019.1684223">https://doi.org/10.1080/19322909.2019.1684223</ext-link></mixed-citation></ref>
<ref id="R12"><mixed-citation publication-type="other"><person-group person-group-type="author"><string-name><surname>Hartson</surname>, <given-names>H.R.</given-names></string-name>, <string-name><surname>Castillo</surname>, <given-names>J.C.</given-names></string-name>, <string-name><surname>Kelso</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name><surname>Neale</surname>, <given-names>W.C.</given-names></string-name></person-group> (<year>1996</year>). <article-title>Remote evaluation: The network as an extension of the usability laboratory</article-title>. <source>CHI &#x2019;96: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems</source> (pp. <fpage>228</fpage>&#x2013;<lpage>235</lpage>). <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:type="simple" xlink:href="https://doi.org/10.1145/238386.238511">https://doi.org/10.1145/238386.238511</ext-link></mixed-citation></ref>
<ref id="R13"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Hill</surname>, <given-names>J. R.</given-names></string-name>, <string-name><surname>Brown</surname>, <given-names>J. C.</given-names></string-name>, <string-name><surname>Campbell</surname>, <given-names>N. L.</given-names></string-name>, &amp; <string-name><surname>Holden</surname>, <given-names>R. J.</given-names></string-name></person-group> (<year>2021</year>). <article-title>Usability-in-place&#x2014;remote usability testing methods for homebound older adults: Rapid literature review</article-title>. <source>JMIR Formative Research</source>, <volume>5</volume>(<issue>11</issue>), <fpage>e26181</fpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:type="simple" xlink:href="https://doi.org/10.2196/26181">https://doi.org/10.2196/26181</ext-link></mixed-citation></ref>
<ref id="R14"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Hong</surname>, <given-names>J. I.</given-names></string-name>, <string-name><surname>Heer</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Waterson</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name><surname>Landay</surname>, <given-names>J. A.</given-names></string-name></person-group> (<year>2001</year>). <article-title>WebQuilt: A proxy-based approach to remote web usability testing</article-title>. <source>ACM Transactions on Information Systems</source>, <volume>19</volume>(<issue>3</issue>), <fpage>263</fpage>&#x2013;<lpage>285</lpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:type="simple" xlink:href="https://doi.org/10.1145/502115.502118">https://doi.org/10.1145/502115.502118</ext-link></mixed-citation></ref>
<ref id="R15"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Jacobs</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>DeMars</surname> <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Kimmitt</surname>, <given-names>J.M.</given-names></string-name></person-group> (<year>2020</year>). <article-title>A multi-campus usability testing study of the new Primo interface</article-title>. <source>College &amp; Undergraduate Libraries</source>, <volume>27</volume>(<issue>1</issue>), <fpage>1</fpage>-<lpage>16</lpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:type="simple" xlink:href="https://doi.org/10.1080/10691316.2019.1695161">https://doi.org/10.1080/10691316.2019.1695161</ext-link></mixed-citation></ref>
<ref id="R16"><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Kaiser</surname>, <given-names>J.-N.</given-names></string-name>, <string-name><surname>Marianski</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Muras</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Chamunorwa</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2022</year>). <chapter-title>Popup observation kit for remote usability testing</chapter-title>. In <person-group person-group-type="editor"><string-name><given-names>A. L.</given-names> <surname>Simeone</surname></string-name>, <string-name><given-names>R.</given-names> <surname>Ramakers</surname></string-name>, &amp; <string-name><given-names>C.</given-names> <surname>Gena</surname></string-name></person-group> (Eds.), <source>Proceedings of the 20th International Conference on Mobile and Ubiquitous Multimedia</source> (pp. <fpage>233</fpage>&#x2013;<lpage>235</lpage>). <publisher-name>Association for Computing Machinery</publisher-name>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:type="simple" xlink:href="https://doi.org/10.1145/3490632.3497871">https://doi.org/10.1145/3490632.3497871</ext-link></mixed-citation></ref>
<ref id="R17"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Kliewer</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Monroe-Gulick</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Gamble</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name><surname>Radio</surname>, <given-names>E.</given-names></string-name></person-group> (<year>2016</year>). <article-title>Using Primo for undergraduate research: A usability study</article-title>. <source>Library Hi Tech</source>, <volume>34</volume>(<issue>4</issue>), <fpage>566</fpage>&#x2013;<lpage>584</lpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:type="simple" xlink:href="https://doi.org/10.1108/LHT-05-2016-0052">https://doi.org/10.1108/LHT-05-2016-0052</ext-link></mixed-citation></ref>
<ref id="R18"><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Krug</surname>, <given-names>S.</given-names></string-name></person-group> (<year>2010</year>). <source>Rocket surgery made easy: The do-it-yourself guide to finding and fixing usability problems.</source> <publisher-name>New Riders</publisher-name>.</mixed-citation></ref>
<ref id="R19"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Marzec</surname>, <given-names>P.</given-names></string-name>, &amp; <string-name><surname>Piotrowski</surname>, <given-names>D. M.</given-names></string-name></person-group> (<year>2023</year>). <article-title>Remote usability testing carried out during the COVID-19 pandemic on the example of Primo VE implementation in an academic library</article-title>. <source>The Journal of Academic Librarianship</source>, <volume>49</volume>(<issue>3</issue>), <fpage>102700</fpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:type="simple" xlink:href="https://doi.org/10.1016/j.acalib.2023.102700">https://doi.org/10.1016/j.acalib.2023.102700</ext-link></mixed-citation></ref>
<ref id="R20"><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Nielsen</surname>, <given-names>J.</given-names></string-name></person-group> (<year>2000</year>, <month>March</month> <day>18</day>). <source>Why you only need to test with 5 users.</source> <publisher-name>Nielsen Norman Group</publisher-name>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="uri" xlink:type="simple" xlink:href="https://www.nngroup.com/articles/why-you-only-need-to-test-with-5-users/">https://www.nngroup.com/articles/why-you-only-need-to-test-with-5-users/</ext-link></mixed-citation></ref>
<ref id="R21"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Nuccilli</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Polak</surname>, <given-names>E.</given-names></string-name>, &amp; <string-name><surname>Binno</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2018</year>). <article-title>Start with an hour a week: Enhancing usability at Wayne State University Libraries</article-title>. <source>Weave: Journal of Library User Experience</source>, <volume>1</volume>(<issue>8</issue>). <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:type="simple" xlink:href="https://doi.org/10.3998/weave.12535642.0001.803">https://doi.org/10.3998/weave.12535642.0001.803</ext-link></mixed-citation></ref>
<ref id="R22"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Relawati</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Zamroni</surname>, <given-names>G. M.</given-names></string-name>, &amp; <string-name><surname>Primanda</surname>, <given-names>Y.</given-names></string-name></person-group> (<year>2022</year>). <article-title>Unmoderated remote usability testing: An approach during COVID-19 pandemic</article-title>. <source>International Journal of Advanced Computer Science and Applications</source>, <volume>13</volume>(<issue>1</issue>). <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:type="simple" xlink:href="https://doi.org/10.14569/IJACSA.2022.0130135">https://doi.org/10.14569/IJACSA.2022.0130135</ext-link></mixed-citation></ref>
<ref id="R23"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Sexton</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2022</year>). <article-title>Convenience sampling and student workers: Ethical and methodological considerations for academic libraries</article-title>. <source>The Journal of Academic Librarianship</source>, <volume>48</volume>(<issue>4</issue>), <fpage>102539</fpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:type="simple" xlink:href="https://doi.org/10.1016/j.acalib.2022.102539">https://doi.org/10.1016/j.acalib.2022.102539</ext-link></mixed-citation></ref>
<ref id="R24"><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Thompson</surname>, <given-names>K. E.</given-names></string-name>, <string-name><surname>Rozanski</surname>, <given-names>E. P.</given-names></string-name>, &amp; <string-name><surname>Haake</surname>, <given-names>A. R.</given-names></string-name></person-group> (<year>2004</year>). <chapter-title>Here, there, anywhere: Remote usability testing that works</chapter-title>. In <source>Proceedings of the 5th Conference on Information Technology Education</source> (pp. <fpage>132</fpage>&#x2013;<lpage>137</lpage>). <publisher-name>Association for Computing Machinery</publisher-name>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:type="simple" xlink:href="https://doi.org/10.1145/1029533.1029567">https://doi.org/10.1145/1029533.1029567</ext-link></mixed-citation></ref>
<ref id="R25"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Thomsett-Scott</surname>, <given-names>B. C.</given-names></string-name></person-group> (<year>2008</year>). <article-title>Web site usability with remote users: Formal usability studies and focus groups</article-title>. <source>Journal of Library Administration</source>, <volume>45</volume>(<issue>3&#x2013;4</issue>), <fpage>517</fpage>&#x2013;<lpage>547</lpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:type="simple" xlink:href="https://doi.org/10.1300/J111v45n03_14">https://doi.org/10.1300/J111v45n03_14</ext-link></mixed-citation></ref>
<ref id="R26"><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Treadwell</surname>, <given-names>D.</given-names></string-name>, &amp; <string-name><surname>Davis</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2019</year>). <source>Introduction to communication research: Paths of inquiry</source> (<edition>4th</edition> ed.). <publisher-name>SAGE</publisher-name>.</mixed-citation></ref>
<ref id="R27"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Valenti</surname>, <given-names>A.</given-names></string-name></person-group> (<year>2024</year>). <article-title>Usability testing best practices for academic library websites &amp; DIY usability testing toolkit</article-title>. <source>Weave: Journal of Library User Experience</source>, <volume>7</volume>(<issue>1</issue>). <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:type="simple" xlink:href="https://doi.org/10.3998/weaveux.4396">https://doi.org/10.3998/weaveux.4396</ext-link></mixed-citation></ref>
<ref id="R28"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Valentine</surname>, <given-names>B.</given-names></string-name>, &amp; <string-name><surname>West</surname>, <given-names>B.</given-names></string-name></person-group> (<year>2016</year>). <article-title>Improving Primo usability and teachability with help from the users</article-title>. <source>Journal of Web Librarianship</source>, <volume>10</volume>(<issue>3</issue>), <fpage>176</fpage>&#x2013;<lpage>196</lpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:type="simple" xlink:href="https://doi.org/10.1080/19322909.2016.1190678">https://doi.org/10.1080/19322909.2016.1190678</ext-link></mixed-citation></ref>
<ref id="R29"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>van Wyk</surname>, <given-names>B.</given-names></string-name></person-group> (<year>2023</year>). <article-title>Library and information services&#x2019; reflections on emergency remote support and crisis-driven innovations during pandemic conditions</article-title>. <source>IFLA Journal</source>, <volume>49</volume>(<issue>3</issue>), <fpage>610</fpage>&#x2013;<lpage>619</lpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:type="simple" xlink:href="https://doi.org/10.1177/03400352231166747">https://doi.org/10.1177/03400352231166747</ext-link></mixed-citation></ref>
<ref id="R30"><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Yi</surname>, <given-names>Y. J.</given-names></string-name>, <string-name><surname>Hwang</surname>, <given-names>B.</given-names></string-name>, &amp; <string-name><surname>Kim</surname>, <given-names>D.</given-names></string-name></person-group> (<year>2022</year>). <article-title>A model for mobile curation services in academic libraries</article-title>. <source>The Electronic Library</source>, <volume>40</volume>(<issue>1&#x2013;2</issue>), <fpage>99</fpage>&#x2013;<lpage>117</lpage>. <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="doi" xlink:type="simple" xlink:href="https://doi.org/10.1108/el-09-2021-0178">https://doi.org/10.1108/el-09-2021-0178</ext-link></mixed-citation></ref></ref-list>
</back>
</article>
