Introduction

For over fifteen years, libraries have researched the accessibility of digital library materials and found that money, time, and organizational priorities form barriers to libraries’ efforts to make their materials, including websites, more accessible. These barriers do not mean that libraries want to exclude users, but rather that they may lack the know-how or a process to avoid doing so. The process we followed to identify WCAG 2.1 failures within our own library website builds upon prior research aimed at helping librarians and library staff make their websites more accessible.

We built our four-step process around the constraints on this project. First, we had no dedicated budget for this project. Second, even though our website has over 300 web pages associated with it, we could only assign one employee and one student assistant to this project—which they completed along with their regular duties. As such, our process for completing the audit included using free-to-use tools in combination with manual evaluations to alleviate some of the repetition. Third, we had no specific deadline to complete this project, but we wanted to complete it before summer so that our webmaster could implement changes between academic semesters. Finally, we completed this project during the global pandemic, which affected it surprisingly little. In this article I talk in more detail about the constraints we worked with, explain our process, and reflect on challenges we faced during the audit.

Literature Review

Current Legal Reality

To start on a sober note: because of the lack of accessible digital materials, students have brought many court cases against higher education institutions, and some of these cases name libraries as playing a part in the discrimination. For example:

  • The resolution agreement between the University of Montana and the U.S. Department of Education, Office for Civil Rights (2014), specifically stated that their library’s website and “search engine”—used to search the journal articles, collections, and the like—must be compliant with the WCAG version of that time.

  • The settlement between Penn State University and the National Federation of the Blind (2012) had similar language.

  • The resolution agreement between the University of Phoenix and the U.S. Department of Education, Office for Civil Rights (2015) said that the library resources would be subject to an accessibility audit and must be compliant with the WCAG standard at that time.

  • The Consent Decree between Atlantic Cape Community College and National Federation of the Blind had similar language and added, “ACCC is not responsible for the inaccessibility of third-party databases, but shall, upon request, provide timely equally effective alternate access to the content requested” (2015, p. 7).

While only a few court cases explicitly mention libraries, the findings from them show that we can be complicit in discriminating against students by failing to provide accessible digital materials.

Past Research on Accessibility of Library Materials

The process we followed to identify WCAG 2.1 failures builds upon prior research aimed at helping librarians and library staff make their websites more accessible. Spina (2019) wrote about the need for such audits but acknowledged that achieving accessibility requires “significant investment in and prioritization of web accessibility” (para. 29), which may not always be possible due to limits on time and money. In a review of accessibility-related literature, Kimura (2018) found that people historically limited audits to technology with a “persistent lack of direct engagement with users with diverse abilities” (p. 431). Peters and Bradbard (2010) also examined the barriers to audits from an organizational perspective and concluded that money, time, and organizational priorities can put such audit efforts on hold.

Barriers to auditing do not mean that libraries want to exclude users, but rather that they may lack the know-how or a process to avoid doing so. As Rosen put it, “accessibility is unique in that people often agree on its value…but may disagree on its meaning” (2017, para. 4). Ng (2017) focused on the need for libraries to incorporate accessibility into holistic web design. Ng cited Shneiderman and Hochheiser’s definition of incorporating accessibility as “[implying] that diverse users…can successfully apply technology to get what they need in life” (2001, p. 367, as cited in Ng, 2017). Continuing the thread, Kimura (2018) posed the question of “what if [accessibility] barriers were never erected in the first place? Born-accessible technologies and environments are the logical outcome of a user-centered, universal design process that places accessibility at the core” (p. 432). As more libraries move toward a user-centric model of services, some have sought to answer this question through user experience (UX) research.

For well over fifteen years, libraries have researched making digital library materials accessible, and UX research has helped them identify places for improvement by bringing in the perspectives of users. Over the past two decades, libraries have moved toward integrating user-centered design into general projects (Luca & Narayan, 2016), specific collections (e.g., digital image collections) (Comeaux, 2008), and service models (Godfrey, 2015; Logan & Everall, 2019). However, Kumar and Owston (2016) argued that only conducting accessibility evaluation with specific disabilities in mind “…may result in the perception that consideration of e-learning accessibility is necessary only in special cases and is not an essential proactive consideration for all students when e-learning environments are designed” (p. 264). So including perspectives from a variety of users through UX methods can help libraries identify potential issues with their websites.

Beyond library websites, courts disagree on whether libraries must ensure the accessibility of vendor products. Researchers such as DeLancey (2015), Oud (2016), and Stewart et al. (2005) focused on evaluating the accessibility of vendor-supplied products and documentation, since libraries often use vendor products to provide core functions. They generally concluded that vendors “are not fully aware of the accessibility issues in their own products” (DeLancey, 2015, p. 109). Ng (2017) and Spina (2019) pointed out that libraries have a responsibility to ensure that their websites are accessible as part of the whole user experience. In this article, I will form a bridge between Spina’s research on organizational barriers to doing audits and Ng’s research on implementing accessible changes by detailing the process we used at my library to go from needing an audit to having documentation of what to change.

Project Context

Two situational circumstances gave us the organizational space needed to begin this project. First, a peer university’s library faced a rather public lawsuit over a lack of website accessibility. Second, we got new leadership that prioritized user-centric practices as concrete work rather than as a vague concept. Having spoken with users with various disabilities as part of other routine usability testing, members of our UX unit saw how simple changes could make our website easier for them to access. As a note, we cannot include direct comments from users because we do not seek IRB approval for routine improvements to organizational resources. However, most of our users’ recommendations matched what the WCAG standards require.

Scope of Our WCAG Audit

We audited against both the WCAG A and AA level standards because court rulings commonly require compliance with both; courts usually consider WCAG’s lowest level, A, insufficient by itself to meet ADA and Section 508 legal requirements. Courts do not currently require the AAA level standards, because users can normally access current digital services and resources on websites that fulfill the other two levels. We may address the AAA level standards in the future; for now, we wanted to focus our improvement efforts on standards that apply across the board.

Our website is a complex mix of web pages, PDFs, and links to resources hosted on vendor-supplied platforms, but we audited only web pages and PDFs with our university URL stem. As with many other libraries, we use multiple vendor digital resources, including our catalog and our study room reservation systems. During this study, however, we audited only digital materials that our website librarian could change or could ask our university Information Technology Division to change. Because vendor products can be non-compliant, and court rulings disagree about whether libraries are responsible for the accessibility of vendors’ products, we plan follow-up studies on the vendor products we currently use.

Project Constraints

As Peters and Bradbard (2010) and Spina (2019) noted, barriers to doing accessibility audits usually include money, time, and organizational priorities. We faced the same barriers when starting this project, though each impacted the project in a different way.

From the start, while we had organizational approval for this project, we had no assigned budget. Because of my familiarity with website coding and WCAG standards, we decided that we could save money by not hiring a company to do the audit for us. This preserved our unit’s budget, which we mainly use for UX research incentives. We also used only free-to-use tools, so we avoided charging any costs to the unit’s budget. This served us well when, in the middle of the project, we moved to remote work because of the COVID-19 pandemic and our university asked us to make no new purchases given the uncertainty about what the future held.

We had no pressing outside factors determining when we needed to complete certain steps, so we set our own timeframe for completing the audit and implementing the changes. However, we did want to get it done before summer break, so the website librarian could implement changes over the slower summer semester. We proposed this project to our website librarian and other invested units in December 2019. The website librarian compiled an initial list of web pages and linked PDFs for us to audit, which she gave to our unit in the middle of January 2020, and we began our audit at the end of January. We initially slated this project for completion by the end of March 2020, but the COVID-19 pandemic and subsequent adjustment to workflows resulted in us extending the timeline by almost a month. Because I had to balance other job responsibilities, I could only devote roughly two hours per day to this project. Combined, the student assistant and I worked on this project for approximately 10–20 hours per week from late January 2020 to April 2020, meaning we spent between 120 and 240 hours to complete this audit of over 300 web pages and PDFs and to write the report.

We had one full-time staff member and one part-time student assistant assigned to this project in addition to their other duties. Leadership considered auditing our website for accessibility as falling within our scope of practice as a UX unit. Additionally, legal cases against peer institutions removed some organizational resistance. However, there were organizational barriers to training more full-time staff or faculty to assist, so we could not have more fully trained auditors working on this project. Nielsen (1994) recommended having three to five evaluators inspect the user interface and then aggregating their answers. However, he also said that having only one or two people audit does not make the audit useless; rather, one runs the risk of not uncovering some major issues (1994, para. 2). To get things done on a timely basis, the student assistant and I audited different sections—assigned based on ability—though I spot-checked the student assistant’s work for completeness and accuracy.

Methodology

From the beginning of this process, we wanted to identify accessibility barriers that might prevent users from accessing content on our website. We wanted to do this with the fewest costs and within the limits of our current personnel. As the full-time staff member involved with this project, I knew how to code websites and understood WCAG 2.0 standards. The student assistant did not know either, so they required on-the-job training.

Accessibility Audit Methods

As our unit had never done an audit like this before, to start we turned to Nielsen’s foundational 1994 white paper about how to conduct a heuristic evaluation. While Nielsen’s white paper was not specifically about accessibility auditing, his three specifications about what evaluators should do helped us figure out where to start:

  • Explain [why something fails] with reference to the heuristics or to other usability results.

  • Try to be as specific as possible.

  • List each usability problem separately (1994, para. 11).

But these recommendations applied more to reporting findings than to doing the audit itself, so we also needed to figure out how to do the audit. And honestly, this is where things got a little messy. We knew we wanted to evaluate our website’s compliance with the WCAG standards, but we did not have a predefined way to do so. Because we had no other point of reference, we went point-by-point through the WCAG standard list, mimicking Nielsen’s heuristic evaluation method, and we evaluated our website using a mix of our expertise and web-based tools.

Based on research after the fact, our method matched what others had done more recently in non-library fields; for example, Alsaeedi (2020) summarized four different approaches commonly used to evaluate web pages:

  1. Automated. “…runs accessibility evaluation tools on the website to gather accessibility violations against predefined guidelines” (p. 3).

  2. Manual. “[Uses] human experts to examine web pages to identify violations of accessibility guidelines” (p. 4).

  3. User testing. “[Identifies] accessibility issues while disabled users are interacting with the content of web pages” (p. 4).

  4. Hybrid. “…combines automated and manual evaluation (human experts or with the aid of disabled people)” (p. 4).

Under this framework, we followed the hybrid approach. Alternatively, Brajnik et al. (2011) categorized accessibility audits into five groups:
  1. Inspection methods based on evaluators inspecting a web page for its accessibility. Inspection for conformance to pre-set standards such as WCAG falls under this method.

  2. Automated testing where evaluators use automated accessibility tools to check conformance of a web page.

  3. Screening techniques where the evaluators use the website in such a way to simulate conditions that are typical for people with disabilities.

  4. Subjective assessment where the evaluators recruit users to explore/use the website on their own and provide independent feedback.

  5. User testing where the evaluators ask users to complete closed- or open-ended tasks on the website while observing them (pp. 249–250).

Under their classification, we primarily used inspection methods supplemented with automated testing and screening techniques—such as determining color contrast by using gray-scale settings.

We wanted a process that allowed us to understand precisely where we fell short of meeting minimum WCAG 2.1 standards, since U.S. courts use those standards as a marker of accessibility compliance. When examining the differences between auditing approaches, Kumar and Owston (2016) recommended conducting “student-centered accessibility evaluations” in addition to using automated tools because automated tools missed problems that arose from students’ experiences. We did consider conducting accessibility evaluations that would involve watching users interact with the website; however, we decided to delay such evaluations until we had fixed all technical issues that might prevent us from uncovering larger structural issues. Our decision to combine tools and manual evaluation matches the audit method Hong et al. (2008) used when they compared the web accessibility of US and Korean government websites. Unlike them, though, we did not intend to compare the severity of issues on our website with other websites; we simply wanted to identify the issues so we could then fix them.

Terminology

In the following sections, I use some coding jargon, so here is a brief glossary:

  • Elements: shorthand for any piece of or group of information on a web page plus its associated markup.

  • Structural markup, or just “markup”: web-based computer code that tells a computer how to display information on a web page. Throughout this article, this term loosely refers to all types of code, including HTML, CSS, SVG, and JavaScript.

  • Source code: the underlying code whose results end users see without seeing the code itself; for example, a user sees a border around an image but not the markup that tells the browser which kind of border to display.
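
As a minimal, hypothetical illustration of these terms (the image and its styling are invented for this example), the snippet below is the markup for a single image element; an end user sees only the rendered result, a bordered photo, not this source code.

```html
<!-- A hypothetical element: one image plus its associated markup. End users
     see only the rendered result (a photo with a gray border), not this code. -->
<img src="reading-room.jpg"
     alt="Students studying in the main reading room"
     style="border: 2px solid #666666;" />
```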

As a note, I use several WCAG-specific words and phrases throughout this article that serve as titles for sections within WCAG rather than as predefined terms. These include phrases such as “1.4.10 Reflow” and “2.4.1 Bypass Blocks.” For an explanation of what these titles refer to, please check the World Wide Web Consortium’s definitions on their How to Meet WCAG (Quick Reference) (https://www.w3.org/WAI/WCAG21/quickref/) web page.

Tools

Our website has many web pages, so at certain times we needed tools to help us audit in bulk or audit for WCAG sections that require line-by-line analysis of code. Also, neither of our auditors was Blind or color blind, so we used additional tools to help us see our web pages in a way somewhat similar to how some of our users do. When selecting tools, we chose to use only open-source, free-to-use tools because this project had no allocated budget. This may be a familiar situation for many readers, so hopefully this brief description of the tools we used during our audit provides a starting point.

We used the “inspect” feature found in most browsers for all WCAG sections that required evaluating formatting and markup code—of which there are too many to list here. This feature allowed us to see the underlying source code. Using this tool required that we understand all HTML and some CSS markup, since we used it to check markup usage against standard usage.
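
To illustrate what checking “markup usage against standard usage” meant in practice, here is a hedged, invented fragment of the kind the inspect feature reveals: both lines look identical in a browser, but only the second uses the standard heading element that conveys structure to assistive technology.

```html
<!-- Non-standard usage: visually resembles a heading, but assistive
     technology sees only a generic styled span. -->
<span style="font-size: 1.5em; font-weight: bold;">Course Reserves</span>

<!-- Standard usage: a real heading element that conveys document structure. -->
<h2>Course Reserves</h2>
```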

We also used the WAVE Web Accessibility Evaluation tool (https://wave.webaim.org) to see what web pages looked like without formatting to determine read order and whether certain content would be available for screen readers to access without the additional formatting. While this tool explains itself through help text, I recommend understanding basic HTML markup and the different ways one can code images on a website before using it.
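
As a hedged, invented example of why the unformatted view matters: the flexbox order property below places the sidebar first visually, but WAVE’s no-styles view (and a screen reader following source order) encounters the main content first.

```html
<!-- Hypothetical layout: CSS reorders the columns visually, but the read
     order in a no-styles view follows the source order (main, then aside). -->
<div style="display: flex;">
  <main style="order: 2;">
    <h1>Interlibrary Loan</h1>
    <p>Request items the library does not own.</p>
  </main>
  <aside style="order: 1;">
    <h2>Related Links</h2>
  </aside>
</div>
```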

We used our work computers’ grayscale color filter combined with WAVE to identify whether elements on our web pages had enough contrast—relevant to WCAG sections 1.4.3, 1.4.6, and 1.4.11. Grayscale color filters allowed the auditors to visually “see” web pages in a way that somewhat mimics the experience of users who cannot see color; if an element “disappeared” in grayscale, then that element had insufficient color contrast and would need to be visually redesigned for usability.
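
The colors below are invented rather than taken from our site, but they show the kind of element that “disappears” in grayscale; the embedded style rule is one way to approximate the operating-system color filter we used.

```html
<!-- Hypothetical low-contrast element: this medium green on white looks
     readable in color but is roughly a 2.8:1 ratio, below the 4.5:1
     minimum of WCAG 1.4.3, and it washes out in grayscale. -->
<p style="color: #4caf50; background-color: #ffffff;">
  Library hours may change during finals week.
</p>

<!-- One way to preview a page in grayscale, similar to the color filter
     we enabled on our work computers. -->
<style>
  html { filter: grayscale(100%); }
</style>
```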

Finally, we used the World Wide Web Consortium’s Nu HTML checker (https://validator.w3.org/nu/) to validate current HTML code against accepted use—the scope of section 4.1.1 of WCAG 2.1. This section of WCAG 2.1 is very detailed, and it is outside of the scope of this article to provide a more comprehensive explanation about it or the Nu HTML checker. However, because of the tool’s comprehensiveness, we decided to use it to reevaluate the website after we had fixed the issues identified during the first audit. Libraries with personnel who understand web code might consider using this tool more extensively to identify problems on a page-by-page basis.
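
For readers who have not used the validator, this invented fragment shows the kinds of problems it reports within the scope of 4.1.1; the page content is hypothetical.

```html
<!-- Duplicate id values: ids must be unique within a page, and the
     validator flags the second occurrence. -->
<section id="hours">Fall semester hours</section>
<section id="hours">Summer hours</section>

<!-- Improper nesting: the em element is closed after its parent p, which
     the validator also reports. -->
<p>Renewals are <em>not automatic.</p></em>
```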

As a side note, we did not find a URL-based validator for SVG markup. While we use SVG markup on a few of our web pages, we did not have the appropriate form of SVG to use the existing non-URL-based validators. However, we learned that most of the material generated in SVG markup (our building maps) did not comply with WCAG 2.1 standards, so we evaluated whether we needed that markup or could use a more accessible format for that content.
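
We replaced rather than repaired most of this content, but for context, here is a hedged sketch (not our actual map markup) of the kind of text alternative an inline SVG map would need in order to have an accessible name at all.

```html
<!-- Hypothetical inline SVG with an accessible name: the role and
     aria-labelledby attributes point assistive technology to the title.
     Our building maps lacked any equivalent text alternative. -->
<svg role="img" aria-labelledby="floor2-title" viewBox="0 0 400 300">
  <title id="floor2-title">Second floor map showing quiet study rooms 201 through 210</title>
  <rect x="10" y="10" width="120" height="80" fill="#cccccc" />
</svg>
```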

Process for Auditing

We designed our process mostly around monetary and personnel constraints, with an open-ended timeframe for completion (see table 1). While not every institution has the same constraints, it may help to compare yours to ours:

  • Money: it cost us zero dollars to do this project because we used in-house employees and free online tools.

  • Time: we had no set deadline, allowing us to work on this project between other duties. It took us over 140 hours to complete.

  • Personnel: one full-time staff member and one part-time student assistant—both of whom had other unit-related duties to do each week—worked on this project.

We began the audit in January 2020 when we received an initial list of web pages and linked PDFs from our web librarian as an Excel spreadsheet.

Table 1:

Our WCAG auditing process.

1. Preparing: Gather and organize the complete list of pages to audit.
2. Training: Learn the basics of HTML and how to read/navigate the WCAG website.
3. Auditing: Analyze web pages and web elements for compliance with WCAG standards.
4. Reporting: Organize findings and write them up in such a way as to make it easy to identify what needs changing where.

Preparing

Before we began auditing, we needed to make sure that our Excel list of web pages and linked PDFs included all web pages and PDFs, and we needed to sort them. To do this, we:

  1. Defined our scope of what web pages and PDFs to include: would we be auditing for only one department of our library, or would we be auditing the whole website? We decided to audit all web pages with our university’s URL stem, which means that we omitted all web pages hosted on vendor products, such as Springshare’s LibGuides.

  2. Checked systematically to see if the Excel list included all the web pages. We did this by starting with our homepage’s main navigation and clicking on every link, and then clicking on every link on the subsequent web pages. This took several hours, but we uncovered more than 200 web pages and PDFs not included on the initial “analytics generated” list from our web librarian.

  3. Categorized the web pages in Excel, which helped later when evaluating WCAG sections that only apply to specific web elements instead of entire web pages—such as forms and images. We added columns in our Excel spreadsheet for the following:

    1. Contains videos

    2. Contains forms

    3. Contains images

    4. Contains table(s)

    5. Contains elements with unknown accessibility

At this point, we asked our web librarian to delay making any changes to the website until we had finished the audit because changes in content or addition of new web pages would require us to backtrack and audit them again. Unfortunately, because of the size of our website and the fact that many people can edit various web pages, we had to backtrack a few times as employees changed or added web pages.

Training

As we began auditing the web pages and PDFs for compliance with the WCAG sections, I trained the student assistant on the basics of HTML markup and on how to read and navigate the WCAG website. I trained the student assistant section-by-section, which allowed us to focus on specific elements one at a time. For example, I asked the student assistant what they knew about structural markup—section 1.3.1 Info and Relationships; they had little prior knowledge, so we began working through each technique and failure. I referred heavily to w3schools (https://www.w3schools.com) during my explanations, and I would recommend it as a resource for those needing to teach themselves. Because I understood WCAG, having worked with version 2.0 previously, and the student assistant had never worked with it, I coached them through auditing the first one or two web pages per standard. Later, I spot-checked their work after they finished auditing a section.

The WCAG website’s short “how to meet [this section]” pages give a good overview of each section. I encouraged the student assistant to take note of the “techniques” prefixes within each section, since the WCAG website categorizes pass or fail techniques based on markup and file type. Also, the in-depth web pages for each technique give good examples of what they are and are not talking about, which made it easier for us to audit standards we had not worked with before.
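
As an example of the kind of technique-and-failure pair we worked through for 1.3.1 Info and Relationships, here is an invented form fragment: in the first version the relationship between the prompt and the input exists only visually, while the second expresses it in the markup so assistive technology can announce it.

```html
<!-- Failure pattern for 1.3.1: the prompt and the input are only related
     by their visual placement on the page. -->
<p>Email address</p>
<input type="text" name="email-unlabeled" />

<!-- Technique pattern: the label element ties the prompt to the input in
     the markup itself. -->
<label for="email">Email address</label>
<input type="text" id="email" name="email" />
```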

Auditing

Once we completed some initial training, we began auditing. To make auditing simpler, we broke it into four phases:

  1. Audit for simple WCAG sections: These sections generally applied to very specific elements on our web pages—such as videos or images—and required no prior knowledge of code. For us, this included all of section 1.2 Time-based Media and several other media- and image-related sections, such as 1.4.1 Use of Color and 1.4.7 Low or No Background Audio.

  2. Audit for highly repetitive sections: Because our university uses a system-wide template and backend content management system, some issues appeared on every page, which reduced the number of web pages we needed to check when auditing sections that applied to all pages (a sketch of two such template-level checks appears after this list). These included sections such as

    • 1.4.4 Resize text

    • 1.4.10 Reflow

    • 2.4.1 Bypass Blocks

    • 3.1.1 Language of Page

  3. Audit for tedious sections: These sections required some knowledge of web code and could be different on every web page, so we had to systematically check more than 300 web pages and PDFs for these sections. Generally, the WCAG techniques pages had sufficient examples for us to understand them without having to refer to other websites. For us, many sections fell under this phase including 1.3.3 Sensory Characteristics and 2.1.1 Keyboard.

  4. Audit for difficult sections: Technical reasons made these sections more difficult to audit. This included section 4.1.1 Parsing, which we had to postpone because the tool we used flagged a lot of code we had already identified for review, so auditing this section before making other changes would have been redundant.
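
As referenced in phase 2, here is a hedged sketch of two template-level items we could verify once and then treat as settled for every page built from the shared template: the page language behind 3.1.1 and the skip link behind 2.4.1 Bypass Blocks. The markup is illustrative, not our university’s actual template.

```html
<!-- Hypothetical template-level markup. Because our university template is
     shared, a pass or fail here applies to every page built from it. -->
<html lang="en"> <!-- 3.1.1 Language of Page -->
  <body>
    <!-- 2.4.1 Bypass Blocks: a skip link lets keyboard and screen reader
         users jump past the navigation repeated on every page. -->
    <a href="#main-content" class="skip-link">Skip to main content</a>
    <nav>
      <!-- site-wide navigation repeated on every page -->
    </nav>
    <main id="main-content">
      <!-- page-specific content -->
    </main>
  </body>
</html>
```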

While all sections needed to be evaluated, we found that they generally did not require auditing in a specific order. Additionally, because all sections intersected with one another at some level, what we learned on one section made it easier to audit other sections.

We used these phases as guidelines and had no specific end date because we audited while working on our other job responsibilities. However, audit phases 1 and 2 combined took about half as much time to audit as phases 3 and 4 combined. Phase 3 was especially slow to audit because it involved highly repetitive motions of copying and pasting web URLs, right-clicking in the browser window to bring up the inspect element, and then scrolling through the code to evaluate it—for each web page and PDF. After a while, we felt pain in our wrists and shoulders because of the repetitive motions. For the sake of well-being, I highly recommend building in periodic breaks during that phase to alleviate carpal-tunnel pain.

Reporting

During the audit, we wrote up our findings in a Word document organized by WCAG 2.1 section—see the appendix for examples that relate to our UX web pages. This allowed us to not only keep track of what we had done but also to keep things organized for our web librarian. Report lengths will vary depending on the number of web pages and problems; our report ran to about thirty-five single-spaced pages, including front material discussing the need for such an audit.

In each section of the report, we summarized the problems that applied to many or all pages, and then we reported problems that uniquely impacted a specific page in a specific way. Because the noncompliance problems could be summarized by WCAG standard, and we had over 300 web pages, we decided to keep the standard-by-standard organization for the final report. For each standard, we organized problems into three categories: Global, Semi-Global, and Page Specific. We did not have many global issues, since few problems applied to the entire website. We had many semi-global issues that affected sets of web pages, typically those authored at the same time by one person, so issues transferred from one page to the next. Finally, we also had many page-specific issues, as employees often alter web pages and add new one-off web pages.

We do not claim to have identified every non-compliant issue on our website, but patterns emerged from the hundreds of unique compliance issues. In our audit report, we summarized these major patterns with the goal that our web librarian would use them to create better guidelines for our overall website. We hope that good guidelines will make it easier to create accessible web pages from the start and make it easier to communicate the importance of accessibility to skeptical colleagues.

Reflections and Recommendations

Even with the barriers that we faced, we still uncovered hundreds of accessibility problems with our website. These problems ranged from mild inconveniences—decorative images having generic alt text—to major accessibility barriers—library maps unreadable by screen readers. However, we could solve many of these problems at the local level without taking them up with university webmasters. As of the writing of this article, the library has solved many problems at the local level and addressed recurring problems on a case-by-case basis as employees continue to create new content for the website.

We started this project before COVID-19 shut-downs and completed it during a state-wide work-from-home order, but the COVID-19 pandemic had surprisingly little impact on this project. As we had no dedicated budget, we faced no worries about budget shortfalls. Also, early on our organization assured everyone that we would still have jobs, at least through summer. If anything, because most of my job involves gathering user data within our physical spaces, I had more time to dedicate to this project while under the work-from-home order because I could not be on campus conducting user testing. All of this auditing could be completed off-campus and without access to the content management system used to create and host the website. So we audited from our kitchen tables with, mostly, as much ease as if we had been on campus.

We had no deadline to finish the project, but we learned some lessons that sped up the process:

  1. We asked our web librarian not to change things during the audit. But we should have asked everyone not to change things during the audit or to notify us beforehand.

  2. We categorized our web pages so we could skim them when auditing for standards that apply to specific web elements—such as videos.

  3. We read WCAG’s own detailed explanations about “sufficient techniques” and “failures” before auditing for a specific standard. This saved time by heading off potential misunderstandings.

  4. We audited in order of familiarity with the standards. This saved time by letting us gradually learn more complex concepts rather than jumping in all at once.

Beyond these successes, though, I would do a few things differently if I were starting the audit now. First, I would spend more time at the beginning training the student assistant on basic web coding, with refreshers as we audited specific sections. While it might have added roughly 2–3 weeks to the project, I would have required the student assistant to complete several w3schools HTML tutorials:
  • Six general HTML tutorials: Basic, Elements, Attributes, Headings, Images, and Tables.

  • All five HTML Forms tutorials.

We did not need in-depth knowledge of CSS, but we did encounter some “style” code intermixed with HTML, so I found myself explaining how the markup interacted.

Second, I would form a better plan for retesting and follow-up. As it stands, we simply handed off the report to our website librarian with no solid plans to implement the necessary changes. This was certainly an oversight on our part, especially in light of the number of problems we had across our website. My unit implemented changes on pages that we can edit (see the appendix for descriptions of problems and how we fixed them); however, I would recommend assigning responsibility for who will make which changes prior to starting the project.

Third, I would pace myself and any assistants better. The repetitive motions took a toll on our wrists, backs, and eyes after a few weeks of auditing, so we should follow ergonomics best practices in the future. Our institution’s occupational safety group recommends:

  1. Take short breaks every 20 minutes and longer breaks every hour. These breaks, which include doing something other than continuing to sit at our desks, reduce strain overall.

  2. Keep your keyboard at the appropriate ergonomic height for typing (arms at 90° angles, not raised).

  3. Use a rest for your wrist when using a mouse to reduce overall fatigue.

  4. Sit in an appropriately sized office chair to reduce shoulder and back strain.

At my office, I use a fully customized ergonomic workstation; however, I failed to model good habits with taking regular breaks to reduce eyestrain. The student assistant, while working in the office, used a non-adjustable office chair at a low desk on a laptop computer with only the built-in keyboard and trackpad. After using a non-ergonomic setup at home during COVID-19 work-from-home orders, I realized I should have provided the student assistant with at least an external keyboard and mouse which they could adjust for better ergonomics while auditing.

Conclusion

While we built our process out of specific constraints, we went from needing an accessibility audit to having a list of specific problems to fix. Auditing for accessibility does not need to be mired down by organizational barriers, nor does it need to be relegated to a wish list of good ideas to implement sometime in the future. Libraries can complete this kind of audit with no dedicated budget, so long as the organization agrees to let personnel work on it alongside their normal duties. Libraries can identify problems as a sequence of small projects based on individual WCAG standards. While the process could look different at every library, I hope that by sharing our process you too can bridge the gap between the need for change and identifying concrete places to begin that change.

Appendix: Example Results from the WCAG Audit

1.1.1 Non-Text Content (Level A)

The Points of Failure

Semi-Global Problem: The following web pages have images that either lack an accessible text alternative or are decorative but not marked up in such a way that assistive technology can ignore them:

Table 2:

Example of our results from the WCAG audit. This table relates to Section 1.1.1 Non-Text Content (Level A)

Page URL: https://www.depts.ttu.edu/library/user-experience/index.php and https://www.depts.ttu.edu/library/user-experience/
Reason it fails: Content contained in the first image has no accessible text alternative provided. The second, decorative image is not marked up in such a way that assistive technologies can ignore it.

Page URL: https://www.depts.ttu.edu/library/user-experience/fall2018_survey.php
Reason it fails: Content contained in the images has no accessible text alternative provided (see the fall 2017 survey for an example of using alt text to provide an acceptable text alternative).

Page URL: https://www.depts.ttu.edu/library/user-experience/love_letters_results_2019.php
Reason it fails: Content contained within the chart and graph images has no accessible text alternative provided. The topmost decorative image is not marked up in such a way that assistive technology can ignore it.

Page URL: https://www.depts.ttu.edu/library/user-experience/studentboard.php
Reason it fails: If the image is non-decorative, its alt text does not convey the main information the image does (specifically the number of advisors, their identities, etc.). If the image is decorative, it is not marked up in such a way that assistive technology can ignore it.

Page URL: https://www.depts.ttu.edu/library/user-experience/ux_cafe.php
Reason it fails: All PDF reports listed on this page have unknown accessibility.

How We Solved It

In order for these web pages to pass this section of WCAG, all non-text content should have alt text that accurately describes it. If the content is purely decorative, then it should be marked up in a way that lets a screen reader skip it.

We reviewed all non-text content on the pages listed above and added alt text where necessary to provide a text alternative that presents the same information as the content. For purely decorative images, we set the alt attribute to the null (empty) value.
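
A hedged sketch of the two patterns we applied follows; the file names and alt text are invented rather than copied from our pages.

```html
<!-- Informative image: the alt text presents the same information the
     image does (hypothetical example). -->
<img src="fall2018-survey-chart.png"
     alt="Bar chart of responses to the fall 2018 survey question about study space use" />

<!-- Purely decorative image: the null (empty) alt attribute tells screen
     readers to skip it. -->
<img src="decorative-divider.png" alt="" />
```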

3.1.2 Language of Parts (Level AA)

The Points of Failure

Semi-Global Problem: PDFs currently fail this standard because they lack appropriate tagging. Once the PDFs are tagged correctly, screen readers will be able to identify the language they should use to read different parts of the documents aloud.

How We Solved It

We retagged all of our PDFs using the accessibility feature in Adobe Acrobat and manually checked for correct read order.
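
Although our failures under this criterion involved PDF tagging, the same requirement applies to web pages; the invented fragment below shows the HTML analogue, where a passage in another language carries its own lang attribute so screen readers switch pronunciation.

```html
<!-- Hypothetical web-page analogue of 3.1.2 Language of Parts: the page is
     in English, and the Spanish phrase is marked so screen readers change
     pronunciation for it. -->
<p>
  Our borrowing guide is also available in Spanish:
  <span lang="es">Guía de préstamo de la biblioteca</span>.
</p>
```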

References

Alsaeedi, A. (2020). Comparing web accessibility evaluation tools and evaluating the accessibility of webpages: Proposed frameworks. Information, 11, 40. https://doi.org/10.3390/info11010040

Brajnik, G., Yesilada, Y., & Harper, S. (2011). The expertise effect on web accessibility evaluation methods. Human-Computer Interaction, 26(3), 246–283. https://doi.org/10.1080/07370024.2011.601670

Comeaux, D. J. (2008). Usability studies and user-centered design in digital libraries. Journal of Web Librarianship, 2(2–3), 457–475. https://doi.org/10.1080/19322900802190696

DeLancey, L. (2015). Assessing the accuracy of vendor-supplied accessibility documentation. Library Hi Tech, 33(1), 103–113. https://doi.org/10.1108/LHT-08-2014-0077

Godfrey, K. (2015). Creating a culture of usability. Weave, 1(3). https://doi.org/10.3998/weave.12535642.0001.301

Hong, S., Katerattanakul, P., & Lee, D. (2008). Evaluating government website accessibility: Software tool vs human experts. Management Research News, 31(1), 27–40. https://doi.org/10.1108/01409170810845930

Kimura, A. K. (2018). Defining, evaluating, and achieving accessible library resources: A review of theories and methods. Reference Services Review, 46(3), 425–438. https://doi.org/10.1108/RSR-03-2018-0040

Kumar, K. L., & Owston, R. (2016). Evaluating e-learning accessibility by automated and student-centered methods. Educational Technology Research and Development, 64(2), 263–283. https://doi.org/10.1007/s11423-015-9413-6

Logan, J., & Everall, K. (2019). First things first: Exploring Maslow’s hierarchy as a service prioritization framework. Weave, 2(2). https://doi.org/10.3998/weave.12535642.0002.201

Luca, E., & Narayan, B. (2016). Signage by design: A design-thinking approach to library user experience. Weave, 1(5). https://doi.org/10.3998/weave.12535642.0001.501

Ng, C. (2017). A practical guide to improving web accessibility. Weave, 1(7). https://doi.org/10.3998/weave.12535642.0001.701

Nielsen, J. (1994, November 1). How to conduct a heuristic evaluation. Nielsen Norman Group. https://www.nngroup.com/articles/how-to-conduct-a-heuristic-evaluation/

Oud, J. (2016). Accessibility of vendor-created database tutorials for people with disabilities. Information Technology and Libraries, 35(4), 7–18. https://doi.org/10.6017/ital.v35i4.9469

Peters, C., & Bradbard, D. A. (2010). Web accessibility: An introduction and ethical implications. Journal of Information, Communication and Ethics in Society, 8(2), 206–232. https://doi.org/10.1108/14779961011041757

Rosen, S. (2017). Accessibility for justice: Accessibility as a tool for promoting justice in librarianship. In The Library With The Lead Pipe. http://www.inthelibrarywiththeleadpipe.org/2017/accessibility-for-justice/

Settlement between Penn State University and National Federation of the Blind, OCR Case 03-11-2020 (2012). https://accessibility.psu.edu/nfbpsusettlement/

Spina, C. (2019). WCAG 2.1 and the current state of web accessibility in libraries. Weave, 2(2). https://doi.org/10.3998/weave.12535642.0002.202

Stewart, R., Narendra, V., & Schmetzke, A. (2005). Accessibility and usability of online library databases. Library Hi Tech, 23(2), 265–286. https://doi.org/10.1108/07378830510605205

United States District Court for the District of New Jersey, Civil Action CV 15-3656 (JEI)(JS) (2015). http://www.atlantic.edu/documents/nfb_lanzailotti_atlantic_cape_consent_decree.pdf

University of Montana. Resolution Agreement, OCR Reference 10122118 (2014). http://www.umt.edu/accessibility/docs/AgreementResolution_March_7_2014.pdf

University of Phoenix. Resolution Agreement, OCR Case 08-15-2040 (2015). https://www2.ed.gov/about/offices/list/ocr/docs/investigations/more/08152040-b.pdf