07/14/2023
[Image: Digital illustration of a robot analyzing a computer screen.]

Tech Notes

The term “Digital Accessibility” may seem daunting at first. For many, accessibility is an entirely new concept, so when faced with a situation where you have to apply it, you may be at a loss for where to start. What is the best way to learn? While doing general research to familiarize yourself with this new concept, you will most likely encounter advertisements for several “automated testing tools,” and you might think your predicament has a quick fix. That, however, is a false comfort. Looking closer, you’ll notice that automated testing tools make many claims about the accessibility issues they can help identify and fix, but there are also significant gaps in those claims, as various communities of assistive technology users would point out. The devil is in the details, so let’s dive into how automated testing tools can indeed help you, and why they should not be treated as automated fixes.

The key word to focus on here is “automated.” For automation to work, there must be a set of instructions that evaluate to “True” or “False.” For each accessibility issue it documents, an automated testing tool compares what is on the web page against its rule set: if the markup satisfies the rule, the check passes; if not, it fails. Because of this, there is no room for variation or nuance, and the tools cannot apply the context necessary to truly evaluate different web pages. Digital content is designed with such a wide variety of options and layouts that reducing it to a fixed set of rules is impossible, which is why manual testing is necessary to determine whether all users can actually use the content in an inclusive, contextualized way.
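As a minimal sketch of that limitation, consider an image check (the file names here are hypothetical). A typical rule only asks “does an alt attribute exist?”, so both snippets below pass, even though only one is useful:

    <!-- Passes the automated "alt attribute present" rule and helps the user. -->
    <img src="team.jpg" alt="Five coworkers smiling around a conference table">

    <!-- Also passes the very same rule, but tells the user nothing. -->
    <img src="team.jpg" alt="IMG_4032.jpg">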

For that, the World Wide Web Consortium (W3C) has created the Web Content Accessibility Guidelines (WCAG), which are accepted internationally as the standard for making an interface accessible. While not technically law, WCAG gives a set of guidelines and conformance levels that designers and developers should apply to make their content accessible. But how can automated testing tools help with that, and where do they fall short?

What Works, and What Doesn’t Work?

Here are some key accessibility issues that automated testing tools promise to help with, along with their limitations. Note that this is a brief illustration, not a detailed review.

Webpage Language

When a screen reader user visits a web page, the content will be read in the language they have set in their screen reader settings. If anything on the webpage is in a different language from that selected language, the screen reader’s Text to Speech (TTS) engine will pronounce it differently, which may be confusing. If no language is set on the page, an automated tool will flag that as an issue. However, if the wrong language is set, the automated test will miss it.
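As a sketch, the page language is declared with the lang attribute on the html element:

    <!-- No language declared: an automated tool will flag this. -->
    <html>

    <!-- Wrong language declared on an English page: this passes automated
         checks, but a screen reader may read the English text with French
         pronunciation rules. -->
    <html lang="fr">

    <!-- Correct declaration for an English-language page. -->
    <html lang="en">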

Title

When you open a webpage, its title appears at the top of the browser tab. Having a meaningful, descriptive title is even more important for screen reader users because, without one, they have to enter each tab and navigate its content to figure out what it is. Automated testing tools can quickly flag a missing title as an issue, but it is up to the creator to determine whether the title is descriptive and accurate enough.
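As an illustration (the page names below are hypothetical), the title lives in the head of the HTML document:

    <!-- Flagged by automated tools: the title is missing or empty. -->
    <title></title>

    <!-- Passes automated checks, but tells the user nothing. -->
    <title>Untitled Page</title>

    <!-- Also passes, and is genuinely descriptive; only a human can judge that. -->
    <title>Order Confirmation – Acme Office Supplies</title>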

Alternative Text

Alt text is a description creators can add to an image as an HTML attribute, so those who can’t see the image can understand what is there. When testing, it can be daunting to check the alt text of every single image, and automated testing can help with that, but only by identifying whether the alt text exists. It cannot tell whether the alt text is descriptive enough, or whether an image is purely decorative and should instead carry empty alt text so screen readers skip it. So it helps with a quick evaluation, but further investigation is required to catch real issues. You can read more about alternative text here.
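A sketch of the cases involved, with hypothetical file names:

    <!-- Flagged: no alt attribute at all. -->
    <img src="chart.png">

    <!-- Passes automated checks, but the description is not useful. -->
    <img src="chart.png" alt="chart">

    <!-- Informative image with a meaningful description. -->
    <img src="chart.png" alt="Bar chart showing sales doubling from 2021 to 2022">

    <!-- Decorative image: empty alt text tells screen readers to skip it. -->
    <img src="flourish.png" alt="">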

Color Contrast

If a web page has poor color contrast, it creates an unpleasant experience for everyone, but for those who have color blindness or low vision, it can make the page barely usable at all. Automated testing tools can quickly compare adjacent colors and determine whether the contrast meets the WCAG thresholds (level AA requires a ratio of at least 4.5:1 for normal text and 3:1 for large text). This is a very fast way to audit an entire website and get results quickly. Some tools can even determine whether the font size is large enough and matches the WCAG guidance on font size and color contrast. However, there are times when a color combination technically passes an automated contrast check yet remains visually difficult to read, so a follow-up visual inspection is always worthwhile.
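Tools compute the WCAG contrast ratio, (L1 + 0.05) / (L2 + 0.05), where L1 and L2 are the relative luminances of the lighter and darker colors. A rough sketch with illustrative color values:

    <!-- About 2.3:1 against white: fails WCAG AA and will be flagged. -->
    <p style="color: #aaaaaa; background-color: #ffffff;">Low-contrast text</p>

    <!-- About 4.5:1 against white: passes AA, but a human should still
         confirm it reads comfortably in context. -->
    <p style="color: #767676; background-color: #ffffff;">Passing text</p>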

Broken Links

All websites will encounter broken links at some point in their lifetime, and automated testing tools can quickly identify and flag them. An added feature of some tools is identifying redundant link text, which is best practice to remove for a good user experience. If a web page has ten “Click here” or “Download Now” links, it can be confusing for a screen reader user who has no other context. But when it comes to descriptive titles, headings, links, and so on, automated testing tools cannot judge whether a description is good enough and conveys the correct information.
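For example (the URL here is hypothetical), a link checker can confirm that a destination resolves, but judging the link text is a human task:

    <!-- Resolves fine, so it passes a broken-link check, yet a screen
         reader's link list will just read "Click here" with no context. -->
    <a href="/reports/2023-annual.pdf">Click here</a>

    <!-- Descriptive link text that makes sense even out of context. -->
    <a href="/reports/2023-annual.pdf">Download the 2023 annual report (PDF)</a>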

HTML Tags

For an assistive technology user, any content with an improper HTML tag produces the wrong feedback, which can cause confusion. For example, assistive technology users rely on headings as a quick way to navigate a large amount of content and jump straight to what they want to read. Stylistically, wrapping an entire page in heading tags may be a quick way to center and bold text, but a screen reader will then announce “heading” on every line, destroying that quick navigation. That is just one of many shortcuts that produce a quick visual result while feeding incorrect information to assistive technology, such as listing items with paragraph tags instead of an ordered or unordered list. Automated testing will not catch these types of issues, and when left unchecked, they can create a very bad user experience.
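A minimal sketch of the heading misuse described above, alongside proper markup:

    <!-- Visual shortcut: big bold lines, but a screen reader announces
         "heading" on each one and heading navigation becomes meaningless. -->
    <h2>Milk</h2>
    <h2>Eggs</h2>
    <h2>Bread</h2>

    <!-- Proper structure: one real heading, then a real list that screen
         readers announce as "list, three items." -->
    <h2>Shopping List</h2>
    <ul>
      <li>Milk</li>
      <li>Eggs</li>
      <li>Bread</li>
    </ul>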

Tab Order

When using a keyboard or switch control, content is navigated in a linear reading order. For example, if you visit a product page and start pressing the “Tab” key, you should see the focus indicator move across the page in a sensible order. There are times when a developer needs to adjust the tab order so that the content is understandable and easy to access, but an automated testing tool cannot determine what the correct tab order for a page should be. You can read more about Focus Order here.
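As a sketch of why this needs human judgment, positive tabindex values override the natural focus order, and no automated rule can say whether the result matches the visual layout:

    <!-- Focus jumps to the email field first, then back up to the name
         field. A tool sees valid markup; only a person can tell whether
         that order makes sense on this particular page. -->
    <label>Name <input type="text" tabindex="2"></label>
    <label>Email <input type="text" tabindex="1"></label>
    <button tabindex="3">Submit</button>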

Name, Role, Value

When using assistive technology, it’s critical for every interactive element to have an accessible name, role, and value that provide key information to the user. Automated testing tools can only determine whether these properties are present, not whether the information is correct or accurate. You can read more about Name, Role, Value here.
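A brief sketch: the ARIA label below overrides the button’s visible text, which an automated tool cannot recognize as wrong:

    <!-- Has an accessible name, so automated checks pass, but a screen
         reader announces "Close, button" for a button that submits. -->
    <button aria-label="Close">Submit order</button>

    <!-- Native elements supply the role and value; the label gives the
         name, and a human confirms it matches what the control does. -->
    <label for="qty">Quantity</label>
    <input type="number" id="qty" value="1">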

The Happy Path

For the best user experience, design must be simple and intuitive. The “happy path” is the path a user takes from the start of a task to its end in as few steps as possible, in a way that is understandable and easy to follow. Too many steps in a process complicate things at the expense of the user experience, and using assistive technology already takes effort and time. Automated testing tools cannot determine what the “happy path” is or what the best user experience looks like; real-world testing and customer feedback are the only ways to find out.

Final Verdict

Overall, automated testing tools are meant to assist, not to lead. These tools can only identify a small percentage of possible accessibility issues, so they are great to use as a starting point or as a final spot check. But to ensure a holistically accessible experience for your users, it’s critical to work with skilled, knowledgeable accessibility testers who can test manually, catch issues, and provide best practices and suggestions for your products. Automated testing tools are great to have in your toolbox, as long as you understand that they are not meant to be relied on alone: they excel at quick checks for missing features but fall short on anything that requires context about your website or mobile app.

About AFB Talent Lab

The AFB Talent Lab aims to meet the accessibility needs of the tech industry – and millions of people living with disabilities – through a unique combination of hands-on training, mentorship, and consulting services, created and developed by our own digital inclusion experts. To learn more about our internship and apprenticeship programs or our client services, please visit our website at www.afb.org/talentlab.