Testing methods
Accessibility testing involves two methods: automated and manual. Manual testing can be complex and requires more than one technique for accurate, comprehensive testing, but we'll get to that in a bit.
What is automated testing?
Automated accessibility testing is the process of using software to determine a site or app’s accessibility without human intervention. Automated testing compares the target code with clearly defined pass/fail criteria, allowing for a quick and reliable estimation of its WCAG compliance.
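Many teams run this kind of scan with an open-source engine such as axe-core. The sketch below is a minimal, hypothetical example (assuming the axe-core package is available and the page under test is loaded in the browser); it is one possible setup, not the only way to run an automated scan.

```typescript
// Minimal sketch: run an automated accessibility scan of the current page
// with axe-core (assumes the axe-core package is available as a module).
import axe from "axe-core";

async function runAutomatedScan(): Promise<void> {
  // axe.run() checks the document against axe-core's rule set, much of which
  // maps to WCAG success criteria, and returns pass/fail results.
  const results = await axe.run(document);

  // Each violation names the failed rule, its impact, and the offending nodes.
  for (const violation of results.violations) {
    console.log(`${violation.id} (${violation.impact}): ${violation.description}`);
    for (const node of violation.nodes) {
      console.log(`  affected element: ${node.html}`);
    }
  }
}

runAutomatedScan();
```

Results like these are a starting point: a clean automated scan does not mean the page is fully accessible, which is exactly why the manual techniques below matter.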
What is manual testing?
Manual accessibility testing is the process of evaluating the accessibility of a website or application by directly interacting with its interface via hardware input devices like a keyboard or mouse. This contrasts with automated testing, which does not require a human to simulate user input. Both manual and automated methods are required to ensure truly comprehensive and accurate results, as neither method can identify all potential issues on its own.
Why is manual testing important?
- Manual testing, which involves interacting directly with page content, complements automated testing by identifying issues that automated scans miss.
- At best, automated scans can detect about 40% of all accessibility issues. This number will inevitably increase as technology improves, but for now it's an accepted best practice to complement automated testing with thorough manual testing techniques.
- By simulating user interactions, manual testing provides insights into how people with disabilities navigate and interact with a website.
What are the types of manual testing?
There are three primary types of manual testing, each of which is equally important. When performing manual accessibility testing, it is strongly recommended that devs use all three of the following techniques:
- Visual Inspection
- Keyboard-only testing (see the sketch below)
- Screen reader testing
In-depth procedures can be found in the “How to test” section of this site.
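For a flavor of what keyboard-only testing exercises, the hypothetical helper below lists the elements a Tab key press should be able to reach, in DOM order. It is only a rough aid, not a substitute for the procedures in the "How to test" section.

```typescript
// Hypothetical helper for keyboard-only testing: list elements that should be
// reachable with the Tab key, in DOM order. Real tab order can differ (positive
// tabindex values, CSS reordering), so treat this as a starting point only.
const FOCUSABLE_SELECTOR = [
  "a[href]",
  "button",
  "input",
  "select",
  "textarea",
  "[tabindex]:not([tabindex='-1'])",
].join(", ");

function listFocusableElements(root: ParentNode = document): void {
  const elements = root.querySelectorAll<HTMLElement>(FOCUSABLE_SELECTOR);
  elements.forEach((el, index) => {
    const label = el.textContent?.trim() || el.getAttribute("aria-label") || "(no text)";
    console.log(`${index + 1}. <${el.tagName.toLowerCase()}> ${label}`);
  });
}

listFocusableElements();
```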
Screen readers? What do they actually do?
- Screen readers ~~read~~ the screen.
- Parse underlying code in order, left to right & top to bottom, once the DOM loads.
- Use text-to-speech technology to announce the content of specific HTML elements and attributes.
- Tag examples: <h1>, <p>, <a>, <li>, <img>.
- Attribute examples: title, alt, label, aria-label, aria-expanded.
- Most screen readers by default will ignore formatting tags like <strong>, <em>, <u>, <s>, <del>, and <ins> and will read the formatted text as plain text. This is of significant semantic importance. For example, any site visitor using a screen reader on this page will hear "Screen readers read the screen" when they reach the top bullet point of this list. Sighted users can see that the word "read" has a strikethrough and infer that negation is implied, but that stylistic emphasis won't be communicated to screen reader users. This information is more relevant to content editors, but it's good for you as a dev to know as well.
- Announce elements moved off screen with positioning (the "visually hidden" technique), but not elements hidden with CSS rules like display: none or visibility: hidden (see the sketch after this list).
- Allow the user to move forward & backward between elements and to interact with them.
- List important page elements, like headings, links, and landmarks, in a dedicated menu (such as VoiceOver's rotor).
- Not just for the web. Screen reader software installed at the operating system level can be used to announce other types of document content in other apps, such as PDF or Office documents.
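Since the off-screen vs. hidden distinction trips people up, here is a small, hypothetical sketch of the difference (the text and styles are made up for illustration):

```typescript
// Illustration of the bullet above: off-screen positioning keeps content in the
// accessibility tree, while display: none (or visibility: hidden) removes it.

// Announced by screen readers: moved off screen with positioning, the classic
// "visually hidden" pattern often used for skip links and extra context.
const hiddenHint = document.createElement("span");
hiddenHint.textContent = "Opens in a new window";
Object.assign(hiddenHint.style, {
  position: "absolute",
  left: "-9999px",
});

// Not announced: display: none takes the element out of the accessibility tree
// for everyone, screen reader users included.
const removedNote = document.createElement("span");
removedNote.textContent = "You should never hear this";
removedNote.style.display = "none";

document.body.append(hiddenHint, removedNote);
```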