
How to Do Accessibility Testing: A Complete Guide for QA Professionals

When people hear “accessibility,” they often think of screen readers or users with visual impairments. However, this limited view significantly underrepresents the true scope and importance of digital accessibility.
As QA professionals, it’s crucial to move beyond common misconceptions and understand what accessibility entails, because only then can we test for it effectively.

New to accessibility testing? Check out our first blog: Everything You Need to Know About Accessibility Testing – a beginner-friendly introduction covering the what, why, and how of accessibility.

Common Misconceptions About Accessibility

One of the most common misconceptions about accessibility is that it only applies to blind users or those who rely on screen readers. In truth, accessibility ensures everyone’s digital experiences are usable, regardless of their permanent, temporary, or situational limitations.

Accessibility encompasses a wide spectrum of user needs, including:

  • Motor disabilities – such as users who cannot use a mouse and rely entirely on keyboard navigation or assistive devices like switch controls.
  • Cognitive impairments – including conditions like dyslexia, ADHD, learning disabilities, or memory-related challenges that affect how users process and understand content.
  • Auditory impairments – affecting users who are deaf or hard of hearing and rely on captions or transcripts to access audio content.
  • Temporary impairments – such as a broken arm, recovering from eye surgery, or trying to use a phone in a bright or noisy environment.

When accessibility efforts focus solely on one type of disability, like blindness, the result is a narrow, incomplete approach. This not only overlooks millions of users with diverse needs but also undermines the broader goal of building truly inclusive digital products.

Debunking “Accessibility = Screen Readers”

While screen readers like NVDA or JAWS are important tools, they are just one piece of the puzzle. Accessibility is not a single-feature checkbox – it’s a comprehensive user experience that ensures inclusivity for various needs. This means testing for:

  • Proper keyboard focus for those who can’t use a mouse
  • Color contrast for users with visual contrast sensitivity
  • Captions and transcripts for users with hearing impairments
  • Clear layout and structure for users with cognitive challenges

By understanding that accessibility is a broad, inclusive practice, QA professionals can better plan, test, and advocate for truly accessible digital products.

Accessibility Metrics and Measurement Models

Accessibility isn’t just a checklist – it’s a measurable quality of your digital product.
For QA teams, having defined metrics and models helps track progress, identify gaps, and ensure inclusive user experiences across devices and platforms.

Let’s break down how accessibility is measured and what “success” looks like.

Why Accessibility Needs to Be Measurable

To improve accessibility, we must first measure it. Just like Performance or Security testing, accessibility should have:

Quantifiable Outputs

Accessibility should produce data points you can track:

  • Violation counts from tools like Axe, Lighthouse, or Pa11y
  • Severity levels (critical, serious, moderate, minor)
  • Accessibility scores (e.g., Lighthouse score out of 100)
  • Pass/Fail status for defined test cases

This gives QA teams a baseline to measure progress and make accessibility part of regular reporting.
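
For example, here’s a minimal sketch of turning a scan result into trackable severity counts. The result shape is assumed from axe-core’s documented output (violations that each carry an impact level):

// Minimal sketch: summarise an accessibility scan into trackable severity counts.
// The result shape below is assumed from axe-core's documented output.
type Impact = 'critical' | 'serious' | 'moderate' | 'minor';

interface ScanResult {
  violations: Array<{ id: string; impact?: Impact | null }>;
}

function summariseViolations(result: ScanResult): Record<Impact, number> {
  const counts: Record<Impact, number> = { critical: 0, serious: 0, moderate: 0, minor: 0 };
  for (const violation of result.violations) {
    counts[violation.impact ?? 'minor'] += 1; // rules without an impact are counted as minor
  }
  return counts;
}

// Example gate: pass only when nothing critical or serious was found
function meetsThreshold(result: ScanResult): boolean {
  const counts = summariseViolations(result);
  return counts.critical === 0 && counts.serious === 0;
}

Counts like these can go straight into your regular test reports and trend dashboards.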

Repeatable Testing Processes

Just like Functional testing, accessibility testing should be:

  • Consistent across builds and environments
  • Repeatable in CI pipelines and manual test cycles
  • Automated wherever possible, but supported by structured manual testing

This allows issues to be caught early and ensures accessibility regressions are prevented during releases.

Benchmarks for Compliance

Standards like WCAG 2.1 Level AA provide the industry-accepted benchmark for accessibility. Measuring against these gives you:

  • A legal and ethical safety net
  • A quality framework for inclusive design
  • A shared goal that product, design, and QA teams can align on

With defined benchmarks, it becomes easier to decide whether a product is “accessible enough” for launch or if it needs improvement.

Defining Your Accessibility Success Criteria

It’s important to set clear, practical accessibility goals. These success criteria help QA teams know when a product is “accessible enough” for release, and where improvements are needed.
Here’s how to define what a “pass” looks like:

1. WCAG 2.1 Level AA Compliance

This is the most commonly accepted benchmark across industries. Ensuring your product meets Level AA means:

  • Sufficient color contrast
  • Clear focus indicators
  • Accessible forms, links, and media
  • Consistent navigation and predictable behavior

2. Zero Critical Issues in Automation Scans

Tools like Axe, Pa11y, or Lighthouse can flag errors. A strong success threshold is:

  • Zero critical or serious issues
  • Accepting minor issues only if justified and documented
  • Verifying that false positives are excluded

3. Full Accessibility of Key User Flows

Ensure major actions (like login, checkout, or form submission) are:

  • Fully operable with keyboard-only navigation
  • Understandable and usable via screen readers (NVDA, VoiceOver)
  • Clear in layout and labels for all assistive tech

This type of manual validation confirms what automation can’t catch.

4. Lighthouse Score Threshold

Use Google’s Lighthouse audit as a quick automated check. A good benchmark is:

  • A score of 90 or above in the Accessibility category
  • Reviewed alongside actual usability (not just the number)
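
If you want to enforce that threshold automatically, here’s a minimal sketch assuming the lighthouse and chrome-launcher npm packages and their programmatic API (the URL is a placeholder):

// Minimal sketch: run a Lighthouse accessibility audit and enforce a >= 90 threshold.
// Assumes the 'lighthouse' and 'chrome-launcher' npm packages.
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

async function checkAccessibilityScore(url: string, minScore = 0.9): Promise<void> {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  try {
    const result = await lighthouse(url, {
      port: chrome.port,
      onlyCategories: ['accessibility'],
      output: 'json',
    });
    const score = result?.lhr.categories.accessibility.score ?? 0; // reported on a 0–1 scale
    console.log(`Accessibility score: ${Math.round(score * 100)}/100`);
    if (score < minScore) {
      throw new Error(`Accessibility score below threshold (${minScore * 100})`);
    }
  } finally {
    await chrome.kill();
  }
}

// checkAccessibilityScore('https://example.com');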

Automation Accessibility Metrics

Automation tools give you fast, repeatable metrics – great for continuous testing.

Examples of Common Automation Metrics:

  • Lighthouse Accessibility Score (0 – 100): Measures issues like contrast, ARIA, label usage.
  • Axe Violations: Lists rule violations, their severity, and affected elements.
  • HTML_CodeSniffer Results: Detects WCAG failures in real time.
  • Pa11y/Accessibility Insights Reports: Useful for dashboards and trends over time.

QA Tip: Automation tools are essential, but they typically catch only around 30–40% of issues. Use them as a starting point, not a final verdict.

Manual Accessibility Metrics

Some issues require human judgment – this is where manual testing comes in.

Metric – What to check:

  • Screen Reader – Logical reading order, live region updates, meaningful labels
  • Keyboard – Tab order, focus visibility, keyboard traps
  • Contrast – WCAG-compliant contrast ratio (4.5:1 for normal text)
  • Focus Indicators – Clear visual focus on interactive elements
  • Zoom – UI integrity at 200% zoom or more
  • Errors – Forms should have clear, accessible error messages
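
For the contrast metric, the ratio itself is easy to compute. Here’s a minimal sketch of the WCAG relative-luminance and contrast-ratio formulas for spot-checking the 4.5:1 threshold:

// Minimal sketch: WCAG 2.x contrast ratio between two sRGB colours.
// A ratio of 4.5:1 or higher passes Level AA for normal text.
type RGB = [number, number, number]; // 0–255 channels

function relativeLuminance([r, g, b]: RGB): number {
  const [R, G, B] = [r, g, b].map((c) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * R + 0.7152 * G + 0.0722 * B;
}

function contrastRatio(fg: RGB, bg: RGB): number {
  const [light, dark] = [relativeLuminance(fg), relativeLuminance(bg)].sort((a, b) => b - a);
  return (light + 0.05) / (dark + 0.05);
}

// Example: dark grey text (#595959) on white gives roughly 7:1, which passes AA
console.log(contrastRatio([89, 89, 89], [255, 255, 255]).toFixed(2));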

ARIA: Making Dynamic Content Accessible

Modern web applications often rely on dynamic content – pop-ups, modals, expandable menus, tabs, and more. While these features enhance interactivity, they often break accessibility unless handled correctly. That’s where ARIA (Accessible Rich Internet Applications) comes in.

What is ARIA?

ARIA is a set of attributes added to HTML elements to make web content more accessible to users who rely on assistive technologies like screen readers. It doesn’t change the look of the page but provides extra context and behavior to help interpret interactive content.

Core ARIA Roles and Properties

Some of the most common and essential ARIA attributes include:

  • aria-label – Provides an accessible name when no visible label is available. Example: <button aria-label="Close">X</button>
  • aria-hidden – Hides content from assistive tech, useful for decorative elements. Example: <span aria-hidden="true">*</span>
  • aria-live – Announces dynamic content changes without page refresh.
    Ideal for chat updates or error messages.
  • aria-expanded – Indicates the current state (expanded/collapsed) of dropdowns or accordions. Example: <button aria-expanded="true">Menu</button>
  • aria-describedby / aria-labelledby – Links elements to their description or label for screen readers
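
For aria-live in particular, announcements only work reliably if the live region is already in the DOM before its content changes. Here’s a minimal sketch using plain DOM APIs (the element id is illustrative):

// Minimal sketch: announce a form error via a live region (plain DOM APIs).
// The region must exist before its text changes, or screen readers may not announce it.
const status = document.createElement('div');
status.id = 'form-status';              // illustrative id
status.setAttribute('role', 'status');  // implies aria-live="polite"
document.body.appendChild(status);

function announceError(message: string): void {
  // Updating the text content of an existing live region triggers the announcement
  status.textContent = message;
}

// announceError('Email address is required.');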

When to Use ARIA – and When Not To

Use ARIA only when native HTML elements can’t achieve the same functionality. For instance:

  • Use a native <button> instead of a <div role="button">
  • Use <fieldset> and <legend> for grouped form fields instead of aria-labelledby

Why? Because native elements come with built-in accessibility features, while ARIA requires you to manually recreate them, which is error-prone.

Common ARIA Anti-Patterns

Avoid these misuses that often create more harm than help:

  • Overusing aria-hidden="true" and hiding important content
  • Using ARIA to force behavior instead of fixing poor HTML structure
  • Adding ARIA without proper keyboard support
  • Using ARIA on elements that don’t need it (e.g., static content)

Building Semantic, Screen Reader-Friendly Components

To make components accessible:

  • Start with semantic HTML (<button>, <nav>, <form>, etc.)
  • Use ARIA to fill in gaps, especially in custom UI elements
  • Test regularly using screen readers (NVDA, VoiceOver) and tools like Axe or Accessibility Insights

Quick Example: Accessible Accordion

<button aria-expanded="false" aria-controls="section1">Details</button>
<div id="section1" hidden>
         <p>This is hidden content revealed when expanded.</p>
</div>
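
The markup alone isn’t enough – the script driving the accordion must keep aria-expanded and the hidden attribute in sync. Here’s a minimal sketch of that wiring, assuming the IDs from the snippet above:

// Minimal sketch: wire up the accordion button from the HTML above (plain DOM APIs).
const trigger = document.querySelector<HTMLButtonElement>('[aria-controls="section1"]');
const panel = document.getElementById('section1');

trigger?.addEventListener('click', () => {
  const expanded = trigger.getAttribute('aria-expanded') === 'true';
  trigger.setAttribute('aria-expanded', String(!expanded)); // keep state in sync for screen readers
  if (panel) panel.hidden = expanded;                        // show when collapsed, hide when expanded
});

Because the trigger is a native <button>, keyboard activation with Enter and Space comes for free – no extra key handling is needed.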

Hands-On: Accessibility Testing Step-by-Step

Accessibility testing isn’t just about running a tool and fixing the output – it’s about systematically validating how users of all abilities experience your product. Here’s how QA professionals can set up and begin effective accessibility testing.

Set Up Your Accessibility Testing Environment

Before you start testing, it’s important to prepare your system with the right tools and configurations. Accessibility issues may behave differently across devices, browsers, and platforms, so a well-rounded setup matters.

Recommended OS & Browser Combinations

  • Windows + Chrome: Great for tools like Axe DevTools, Lighthouse, and NVDA screen reader.
  • macOS + Safari: Native support for VoiceOver; essential for Apple users.
  • Linux + Firefox/Chrome: For open-source compatibility testing.
  • Mobile Testing:
    • Android: Use TalkBack and Chrome.
    • iOS: Use VoiceOver and Safari.

Tip: Test across at least one desktop and one mobile device to ensure full coverage.

Manual Testing Techniques

Manual accessibility testing complements automation scans and is essential for evaluating real user experiences. Here are the fundamental manual testing methods:

Keyboard-Only Navigation Testing

  • Ensure that every interactive element (links, buttons, form fields) is reachable and operable using only the keyboard (usually Tab, Shift + Tab, Enter, Space, arrow keys).
  • Focus should be visible and clear – users relying on keyboards need to know where they are on the page.
  • Verify logical tab order – navigation should follow a meaningful sequence.
  • Check for keyboard traps where the focus gets stuck and cannot be moved away.
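
Parts of this check can also be scripted. Here’s a minimal sketch using Playwright (an assumption – any browser automation tool works) that presses Tab repeatedly and logs where focus lands, so you can review the order and spot traps:

// Minimal sketch: walk the tab order with Playwright and log each focused element.
// Assumes the 'playwright' npm package; the URL is a placeholder.
import { chromium } from 'playwright';

async function walkTabOrder(url: string, steps = 15): Promise<void> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(url);

  for (let i = 0; i < steps; i++) {
    await page.keyboard.press('Tab');
    const focused = await page.evaluate(() => {
      const el = document.activeElement as HTMLElement | null;
      return el ? `${el.tagName.toLowerCase()}${el.id ? '#' + el.id : ''}` : 'none';
    });
    console.log(`Tab ${i + 1}: ${focused}`); // review the sequence for logical order and traps
  }

  await browser.close();
}

// walkTabOrder('https://example.com');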

Screen Reader Testing

  • Use popular screen readers to navigate and interact with your web pages.
  • Test with:
    • NVDA on Windows
    • VoiceOver on macOS/iOS
  • Listen for proper reading order, clear and descriptive alt text on images, correct use of headings, landmarks, and ARIA roles.
  • Check that form fields have associated labels and that error messages are conveyed clearly.

Color Blindness Simulation

  • Use simulators or browser extensions to view your UI through the eyes of users with different types of color vision deficiency (protanopia, deuteranopia, tritanopia, etc.).
  • Ensure color is not the sole method of conveying important information (e.g., error states or required fields).
  • Verify contrast ratios meet WCAG minimums (4.5:1 for normal text, 3:1 for large text).

Automation Testing Workflow for Accessibility

Automation tools help quickly identify many accessibility issues and fit well into your testing pipeline. Here’s a simple workflow with popular tools:

  1. Lighthouse (Chrome DevTools): Built into Chrome DevTools, Lighthouse runs fast audits to check your page’s accessibility and gives a detailed report with scores and suggestions.
  2. Axe DevTools (Browser Extension & CLI): A developer-friendly tool available as a browser extension and command-line interface. It finds detailed accessibility issues and can be integrated into automated test suites and CI pipelines.
  3. Pa11y: An open-source command-line tool to run accessibility tests on URLs or files. It’s lightweight and great for adding accessibility checks to your CI/CD workflow (see the sketch after this list).
  4. Tenon: A powerful API-based accessibility testing service that returns detailed reports, suitable for integrating automated accessibility checks into your development process.
  5. WAVE and Siteimprove
    • WAVE: Browser extension providing visual feedback by marking accessibility issues directly on the page.
    • Siteimprove: Enterprise tool offering continuous automation scanning and compliance tracking.
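
As an illustration of step 3, here’s a minimal sketch that runs Pa11y programmatically (assuming the pa11y npm package; the URL is a placeholder) and fails the CI step when errors are reported:

// Minimal sketch: run Pa11y against a URL and exit non-zero if it reports errors.
// Assumes the 'pa11y' npm package.
import pa11y from 'pa11y';

async function runPa11y(url: string): Promise<void> {
  const results = await pa11y(url);
  const errors = results.issues.filter((issue) => issue.type === 'error');

  for (const issue of errors) {
    console.error(`${issue.code}: ${issue.message} (${issue.selector})`);
  }

  if (errors.length > 0) {
    process.exitCode = 1; // fail the CI step
  }
}

// runPa11y('https://example.com');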

Common Accessibility Defects & How QA Can Catch Them

Accessibility defects can significantly impact users with disabilities. Here are some of the most frequent issues QA testers should look out for, along with practical tips on how to detect them:

Missing or Incorrect Labels

Issue: Form fields or interactive elements without proper labels confuse screen reader users.

How to catch:

  • Check that every input has an associated <label> or accessible name (via aria-label or aria-labelledby).
  • Use screen readers (NVDA, VoiceOver) to verify that labels are read correctly.
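
To surface candidates for review quickly, you can run a small heuristic query in the browser console. Here’s a minimal sketch (it flags possible gaps only and is no substitute for a screen reader pass):

// Minimal sketch: flag form controls with no obvious accessible name (heuristic only).
const controls = document.querySelectorAll<HTMLElement>('input:not([type="hidden"]), select, textarea');

const unlabeled = Array.from(controls).filter((el) => {
  const hasWrappingLabel = el.closest('label') !== null;
  const hasForLabel = el.id !== '' && document.querySelector(`label[for="${el.id}"]`) !== null;
  const hasAriaName = el.hasAttribute('aria-label') || el.hasAttribute('aria-labelledby');
  return !hasWrappingLabel && !hasForLabel && !hasAriaName;
});

console.table(unlabeled.map((el) => ({ tag: el.tagName.toLowerCase(), id: el.id, name: el.getAttribute('name') })));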

Incorrect Focus Order or Missing Focus Indicators

Issue: Keyboard users rely on logical tab order and visible focus outlines to navigate.

How to catch:

  • Perform keyboard-only navigation to verify tab order is intuitive.
  • Ensure focus styles (outline or highlight) are visible on all interactive elements.

Keyboard Traps or Inaccessible Interactions

Issue: Elements that trap keyboard focus or require mouse-only interaction block keyboard users.

How to catch:

  • Tab through pages to identify if keyboard focus gets stuck inside any component.
  • Confirm all interactive elements can be operated with the keyboard alone.

Insufficient Color Contrast

Issue: Text or UI elements with low contrast are hard to read for users with visual impairments.

How to catch:

  • Use contrast checker tools (like WebAIM Contrast Checker or Axe).
  • Simulate color blindness to ensure information is not conveyed by color alone.

Missing or Irrelevant Alt Text

Issue: Images without descriptive alt text or with meaningless text confuse screen reader users.

How to catch:

  • Inspect all images for meaningful alt attributes.
  • Verify decorative images use empty alt (alt="") so screen readers skip them.

Misuse of ARIA Attributes

Issue: Incorrect or unnecessary ARIA roles can confuse assistive technologies.

How to catch:

  • Review ARIA usage for validity (use tools like Axe or manual code inspection).
  • Confirm ARIA roles match the element’s purpose and don’t conflict.

Layout Breaks on Zoom or Responsiveness

Issue: Zooming in (up to 200%) or using different screen sizes should not hide or overlap content.

How to catch:

  • Test the zoom functionality in browsers and check that the content remains readable and functional.
  • Use responsive design testing tools or manually resize the browser window.

Inaccessible Form Error Messages

Issue: Error messages not properly linked to form fields are missed by screen readers.

How to catch:

  • Verify error messages are programmatically associated with the input fields (e.g., using aria-describedby).
  • Confirm that errors are announced by screen readers when they appear.

Accessibility Testing in Agile & DevOps (QA-Focused)

Integrating accessibility into Agile and DevOps ensures that accessibility is not an afterthought but part of every sprint and release cycle. Here’s how QA teams can lead this effort:

Add Accessibility to the QA Definition of Done (DoD)

  • Include accessibility checks as mandatory criteria before any user story or feature is marked complete.
  • Ensure automation scans pass, manual keyboard and screen reader tests are done, and critical defects are fixed.

Integrate Tools like Axe & Lighthouse into CI Pipelines

  • Automate accessibility testing by running Axe CLI or Lighthouse audits on every build.
  • Fail builds if new accessibility issues are introduced, enforcing quality gates early.
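
One common pattern is an end-to-end test that fails whenever axe reports violations. Here’s a minimal sketch assuming Playwright Test and the @axe-core/playwright package (the URL is a placeholder):

// Minimal sketch: fail the build when axe-core reports violations on a page.
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('home page has no detectable accessibility violations', async ({ page }) => {
  await page.goto('https://example.com');

  const results = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa']) // limit the scan to WCAG 2.x A/AA rules
    .analyze();

  expect(results.violations).toEqual([]); // any violation fails the test, and therefore the pipeline
});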

Automate Accessibility Gates Before Release

  • Use automation accessibility reports as “gates” that must be passed before code merges or releases.
  • Combine automation testing with manual validation on critical flows to catch issues that automation may miss.

Manual Screen Reader and Keyboard Tests per Sprint

  • Assign QA to perform hands-on manual tests regularly during each sprint to validate the real user experience.
  • Focus on critical user journeys, forms, navigation, and error handling with assistive technologies.

Role of QA in Agile Ceremonies

  • Daily Standups: Raise accessibility blockers early.
  • Sprint Planning: Estimate time for accessibility testing and fixes.
  • Sprint Reviews: Demonstrate accessibility improvements and get feedback from stakeholders.
  • Retrospectives: Discuss what worked for accessibility and what can be improved.

Top Accessibility Testing Tools Comparison

When it comes to ensuring websites are accessible to everyone, including users with disabilities, several powerful tools stand out in the accessibility testing landscape. One of the most widely used tools is axe DevTools by Deque, known for its seamless browser integration, accurate WCAG violation detection, and compatibility with test automation frameworks like Cypress and Selenium. For quick audits directly in Chrome, Lighthouse is a built-in and open-source option that checks not only accessibility but also performance, SEO, and best practices, making it great for rapid assessments.

WAVE by WebAIM is another popular browser extension offering visual feedback on accessibility issues, ideal for beginners due to its simple interface. However, for more robust automation and continuous integration, Pa11y shines with its command-line interface and CI/CD pipeline support, although it requires a bit of setup. Tools like Tenon provide powerful API-driven accessibility testing with customizable rules, perfect for large-scale applications, though they come at a cost.

For manual, guided inspections, Accessibility Insights helps testers walk through specific user flows and catch complex issues that automation may miss. Meanwhile, AChecker is a simple, online tool that performs HTML-based scans against selected accessibility guidelines, though it may feel outdated compared to modern solutions.

Enterprise users might prefer comprehensive platforms like Siteimprove or Deque WorldSpace, offering in-depth dashboards, team collaboration features, and integrations with tools like Jira or Azure DevOps. Lastly, to test real-world usability, screen readers such as NVDA, JAWS, or VoiceOver allow teams to experience websites the way many users with visual impairments do – an essential step in creating inclusive digital experiences.

Challenges and Limitations in Accessibility Testing

While accessibility testing is essential, there are some common challenges and limitations that QA teams often face:

Limitations of Automation

  • Automation tools can detect many issues quickly, but cannot catch all accessibility problems, especially those involving context, meaning, or user experience.
  • Issues like logical reading order, meaningful alternative text, or usability for assistive tech often require manual validation.

False Positives and False Negatives

  • Some tools may report issues that are not actual problems (false positives), causing unnecessary work.
  • Others might miss critical issues (false negatives), giving a false sense of compliance.
  • Combining multiple tools and manual checks helps reduce these errors.

Language and Content Complexity

  • Pages with dynamic content, complex language, or multimedia present extra challenges for accessibility testing.
  • Ensuring accurate ARIA roles, captions, and transcripts often requires domain expertise and additional manual effort.

Time and Resource Constraints

  • Thorough accessibility testing requires dedicated time and skilled testers familiar with assistive technologies.
  • Tight deadlines and limited budgets can lead to incomplete testing or overlooking accessibility altogether.
  • Integrating accessibility early in development and automating where possible can help mitigate this.

Conclusion: Building a Culture of Inclusive Quality

Accessibility starts early with a shift-left mindset – integrating it from the beginning of development. Empower your entire team with accessibility awareness so it becomes a shared responsibility. Foster a culture of continuous improvement to keep making your products inclusive and high-quality for all users.

Coming Up Next:

Automation Accessibility Testing – Tools, Claims & Realities

We’ll unpack what tools like BrowserStack, Axe, and more really offer, and whether they can truly make accessibility effortless.
Stay tuned!

Witness how our meticulous approach and cutting-edge solutions elevated quality and performance to new heights. Begin your journey into the world of software testing excellence. To know more, refer to Tools & Technologies & QA Services.

If you would like to learn more about the awesome services we provide, be sure to reach out.

Happy Testing 🙂