Automated Accessibility Testing Catches Only 30% of Issues
Run an automated accessibility checker on your website and it reports 0 errors. You celebrate meeting accessibility standards. Then actual users with disabilities try your site and find it completely unusable.
Automated testing tools serve a purpose, catching technical violations like missing alt text, poor color contrast, or invalid ARIA attributes. But research consistently shows automated tools detect only about 30% of accessibility barriers that real users encounter.
The other 70% requires human judgment, understanding of assistive technology, and testing with actual users who have disabilities.
What Automation Catches
Automated tools excel at checking technical requirements that have clear rules. Images must have alt attributes. Color contrast must meet minimum ratios. Form inputs need labels. Headings must follow hierarchical order.
These rules are important. Violations create genuine barriers. But they’re the easy part of accessibility. Following technical rules doesn’t guarantee your site is actually usable.
An image can have technically valid alt text that’s completely unhelpful. “Image123.jpg” satisfies the requirement for alt text but tells blind users nothing. “Photo of a group of people” is barely better. Good alt text requires understanding context and intent, which automation can’t evaluate.
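To make that concrete, here's a minimal TypeScript sketch (the function name and patterns are mine) of what an automated alt-text check can realistically do: confirm the attribute exists and flag obvious junk like filenames.

```ts
// Hypothetical sketch of what an automated alt-text check can realistically do:
// confirm the attribute exists and flag obvious junk. "present" is not the same
// as "helpful"; only a human who knows the context can judge that.
function auditAltText(img: HTMLImageElement): "missing" | "suspicious" | "present" {
  const alt = img.getAttribute("alt");
  if (alt === null) return "missing"; // the hard failure automated tools reliably catch
  const trimmed = alt.trim();
  if (/\.(jpe?g|png|gif|webp|svg)$/i.test(trimmed)) return "suspicious"; // looks like a filename
  if (/^(image|photo|picture|graphic)\s*\d*$/i.test(trimmed)) return "suspicious"; // placeholder text
  return "present";
}
```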
Color contrast calculators verify that normal-size text meets the 4.5:1 minimum ratio, but they can't assess whether color combinations create visual confusion, whether color is the only way information is conveyed, or whether color choices work for different types of color blindness.
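For reference, the checkable part is just arithmetic. This sketch follows the WCAG 2.x relative luminance and contrast ratio formulas; the function names are mine.

```ts
// Roughly the math behind a contrast checker, following the WCAG 2.x formulas:
// linearize each sRGB channel, compute relative luminance, then take
// (lighter + 0.05) / (darker + 0.05).
function relativeLuminance(r: number, g: number, b: number): number {
  const linearize = (channel: number) => {
    const c = channel / 255;
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
}

function contrastRatio(fg: [number, number, number], bg: [number, number, number]): number {
  const l1 = relativeLuminance(...fg);
  const l2 = relativeLuminance(...bg);
  const [lighter, darker] = l1 >= l2 ? [l1, l2] : [l2, l1];
  return (lighter + 0.05) / (darker + 0.05);
}

// contrastRatio([118, 118, 118], [255, 255, 255]) is about 4.54, so it passes 4.5:1,
// but a passing ratio says nothing about color-only meaning or color blindness.
```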
Semantic HTML structure can be technically correct but functionally confusing. Headings follow proper hierarchy but their text doesn’t describe content clearly. Navigation landmarks exist but their labels are vague.
Keyboard Navigation Problems
Automated tools check whether interactive elements can receive keyboard focus. They verify tab order isn’t explicitly broken. They confirm focus indicators exist. All technical boxes checked.
What they miss is whether keyboard navigation makes logical sense. Can users efficiently navigate to the content they need? Do focus indicators provide enough contrast and visibility? Are keyboard shortcuts discoverable? Does the tab order follow visual layout or jump around confusingly?
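One practical middle ground is scripting the keyboard walk and reviewing the output by hand. The sketch below assumes Playwright is installed and uses a hypothetical URL; it logs where focus lands on each Tab press, and a person still has to judge whether that order matches the visual layout.

```ts
// Sketch of a scripted keyboard walk using Playwright (assumed installed). It records
// where focus lands on each Tab press; deciding whether that order makes sense is
// still a human review task. The URL is hypothetical.
import { chromium } from "playwright";

async function walkTabOrder(url: string, steps = 20): Promise<void> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(url);

  for (let i = 0; i < steps; i++) {
    await page.keyboard.press("Tab");
    const focused = await page.evaluate(() => {
      const el = document.activeElement;
      if (!el) return "nothing focused";
      const label = (el.getAttribute("aria-label") ?? el.textContent ?? "").trim().slice(0, 40);
      return `${el.tagName.toLowerCase()}: ${label}`;
    });
    console.log(`${i + 1}. ${focused}`); // review this log against the visual layout
  }

  await browser.close();
}

walkTabOrder("https://example.com").catch(console.error);
```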
Custom interactive widgets often have keyboard support that technically works but practically fails. A custom dropdown might be keyboard accessible but require memorizing non-standard keyboard shortcuts. It passes automated tests but frustrates users who expect standard behavior.
Focus traps in modal dialogs are technically correct accessibility patterns, but if implemented poorly, they become confusing. Users can’t figure out how to escape. Automation verifies focus stays in the modal but doesn’t evaluate whether the implementation is understandable.
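For illustration, here's a minimal focus-trap sketch in TypeScript (one common approach among several). The detail that matters is the explicit Escape handler, because trapping focus without an obvious exit is exactly the failure described above.

```ts
// Minimal focus-trap sketch for a modal dialog (one common approach, not the only
// correct one). The Escape handler is the point: keeping focus inside the dialog
// without an obvious way out is the confusing failure mode described above.
function trapFocus(dialog: HTMLElement, onClose: () => void): () => void {
  const selector =
    'a[href], button, textarea, input, select, [tabindex]:not([tabindex="-1"])';

  const handleKeydown = (event: KeyboardEvent) => {
    if (event.key === "Escape") {
      onClose(); // always give keyboard users an exit
      return;
    }
    if (event.key !== "Tab") return;

    const focusable = Array.from(dialog.querySelectorAll<HTMLElement>(selector)).filter(
      (el) => !el.hasAttribute("disabled")
    );
    if (focusable.length === 0) return;

    const first = focusable[0];
    const last = focusable[focusable.length - 1];

    if (event.shiftKey && document.activeElement === first) {
      event.preventDefault();
      last.focus(); // wrap backwards
    } else if (!event.shiftKey && document.activeElement === last) {
      event.preventDefault();
      first.focus(); // wrap forwards
    }
  };

  dialog.addEventListener("keydown", handleKeydown);
  return () => dialog.removeEventListener("keydown", handleKeydown); // call when the dialog closes
}
```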
Screen Reader Experiences
Automated tools check for screen reader compatible markup. They verify ARIA attributes are used correctly. They confirm labels exist for controls. Then blind users test with actual screen readers and encounter serious problems.
Content that makes visual sense becomes incomprehensible when read linearly by screen readers. A sidebar that works fine visually interrupts the main content flow when read aloud. Automated tools don't evaluate reading order; they just check that content exists in the DOM.
Dynamic content updates might have ARIA live regions technically configured but still give screen reader users too much or too little information. A progress indicator that works fine visually might announce every tiny state change, flooding users with noise, or stay silent entirely, leaving them wondering whether anything is happening.
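Here's one way that judgment plays out in code: a sketch of a live region that announces progress only at coarse milestones. The milestone size, the wording, and the visually-hidden class are assumptions for illustration.

```ts
// Sketch of a live region that announces progress only at coarse milestones instead
// of every state change. The milestone size, wording, and the "visually-hidden"
// CSS class are assumptions; whether this frequency is right for your users is
// exactly the judgment automation can't make.
function createProgressAnnouncer(container: HTMLElement, stepPercent = 25) {
  const region = document.createElement("div");
  region.setAttribute("role", "status");
  region.setAttribute("aria-live", "polite"); // "assertive" would interrupt whatever the user is doing
  region.className = "visually-hidden"; // assumes a class that hides it visually but not from screen readers
  container.appendChild(region);

  let lastAnnounced = -1;
  return (percentComplete: number) => {
    const milestone = Math.floor(percentComplete / stepPercent) * stepPercent;
    if (milestone > lastAnnounced) {
      lastAnnounced = milestone;
      region.textContent = `Upload ${milestone}% complete`;
    }
  };
}
```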
Forms can have all required labels and instructions but still be confusing because error messages don’t clearly explain what went wrong and how to fix it. Automation verifies errors are announced to screen readers but doesn’t evaluate whether the messages are helpful.
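As a small illustration, the sketch below wires an error message to its field with aria-invalid and aria-describedby so screen readers announce it; it assumes the input already has an id. The wiring is what a tool can verify; the message text is what it can't.

```ts
// Sketch of wiring an error message to its field with aria-invalid and
// aria-describedby so screen readers announce it. A tool can verify this wiring;
// it can't tell whether the message text actually helps. Assumes the input has an id.
function showFieldError(input: HTMLInputElement, message: string): void {
  const errorId = `${input.id}-error`;
  let error = document.getElementById(errorId);
  if (!error) {
    error = document.createElement("p");
    error.id = errorId;
    input.insertAdjacentElement("afterend", error);
  }
  error.textContent = message; // "Invalid input" passes the same checks as a genuinely helpful message
  input.setAttribute("aria-invalid", "true");
  input.setAttribute("aria-describedby", errorId);
}
```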
Cognitive Accessibility
Almost nothing about cognitive accessibility can be tested automatically. Is your language clear? Is navigation predictable? Are processes simple enough for users with cognitive disabilities to complete?
Automated tools can’t assess reading level, sentence complexity, or whether instructions are understandable. They can’t evaluate whether visual layout reduces cognitive load or creates confusion.
Time limits might be technically configurable but still create pressure that overwhelms users. Complex multi-step processes might have all the right ARIA attributes but still be impossible for users with attention or memory difficulties to complete.
Error prevention and recovery are critical for cognitive accessibility. Automation can verify error messages exist but can’t evaluate whether processes prevent errors in the first place or make recovery straightforward.
Mobile and Touch Accessibility
Touch target sizes have minimum requirements that automation can check. But are controls positioned where they’re easy to reach? Do gestures have keyboard alternatives? Can users with motor impairments actually use touch interfaces effectively?
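The checkable half fits in a few lines. This sketch flags interactive elements smaller than the WCAG 2.2 minimum of 24 by 24 CSS pixels; the selector list and threshold are illustrative, and it says nothing about whether the targets are comfortably reachable.

```ts
// The checkable half of touch-target testing: flag interactive elements smaller than
// the WCAG 2.2 minimum of 24 by 24 CSS pixels. The selector list is illustrative,
// and nothing here can say whether the targets sit where a thumb can reach them.
function findSmallTouchTargets(minSize = 24): HTMLElement[] {
  const candidates = document.querySelectorAll<HTMLElement>(
    'a[href], button, input, select, textarea, [role="button"]'
  );
  return Array.from(candidates).filter((el) => {
    const rect = el.getBoundingClientRect();
    return rect.width > 0 && (rect.width < minSize || rect.height < minSize);
  });
}
```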
Screen reader behavior differs significantly between platforms. Something that works perfectly with JAWS on Windows might be unusable with VoiceOver on iOS. Automated tools evaluate markup against rules; they never exercise a real screen reader, let alone the specific combinations of screen reader and browser that people actually use.
Orientation and motion requirements affect users with vestibular disorders. Automation can check for media queries but can’t evaluate whether interfaces work in both portrait and landscape, whether motion can be disabled, whether animations trigger discomfort.
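Motion preferences can at least be honored in code. The sketch below uses the standard prefers-reduced-motion media query and assumes a no-motion CSS class that your stylesheets key off.

```ts
// Sketch of honoring the reduced-motion preference via the standard media query.
// Assumes a "no-motion" CSS class that your stylesheets use to disable animations;
// confirming every animation actually respects it still takes a person watching the page.
const reducedMotion = window.matchMedia("(prefers-reduced-motion: reduce)");

function applyMotionPreference(matches: boolean): void {
  document.documentElement.classList.toggle("no-motion", matches);
}

applyMotionPreference(reducedMotion.matches);
reducedMotion.addEventListener("change", (event) => applyMotionPreference(event.matches));
```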
Context and Purpose
The biggest limitation of automated testing is lack of context. Tools don’t understand what your content means or what users are trying to accomplish.
An automated tool sees a form with properly labeled fields. It doesn’t know whether the form asks for unnecessary information, whether the workflow makes sense, whether completion requires abilities some users don’t have.
It sees a complex data table with proper headers and scope attributes. It doesn’t evaluate whether the table is the right way to present that information, whether users can extract meaning from it, whether a simpler presentation would be more accessible.
Links and buttons have descriptive labels that technically identify their purpose. But does the label make sense in context? Can users predict what happens when they activate controls? Automation checks syntax, not semantics.
What Actually Works
Use automated tools to catch the 30% they’re good at catching. Run them in continuous integration. Fix technical violations. But don’t stop there.
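A typical setup looks something like the sketch below, assuming @playwright/test and @axe-core/playwright are installed; treat a green run as a floor, not a finish line.

```ts
// One possible CI check, assuming @playwright/test and @axe-core/playwright are
// installed (a sketch, not a prescription; the URL is hypothetical). Failing the
// build on detectable violations keeps the automated 30% from regressing.
import { test, expect } from "@playwright/test";
import AxeBuilder from "@axe-core/playwright";

test("home page has no automatically detectable violations", async ({ page }) => {
  await page.goto("https://example.com");
  const results = await new AxeBuilder({ page }).analyze();
  expect(results.violations).toEqual([]);
});
```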
Manual testing by people who understand accessibility finds more issues. Someone reviewing your site can assess whether alt text is helpful, whether keyboard navigation makes sense, whether language is clear.
Testing with actual assistive technology reveals problems automation misses. Load your site in JAWS, NVDA, VoiceOver. Navigate with keyboard only. Try it with high contrast mode. Use browser zoom. These expose real usability problems.
Most importantly, test with actual users who have disabilities. People who use screen readers daily will find issues you’d never anticipate. Users with motor impairments discover interaction patterns that technically work but practically don’t. People with cognitive disabilities identify confusing workflows.
User testing doesn’t need to be expensive. Asking a few people with disabilities to test critical workflows provides more value than running automated tools on every page.
Accessibility Is an Ongoing Process
Meeting automated testing standards is a starting point, not a destination. Accessibility requires ongoing attention, user feedback, and willingness to improve based on real usage.
Sites that rely exclusively on automated testing check a compliance box but often remain full of barriers for disabled users. Sites that combine automated testing with manual review and user feedback create genuinely accessible experiences.
The goal isn’t passing automated tests. It’s building websites that people with disabilities can actually use. Those are related but not equivalent objectives.