I’m currently exploring how we can enhance automated accessibility testing tools, especially well-known frameworks like Axe-core, Lighthouse, and Pa11y. These tools have significantly contributed to promoting inclusive web experiences, but I believe there’s still room for improvement, particularly around real-world usability, coverage, and user support.
What limitations or pain points have you encountered when using Axe-core, Lighthouse, or Pa11y?
What features or improvements would you want in the next generation of accessibility tooling? For example, a more user-friendly browser version, side-by-side issue-and-fix viewing, or support for multiple accessibility guidelines?
How can advanced LLMs contribute to improving accessibility tooling and support?
Feel free to drop your experiences, ideas, or examples in the comments!
Hi @mltum_2000 and fellow first-time poster this week!
It’s an interesting question, and it raises something I often think about with automated accessibility tests: they have their purpose and encourage shift-left, but they also have limitations. They often only catch about 30-40% of issues and can’t cover the real-life user experience. I would be really interested in how this could be improved.
I like the idea of a side-by-side view of the page and browser to support reporting and a better understanding of what automated tooling finds, helping to bridge the gap between the technical result and the real user.
I wonder if, when an issue is raised, we could use LLMs to demonstrate what that technical error (on aria-labels, for example) actually looks like for a real user. For instance, a video could be generated (from a database researched and verified by real users with disabilities who use assistive technology) to demonstrate the real impact.
That might again make the result harder to ignore, or harder to switch off when the test is failing on the pipeline, because it’s not just technical wording or coding references, but something more personal?
Definitely something to be developed carefully and with the right people involved, but I wonder if that is something that could help.
I ran into a case where an app did not have any headings, and the scan did not flag it, so this is something that might be useful.
For example, being able to scan page navigation flow: are there headings, and does navigation flow in the correct order? The same app had an odd flow that seemed to jump down a section and then flow back to a section higher in the page. It’s likely missing something related to the view hierarchy, so if there were a way for a scanner to also pick up on odd or missing hierarchy, that could be useful.
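To illustrate the kind of check described above, here is a minimal sketch of a heading-hierarchy scan using only Python’s standard library. It is not how any of the named tools work internally; the rules (flag a page with no headings, a first heading that isn’t h1, or a skipped level such as h2 jumping to h4) and the sample HTML are illustrative assumptions:

```python
from html.parser import HTMLParser

class HeadingCollector(HTMLParser):
    """Records h1-h6 heading levels in document order."""
    def __init__(self):
        super().__init__()
        self.levels = []  # e.g. [1, 2, 4]

    def handle_starttag(self, tag, attrs):
        if len(tag) == 2 and tag[0] == "h" and tag[1].isdigit():
            self.levels.append(int(tag[1]))

def audit_headings(html):
    """Return a list of human-readable heading-structure issues."""
    parser = HeadingCollector()
    parser.feed(html)
    issues = []
    if not parser.levels:
        # The case mentioned above: a page with no headings at all.
        issues.append("no headings found")
        return issues
    if parser.levels[0] != 1:
        issues.append(f"first heading is h{parser.levels[0]}, expected h1")
    for prev, cur in zip(parser.levels, parser.levels[1:]):
        if cur > prev + 1:  # e.g. h2 followed directly by h4
            issues.append(f"h{prev} jumps to h{cur}, skipping a level")
    return issues

print(audit_headings("<main><p>No structure at all</p></main>"))
# -> ['no headings found']
print(audit_headings("<h1>Title</h1><h2>Intro</h2><h4>Detail</h4>"))
# -> ['h2 jumps to h4, skipping a level']
```

A real scanner would of course need to handle templated or dynamically rendered pages, but even a simple document-order pass like this catches the “missing headings” and “odd jump” cases described above.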
This last one is a bit more complex: a bit like A/B testing, but with the option of a specific accessibility view that could look very different, and maybe even offer different features, from the view optimised for the majority of users. Perhaps a scan could make streamlined recommendations for a specific accessibility view, for example for TalkBack use or for clicker use.
If there were really one tool with all the features, that would be nice. I don’t mind it too much, but when I show people how I test for accessibility, they find my collection of tools intimidating. Accessibility Insights for Web is going in the right direction by combining an automated scan (though the results are not as nicely presented as in axe DevTools), checklists, and mini tools. But sometimes the mini tools don’t work, and they don’t cover all possible options.
I think tools that support manual checks are really important for levelling up your accessibility testing. They usually come in the form of single-feature browser extensions or bookmarklets, which is why I have a lot of them.
Taking a step back, our accessibility testing is by no means advanced: automated and manual testing using Axe-pro, plus an uneducated assessment using a free screen reader. It made us better and made us consider accessibility earlier in development by using axe. But we stopped using the screen readers, as we found we were in danger of testing the screen reader rather than our product.
However, it’s such a complex issue, and I still don’t think it’s taken seriously enough. So, definitely more tooling to prevent known accessibility non-compliances when designing UIs, around the specifics of WCAG/EAA. But where LLMs can help is with data based on feedback from impaired users beyond the WCAG/EAA guidelines, warning about more subjective accessibility risks. It’ll never be as good as putting your product in front of impaired users, but tools should be able to use AI on such growing feedback in their models.