The article states “It’s clear that artificial intelligence can greatly enhance the automation and speed of accessibility evaluation and testing”, but that should come with a heavy caveat regarding the accuracy and thoroughness of the results. And I still wouldn’t agree with the statement. I have looked at the tools mentioned in the article and I’m pretty unimpressed.
Deque
The “computer vision” feature in Deque’s tool looks useful for some colour contrast measurements, especially where text sits on a non-uniform background. However, the user still has to decide which window sizes to run the test at, and they need to put pages and components into all the relevant states. I don’t know if there’s a way to record all that so you only need to do it once per page. The other so-called AI features in the video demo look trivial and don’t even look like they would save any time compared with existing non-AI tools.
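For a single text colour on a single solid background, the underlying measurement is not hard at all; the part worth automating is sampling a non-uniform background. As a point of reference, here is a minimal sketch of the WCAG 2.x relative-luminance and contrast-ratio formulas for two solid sRGB colours (the function names are my own):

```python
def srgb_to_linear(channel):
    """Convert one 0-255 sRGB channel to linear light, per the WCAG 2.0 formula."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(r, g, b):
    """WCAG relative luminance of an sRGB colour given as (r, g, b) in 0-255."""
    return (0.2126 * srgb_to_linear(r)
            + 0.7152 * srgb_to_linear(g)
            + 0.0722 * srgb_to_linear(b))

def contrast_ratio(fg, bg):
    """Contrast ratio between two colours: (L1 + 0.05) / (L2 + 0.05), L1 >= L2."""
    l1 = relative_luminance(*fg)
    l2 = relative_luminance(*bg)
    lighter, darker = max(l1, l2), min(l1, l2)
    return (lighter + 0.05) / (darker + 0.05)

# Black text on a white background gives the maximum ratio of 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # → 21.0
```

With a non-uniform background, a tool has to decide which background pixels to compare each glyph against, which is exactly where computer vision earns its keep.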
Accessibility Desk
Their Self Assessment tool doesn’t even work - I get the message “The reCAPTCHA was invalid”, but there isn’t a reCAPTCHA. The source code shows they aren’t using the frictionless v3 version; they’re using v2, which should display a visible challenge. It doesn’t.
Stark
I haven’t looked at Stark’s tools because they aren’t aimed at testers. They look like a crazy expensive monolithic development environment that only deep-pocketed huge organisations would use, i.e. the kind of organisations almost none of us work for.
AI-powered image recognition
This is of no use at all to an accessibility tester. We can see what the image is, and the text alternative should almost never be a literal description of it. The appropriate text alternative depends entirely on the context in which the image is used.
Improved analysis of semantic structure
This might be useful as a safety net after testing a page properly, which is how we use tools at the moment. But I would never use it first. If the tool has a crawling capability, it might be useful for doing a partial test of a large number of pages. But if you have to run it manually on each page the benefit is far lower.
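For comparison, here is the kind of simple semantic-structure check a crawler could already run on each page with no AI at all. This is a hypothetical sketch, using Python’s standard html.parser, that flags skipped heading levels (the class and message format are my own invention):

```python
from html.parser import HTMLParser

class HeadingChecker(HTMLParser):
    """Flags skipped heading levels, e.g. an h4 directly following an h2."""

    def __init__(self):
        super().__init__()
        self.last_level = 0
        self.issues = []

    def handle_starttag(self, tag, attrs):
        # Match h1..h9 only (not <hr>, <header>, etc.).
        if len(tag) == 2 and tag[0] == "h" and tag[1].isdigit():
            level = int(tag[1])
            if self.last_level and level > self.last_level + 1:
                self.issues.append(f"h{level} follows h{self.last_level}")
            self.last_level = level

checker = HeadingChecker()
checker.feed("<h1>Title</h1><h2>Section</h2><h4>Oops</h4>")
print(checker.issues)  # → ['h4 follows h2']
```

Checks at this level are mechanical; the open question is whether AI improves on them enough to justify the extra noise.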
Intelligent simulation of keyboard navigation
I’m deeply sceptical of this one. Again, it might be useful as a safety net.
Automated analysis of content readability and understandability
This could be useful for some people, but it doesn’t help with a WCAG audit unless you’re one of the few people who test for level AAA conformance. Even then, it’s not clear whether the tools are doing the specific test required at level AAA or just a general readability test.
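To illustrate the distinction: a general readability test is typically something like the Flesch reading-ease formula, which scores text on sentence and word length. That is not the same thing as the reading-level assessment that WCAG SC 3.1.5 (level AAA) actually asks for. A rough sketch of the general test, with a deliberately crude syllable counter of my own:

```python
import re

def count_syllables(word):
    # Crude heuristic: count vowel groups. Real tools use pronunciation dictionaries.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Flesch reading ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    Higher scores mean easier text."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)
```

A score like this says nothing about whether the text needs more than a lower-secondary education level to understand, which is the question the AAA criterion poses.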
Who are AI-based tools useful for?
My current assessment is that AI tools are almost entirely useless for professional accessibility testers who know what they’re doing. They might be of some value to non-professionals who are “having a bit of a go” at accessibility testing, but they aren’t going to help those people do anything like an accurate WCAG audit. That may change over time, but not any time soon.
Automated crawling tools
AI can extend the capability of automated testing tools that crawl websites, but that will also result in more manual analysis to do and more false positives to remove. As such, it will probably increase the tester’s workload, not reduce it.