Such a good post from Natalie Stormann at Slack. I like how they go into real detail about their journey to automate some of their accessibility tests: the tools and frameworks they experimented with, the limitations they encountered and the compromises they had to make.
I think folks would value hearing your story of how you’ve gone about automating accessibility testing. What does your implementation look like? What were some of the issues getting to where you are now? What compromises have you made? What’s next?
As a side note, the post itself is a brilliant example of well-written content. It's a good oracle for anyone keen to contribute and have their words published on the MoT site.
My question from this is: how common is it to come across teams that actually do accessibility testing?
Under the pressure to ship the product ASAP, where we know teams willingly cut corners in testing too, would a team or product owner take the time for accessibility testing?
Or does it get baked into the design phase?
Or does it depend on the target users?
We use axe-core with Ruby or Java, depending on the project. For the tests, we reused our functional automation to get to all the pages, dialogs and states, which are then checked. You have to make sure that only the code you want tested is scanned, otherwise the same header issues show up in every single result.
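For anyone curious what that looks like in practice, here's a minimal sketch of the Java side, assuming Selenium and Deque's axe-core bindings (the axe-core-maven-html project). Exact method signatures vary between versions, and the URL and CSS selectors are made up for illustration:

```java
import java.util.Collections;
import java.util.List;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

import com.deque.html.axecore.results.Results;
import com.deque.html.axecore.results.Rule;
import com.deque.html.axecore.selenium.AxeBuilder;

public class AccessibilityScan {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            // Reuse the navigation your functional automation already does
            // to reach the page/dialog/state under test (URL is hypothetical).
            driver.get("https://example.com/app/settings");

            // Scope the scan: include only the area under test and exclude
            // the shared header, so its known issues don't pollute every result.
            Results results = new AxeBuilder()
                    .include(Collections.singletonList("#main-content"))
                    .exclude(Collections.singletonList("header"))
                    .analyze(driver);

            List<Rule> violations = results.getViolations();
            for (Rule violation : violations) {
                System.out.println(violation.getId() + ": " + violation.getDescription());
            }
        } finally {
            driver.quit();
        }
    }
}
```

Driving this from the same page objects as the functional suite means every journey the tests already cover doubles as an accessibility checkpoint.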
We hoped to be able to use more developer-centric tools as well, but if you don't write HTML directly and instead apply a theme, linting and unit testing aren't really possible.
Having the results is only one step though. You need a workflow to get the issues fixed. Some teams are still working on that.
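One small thing that helps with that workflow: flatten each violation into a tracker-ready summary instead of handing teams raw scan output. A rough sketch, using the same assumed bindings as above (the accessor names on `Rule` are my assumption from the axe results format):

```java
import java.util.stream.Collectors;

import com.deque.html.axecore.results.Results;
import com.deque.html.axecore.results.Rule;

public class ViolationReport {
    // One line per rule, with the number of affected nodes and the
    // remediation link, ready to paste into a ticket.
    public static String summarise(Results results) {
        return results.getViolations().stream()
                .map(v -> String.format("%s (%d nodes) - %s",
                        v.getId(), v.getNodes().size(), v.getHelpUrl()))
                .collect(Collectors.joining(System.lineSeparator()));
    }
}
```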
Some people are also struggling to understand that a good part of accessibility testing can’t be automated.