Accessibility: who should be testing what, and when?

I hear a lot of noise around A11y testing (and rightly so)… but one thing that never seems to get discussed is who should be testing what, and when.

If we take security/pen testing as an example, I imagine most of us would agree that to do it properly you need an expert assigned to the team for a period of time, during which they conduct their testing and report back with any findings and recommendations.

Should the same approach be taken with A11y testing?

Yes, the team can probably run some A11y checks and maybe spin up some automation scripts using the axe-core library or something. But is this enough? Or should the team be requesting 3rd-party accessibility audits?

Granted, not every team/project would require such rigorous A11y testing, and I understand this is a relatively vague question… but I would be interested to hear how you and your teams have implemented A11y testing, how often you run your checks, and how you decide whether you have “done enough”.

4 Likes

Stage 1: We didn’t use to do accessibility testing, and nobody in the team asked for it.

Stage 2: We learned about it from the community and conferences, and had a session run by experts.

Stage 3: We started trying things from our Stage 2 learnings and reporting a few bugs.

Stage 4: Those bugs started discussions within the team. Some were accepted because they were obvious issues, some were moved to the backlog, and some were deferred.

Stage 5: The next project starts, and we discuss a11y as an important part of our test strategy. The PO asks why we are calling it out explicitly, and we explain it in more detail.

Stage 6: We are running detailed charters on a11y testing and raising such issues under a dedicated tag.

Stage 7: We haven’t got there yet… work in progress…

I think by Stage 10 we will have an expert in this field. Sometimes micro-efforts help prepare the ground for experts to come in…

I’m trying to do my best in whatever capacity I have… and I’m glad to see discussions around a11y happening more and more…

Also, thanks to @adystokes and @conor-a11y for being my a11y champions!

4 Likes

Having taught myself security and accessibility testing, I think accessibility is the more accessible :wink: one. It’s fairly easy to get somewhere with accessibility as a beginner. While it might look scary, there are a lot of easy tests anyone can do beyond just running axe-core automation.

There are even great browser extensions, like Accessibility Insights, that will guide you through the whole process step by step for a full assessment.

To answer your questions: just running axe is not enough, and there are a lot of things you can do between that and getting a 3rd-party audit. I think having an expert on hand as a resource is very helpful as support. But I’m not sure a one-off audit handed to people without the background to act on it afterwards is a great approach.

3 Likes

Interesting answers so far… to add a follow-up question:

Both answers describe accessibility testing being done within the team. But for such a broad topic/testing type, should this not be outsourced to an auditor or professional whose job it is to test your application with a fresh pair of eyes and (hopefully) no unconscious bias?

If team members are completing accessibility testing themselves, are we running the risk of overlooking certain violations?

@sles12 - you mentioned that you taught yourself security and a11y. I have also done the same (to some extent)… But put me up against a seasoned pen-tester and I’m most certainly not at the same level… and I can’t help but feel the same about accessibility.

Further to this, I also question the amount of time and resources a team can give to A11y… The Accessibility Insights extension, for example, lists 50+ checks that can be attempted against web apps. How often should these be completed? The time spent doing these manual checks is not free, and in a team where the ratio is, let’s say, 4 devs to 1 tester, there simply may not be enough time to fit a vast amount of manual testing into your daily activities (as we know, only around 30% of a11y checks can be automated).

We can of course keep accessibility in mind while testing our applications. But is there more value in 3rd party auditing vs intra-team testing?

2 Likes

Hey!
In my team, we put effort into a11y compliance due to legal requirements.
We use both manual and automated approaches to get as close as possible to a compliant website.

First, accessibility starts with design, so we had a training session (everyone’s attendance is required, not only design, of course) to ensure their understanding of the subject. Nothing to automate here :stuck_out_tongue:
Then, for developers, we added the axe-linter VS Code extension to catch a11y issues as early as possible.
For automated tests, I try not to keep any a11y-specific code or folder, as accessibility should benefit everyone. So I work as much as possible with W3C best practices when writing my automation (Playwright is “blind” too, no?), for example using the label to focus on an input (see the sketch after these steps).
Then we need manual testing. A simple but very effective test is to run an end-to-end flow without using your mouse; if you get stuck somewhere, your a11y is not good enough.
Also, turn on a screen reader such as VoiceOver and try an end-to-end flow. It can be a little game with your teammates: cover their eyes, start a stopwatch, and the fastest to finish the e2e wins (nothing)?
The last part, which is costly, is an audit by an external company.
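To make the “labels and keyboard” points above concrete, here is a minimal sketch in TypeScript with Playwright. The URL, field label, and button name are placeholders for whatever your app actually uses, not a real example from my project.

```ts
import { test, expect } from '@playwright/test';

test('sign-up form works by label and keyboard alone', async ({ page }) => {
  await page.goto('https://example.com/signup'); // placeholder URL

  // Locate by accessible label: if this fails, the input is probably missing
  // a <label> or aria-label, which is itself an a11y bug worth raising.
  await page.getByLabel('Email address').fill('user@example.com');

  // Keyboard-only flow: Tab to the submit button and activate it with Enter,
  // assuming the button is the next element in the tab order.
  await page.keyboard.press('Tab');
  await expect(page.getByRole('button', { name: 'Sign up' })).toBeFocused();
  await page.keyboard.press('Enter');

  await expect(page.getByRole('heading', { name: 'Check your inbox' })).toBeVisible();
});
```

The nice side effect is that these are just ordinary functional tests: they only pass when the page exposes proper labels, roles, and focus order, so some a11y coverage comes for free.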

This is how it works in my team. It’s been effective so far, but we are still working on improvements!

1 Like

Very interesting approach and I like the idea of gamifying it!
How often would you repeat such steps?

Great to hear about your progress, Rahul :tada:

1 Like

Hi Anna, keyboard accessibility is something I tried, and it uncovered an entire class of bugs in one of my projects. Regarding contrast, it’s a little interesting, because the first step could be calibrating the screen/monitor.

I really align with @alex.borlido’s answer, and it’s very similar to what we do in my organization (there are now also some discussions about having periodic 3rd-party expert reviews).
However, if money and resources are a big issue, even just having automated tests is much better than nothing, and you can get them working relatively fast.
axe-core claims to detect, on average, 57% of compliance issues (with no false positives). Obviously manual accessibility testing, and especially keyboard testing, is still needed (axe-core does not do a good job with keyboard violations).
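For reference, here is a minimal sketch of the kind of automated axe check mentioned above, using the @axe-core/playwright package in TypeScript. The URL and the WCAG tag filter are assumptions; scope them to your own pages and compliance target.

```ts
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('home page has no detectable WCAG A/AA violations', async ({ page }) => {
  await page.goto('https://example.com/'); // placeholder URL

  const results = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa', 'wcag21a', 'wcag21aa']) // limit the scan to WCAG 2.x A/AA rules
    .analyze();

  // Failing on any violation prints the rule id, impact, and affected nodes.
  expect(results.violations).toEqual([]);
});
```

As noted above, this only covers what axe can detect automatically; keyboard and screen-reader checks still need a human.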

@wadders88 I would definitely lose against most pen-testers. But my accessibility results only need a little polishing from our accessibility expert. It took me about 4 years to get there. I’m a QA Accessibility Lead at my company, so not everyone needs to get to that level.

Accessibility Insights for Web is for full assessments like VPATs. Those are usually performed once a year or every two years for key pages of an application. Day-to-day accessibility testing is mostly for new features or UI changes. Like security, it’s not necessarily something I do daily. In my company we have had good results with testers doing their own accessibility testing, even in teams with tight deadlines and unfavourable ratios. It might not be full coverage, but the basics are there. We have an accessibility release-blocker list to make sure all teams are aligned on the basics. Every couple of years, people from the design team do a full assessment.
Btw, I hear those 3rd party audits are really expensive.

1 Like

@ezeetester Hi, great to hear that keyboard testing helped you find so much.

Contrast testing is based on the colours in the code, not how it looks on your display. If you have Chrome, you don’t even need an extra tool:
https://webaim.org/articles/contrast/devtools
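To illustrate why contrast is a property of the colours in the code rather than of your display, here is a small TypeScript sketch of the WCAG 2.x contrast-ratio formula; the hex values at the end are just an example pair.

```ts
// Relative luminance of an sRGB colour ('#RRGGBB') per the WCAG 2.x definition.
function luminance(hex: string): number {
  const [r, g, b] = [0, 2, 4].map((i) => {
    const c = parseInt(hex.slice(i + 1, i + 3), 16) / 255; // skip the leading '#'
    return c <= 0.03928 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4;
  });
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// Contrast ratio between two colours, from 1:1 up to 21:1.
function contrastRatio(a: string, b: string): number {
  const [hi, lo] = [luminance(a), luminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

// Grey text on a white background: ~4.54, just above the 4.5:1 AA threshold for normal text.
console.log(contrastRatio('#767676', '#ffffff').toFixed(2));
```

Chrome DevTools (and the WebAIM article above) do the same calculation for you, which is why no monitor calibration is involved.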

1 Like

@sles12 that is exactly the sort of answer/reply I was looking for. Thank you.

2 Likes