I often hear a lot of noise around A11y testing (and rightly so) … but one thing that never seems to get discussed is who should be testing what, and when.
If we take security/pen testing as an example, I imagine most of us would agree that successful pen testing needs an expert who can be assigned to the team for some time, during which they conduct their testing and report back with any findings and recommendations.
Should the same approach be taken with A11y testing?
Yes, the team can probably run some A11y checks and maybe spin up some automation scripts using the Axe Core library or something. But is this enough? Or should the team be requesting 3rd party accessibility audits?
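On the "spin up some automation scripts" point, one common pattern is to run axe-core in the test suite and gate the build on its results. Here is a minimal sketch: the results shape follows axe-core's documented format (`violations` with `id`, `impact`, `nodes`), but the helper name, the severity threshold, and the sample data are my own illustration, not anything axe-core ships:

```javascript
// Hypothetical helper: decide which axe-core findings should fail a CI build.
// axe-core reports each violation with an impact of "minor", "moderate",
// "serious", or "critical"; a team might only block on the top two.
const IMPACT_RANK = { minor: 1, moderate: 2, serious: 3, critical: 4 };

function blockingViolations(results, minImpact = "serious") {
  const threshold = IMPACT_RANK[minImpact];
  return results.violations.filter((v) => IMPACT_RANK[v.impact] >= threshold);
}

// Fabricated example results (in real use this comes from an axe.run() call):
const results = {
  violations: [
    { id: "color-contrast", impact: "serious", nodes: [{}, {}] },
    { id: "region", impact: "moderate", nodes: [{}] },
  ],
};

const blocking = blockingViolations(results);
console.log(blocking.map((v) => v.id)); // → [ 'color-contrast' ]
```

Filtering like this lets a team adopt automation incrementally: block releases on serious/critical issues while still logging the rest.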
Granted, not every team/project would require such rigorous A11y testing, and I understand this is a relatively vague question… But I would be interested to hear how you and your teams have implemented A11y testing, how often you run your checks, and how you decide whether you have “done enough”.
Stage 1: We didn’t do accessibility testing, and nobody on the team asked for it.
Stage 2: We learned about it from the community and conferences, and got a session run by experts.
Stage 3: We started trying some things from our Stage 2 learnings and reporting a few bugs.
Stage 4: The bugs started some discussions within the team. Some were accepted because they were obvious issues, some were moved to the backlog, and some were deferred.
Stage 5: The next project starts, and we discuss a11y as an important part of our test strategy. The PO asks why we are mentioning it explicitly, and then we tell them more about it.
Stage 6: We are doing detailed charters on a11y testing and raising such issues under a unique tag.
Stage 7: We haven’t reached this yet… work in progress…
I think by stage 10 we will have an expert in this field. Sometimes micro-efforts help create the ground for experts to come in…
Trying to do my best in whatever capacity I have… I am glad to see discussions around a11y happening more and more…
Having taught myself both security and accessibility testing, I think accessibility is the more accessible of the two. It’s fairly easy to get somewhere with accessibility as a beginner. While it might look scary, there are a lot of easy tests anyone can do beyond just running axe-core automation.
There are even great browser extensions that will guide you through the whole process step by step for a full assessment, like the ones on Accessibility Insights Downloads.
To answer your questions: just running axe is not enough. There are also a lot of things you can do between that and getting a 3rd-party audit. I think having an expert on hand as a resource is very helpful as support. But I’m not sure a one-time audit, given to people without the background to act on it afterwards, is a great approach.
Interesting answers so far… to add a follow-up question:
Both answers describe accessibility testing being completed within the team. But for such a broad topic/testing type, should this not be outsourced to an auditor or professional whose job it is to test your application with a fresh pair of eyes and (hopefully) no unconscious bias?
If team members are completing accessibility testing themselves, are we running the risk of overlooking certain violations?
@sles12 - you mentioned that you taught yourself security and a11y. I have done the same (to some extent)… But put me up against a seasoned pen-tester and I’m most certainly not at the same level… and I can’t help but feel the same about accessibility.
Further to this, I also question the amount of time and resources a team can give to A11y… The Accessibility Insights extension, for example, lists around 50+ checks that can be attempted against web apps. How often should this be completed? The time spent doing these manual checks is not free, and in a team where the ratio is, let’s say, 4 devs to 1 tester, there simply may not be enough time to fit a vast amount of manual testing (as we know, only around 30% of a11y checks can be automated) into your daily activities.
We can of course keep accessibility in mind while testing our applications. But is there more value in 3rd party auditing vs intra-team testing?
Hey!
So in my team, we put effort into being a11y compliant due to legal requirements.
We use both manual and automated approaches to get as close as possible to a compliant website.
First, accessibility starts with design, so we had a training session (everyone’s presence is required, not only Design, of course) to ensure their understanding of this subject. Nothing to automate here.
Then, for developers, we added the axe-linter VS Code extension in order to catch a11y issues as soon as possible.
For test automation, I try not to create any a11y-specific code or folder, as accessibility should benefit everyone. So I work as much as possible with W3C best practices when writing my automation (Playwright is blind too, no?), for example using a label to focus an input, etc.
Then we need manual tests. A simple but very efficient one: try to complete an e2e journey without using your mouse. If you get stuck somewhere, that means your a11y is not so good.
Also, turn on a screen reader like VoiceOver and try to complete an e2e journey. It can be a little game you play with your teammates: cover their eyes, use a stopwatch, and the fastest to finish the e2e wins (nothing)?
The last part, but a costly one, is an audit from an external company.
This is how it works in my team. Effective so far, but we are still working on improvements!
Hi Anna, keyboard accessibility is something I tried, and it found an entire class of bugs in one of my projects. Regarding contrast? It’s a little interesting, because the first step could be calibrating the screen/monitor.
I really align with @alex.borlido’s answer, and it is very similar to what we do in my organization (there are now also some discussions about having periodic 3rd-party expert reviews).
However, if money and resources are a big issue, even just having automated tests is much better than nothing, and you can get them working relatively fast.
Axe-core claims that, on average, it detects 57% of compliance issues (with no false positives). Obviously manual accessibility testing, and especially keyboard testing, is still needed (axe-core does not do a good job with keyboard violations).
@w4dd325 I would definitely lose against most pen-testers. But my accessibility results just need a little polishing from our accessibility expert, and it took me about 4 years to get there. I’m a QA Accessibility Lead at my company, so not everyone needs to get to that level.
Accessibility Insights for Web is for full assessments like VPATs. Those are usually performed once a year or every 2 years for key pages of an application. Accessibility testing is mostly for new features or UI changes; like security, it’s not necessarily something I do daily. In my company we have had good results with testers doing their own accessibility testing, even in teams with tight deadlines and unfavourable ratios. It might not be full coverage, but the basics are there. We have an accessibility release-blocker list to make sure all teams are aligned on the basics. Every couple of years, people from the design team do a full assessment.
Btw, I hear those 3rd party audits are really expensive.
@ezeetester Hi, great to hear that keyboard testing helped you find so much.
Contrast testing is based on the colours in the code, not how it looks on your display. If you have Chrome you donāt even need an extra tool: https://webaim.org/articles/contrast/devtools
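For anyone curious what those tools compute under the hood: the WCAG contrast ratio is a pure function of the two colours in the code, which is exactly why the display doesn’t matter. A quick sketch of the maths, following the WCAG 2.x definitions of relative luminance and contrast ratio (the function names are my own):

```javascript
// Linearise one 8-bit sRGB channel, per the WCAG relative-luminance definition.
function channel(c8) {
  const c = c8 / 255;
  return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

// Relative luminance of an [r, g, b] colour (each 0-255).
function luminance([r, g, b]) {
  return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

// WCAG contrast ratio: (lighter + 0.05) / (darker + 0.05), from 1:1 to 21:1.
function contrastRatio(a, b) {
  const [hi, lo] = [luminance(a), luminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

// Black on white is the maximum possible ratio, 21:1.
console.log(contrastRatio([0, 0, 0], [255, 255, 255]).toFixed(1)); // → "21.0"
```

WCAG AA asks for at least 4.5:1 for normal text (3:1 for large text), so a checker just compares this ratio against those thresholds.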
Just to focus in on the original question about who does what. I know it has a solved flag, but I think there are some more parts to it, and apologies for not giving a fuller reply earlier.
Idea / Design phase. A11y testing can happen here by asking what the keyboard journey looks like alongside the expected “user journey”. The same applies to what that user journey sounds like, e.g. what does a screen reader user hear? Do we need extra information for it to make sense?
Dev / Coding phase. There are multiple tools that can be run manually or in CI/CD pipelines, such as Pa11y. These give quick feedback and can pick up on coding or systematic potential issues.
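To illustrate the CI/CD option, Pa11y’s CI runner can be driven by a small config file. A minimal sketch of a `.pa11yci` (the URLs are placeholders; `WCAG2AA` is pa11y-ci’s default standard, shown here only to make the choice explicit):

```json
{
  "defaults": {
    "standard": "WCAG2AA",
    "timeout": 10000
  },
  "urls": [
    "https://example.com/",
    "https://example.com/checkout"
  ]
}
```

Running `pa11y-ci` as a pipeline step then fails the build when any listed page reports errors, which is the “quick feedback” described above.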
Testing phase. A combination of tools and manual inspection can be done by testers, even those not very familiar with a11y, as explained in my article: Simple Tests For Accessibility Every Tester | Ministry of Testing.
VPATs and external reviews by experts are more valuable if the things I have described happen first, because then they can focus (as security testing does when it has been considered throughout) on the expert-level stuff, not the low-hanging fruit that should never get close to production.
Hope that helps answer and expands a little on the great answers from others.
First, A11y testing isn’t as complicated, complex, and broad as security/pen testing, so your comparison isn’t relevant. You can ask this question about any type of testing (functional, non-functional, etc.), but it would be strange. Do we need dedicated testers who focus on UI/UX testing? In most situations, no, we don’t. While we have UX experts (usually designers) who develop the design system, we do not need testers focused solely on testing it.

As a QA engineer and tester you obviously can’t be an expert in UX, A11y, and all types of non-functional testing, but in most cases you can learn them to a level that will be enough in most situations. And remember, quality and non-functional requirements are not solely the testers’/QAs’ responsibility; the whole team develops the product and contributes to its quality. So, speaking about A11y: as a tester you may lack experience in testing it, but you probably have requirements from the BA/PO/PdM, and you have UI/UX designers and devs who implement the A11y features, so you all test it together and help each other fill the gaps in your knowledge.

Again, A11y is not rocket science to test; compared with security/pen testing or performance testing, it’s much easier. I also believe you don’t need to be a native speaker of every language to perform L10n testing, and you don’t need a dedicated expert for i18n testing.