When evaluating accessibility, to what extent do you use actual accessibility assistance tools?

With quite a few regulations coming in this year, I’m looking at ramping up accessibility risk testing.

There are a reasonable number of “design with accessibility in mind” ideas out there, and some decent accessibility evaluation tools, scanners, etc. at both UI and code level, but I’m particularly interested in the actual usage of accessibility assistance tools themselves as part of testing.

If I consider the Android basics, TalkBack, Switch Access and Voice Access for example, to what level are you using these in testing?

I loaded up Switch Access this week, connected the office PS controller, set a couple of buttons to “next” and “tap”, and went through the basic flows on one of the apps I’m testing.

It was useful learning for me and confirmed that, yes, switches will work with the app and I can do the main flows. However, it has left me questioning to what extent the use of real tools should be part of the test coverage.

As I’m not a regular user, whilst I could spot obviously awkward flows, it was not immediately clear what recommendations I would make to improve access for switch users based on actual usage alone.

We will be getting real users of these tools involved as part of testing, and their feedback will, at least at this point, be better than mine, even though it’s a “user vs pro tester” evaluation.

So my question comes back to: how much usage of these tools would you include in your testing?

TalkBack and Voice Access may follow the same model and I’ll also do that quick evaluation for them, but would you recommend doing more? If so, what goals and usage have you found of value?

2 Likes

It all depends on the level of accessibility you are aiming for. If you are doing an audit for WCAG conformance, you don’t need to use any of those assistive technologies. If you want to go for a higher level of accessibility, then yes, you should use them. The more the better - there is no “right” amount. But:

  • Assuming you have a fixed amount of time, any time you spend on accessibility testing is time you can’t spend on other things. You should stop doing accessibility testing once you get to the point where something else is more important than doing more accessibility testing. That point is entirely dependent on your context and will potentially vary from sprint to sprint.
  • There is no point testing with assistive technologies until you have done a WCAG audit and all the non-conformances have been fixed.
  • You should get proper (probably paid-for) training in the use of the assistive technologies you plan to use, otherwise you are just guessing how people will use them.
  • You should observe a lot of user testing sessions before doing any assistive technology testing, otherwise you are just guessing what will and won’t be a problem and what the solutions might be.

If you don’t do those things, you probably won’t do much harm, but you will miss a lot of issues and you will waste development time changing things that don’t make a difference.

Choice of assistive technologies
When working on public sector websites, we use JAWS, NVDA, Voiceover on iOS, Talkback, Windows Magnifier and Dragon voice recognition software because these are the assistive technologies mandated by GDS (the Government Digital Service). This includes testing with a Bluetooth keyboard on iOS and Android, but GDS do not require testing with switches.

When testing for private sector clients, some don’t want any assistive technology testing, but others want to go further and test with ZoomText, TextHelp Read&Write, Voiceover on macOS, Voice Access, Voice Control and other assistive technologies.

Comparisons
While JAWS and NVDA are similar, both in how they are used and in their behaviour, there are significant differences. The differences between Talkback and Voiceover are much greater, as are the differences between them and JAWS and NVDA. Voice Access and Voice Control are quite similar, but they are substantially different from Dragon.

Automated testing tools
At most, these tools find perhaps 25% to 30% of the WCAG non-conformances, with some tools finding substantially less than that. Take no notice of the self-serving guff from vendors claiming 57% or similarly high figures. Worse still, as code gets more complex, these tools increasingly find only the least important issues.

2 Likes

I happened to see this earlier today. Perhaps it sparks some ideas and reflections.

1 Like

I do use a screen reader when I do an accessibility audit. We test for WCAG 2.2 AA (and some AAA where possible). My process is to:

  1. Run axe DevTools on the page under test. This finds a lot of issues, especially contrast problems, which are a high priority.
  2. Try to navigate the page by keyboard (another high-priority requirement).
  3. After that, use a screen reader (NVDA) to navigate the page.
  4. Go through the WCAG list to check all remaining criteria that have not been covered by the previous tests.

The reason why I added screen reader testing to my list is that I find it hard to check some of those WCAG requirements any other way. For example, WCAG requires that links have accessible names, and some elements on a page are links or buttons that are actually just icons, so I need to make sure they have good accessible text. The screen reader picks this up right away and I don’t need to do cumbersome code checking. This way I also know that the label is properly implemented and works in practice: a code check may well show me an accessible label, but I have to be quite diligent to also check that it has been implemented correctly, and there are so many ways that devs do this. I find it much easier to just listen to what the screen reader announces. The same goes for the WCAG rule that requires the accessible name to include the label displayed on screen.

Basically, I find that a screen reader check reveals hidden issues I would otherwise find difficult or time-consuming to check by hand, and that might consequently get missed.
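To make the icon-label point concrete in the mobile context of the original question (just an illustration, not part of my own NVDA workflow): in Jetpack Compose an icon-only control is only announced meaningfully by TalkBack if it has a content description, which plays the same role as the accessible name a web screen reader reads out. A minimal sketch, assuming the Material 3 Compose and material-icons dependencies; ShareAction is a made-up composable:

```kotlin
import androidx.compose.material.icons.Icons
import androidx.compose.material.icons.filled.Share
import androidx.compose.material3.Icon
import androidx.compose.material3.IconButton
import androidx.compose.runtime.Composable

// Hypothetical icon-only action button, used purely for illustration.
@Composable
fun ShareAction(onShare: () -> Unit) {
    IconButton(onClick = onShare) {
        Icon(
            imageVector = Icons.Filled.Share,
            // TalkBack announces this text. With contentDescription = null the
            // control is read as an unlabelled button, which is exactly the kind
            // of issue a quick screen reader pass surfaces immediately.
            contentDescription = "Share"
        )
    }
}
```

Listening to TalkBack over a screen like this catches a null or junk content description straight away, in the same way NVDA exposes an unlabelled icon link on the web.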

I would find it very interesting to try other assistive technologies, but time constraints are an issue. It would surely reveal more issues, and it would also increase empathy with the users of these technologies, as I personally do not have any serious impairment when using the web.

2 Likes

This was/is something which I’m still working out.

I’m going through this course: W3Cx: Introduction to Web Accessibility.

One of the early things it emphasises is not to start your journey with the audit, but with a deeper understanding of the challenges facing many users, the problems they need solving and the tools that help people, alongside the value of having accessible products.

That’s sort of why I’ve jumped in to experience the tools: perhaps to gain a level of empathy and a better understanding of the value the audit can provide, with the audit then becoming a secondary means to the goal.

There is though another aspect to this but I’ll comment separately on that as it merits ideas on its own.

I’m also looking at the audit as part of an accessibility evaluation and still determining the level and approach of this.

If it’s a fairly mature mobile app that was not specifically designed for accessibility and I run a basic accessibility scan, it’s highly likely to flag a lot of missing labels, missing heading details, issues with touch target size and a few contrast problems.

Part of me feels that developers could pick those up directly as a first round before we do a detailed evaluation.

That’s not always possible, but otherwise the evaluation can end up looking like a lot of work: “50 different elements missing labels” versus “the app elements are missing a lot of labels” with a couple of examples and an explanation of the value.

Building it into the testing seems more straightforward: quick checks at feature or view level build a developer habit around accessibility, and in that case it can make sense to be specific rather than general.
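For the building-it-in side, one option I’m considering (just a sketch, assuming the androidx espresso-accessibility artifact is on the test classpath; the activity and button label below are made up) is to switch on the Accessibility Test Framework checks inside the existing Espresso UI tests, so every normal test run also reports missing labels, small touch targets and low contrast on the screens it exercises:

```kotlin
import androidx.test.espresso.Espresso.onView
import androidx.test.espresso.accessibility.AccessibilityChecks
import androidx.test.espresso.action.ViewActions.click
import androidx.test.espresso.matcher.ViewMatchers.withText
import androidx.test.ext.junit.rules.ActivityScenarioRule
import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.BeforeClass
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith

@RunWith(AndroidJUnit4::class)
class CheckoutFlowAccessibilityTest {

    // CheckoutActivity is a placeholder for whichever screen the flow starts on.
    @get:Rule
    val activityRule = ActivityScenarioRule(CheckoutActivity::class.java)

    companion object {
        @BeforeClass
        @JvmStatic
        fun enableAccessibilityChecks() {
            // Runs the Accessibility Test Framework checks (missing labels, touch
            // target size, contrast, ...) on every view an Espresso action touches;
            // widening to the root view checks the whole visible screen, not just
            // the element being interacted with.
            AccessibilityChecks.enable().setRunChecksFromRootView(true)
        }
    }

    @Test
    fun mainFlowStaysAccessible() {
        // "Place order" is a made-up label; any accessibility check failure on
        // the screen at the moment of the click fails the test.
        onView(withText("Place order")).perform(click())
    }
}
```

That keeps the feedback at feature/view level and inside the developers’ own workflow, rather than arriving later as one big audit list.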

For those doing evaluations what sort of level do you look at and communicate at?

Would you use, for example, the WCAG-EM Report Tool as is, with its 55 different criteria, or your own version more focused on specific goals?

I’m looking at mobile specifically, but I’ve not found a variation of that report tool for mobile, so it will likely be my own variation once I think/discuss through audience, goals and targets a bit more.

1 Like

I understand why the W3C says that, and it’s the standard UX “double diamond” approach - see The Double Diamond - Design Council. However, in my view it’s not necessary for the accessibility of the vast majority of websites, mobile apps and documents. WCAG encapsulates the majority of user needs, and any research you do is unlikely to reveal anything that WCAG doesn’t cover. In fact, it will only find a small subset of what WCAG covers. I do recommend the double diamond approach for usability, but the timescale of agile development projects usually makes it unviable (which is why website UX is so terrible these days).

The only time I would advocate doing accessibility research first would be if you are planning something novel. However, that is extremely rare.

When we hired out our UX lab, I watched numerous UX researchers doing this sort of up-front research and I always thought it was a complete waste of time. I could have told them everything they learned in a fraction of the time and cost. And the sample size was always so low that you couldn’t extrapolate the findings.

I’m all in favour of testers learning to use assistive technologies properly (i.e. not self-taught) and gaining empathy, but I recommend getting some initial experience outside of your project. When I started 20 years ago, I paid screen reader, screen magnifier and Dragon users so that I could sit with them and watch while they navigated a variety of websites with different content types and different levels of accessibility. This can now be done remotely, although in-person is better because you can see the keyboard, mouse and touchscreen interactions, which are important for understanding what the user is doing.

I attended a session a while back, and one of the talks was related to accessibility tools. It was more about awareness, and about tools that are working out well for big companies and would probably work out in other companies as well.

I will divide it into three phases - design phase, testing phase and user testing phase - with tools to follow. I understand this may or may not be a solution for you, but please feel free to adjust accordingly!

Here are the critical phases and tools to consider:

  1. Design Phase
    • Universal Design Principles: Integrate accessibility from the start.
    • Tools: Use “Able” to visualize UI for various disabilities.
    • Adaptive Design: Create interfaces that adjust based on user interactions.
    • AI Insights: Leverage Adobe XD for real-time design suggestions, and utilize Figma add-ons like axe and Stark for enhanced accessibility checks.
    • User Feedback: Involve individuals with disabilities for valuable insights.

  2. Testing Phase
    • Shift Left Testing: Incorporating accessibility early in the SDLC is crucial for catching issues before they escalate.
    • Comprehensive Testing: Implement four levels of testing:
      • Code Scanning: Use axe DevTools.
      • Manual Testing: Tools like CCA and standard Chrome plugins.
      • Screen Reader Testing: Ensure compatibility with popular screen readers.
      • Scenario Testing: Use Zoom In to simulate real user experiences.
    • Overall Governance: Utilize AMP (Level Access) for tracking and reporting; UserWay and EqualWeb can be used for accessibility overlays.

  3. User Testing Phase
    • User Involvement: Gather feedback from people with disabilities.
    • Iterate & Refine: Use tools like UsableNet and Fable to test with real users.
    • Personalized Experiences: Employ AI to create adaptive interfaces.

Though there is no process as such in my company, I would really advocate for accessibility checks.

My linkedin post: Lokesh Venkatesan on LinkedIn: #testflix2024 #testflix2024 #testflix #accessibility #inclusivedesign…

1 Like

I think assistive technology (AT) has a place in accessibility testing.

As @scoutb wrote, screen readers are useful for picking up issues. Wrong labels stand out, as does duplicated information. Inspecting the code to check labels is possible, but for me it is more complicated. We test with NVDA or VoiceOver depending on the operating system.

While it is important to keep in mind that you are not the user group, getting an idea of how users can interact with the system is really interesting. I would suggest starting with watching experienced AT users. The course you are taking includes such videos, and there are a lot on YouTube as well. In my work introducing accessibility to teams, learning about those other navigation options and settings made a big impact on people. When you know how your software is used, it is easy to understand why someone might need a skip link or proper labels. This is helpful for remembering the success criteria.

Working with different tools can give you insights into potential issues or blind spots in your test strategy. You need a way to ask the experts, though. We have a User Experience team that conducts tests with people with disabilities. After a frustrating test with Windows Voice Access, I suggested broadening our test pool to include people using tools like Dragon NaturallySpeaking. It turned out that previously they had only focused on visual disabilities.
(As a bonus you might find something useful for yourself. I love the extra dark mode on my phone.)

Even if it’s just about doing an accessibility assessment, at least screen readers are used. We use them for our VPATs, and so does Deque.

1 Like