How do we identify flaws in our accessibility tests?

I read this post the other day: The five types of people who produce inaccessible code

It shares:

There are roughly five types of people you’ll meet doing accessibility development work. They are:

  1. People who create inaccessible code, but do not realize they are doing so.
  2. People who create inaccessible code and realize they are doing so, but do not know how to fix it.
  3. People who create inaccessible code, and do not care about fixing it.
  4. People who create inaccessible code that they think is accessible.
  5. People who create inaccessible code that our industry thinks is accessible, but experientially is not.

I’ve never seen someone attempt to create such categories and it got me thinking. How as testing professionals do we test the validity of our accessibility tests? Are we testing the quality of our accessibility tests? When we run accessibility test scenarios what oracles do we use to test against? How reliable are those oracles? Who defines their reliability? Who has the authority to say if something is accessible or not?

I’m curious to get your thoughts on the article and my questions. Thanks.


I’ve just read this, which feels related.

Perhaps there is a sixth type:

People who run “accessibility” tools that our industry doesn’t consider accessibility tools, yet choose to silence those who call that out. :frowning:


I like to put a big emphasis on usability. There is behaviour that’s allowed under WCAG (or at least not clearly forbidden) but is terrible for some users. If people push back on a change I’ll go looking for a rule, but mostly the team is trying to create a good experience for all users.
Attending axe-con and learning about the challenges cognitive disabilities can bring really opened my eyes to all the stuff we were not supporting. The basics and the rules are important. We need to also learn how real people experience and interact with programs in different ways, listen to their issues and work towards creating better products.


Last year I wrote a series of blog posts about Accessibility Poker, which is based on Planning Poker and WCAG, the Web Content Accessibility Guidelines. It was a thought experiment.

The first blog post is:


Your question seems very strange to me and I suspect we have significantly different ways of doing accessibility testing. In my view, most types of testing (including functional and accessibility) are best regarded as an investigation - I am totally in the context-driven / exploratory testing school and abhor the brain-dead ISTQB approach that has poisoned our profession.

Coding is so complex these days, and there are so many different ways to achieve any given result, that any kind of scripted or predetermined test has a high chance of giving the wrong result. I am not even sure what oracles you have in mind.

WCAG audits
When doing a WCAG audit, the WCAG success criteria specify the acceptable outcomes. Anyone can do that if they spend the time to learn what all the success criteria mean. They must also have a deep understanding of HTML and CSS and ideally, JavaScript. You can’t do the job properly without that.
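
To make the gap between automated checks and a real audit concrete, here is a toy sketch of what machine-checking a single success criterion might look like (WCAG 1.1.1, text alternatives for images), using only the Python standard library. It is an illustration of the idea, not a real audit tool: it flags `<img>` elements with no `alt` attribute at all, and even that one rule needs human judgement (an empty `alt=""` is legitimate for decorative images).

```python
# Toy check for one WCAG success criterion (1.1.1 Non-text Content):
# every <img> should carry a text alternative. Automated checks like
# this cover only a fraction of WCAG; the rest needs a human auditor.
from html.parser import HTMLParser


class ImgAltChecker(HTMLParser):
    """Collects <img> tags that lack an alt attribute entirely."""

    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if "alt" not in attr_map:
                self.missing_alt.append(attr_map.get("src", "<no src>"))


html = """
<img src="logo.png" alt="Acme Ltd home">
<img src="divider.png" alt="">
<img src="chart.png">
"""

checker = ImgAltChecker()
checker.feed(html)
print(checker.missing_alt)  # only chart.png has no alt attribute at all
```

Note what the check cannot tell you: whether `alt="Acme Ltd home"` is a *useful* description, or whether the empty `alt` on the divider is the right call. That judgement is exactly the part of an audit that requires understanding the success criteria, not just running a tool.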

Assistive technology testing
When I’m doing assistive technology testing, I draw on nearly 20 years’ experience of user testing with disabled people to assess what will and won’t be a problem and to what extent. People with less experience should be able to identify technical issues, but they won’t be able to identify the human factors issues - you can learn some basic principles, but you can’t beat experience.

So the bottom line is that I don’t write any tests and I don’t run any test scenarios. I conduct an investigation, and when I understand the code I assess it against the WCAG success criteria and my experience.

Who has the authority to say if something is accessible or not? I’m inclined to say it’s the person who knows most about it, which would usually be the person with the most experience. For individual components, there should rarely be any disagreement if people are well informed (and if they’re not, they shouldn’t be involved).

For larger components and whole applications, it’s not a binary accessible / inaccessible choice. Instead, you ought to be assessing the extent to which the application is accessible to people with different accessibility needs.

I think accessibility is a tricky one.

I feel like most people fall into the “don’t care / don’t have budget for that” camp. But for the people who do care about accessibility, it’s still hard to get it right.

Yes, you can follow WCAG as pointed out, but I think it will always be second-rate. Think of how much time is spent researching customers who don’t have accessibility requirements, and how much dedication our designers put into interfaces for people who have full eyesight and hand control. I’ve yet to see a company that puts that much effort into accessibility options.

Perhaps we can settle for giving accessibility a special place in our design and testing workflows. Following WCAG is a good start, but what about having design reviews and customer interviews with people with accessibility needs? What about having a tester or two in a big org who themselves have accessibility needs? I feel like those could be some good places to start.

Your exposure to accessibility research depends very much on the sector you work in. UK central government departments and the digital agencies that work for them do a huge amount of this research. It wasn’t always the case, but the 2018 public sector websites accessibility regulations changed things overnight. The new law has real teeth and active monitoring by GDS, which the DDA and Equality Act never had.

The wider public sector
This sector doesn’t do so much research, but it does a lot of accessibility testing and remediation. There’s a huge legacy of existing websites, mobile apps and documents to fix first, and the culture and learnings will eventually feed into new developments.

Many of the largest companies like banks and supermarkets also do a lot of accessibility research, although it’s often undermined by corporate incompetence with one team undoing the good work done by another or failure to maintain the high standards that were achieved.

It’s the SME sector that’s the real problem and always has been. With very few exceptions, they are unaware of the issue, wouldn’t care anyway, don’t have the money, don’t see the benefits and (rightly) perceive the legal risk as negligible. In the daily struggle to survive, “doing the right thing” is nowhere on their radar.

Even in the organisations that do a lot of research, maintaining the standard over time is difficult. Budgets and people with the necessary skills are made available during the initial development projects and there are specific targets for accessibility. However, that often disappears when systems go into production, with nothing to prevent the subsequent creation of inaccessible web content and documents.

Don’t knock WCAG
WCAG conformance alone doesn’t make websites as accessible as they can be, but it’s an essential technical foundation and should be the starting point for all development teams. More advanced activities such as user research will be wasted if you don’t get the foundations right.

My frustration is that accessible design and development is pretty easy if you know what you’re doing, but almost everyone makes it difficult by making bad decisions. Top of the list is using JavaScript frameworks without first fixing the appalling code they create. Second is the assumption that all the accessibility issues can be fixed with one short round of testing and fixing at the end.
