Accessibility Reports


At my company we are in the process of writing an Accessibility Report. My question is: after these have been created, what do other companies use them for?

For example, beyond its primary purpose, we are using it as a guide to see what percentage of the WCAG guidelines we meet, so we can feed this back to our customers.

Let me know what your company does as we are looking for other ways to use this information too.



To start on a negative note, calculating the percentage of WCAG Guidelines you meet isn’t useful because it’s got no statistical validity. In common with all other testing metrics, WCAG success criteria are not fungible. The only values that have any validity are 0% and 100% - anything in between is meaningless.
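The all-or-nothing point can be illustrated with a tiny sketch (the criteria and results here are invented for the example). The "percentage" metric looks healthy even when a single failure blocks an entire user group:

```python
# Illustrative sketch (not a real scoring tool): why a WCAG "percentage"
# is misleading. Criteria are pass/fail and not interchangeable, so the
# only meaningful summary is conformant (all pass) or not.

def conforms(results: dict[str, bool]) -> bool:
    """True only if every tested success criterion passes."""
    return all(results.values())

def naive_percentage(results: dict[str, bool]) -> float:
    """The tempting but statistically invalid metric."""
    return 100 * sum(results.values()) / len(results)

results = {
    "1.1.1 Non-text Content": True,
    "1.4.3 Contrast (Minimum)": True,
    "2.1.1 Keyboard": False,  # a single failure can block users entirely
    "4.1.2 Name, Role, Value": True,
}

print(naive_percentage(results))  # 75.0 - looks good on paper
print(conforms(results))          # False - the site is not conformant
```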

Possible uses of the report include identification of training needs and opportunities for process improvement. In pretty much any aspect of life I don’t believe it when people say things like “I will be more careful next time”, “I won’t do that again” or “I will try harder”. Behaviour doesn’t change unless you do something to change it.

I see this all the time, with companies and individuals making the same mistakes again and again. Even if an individual does improve, the improvement doesn’t happen across the team or organisation, and of course there is no way for new team members to avoid making those mistakes.

Only process change can do that, such as building (and actually using) a code library of accessible components, building automated accessibility tests into your CI process, getting an accessibility specialist to write coding guidelines at the wireframe or creative design stage, testing at the component / template stage etc.
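As a rough illustration of the CI idea above (all names and the JSON shape are hypothetical): a real pipeline might run a scanner such as axe-core and save its findings as JSON, with a small gate step that fails the build while any violations remain:

```python
# Minimal sketch of a CI accessibility gate (names and JSON shape are
# hypothetical). A scanner's findings are assumed to be saved as JSON;
# this step just fails the build if any violations remain.
import json

def gate(report_json: str) -> int:
    """Return a process exit code: 0 if no violations, 1 otherwise."""
    findings = json.loads(report_json).get("violations", [])
    for v in findings:
        print(f"FAIL {v['criterion']}: {v['description']}")
    return 1 if findings else 0

sample = json.dumps({
    "violations": [
        {"criterion": "1.1.1", "description": "img missing alt text"},
    ]
})
print(gate(sample))  # 1 - the build would be failed
```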

In the case of multimedia, all the necessary accessibility features can be planned from the start (eliminating failures and reworking later), but it requires both training and process change. I just wrote a training course on the topic.

In the case of content creation, you might decide to create writing guidelines to ensure meaningful link text, front-loading of information, presentation of information as lists rather than long paragraphs etc.


Thank you so much for your answer. Our customer has asked whether we are X% accessible, which is where our idea came from: computing it from how many of the WCAG guidelines we meet. Would you have any other suggestions?

Sadly for me and so many others I have rarely been involved in a project where WCAG and Section 508 were priorities.

As Steve pointed out, a percentage of compliance isn't very meaningful except as a general "feel good/bad" metric (0% or 100% compliance being the exceptions, of course). If that 10% non-compliance is in a critical function or affects a particular ability area, that stat effectively becomes 0%.

So you might consider breaking down the report to cover various workflows, features and compliance types. If the Accessibility Test was performed by a consulting company specializing in that work, I would hope their evaluation deliverable would go into much of that detail.

I have always refused to give percentage accessibility levels because it’s statistically invalid. I’ve been asked plenty of times, but it’s not negotiable - I absolutely will never do it. Frankly, I am astonished that anyone would think you could represent such a multi-faceted property as a single number.

For starters, the level of WCAG conformance is not necessarily correlated with the quality of the user experience, so which one do they want a number for? And what are they going to do with it?

Secondly, you might fail a particular success criterion in ten different ways. What happens if you fix nine of them? You still fail it, so your numerical score doesn’t change even though the website is now significantly more accessible than before.
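The instance-versus-criterion distinction can be sketched in a few lines (the data is invented): a criterion with ten open instances still counts as one failed criterion, so fixing nine of them leaves the criterion-level "score" unchanged.

```python
# Sketch of the point above: scores that count failed *criteria* cannot
# see progress made at the *instance* level. Data is illustrative.

def failed_criteria(instances: list[tuple[str, int]]) -> int:
    """Count criteria with at least one outstanding failure instance."""
    return sum(1 for _criterion, open_issues in instances if open_issues > 0)

before = [("1.3.1 Info and Relationships", 10)]
after = [("1.3.1 Info and Relationships", 1)]  # nine instances fixed

print(failed_criteria(before))  # 1
print(failed_criteria(after))   # 1 - same score, much more accessible
```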

You don’t have to give your boss, client or other stakeholders everything they ask for. But you do need to be able to explain why you’re not going to do it. And if they ask for any kind of software testing metrics, the answer should almost always be no, I’m not doing that.

I run “a consulting company specialising in that work”. Our guiding principle is that test reports should contain as little information as possible, while meeting the client’s needs. Anything extra is waste that the client is paying for unnecessarily.

The issue of reporting on “various workflows, features and compliance types” rarely arises because we recommend incrementally testing small areas rather than doing one “big bang” test. The client fixes the issues where they were found and applies the fixes throughout the application (ideally by fixing code libraries).

It’s certainly possible to do a detailed impact analysis, showing how accessible each area is to each user group. But what are you going to do with that information? Most of our clients have to become fully WCAG conformant, which means fixing everything. The analysis isn’t even useful for prioritisation because other factors are more important, such as maximising the efficiency of development resources.


We use VPATs to generate Accessibility Conformance Reports. They were mostly used to answer customer questions about the accessibility of our tools. We are now working on a workflow to turn the filled-out VPATs into JIRA tickets.

Emailing the report to teams turned out to be a dead end. The information was often not shared with the whole team, and the content was not easily understandable for team members. While the report probably gives a nice overview for clients, it is useless for bug reporting. How do you address partial compliance with, e.g., captioning? Bug reports need the exact pages that have videos with missing or incomplete captions. This information is in the filled-out VPAT.
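One way to picture the VPAT-to-tickets workflow mentioned above (all field names here are invented for illustration, not a real VPAT schema): expand each "Partially Supports" criterion row into one bug per affected page, so the tracker gets the page-level detail a summary report lacks.

```python
# Hypothetical sketch: turning a filled-out VPAT row marked
# "Partially Supports" into page-level bug entries a tracker such as
# JIRA could ingest. Field names are invented for the example.

def vpat_to_bugs(row: dict) -> list[dict]:
    """Expand one VPAT criterion row into one bug per affected page."""
    if row["conformance"] != "Partially Supports":
        return []
    return [
        {"summary": f"{row['criterion']}: {row['remarks']}", "page": page}
        for page in row["affected_pages"]
    ]

row = {
    "criterion": "1.2.2 Captions (Prerecorded)",
    "conformance": "Partially Supports",
    "remarks": "Video lacks captions",
    "affected_pages": ["/about", "/training/intro"],
}
for bug in vpat_to_bugs(row):
    print(bug["page"], "-", bug["summary"])
```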


Yeah, I wouldn't expect much traction from an emailed report. Unless work items were attached, that is. Creating work items/defects/bugs helps illustrate the scope of the report and its effect on the product. Just the report in an email is a passive thing. That makes it easy to ignore, or for people to wait for it to become their problem. With actual work objects, it demands greater attention.


I’ll tell you what we do, with the preface that any non-conformance means that you aren’t compliant…

OK? Let’s move on from that.

To directly answer your question: you should use the report to create an accessibility statement and publish this on your website. It should list all of the issues and a timeline for when they will be fixed.

On the other hand we have a dashboard for an overview that can be bubbled up.

What we wanted to do was have an obvious and easily digestible idea of the quality of something. The failures can obviously go in a report, with the dashboard as the executive summary.

Each failure is assessed for its individual impact. That impact has a rating from negligible to critical.

From this any bugs are raised with the associated impact, which should inform their importance.

We then create a dashboard with the minimum amount of information needed. This dashboard uses a medal rank calculated based on the impact of the problems found.

There is one medal rank for each of the 4 principles. If a principle has only negligible issues, or none at all, it can be Gold. Silver means minor issues, bronze moderate. Any critical issues mean no medal and there are big problems.

Splitting the ranking across the 4 principles lets you see easily where the strengths or weaknesses lie.

Say there's a gold Perceivable and a bronze Understandable: it could mean that all the colours and fonts on the site are great, but none of the labels on your forms are associated with their input fields.
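The medal scheme described above can be sketched as a small lookup (the severity-to-medal thresholds are the ones stated in the post; the issue data is invented): each principle's medal is determined by the worst impact rating among its open issues.

```python
# Sketch of the medal-rank dashboard described above. Each WCAG
# principle gets a medal from the worst impact rating of its issues.
SEVERITY = {"negligible": 0, "minor": 1, "moderate": 2, "critical": 3}
MEDALS = {0: "Gold", 1: "Silver", 2: "Bronze", 3: "No medal"}

def medal(issues: list[str]) -> str:
    """Medal for one principle, from the impact ratings of its issues."""
    worst = max((SEVERITY[i] for i in issues), default=0)
    return MEDALS[worst]

dashboard = {
    "Perceivable": medal(["negligible"]),
    "Operable": medal([]),
    "Understandable": medal(["minor", "moderate"]),
    "Robust": medal(["critical"]),
}
for principle, rank in dashboard.items():
    print(principle, "->", rank)
```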


Most organisations put all the non-conformances into a huge Word document with sections for each WCAG success criterion. I hate that format and so do the clients I have asked. I have seen these reports exceed 250 pages!

We use a spreadsheet instead, with each row containing one success criterion and each column containing the results for one page or component. We use both colour coding and text to convey the status of each cell in addition to the test results and recommendations. This has the advantage that you can look down a column to see everything about a page, or look across a row to see the results for that success criterion across the website. That’s especially useful for identifying common or similar issues on different pages.

There’s much, much more to it than that and we use a lot of VBA to perform special functions. For example, I’m working on a feature that will put all the non-conformances into a CSV that can be imported into Jira or another bug tracker for clients who want that.

One major drawback of the Word report is that it only records non-conformances. If nothing is recorded for a success criterion on a particular page, you don’t know if it was tested and passed, or if it wasn’t tested at all. In our spreadsheet report, we record passes, non-conformances and also if a success criterion is not applicable (and why). If a cell is empty, we know that test has not been done.
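A minimal sketch of that grid idea (the data and statuses are invented, and this is plain Python rather than the VBA mentioned above): cells hold "pass", "fail", or "n/a", a missing cell means "not yet tested", and only the non-conformances are exported as CSV for a bug-tracker import.

```python
# Sketch of the spreadsheet layout described above: rows are success
# criteria, columns are pages. A missing cell means "not yet tested",
# which is deliberately distinguishable from a failure.
import csv
import io

grid = {
    "1.1.1": {"/home": "pass", "/contact": "fail"},
    "2.4.4": {"/home": "pass"},          # /contact not yet tested
    "1.2.2": {"/home": "n/a", "/contact": "n/a"},
}
pages = ["/home", "/contact"]

untested = [(c, p) for c, cells in grid.items()
            for p in pages if p not in cells]
print("not yet tested:", untested)

# Export only the non-conformances as CSV for a bug-tracker import.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["criterion", "page"])
for criterion, cells in grid.items():
    for page, status in cells.items():
        if status == "fail":
            writer.writerow([criterion, page])
print(buf.getvalue())
```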

Thanks for sharing, @sles12.

I didn’t know what a VPAT was so I looked it up. I see it stands for “Voluntary Product Accessibility Template”.

Voluntary Product Accessibility Template (VPAT) is a template containing information regarding how an Information and communications technology product or service conforms with Section 508 of the Rehabilitation Act of 1973


I am currently working on our strategy for handling accessibility reports. For context, we are a company that makes websites for clients, and increasingly these clients ask us to fill in a VPAT so they can publish a report. We have actually always done accessibility testing, but these reports are a more recent development.

There are several steps to my process:

  1. I test the product and fill in a spreadsheet I created for that purpose. Each bug/issue gets reported. The focus is on the developers in this stage because we are still fixing issues. So I report the issue, give code samples, etc. I also state the WCAG rule that is being violated.
  2. Devs fix bugs, then I retest.
  3. Update the spreadsheet.
  4. Create the accessibility report: This is still based on the VPAT format. It is a big table where I enter, for each rule, whether the site passes or not. If not, I describe the issue in language that hopefully helps people with access needs understand what issues exist with the site.

Note: Yes, ideally, each issue should be addressed. But there are many reasons why some remain (for a short or longer time). And having these listed and described helps people understand where they can expect difficulties in using the site. This is in my opinion the great benefit of accessibility reports (other than motivating the developers to fix the issues).
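Step 4 of the process above could be sketched roughly like this (all names and data are illustrative, and the two conformance labels are only a simplification of the full VPAT vocabulary): collapse the issue spreadsheet into a report table with one row per WCAG criterion, carrying the plain-language impact description into the remarks.

```python
# Rough sketch of step 4 above: collapsing the issue spreadsheet into
# a VPAT-style table with one row per WCAG criterion. All names and
# data are illustrative.

issues = [  # the tester's spreadsheet after retesting
    {"criterion": "2.4.7 Focus Visible", "status": "open",
     "user_impact": "Keyboard users cannot see which element has focus"},
    {"criterion": "1.4.3 Contrast (Minimum)", "status": "fixed",
     "user_impact": ""},
]

def vpat_row(criterion: str, rows: list[dict]) -> dict:
    """One report row: Supports unless the criterion has open issues."""
    open_issues = [r for r in rows
                   if r["criterion"] == criterion and r["status"] == "open"]
    return {
        "criterion": criterion,
        "conformance": "Does Not Support" if open_issues else "Supports",
        "remarks": "; ".join(r["user_impact"] for r in open_issues),
    }

print(vpat_row("2.4.7 Focus Visible", issues))
print(vpat_row("1.4.3 Contrast (Minimum)", issues))
```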


This has given me lots of ideas on how to improve our processes. Thanks for sharing! I really like the idea of incremental testing. We are currently changing our processes to do this, and also to introduce more automated tests that run long before I am even given a component, page or site to test.


I noticed today that Rachele DiTullio has shared an Accessibility Spreadsheet, which was updated today.

It has some very interesting ideas in there; I very much like the per-component results.