At my company we are in the process of writing an Accessibility Report. My question is: after these have been created, what do other companies use them for?
For example, beyond its primary purpose, we are using it as a guide to see what percentage of the WCAG guidelines we meet, which we can feed back to our customers.
Let me know what your company does as we are looking for other ways to use this information too.
To start on a negative note, calculating the percentage of WCAG guidelines you meet isn't useful because it has no statistical validity. As with other testing metrics, WCAG success criteria are not fungible. The only values that have any validity are 0% and 100% - anything in between is meaningless.
Possible uses of the report include identification of training needs and opportunities for process improvement. In pretty much any aspect of life I don't believe it when people say things like "I will be more careful next time", "I won't do that again" or "I will try harder". Behaviour doesn't change unless you do something to change it.
I see this all the time, with companies and individuals making the same mistakes again and again. Even if an individual does improve, the improvement doesn't happen across the team or organisation, and of course there is no way for new team members to avoid making those mistakes.
Only process change can do that, such as building (and actually using) a code library of accessible components, building automated accessibility tests into your CI process, getting an accessibility specialist to write coding guidelines at the wireframe or creative design stage, testing at the component / template stage etc.
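To give a rough illustration of the CI piece, a check along these lines could fail the build whenever new violations appear. This is only a minimal sketch: it assumes Playwright for Python and the axe-core engine loaded from a CDN, and the target URL and pinned version are placeholders, not anything from a specific project.

```python
# Minimal sketch of an automated accessibility check that could run in CI.
# Assumes Playwright for Python; axe-core is injected from a CDN.
# TARGET_URL and the pinned axe-core version are placeholders.
import sys
from playwright.sync_api import sync_playwright

AXE_CDN = "https://cdn.jsdelivr.net/npm/axe-core@4.9.1/axe.min.js"
TARGET_URL = "https://example.com/"  # placeholder

def run_axe(url: str) -> list:
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url)
        page.add_script_tag(url=AXE_CDN)  # inject the axe-core test engine
        results = page.evaluate("async () => await axe.run()")
        browser.close()
    return results["violations"]

if __name__ == "__main__":
    violations = run_axe(TARGET_URL)
    for v in violations:
        print(f'{v["id"]}: {v["help"]} ({len(v["nodes"])} instances)')
    # Fail the CI job if anything was found, so regressions block the build.
    sys.exit(1 if violations else 0)
```

Automated checks like this only cover a subset of WCAG, of course; the point is that they run on every build without anyone having to remember to do them.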
In the case of multimedia, all the necessary accessibility features can be planned from the start (eliminating failures and reworking later), but it requires both training and process change. I just wrote a training course on the topic.
In the case of content creation, you might decide to create writing guidelines to ensure meaningful link text, front-loading of information, presentation of information as lists rather than long paragraphs etc.
Thank you so much for your answer. Our customer has asked whether we are X% accessible, which is where our idea came from to compute it from how many of the WCAG guidelines we meet. Would you have any other suggestions?
Sadly for me and so many others I have rarely been involved in a project where WCAG and Section 508 were priorities.
As Steve pointed out, percentage of compliance isn't very meaningful except as a general "feel good/bad" metric (0% or 100% compliance being the exceptions, of course). If that 10% non-compliance is in a critical function or affects a particular ability area, that stat becomes effectively 0%.
So you might consider breaking down the report to cover various workflows, features and compliance types. If the Accessibility Test was performed by a consulting company specializing in that work, I would hope their evaluation deliverable would go into much of that detail.
I have always refused to give percentage accessibility levels because it's statistically invalid. I've been asked plenty of times, but it's not negotiable - I absolutely will never do it. Frankly, I am astonished that anyone would think you could represent such a multi-faceted property as a single number.
For starters, the level of WCAG conformance is not necessarily correlated with the quality of the user experience, so which one do they want a number for? And what are they going to do with it?
Secondly, you might fail a particular success criterion in ten different ways. What happens if you fix nine of them? You still fail it, so your numerical score doesn't change even though the website is now significantly more accessible than before.
You don't have to give your boss, client or other stakeholders everything they ask for. But you do need to be able to explain why you're not going to do it. And if they ask for any kind of software testing metrics, the answer should almost always be "no, I'm not doing that".
I run "a consulting company specialising in that work". Our guiding principle is that test reports should contain as little information as possible, while meeting the client's needs. Anything extra is waste that the client is paying for unnecessarily.
The issue of reporting on "various workflows, features and compliance types" rarely arises because we recommend incrementally testing small areas rather than doing one "big bang" test. The client fixes the issues where they were found and applies the fixes throughout the application (ideally by fixing code libraries).
It's certainly possible to do a detailed impact analysis, showing how accessible each area is to each user group. But what are you going to do with that information? Most of our clients have to become fully WCAG conformant, which means fixing everything. The analysis isn't even useful for prioritisation because other factors are more important, such as maximising the efficiency of development resources.
We use VPATs to generate Accessibility Conformance Reports. These used to be mostly for answering customer questions about the accessibility of our tools. We are now working on a workflow to turn the filled-out VPATs into JIRA tickets.
Emailing the report to teams turned out to be a dead end. The information was often not shared with the whole team and the content was not easily understandable for team members. While the report probably gives a nice overview for clients, it is completely useless for bug reporting. How do you address partial compliance with e.g. captioning? Bug reports need the exact pages that have videos with no or incomplete captions. That information is in the filled-out VPAT.
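As a sketch of the kind of workflow we have in mind (not our actual setup: the CSV column names, Jira project key and credentials below are placeholders), each non-supported criterion in an exported VPAT could become a ticket via Jira's standard REST API:

```python
# Rough sketch: turn rows of an exported VPAT/ACR CSV into Jira issues.
# Assumes columns named "Criteria", "Conformance Level" and "Remarks";
# the Jira URL, project key and credentials are placeholders.
import csv
import requests

JIRA_URL = "https://your-company.atlassian.net"   # placeholder
AUTH = ("bot@example.com", "api-token")           # placeholder credentials
PROJECT_KEY = "ACC"                               # placeholder project

def create_issue(summary: str, description: str) -> None:
    payload = {
        "fields": {
            "project": {"key": PROJECT_KEY},
            "issuetype": {"name": "Bug"},
            "summary": summary,
            "description": description,
        }
    }
    r = requests.post(f"{JIRA_URL}/rest/api/2/issue", json=payload, auth=AUTH)
    r.raise_for_status()

with open("vpat_export.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        # Only raise tickets for criteria that are not fully supported.
        if row["Conformance Level"].strip().lower() != "supports":
            create_issue(
                summary=f'A11y: {row["Criteria"]} - {row["Conformance Level"]}',
                description=row["Remarks"],
            )
```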
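If it helps, here is a rough sketch of how those exact pages could be located automatically. It assumes HTML5 `<video>` elements with `<track kind="captions">`, and the page list is a placeholder; embedded players that attach captions some other way would still need a manual check.

```python
# Rough sketch: list pages containing <video> elements without a captions
# track, so each one can go into its own bug report. Heuristic only.
import requests
from bs4 import BeautifulSoup

pages = ["https://example.com/about", "https://example.com/tutorials"]  # placeholder URLs

for url in pages:
    soup = BeautifulSoup(requests.get(url).text, "html.parser")
    for video in soup.find_all("video"):
        has_captions = any(t.get("kind") == "captions" for t in video.find_all("track"))
        if not has_captions:
            print(f"{url}: video {video.get('src') or '(source elements)'} has no captions track")
```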
Yeah, I wouldn't expect much traction from an emailed report, unless work items were attached, that is. Creating work items/defects/bugs helps illustrate the scope of the report and its effect on the product. Just the report in email is a passive thing, which makes it easy to ignore or for people to wait for it to become their problem. With actual work objects, it demands greater attention.
I'll tell you what we do, with the preface that any non-conformance means that you aren't compliant…
OK? Let's move on from that.
To directly answer your question: you should use the report to create an accessibility statement and publish it on your website. It should list all of the issues and a timeline for when they will be fixed.
Alongside that, we have a dashboard giving an overview that can be bubbled up.
What we wanted to do was have an obvious and easily digestible idea of the quality of something. The failures can obviously go in a report, with the dashboard as the executive summary.
Each failure is assessed for its individual impact. That impact has a rating from negligible to critical.
From this any bugs are raised with the associated impact, which should inform their importance.
We then create a dashboard with the minimum amount of information needed. This dashboard uses a medal rank calculated based on the impact of the problems found.
There is one medal rank for each of the 4 principles. If a principle has only negligible issues, or no issues at all, it can be Gold. Silver means the worst issue is minor, Bronze means moderate. Any critical issues mean no medal and there are big problems.
Splitting the ranking across the 4 principles lets you see easily where the strengths or weaknesses lie.
Say there's a Gold for Perceivable and a Bronze for Understandable: it could mean that all the colours and fonts on the site are great, but none of the labels on your forms are associated with the input fields.
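To make the calculation concrete, here is a small sketch of how the medal per principle could be computed. Only the impact scale and the medal mapping follow what I described above; the function and the example data are invented for illustration.

```python
# Sketch of the medal-per-principle idea: the worst impact rating found
# under a principle decides its medal. Data shapes are illustrative.
IMPACT_ORDER = ["negligible", "minor", "moderate", "critical"]
MEDALS = {"negligible": "Gold", "minor": "Silver", "moderate": "Bronze", "critical": "No medal"}

def medal_for(impacts: list[str]) -> str:
    """impacts is the list of impact ratings for one principle."""
    if not impacts:
        return "Gold"  # no issues at all is also Gold
    worst = max(impacts, key=IMPACT_ORDER.index)
    return MEDALS[worst]

findings = {  # invented example data
    "Perceivable": ["negligible"],
    "Operable": ["minor", "negligible"],
    "Understandable": ["moderate", "minor"],
    "Robust": [],
}
for principle, impacts in findings.items():
    print(principle, "->", medal_for(impacts))
```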
Most organisations put all the non-conformances into a huge Word document with sections for each WCAG success criterion. I hate that format and so do the clients I have asked. I have seen these reports exceed 250 pages!
We use a spreadsheet instead, with each row containing one success criterion and each column containing the results for one page or component. We use both colour coding and text to convey the status of each cell in addition to the test results and recommendations. This has the advantage that you can look down a column to see everything about a page, or look across a row to see the results for that success criterion across the website. That's especially useful for identifying common or similar issues on different pages.
There's much, much more to it than that and we use a lot of VBA to perform special functions. For example, I'm working on a feature that will put all the non-conformances into a CSV that can be imported into Jira or another bug tracker for clients who want that.
One major drawback of the Word report is that it only records non-conformances. If nothing is recorded for a success criterion on a particular page, you don't know if it was tested and passed, or if it wasn't tested at all. In our spreadsheet report, we record passes, non-conformances and also if a success criterion is not applicable (and why). If a cell is empty, we know that test has not been done.
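For anyone curious what generating such a matrix might look like in code, here is a cut-down sketch using openpyxl rather than VBA. The criteria, pages, statuses and colours are illustrative, not our actual report format.

```python
# Sketch: results matrix with one row per success criterion and one column
# per page, colour-coded by status. Data and colours are illustrative.
from openpyxl import Workbook
from openpyxl.styles import PatternFill

FILLS = {
    "Pass": PatternFill("solid", fgColor="C6EFCE"),   # green
    "Fail": PatternFill("solid", fgColor="FFC7CE"),   # red
    "N/A":  PatternFill("solid", fgColor="D9D9D9"),   # grey
}

pages = ["Home", "Search", "Checkout"]
results = {  # invented example data: {success criterion: {page: status}}
    "1.1.1 Non-text Content": {"Home": "Pass", "Search": "Fail", "Checkout": "Pass"},
    "2.4.4 Link Purpose":     {"Home": "Pass", "Search": "Pass", "Checkout": "N/A"},
}

wb = Workbook()
ws = wb.active
ws.append(["Success criterion"] + pages)
for criterion, by_page in results.items():
    # An empty cell means the test has not been done yet.
    ws.append([criterion] + [by_page.get(p, "") for p in pages])
    for col, page in enumerate(pages, start=2):
        cell = ws.cell(row=ws.max_row, column=col)
        if cell.value in FILLS:
            cell.fill = FILLS[cell.value]  # colour as well as text, per the report format
wb.save("a11y_results_matrix.xlsx")
```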
I am currently working on our strategy for handling accessibility reports. For context, we are a company that makes websites for clients, and increasingly these clients ask us to fill in a VPAT so they can publish a report. We have actually always done accessibility testing, but these reports are a more recent development.
There are several steps to my process:
I test the product and fill in a spreadsheet I created for that purpose. Each bug/issue gets reported. The focus is on the developers at this stage because we are still fixing issues. So I report the issue, give code samples, etc. I also state the WCAG rule that is being violated.
Devs fix bugs, then I retest.
Update the spreadsheet.
Create the accessibility report: This is still based on the VPAT format. It is a big table where I enter for each rule whether the site passes or not. If not, I describe the issue in a language that hopefully helps people with access needs understand what existing issues there are with the site (a rough sketch of that roll-up is below).
Note: Yes, ideally, each issue should be addressed. But there are many reasons why some remain (for a short or longer time). And having these listed and described helps people understand where they can expect difficulties in using the site. This is in my opinion the great benefit of accessibility reports (other than motivating the developers to fix the issues).
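To show what I mean by rolling the per-issue rows up into the big table, here is a sketch that maps issues to the conformance terms a VPAT/ACR typically uses ("Supports", "Partially Supports", "Does Not Support"). The issue list shape and severity labels are invented for illustration; in practice the data would come from the test spreadsheet.

```python
# Sketch: roll per-issue findings up into VPAT/ACR conformance terms.
# The issue list shape is an assumption, not a real data export.
from collections import defaultdict

issues = [  # invented examples: (WCAG criterion, severity, description)
    ("1.1.1 Non-text Content", "minor", "Decorative icon announced by screen readers"),
    ("2.4.6 Headings and Labels", "blocking", "Search form field has no label"),
]

by_criterion = defaultdict(list)
for criterion, severity, description in issues:
    by_criterion[criterion].append((severity, description))

def conformance(criterion: str) -> str:
    found = by_criterion.get(criterion, [])
    if not found:
        return "Supports"
    if any(severity == "blocking" for severity, _ in found):
        return "Does Not Support"
    return "Partially Supports"

for criterion in ["1.1.1 Non-text Content", "1.4.3 Contrast (Minimum)", "2.4.6 Headings and Labels"]:
    print(criterion, "->", conformance(criterion))
```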
This has given me lots of ideas on how to improve our processes. Thanks for sharing! I really like the idea of incremental testing. We are currently changing our processes to do this, and also to introduce more automated tests that run long before I am even given a component, page or even site to test.
Hi @sles12
I stumbled on the VPAT recently and the fact that it is a checklist immediately caught my attention.
Noting that although it has a WCAG version, it is presumably a little biased towards the US market, i.e. the US Rehabilitation Act of 1973? Would you recommend this as a resource for UK-based companies, or might there be better options? @AdyStokes - I don't think I've seen you mention this, but would you recommend this format please?
Many thanks, Dan
Thanks for tagging me Dan, I missed this thread the first time around so interesting to read it now.
To answer your question first: yes, it was created initially for the US, in 2001 I think, as a way for US companies to demonstrate conformance to Section 508 of the Rehabilitation Act. But it has come to be an accepted way of demonstrating conformance to WCAG (Web Content Accessibility Guidelines), which are not focused on a single country's laws or acts.
When I speak to companies who don't know where to start, I recommend getting someone in to create a VPAT (Voluntary Product Accessibility Template) as a first step. From there they can create tickets for the most impactful findings as well as creating a company Accessibility Statement. Both of those can be used to demonstrate where a company is on accessibility. In some sectors they are mandatory for supplying services, for example educational support software.
So in short, yes, I would highly recommend using one, but if you are not used to conducting accessibility testing or reviews, be aware that automated tools cannot fill one in for you. I created a checklist with testing hints for WCAG a while ago. I haven't reviewed it for some time, but it should give a reasonable idea of how to test each guideline. Please excuse my hosted site as it has quite a few accessibility issues itself. I really need to find the time to migrate it some day.
Thanks very much @AdyStokes - good to hear that this is a useful addition. In terms of "getting people in" though, do you feel that accessibility testing (like fully-fledged penetration testing) is beyond the capability of the average test team (or ideally the IT department as a whole)?
How far would you expect teams to get without formal training, I guess, is the question, as that would help set expectations with line managers.
To be honest, that's quite a big question, as it is hard to define an "average" team. But I will say that, just like a penetration or security testing specialist, an accessibility specialist will find things an "average" team wouldn't. That doesn't mean the team can't address, for want of a better expression, the low hanging fruit.
The top six findings of the WebAIM Million homepage survey are, every time: low contrast, missing alternative text on images (which is not necessarily an issue), missing form input labels, empty links, empty buttons, and missing document language. None of these are particularly hard to test for or fix.
There's so much help out there for testers, designers, developers etc. that poor accessibility has no excuse apart from lack of awareness. Once you are aware, not doing accessibility is choosing to exclude people, in my opinion. It might sound harsh but it is true.
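To illustrate how approachable some of those checks are, here is a rough heuristic sketch using BeautifulSoup over a fetched page. It only covers the structural checks (contrast needs a different tool), the URL is a placeholder, and a manual review or a full test engine will catch far more.

```python
# Sketch of quick checks for some of the common failures listed above.
# Heuristic only; it will not catch everything a manual review would.
import requests
from bs4 import BeautifulSoup

html = requests.get("https://example.com/").text   # placeholder URL
soup = BeautifulSoup(html, "html.parser")

problems = []
if not (soup.html and soup.html.get("lang")):
    problems.append("Missing document language (<html lang>)")
problems += [f"Image without alt attribute: {img.get('src')}"
             for img in soup.find_all("img") if img.get("alt") is None]
problems += [f"Empty link: {a.get('href')}"
             for a in soup.find_all("a")
             if not a.get_text(strip=True) and not a.find("img")]
problems += ["Empty button"
             for b in soup.find_all("button")
             if not b.get_text(strip=True) and not b.get("aria-label")]
labelled = {label.get("for") for label in soup.find_all("label")}
problems += [f"Input without a label: {i.get('name')}"
             for i in soup.find_all("input")
             if i.get("type") not in ("hidden", "submit", "button")
             and i.get("id") not in labelled and not i.get("aria-label")]

print("\n".join(problems) or "No obvious issues found by this rough check")
```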
Absolutely agree with your point that the motivation for the project team as a whole should be about avoiding exclusion. In terms of the "average team" and an example of testing a website - there's a divide between those who test with dev tools open and those who test purely with what's in front of them. We have more of the former type of tester, and I don't think the latter group would be that useful in this instance without training.
This is something we want to develop further though and perhaps it would be useful to go through the checklist ourselves as a measure of where we are?
Our developers of course would also need to incorporate these same measures into their code, since it will otherwise get very tedious testing something that was never designed to pass!
Again, thanks for your invaluable insight (and just to mention - I am aware of an A11Y list of tools for various aspects of testing - unhelpfully though, we have conflicting security requirements at work that block many of them!)