Hello folks, after listening to a really interesting talk at Accessibility Leeds recently by the Sky app team, I was intrigued by their use of a script (manuscript) to set developers’ expectations of what a screen reader user will hear.
I’m starting my first attempt for our application, as we have new external developers who don’t have much accessibility exposure. I’ve dabbled with both a manuscript style and a table style with columns for Location (page), User Action (navigate, tab, select, etc.) and Screen Reader Response (what the user hears, including default language, edit, read only, etc.).
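To make the table style concrete, here’s a minimal invented example of what a row-per-action script could look like (the page, actions, and announced wording are my own illustration, not the Sky team’s format — exact announcements vary by screen reader):

```
Location   | User Action                  | Screen Reader Response
-----------|------------------------------|------------------------------------
Login page | Tab to the email field       | "Email address, edit, blank"
Login page | Type an address, then tab on | "Sign in, button"
Login page | Activate Sign in with errors | "Error: enter a valid email address"
```

The value is that a developer who has never run a screen reader can still check their work against an expected transcript.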
I’d be interested to know if anyone else has used this technique or has any advice, templates, etc., as it’s brand new to me and something I’d not heard of before. I have reached out to the Sky team to see if they can share anything, but I haven’t heard back yet. Thanks all.
Could we open this up to anyone who does screen reader testing/evaluation: what do you do to confirm the spoken words match what is expected?
Having now seen the Sky scripts, I can say they focus on what elements should do rather than on what will be said. So it’s possible that no one has documented the requirements in quite this way before.
Any help would be greatly appreciated, thanks.
I’m afraid I don’t have much advice to offer, but I’m interested in the replies. I think a course on using a screen reader would be very beneficial; I have used screen readers for testing, but I suspect never properly.
It’s fairly discouraging, as accessibility bugs are generally marked WONTFIX in my experience. How would a tester go about learning enough to have the confidence to advocate more for screen reader compatibility?
Hello Stacey, I’m not aware of any free courses, but I learnt by doing and found a lot of great tips on WebAIM (Web Accessibility in Mind); I’ve popped a link to their keyboard shortcuts below. The key thing to understand is that the screen reader speaks context as well as the words on the screen. For example, in a form field the screen reader reads out the label, then the type (e.g. edit/input), then the current state (blank, protected, etc.), so a name field might be announced as something like “Full name, edit, blank”. Once you understand that, you can begin to follow what you are hearing.
The main things I look for are variations from the experience a sighted person would have: missing text, or important information being skipped over.
I’m not sure I can offer much help on confidence, but there are plenty of business benefits, legal requirements, and moral reasons to make your product accessible.
Some thoughts about screen readers:
A while ago someone wrote that they used a special extension to ensure accessibility on their blog. I was really interested and opened the web page. I already had NVDA installed, so I opened its Elements List. Using this list I tried to build a mental model of the page from the headings, and to figure out what links, form fields, landmarks, and buttons were on the screen. Were the names descriptive? Were the consequences of actions on the page clear to the user?
A fast way to check whether the pictures in my blog have alternative text is to browse quickly through the post with my screen reader. I only need to hear the first few words of each paragraph: if I hear something like “jpg”, I’ve missed one.
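The same “listen for a filename” check can be automated as a first pass before listening through the post manually. This is a hypothetical helper of my own, not an existing tool: it scans HTML for `img` tags that have no `alt` attribute, or whose alt text looks like a filename (the giveaway a screen reader would read aloud).

```python
# Sketch: flag <img> tags with missing or filename-like alt text.
# Uses only the standard library; a linter like axe-core would do more.
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.problems = []  # (line number, reason) for each suspect image

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        alt = dict(attrs).get("alt")
        line, _ = self.getpos()
        if alt is None:
            self.problems.append((line, "missing alt attribute"))
        elif alt.lower().endswith((".jpg", ".jpeg", ".png", ".gif")):
            # A filename read aloud ("photo.jpg") is exactly what you'd
            # hear with the screen reader technique described above.
            self.problems.append((line, f"alt looks like a filename: {alt!r}"))

checker = AltTextChecker()
checker.feed("""
<p><img src="cat.jpg" alt="A cat asleep on a keyboard"></p>
<p><img src="dog.jpg"></p>
<p><img src="bird.png" alt="bird.png"></p>
""")
for line, reason in checker.problems:
    print(f"line {line}: {reason}")
# → line 3: missing alt attribute
# → line 4: alt looks like a filename: 'bird.png'
```

An automated pass like this only catches the obvious cases; it can’t tell you whether the alt text that *is* present actually describes the image, which is why listening through with the screen reader still matters.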
I’ve met a few UX designers, and I loved the way they designed applications for and with users. In my view there is a big gain when UX and accessibility are combined.