Accessibility for beginners

Remaining Questions from the session

  1. What tools are there for testing desktop apps, i.e., not web-based UIs?
  2. Is there screen reader training available, e.g. for NVDA or a similar tool?
  3. A lot of effort goes into website design. Do you think there is any argument for separating the ownership of business logic and UI between organisations? Could there be an argument for webstore developers only developing an API while third-party specialists in accessibility develop the UI?
  4. Are there ways to improve the accessibility of small touch screens (e.g. smartphones) for motor-challenged individuals?
  5. So many sites are using Captchas now. Does this conflict with accessibility and usability? If it does, what are the alternatives to Captchas?

Hi,
Following this session, I wondered if there is somewhere on the MoT site that gathers all kinds of resources for accessibility testing?
Thanks,
Alon.

There’s a 30 Days of Accessibility Testing challenge - https://www.ministryoftesting.com/dojo/lessons/30-days-of-accessibility-testing

Not necessarily resources, but I did the performance testing one a couple of years ago and it really helped me get to grips with performance testing and feel more confident in it. So much so that I led a performance testing initiative in my last company that is still going to this day. Definitely worth taking a look!


In the first instance, maintaining standards is paramount, so any desktop-based design will still have to adhere to the WCAG guidelines, as well as the Section 508 accessibility compliance standards if your desktop app is going to be for sale or available in the US too.

After that it comes down to mapping your desktop features (menus, hotkeys, dropdowns, workflows, help menus, voice assistance, assistive tech features) to the testing you want to do to verify how easy and intuitive they are to use.

Once you understand the following categories (this is a highly generalised list) across your application, you will be in a position to start identifying whether a suitable tool exists or whether this is better done in an exploratory manner, representing the various user groups with accessibility needs:

  • Application-wide elements - buttons, status indicators, drop-downs, tooltips, sliders
  • Representative elements - windows, panes, dialogues, ribbons, forms, tables, views, re-ordering
  • Critical paths - user journeys, high-use features, shortcuts and hotkeys
  • Helpers for assistive tech, which allow the application to integrate with screen readers, magnifying tools, text-to-speech editors and assistive keyboards

Some assistive tools you can run against desktop apps to see how well they integrate are:
  • JAWS, or Narrator (the native Windows screen reader) for MS apps (screen reading)
  • The macOS Accessibility Keyboard or the Windows 10 On-Screen Keyboard (assistive keyboards)
  • Dragon NaturallySpeaking (voice-to-text conversion for text editors)

However, testing in this way relies on a deployed app, so it pushes testing further down the road in terms of cost and time. If you want to test during the development cycles, exploratory testing and use of the following tools may get you most of the way, depending on which elements of the main categories above you wish to focus on:
  • Accessibility Insights for Windows
  • Microsoft UI Automation, which enables Windows applications to provide accessibility information about their user interfaces (UIs)
  • Color Oracle, a colour-blindness simulator for Windows, Mac, and Linux

Hope this helps!

2. Is there screen reader training available, e.g. for NVDA or a similar tool?

I imagine the tool makers’ own sites would be the best place to start in order to understand how to use the tools and to find help pages and manuals.

I guess it depends on what you are trying to achieve here.

Forgive me if I got it wrong, but I interpret this as a proposal that development, at a basic level, is split into three teams: one that looks after the accessibility parts of the UI, one the main UI, and one the backend that drives both?

The problem with splitting the accessibility away from the main UI design is that you can’t, if you are doing it right. It’s part of the basic UI design, baked into it from the first wireframe and the proposed workflow through the UI.

I would also prefer accessibility to be kept as a discussion point among the whole of the delivery team for an application, in order to educate those who are unfamiliar with the practice. Hiving it off and making it a “specialist” thing does nothing to inform those who are not aware of how important it is. It’s like hiving off performance- or security-focused engineering and making it this mythical (sometimes blocking) thing.

If we are working on the one product that we want to be accessible, we are working from the same codebase. There isn’t a layer of accessibility that you can put on top of the HTML; as I showed in the demo, a lot of accessibility lives in the HTML!

The basics as I spoke about them were:
Headings should be in heading tags: `<h1></h1>`, `<h2></h2>`
Paragraphs should be in `<p></p>`
Emphasis should be indicated with `<em></em>`
The whole thing should be contained in a `<section></section>`
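
Put together, a minimal sketch of that structure might look like the snippet below (the headings and wording here are just illustrative, not taken from the demo):

```html
<section>
  <h1>Accessibility for beginners</h1>
  <p>
    Semantic tags carry meaning as well as appearance, so a screen reader can
    announce the structure and let users jump straight between headings
    <em>without</em> reading the page linearly.
  </p>
  <h2>Why the markup matters</h2>
  <p>The accessibility lives in the HTML itself, not in a layer on top of it.</p>
</section>
```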

With different teams in place, delivering different parts of an app, prioritisation and communication can get in the way. If Georgina delivered something last week and I need to change it or rip it out this week to make the app more accessible, this can make for an awkward conversation. We are both working from the one codebase after all; better that we work on improving our app as one informed group than as two separate ones.

Hope this answers your question.

🙂

I feel like I am the only one, but I am genuinely excited for fold-out phones to become more robust and common, as I feel they will solve a lot of problems for people with extended mobility (tremors) or restricted mobility (arthritis) and make their interactions with their phones a lot easier.

In the meantime, I can talk about some design features I have seen within the software on phones, and in supporting hardware, which try to help in this area.

External keyboards - I know, I know. They run over Bluetooth, glitch, freeze, and then we discover they have been trying to talk to the collar of the dog next door, but if you don’t have the control to manipulate the tiny smartphone keyboard, these are invaluable to have. Also good for those of us who are becoming short-sighted.

Orientation - the ability to turn a phone to landscape and have a feature become bigger and easier to touch accurately is a blessing for motor-based accessibility.

Zooming - no, not video calling, the OTHER zooming: focus. The ability to zoom in on a feature so that its touchable area increases is a brilliant design feature. It started off a bit flaky in early Android phones (the feature scaled up visually but the touchable area did not ☹️), but since about Android 6.0.1 Marshmallow the touchable area has scaled with the feature. This is really good news and the right direction to be moving in.

Sensitivity adjustment - if you have extended mobility, an over-sensitive touchscreen can lead you down some rabbit holes you do not want to go anywhere near, so to combat this some phones allow you to adjust the touchscreen sensitivity.

Swipe adjustment - this brings a lot of the same advantages as the sensitivity adjustment above. If you are not able to manage single thumb or finger swipes and want to just use a double swipe for everything (or vice versa), you can change this on your phone to make it work for you.

Options other than “the hand” - a lot of work was done on mobile phone use to identify which areas of the phone most people used most. These were known as the thumb zones (I kid you not!) and companies paid £10,000s to do the research in order to understand design for neuro-mobile “typical” folk. That thinking is gone now, and the design thinking is about how to get the maximum number of folk using devices. Understanding hand-based mobility issues is a huge part of that (hence the death of the single fingerprint login) - we now have password, eye-reader and voice unlock too.

It’s hard to explain my feelings about traditional captchas without the use of some very extreme expletives. I would like to think there is a dedicated circle of hell where the very worst people go to forever try to click on all the squares with a building in them, never once succeeding.

They are an affront to usability, accessibility and sometimes my sanity! Never mind that what is clear to you is not clear to the program behind the captcha; sometimes they simply don’t work, so you are left stuck without recourse or access to the resources you need.

I get it, I get that we have situations where we need to verify our apps are interacting with a human and not a malicious script or bot.

However, there is some good news: a couple of years ago Google recognised the general inaccessibility (and pain-in-the-backside-ness) of relying on user interactions to drive access via this tech. This was after three publicly released iterations trying to make it better.

It probably helped that the W3C (who publish the WCAG) have written extensively about problems with the tech and proposed suitable alternatives, so the latest version is a non-interactive one that builds a profile of your requests to work out whether you are likely to be a robot or not. What is needed now is for those who implemented the first two versions of the solution to update their sites so they can use v3 captchas.
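
For anyone curious what “non-interactive” looks like in practice, here is a rough sketch of a reCAPTCHA v3 page integration (the site key and the /verify endpoint are placeholders I’ve made up, not anything from the session): the page requests a token in the background, and your own backend then asks Google for a score, with no image grids or checkboxes shown to the user.

```html
<!-- Load the reCAPTCHA v3 script with your own site key -->
<script src="https://www.google.com/recaptcha/api.js?render=YOUR_SITE_KEY"></script>
<script>
  grecaptcha.ready(function () {
    // Request a token silently - the user never sees a challenge
    grecaptcha.execute('YOUR_SITE_KEY', { action: 'submit' }).then(function (token) {
      // Send the token to your own backend, which calls Google's siteverify
      // endpoint and gets back a 0.0-1.0 "likely human" score.
      // '/verify' is a hypothetical endpoint on your server.
      fetch('/verify', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ token: token })
      });
    });
  });
</script>
```

The scoring approach means your backend decides what to do with a low score (extra verification, blocking), rather than forcing every user through a visual puzzle.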

@ruarig and I did a Power Hour on Accessibility Testing last year, which has lots of answers to questions and links to resources. Check it out through the link below.
