MOTBucks March Meetup - Louise Gibbs

Hi Everyone,

The next MOTBucks event is next week. We have the awesome @lgibbs presenting a new talk called “Let’s Save the Titanic”. It’s on 24th March and starts at 8pm.

Come and join us!

We are looking for future speakers too! So get in touch with me or @wildtests if you are interested.


Thanks again Louise!! :clap::clap::clap:

Questions we didn’t get time for at the meetup:

  1. What are some of the main accessibility concerns you find in software you work with? - Edward

  2. What is the most hazardous category in your final diagram? Unknown unknowns perhaps? - Edward

  3. What are some strategies for balancing speed vs safety in software development? - Stu

  4. If we do another episode, will you come on Screen Testing? - Dan Billing


I just finished re-watching this presentation (I caught part of it LIVE), and wanted to add my take on the question from Simon regarding testing tasks that relate to hitting the iceberg. I’ve always been fascinated by anything space-related, so the first thing that came to mind is the fault-injection simulation testing that NASA conducts for space missions. It’s similar to Netflix’s “Chaos Monkey” in that the simulation activities prepare people in advance for situations and issues that could occur in the future. It’s a way to test not only the equipment (hardware and software) and the users (astronauts), but also the support staff (Mission Control) responsible for coordinating a response to mission-critical issues.
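To make the fault-injection idea a bit more concrete, here is a minimal sketch in Python (all of the names, like `TelemetryClient` and `current_status`, are invented for illustration and aren’t from the talk, NASA, or Netflix): deliberately make a dependency fail and check that the caller degrades gracefully instead of crashing.

```python
# A minimal fault-injection sketch (all names are made up for illustration).
# The idea: deliberately make a dependency fail and check the caller copes gracefully.
from unittest.mock import patch


class TelemetryClient:
    """Stand-in for a downstream dependency that might fail at the worst moment."""

    def read_sensor(self) -> float:
        return 42.0  # pretend this talks to real hardware


def current_status(client: TelemetryClient) -> str:
    """Code under test: should degrade gracefully if the sensor read fails."""
    try:
        value = client.read_sensor()
    except ConnectionError:
        return "DEGRADED"  # fall back instead of crashing
    return "NOMINAL" if value < 100 else "ALERT"


def test_status_degrades_when_sensor_fails():
    # Inject the fault: force read_sensor to raise, the way a chaos-style
    # experiment would pull a dependency out from under the system.
    client = TelemetryClient()
    with patch.object(TelemetryClient, "read_sensor", side_effect=ConnectionError):
        assert current_status(client) == "DEGRADED"
```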

Another technique borrowed from the military and made famous by NASA is the use of a “Tiger Team”. Tiger Teams are cross-disciplinary teams of specialized experts brought together to solve or investigate specific problems or critical issues, such as the Apollo 13 explosion in the service module. This differs from the simulation approach in that this is the team that deals with the catastrophe AFTER it occurs, and guides the organization in handling it. It brings to mind the team from “The Phoenix Project” that was always putting out the fires created by every deployment.


The only comment I can make regarding the poor lifeboat training aboard the Titanic is that, while I don’t recall anything about the lifeboat training from the single cruise I’ve been on, I can still remember EVERY STEP of my skydiving training nearly 25 years after making my very first solo jump.


Pretty interesting. The Tiger Team almost sounds like a Triage Team in ERT parlance.

Whoa! That must have been pretty scary!

I would be glad to talk about ‘Empathy, and how it helps towards Software Quality’!

Thanks Venkat, we are taking a break for a few months, but we will definitely consider your talk on our return…

The main concern I have is businesses and organizations not recognizing the value in catering for users with accessibility needs. Businesses tend to care more about profit than about the ethics of excluding groups of users from their website. If certain users are excluded, then they will not be spending money and the business will be losing out.

I used to be in a wheelchair (only for about two years), and I remember cases where I was not able to browse a shop because there wasn’t enough room. The shop assistants offered to bring items to me, but all I wanted to do was browse and make my decisions independently, without a shop assistant towering over me. The result: I didn’t spend money in those shops. And if I wasn’t spending my money there, it is likely others were not either.

With the pandemic, a lot of us have been forced to do more of our shopping online, and there are fewer alternatives available. Businesses could be losing out on an opportunity to gain new customers. In a lot of cases, customers just want to be able to use an application independently, without help. They don’t want to be treated like children who need assistance completing basic tasks. If a business treats all customers with respect and provides a positive user experience, those customers are more likely to remain loyal in the future.

Here is a picture of the diagram referenced in this question

I actually think the Known Knowns could be the most hazardous category.

The other sections allow us to acknowledge the fact that we don’t know everything, and encourage us to intentionally seek out that information.

However, icebergs are constantly changing. They melt, they change shape, and sometimes they even flip over. It is common for documentation not to be kept up to date and requirements to change. This can lead to confusion when developing and changing the application, and severely impact our ability to continue our testing.

We need to keep track of the Known Knowns and check they are still relevant and correct. People often talk about the limitations of automated tests: how they only check the same thing over and over again. That isn’t a bad thing. It gives us an early warning system when something unexpected has changed.
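As a tiny illustration of that early warning system (the function and its “documented” rule below are invented for the example, not taken from any real product): a check that pins down one Known Known, so that if the behaviour ever drifts, the check fails and tells us straight away.

```python
# Illustrative regression check: it "only checks the same thing over and over",
# which is exactly what makes it useful as an early warning when a Known Known drifts.
# The function and its documented rule are invented for this example.


def apply_discount(price: float, code: str) -> float:
    """Documented behaviour: 'SAVE10' takes 10% off; unknown codes change nothing."""
    return round(price * 0.9, 2) if code == "SAVE10" else price


def test_save10_behaves_as_documented():
    # If either assertion starts failing, a Known Known has changed under us.
    assert apply_discount(100.00, "SAVE10") == 90.00
    assert apply_discount(100.00, "UNKNOWN") == 100.00
```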

The first thing we need to do is understand what the business priorities are.

If there is a deadline, why does it exist? What might happen if we don’t complete the work before that deadline? Whatever the reasons, we should make every attempt to meet the deadlines set by the business.

If we don’t think we can meet that deadline, then there are things we can do to mitigate the risks.

  1. Prioritize testing around the most essential and highest-risk areas. Again, it is worth speaking to the business to fully understand what these might be. If you can’t get all the testing done in time, you can still continue testing after it’s been released. If it turns out there is an issue, it’s better late than never.
  2. Ask the team for help. Get the team to swarm on this ticket (that includes developers) so the testing gets done.
  3. Start testing earlier. Start planning earlier, before the dev work is complete, so that you are prepared for what testing needs to be done. Also, asking developers for an earlier (but incomplete) version can give you the opportunity to provide some early feedback and a preview of what the final build will look like.

I wrote an article for TestProject a few years ago about reasons we might need to skip tests, which covers some of the ideas mentioned above.

@danielbilling

Oooh, yes. It’ll be good to get back into listening to podcasts again. I used to listen to them while driving to work, so I stopped when I started working from home. I hope you do create some more episodes.