How to Articulate the Value of QE?

Hey all.

So I have been driving myself mad the past few weeks on this.
My manager came to me a while back and said that I need to come up with a way to show the value and purpose of our quality team. Basically, he needs a way to justify why we are paying for a team of testers/QE/SDETs, what they bring to the table, and how we quantify it.

As a career-long tester I know the value we bring to a product, a team, or a company. However, this is the first time in my career that I have needed to justify my existence in detail. I am familiar with some metrics that companies like to pull, such as the number of defects/bugs found or test cases created or automated.
However, the value of those metrics is low compared to the value we can actually bring.

So I was curious if others have come across this before and how they were able to articulate this to management.

It can be a variety of things. The most concrete are deliverables: reports, presentations, blogs, etc. demonstrating your expertise and what you bring to the table. Teaching and training are also valid and valuable ways to demonstrate expertise.

Business people value having a dedicated person who is thinking about the risks of new products and attempting to minimize that risk. That can be difficult to articulate. Defect counts are one way, but sharing a story about how curiosity led to deeper analysis, which in turn avoided a bigger problem reaching production, is more valuable.

How does a doctor demonstrate their value to you when you get a physical? You might get a report and an explanation of how your habits need to change to maintain or return to health. You could think of QA as being a doctor to the business, as one example.

Hello @crashed!

Testing is always optional. When we look at it from this point of view, you might approach your question by asking what happens to our products if we do not assess risk or test.

I like what @alanmbarr has described. I especially like sharing stories. While defects sometimes make for interesting stories, I hope there are also stories motivated by risk, such as security risks, incorrect-information risks, availability risks, or incorrect-calculation risks.
@alanmbarr also alludes to a partnership with the business. I think a collaborative partnership is very valuable: one person focused on business value, and another focused on an independent, unbiased look at multiple what-if scenarios. Together, they create great products. Might there be a history that could demonstrate and support testing’s value?

Joe

Thanks for the replies @alanmbarr and @devtotest

The problem I am finding is that management is very analytical, so they want reports and numbers. Sure, I can come up with these things, but the effort to do all this is somewhat meaningless in the long run and gives a false sense of security.

The issue is more that they do not understand what it is we do as a craft, and they would argue devs can do the same.
We all know that is not the case. Sure, developers can test, and they should. However, they have a different view and are generally too close to the product to find some of the issues.

I can wax poetic all day long to them about the things we do, especially when I have testers integrated into a scrum team. The scrum teams can validate that too; however, it’s somewhat subjective, and there are no numbers I feel I can easily tie back to each scrum team or to the quality team as a whole.
So I’m just looking for ideas on how best to articulate this and gather the data, based on others’ experiences.

What have you tried?

You could reach out to the people who hired you and your teammates.
They probably have some idea of why they wanted a team of testers.

This probably won’t help in the short term - especially as so much of the world is in lockdown and not using systems just at the moment - but I would start collecting news stories of major corporate IT failures!

If you can point to some high-profile failures, plus some back-of-the-envelope metrics with your best guess as to the cost of fixing, the impact on sales, and the impact on the company’s public profile, this can back up your argument.

Then you could point to these well-documented high-profile cases and say “This is what I think this failure of testing cost Company X. How much do you think it would cost us? What price would you put, Mr. CEO, on your not having to make that embarrassing statement of failure to the press? The Government? The shareholders?”

The failure of the control software in the Boeing 737 MAX has been put down to failures in testing. The reputational damage to that company might turn out to have an existential impact. Ask the question: can we afford not to test properly and in depth?

So far I have tried to derive some stats from the number of bugs found in the system and create some dashboards to present that.

This is a good start, but since we don’t have 100% test coverage on all parts of the product, it’s not very telling. I need to work out a way to break it down and see what the numbers look like in the areas where we do have coverage, but the tooling for that is not great.

Outside of that, I have tried to set up regular training sessions with the development teams, but we can only do so many of those, as they take time and take us away from our objective of testing/automating.

The problem with that is they don’t know. They are looking to me to help them figure this out, and honestly this is where I am stumped.
Like I say, I know the value we as testers bring to a product, but I am finding it harder to put it into terms that make sense and demonstrate that value.

Whenever this question pops up, I am reminded of something we implemented in an organisation. The service lent itself to this because it was easy to calculate the cost of a bug: a bug that caused a service outage would cost $1,000,000/hour or so. So what we did was add a classification saying how much of an outage a bug would cause. Instead of reporting the number of bugs found, we reported the amount saved. This was put in place not because we needed to defend the value of testing, but to help testers spend their time on the right things. And simply transforming a “Priority 1 / A” bug into a $1,000,000 bug makes people understand value better.
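
The scheme above can be sketched in a few lines. This is only an illustration: the severity labels, cost figures, and bug records are made-up assumptions, and a real version would use your own organisation’s outage-cost estimates.

```python
# Sketch of "report amount saved" instead of "report bug count".
# Hypothetical mapping: the estimated outage cost a bug of each severity
# would have caused had it reached production ($1,000,000/hour baseline).
SEVERITY_COST = {
    "P1": 1_000_000,  # full one-hour outage
    "P2": 250_000,    # partial outage / degraded service
    "P3": 20_000,     # minor customer impact
}

def amount_saved(bugs):
    """Total estimated cost avoided by catching these bugs before release."""
    return sum(SEVERITY_COST[bug["severity"]] for bug in bugs)

# Illustrative bug records, not real data.
bugs_found = [
    {"id": "BUG-101", "severity": "P1"},
    {"id": "BUG-102", "severity": "P3"},
    {"id": "BUG-103", "severity": "P2"},
]

print(f"Estimated amount saved: ${amount_saved(bugs_found):,}")
# prints: Estimated amount saved: $1,270,000
```

The point is less the arithmetic than the framing: a dashboard headline of “$1,270,000 saved this quarter” speaks to analytical managers in a way “3 bugs found” never will.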

@ola.sundin This is really great, actually.
We currently classify bugs by impact or severity, using a general guideline of the number of potential users affected: the higher the number of impacted users, the higher the priority.
Maybe putting a dollar value on each one will help us frame this in terms of money saved by avoiding these expensive bugs. Then at the end of the month/quarter/year we can look at the numbers and see whether the money saved exceeds the cost of the team (hopefully it does).

Thx for the suggestion.

@ola.sundin I love the idea of describing something as a “million-dollar bug”!

Walk them through a scenario where they have to approve delivery of the software, and all they’ve been given is an email that simply says “Here is your software”, followed by a link. The only questions you can answer relate to what features were added and how the software was built. Ask them if they’d be comfortable approving that release to thousands or millions of customers.
Now the scary thing is, some managers would be perfectly happy. In that case, no justification you can provide will help. But prudent, realistic, professional managers will have lots of questions about the qualities of the software.
An even more ridiculous scenario: ask them if they’d be happy to pass a snake to their toddler, blindfolded. Because that is in effect what you’re doing without someone performing some sort of inspection. Now, the snake might be harmless, but you don’t know until you look.

@rocketbootkid These are some really good analogies. I often do something similar and use automobiles as a comparison, because building a motor vehicle and building software are very similar: both need to be high quality, and both are comprised of many subsystems.

My only issue is moving from the analogical view to something where I can better prove the impact that quality has, be it fewer defects, a lower number of hotfixes or incidents, etc.
I just need some way to prove we have an actual impact. The problem is, some of those numbers take time to show a trend, and I am being asked to prove the impact after only months, with a few people and a rather large system that we have no quality insight into.

Other people in the thread are giving good ideas, but I would like to go a bit on the “ask them” direction.

Try asking about the story behind your hiring. Maybe there is a story about a developer who was overwhelmed by work testing their own code, or they saw that you have a particular skill that is valuable to the company.

The stories behind a hire usually get lost in time, but they can be valuable to understand why you are in a company.

It’s not meant to be your sole argument or point, but it can be one of them.

Thanks for the reply. They know why they hired me: to help them increase the quality of the products.
However, they don’t know exactly what that means.
I am trying to get them to understand the value and how we help teams build better products.
They still need to tie this back to some numbers to prove it, and this is where I am having trouble.

@crashed, there have been a lot of great comments and suggestions. We’ve been working through a similar situation: a new VP came in who is very focused on the analytical side. The VP fully realizes the value of testing, and we’ve had several great conversations on the topic. His push for analytics is to help the teams focus on the risks and continue to move forward. Not knowing the full story, I’d make a couple of suggestions that will hopefully help.

  • Start with existing customer data: as you mentioned, which issues escaped to customers, and how long did they take to resolve (MTTR)? Working with your team, how can you address these areas and track that it is working?

  • Code and test coverage, with the right emphasis, can help show the team(s) are focused on testing areas of risk.

  • With engineers embedded in the scrum teams, how is this improving each team’s efficiency in ensuring stories meet their acceptance criteria in a timely manner?

Combining actual data with the impact defects can have on customers can not only help justify the existence of the group, it can help with making decisions on areas to focus on going forward.
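
The MTTR tracking in the first suggestion is easy to automate once incident records are exportable. A minimal sketch, assuming hypothetical timestamp fields (adapt to whatever your ticketing system actually exports):

```python
# Compute mean time to resolve (MTTR) from escaped-issue records,
# so the trend can be tracked release over release.
from datetime import datetime

# Illustrative incident records, not real data.
incidents = [
    {"opened": "2020-03-01T09:00", "resolved": "2020-03-02T09:00"},  # 24 h
    {"opened": "2020-03-10T12:00", "resolved": "2020-03-10T18:00"},  # 6 h
    {"opened": "2020-03-20T08:00", "resolved": "2020-03-21T02:00"},  # 18 h
]

def mttr_hours(records):
    """Mean time to resolve, in hours, across the given incident records."""
    fmt = "%Y-%m-%dT%H:%M"
    durations = [
        (datetime.strptime(r["resolved"], fmt)
         - datetime.strptime(r["opened"], fmt)).total_seconds() / 3600
        for r in records
    ]
    return sum(durations) / len(durations)

print(f"MTTR: {mttr_hours(incidents):.1f} hours")
# prints: MTTR: 16.0 hours
```

A falling MTTR across quarters is exactly the kind of concrete, analytical evidence the thread is asking for.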

Hope this is helpful.

@rob_lee
Thanks so much for replying. It’s great to hear from someone who has been in a similar place to where I currently am.
A lot of what you listed are things I am starting to work on, the bigger one being code and test coverage: analyzing what we have, and increasing both the number and the quality of the tests we are doing.
I currently have engineers in scrum teams, and in speaking with the teams the feedback has been great and they feel there is value, but management can’t tie that back to any specific metrics that explicitly show the quality team’s impact, especially in a short time frame.

Your response is very helpful and validates my current efforts. 🙂

@crashed, the potential tie-back from the scrum teams is the ability to complete work/stories/features more quickly while ensuring they meet the acceptance/done criteria. The additional piece is that, based on your description of the team, I assume there is test automation, which helps provide regression testing, again allowing the teams to focus on delivering value to customers. So if there is a way to show even minimally improved turnaround in feature work, and a shorter development cycle based on test automation, the two metrics together can point to a larger return on investment.
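
Combining those two metrics into a single return-on-investment figure could look something like the sketch below. Every number here is an illustrative assumption (cycle times, feature counts, rates, team cost); the point is only the shape of the calculation.

```python
# Rough ROI sketch: value from faster feature turnaround plus manual
# regression effort saved by automation, divided by the team's cost.
def quality_team_roi(cycle_days_before, cycle_days_after,
                     features_per_quarter, cost_per_feature_day,
                     regression_hours_saved, hourly_rate,
                     team_cost_per_quarter):
    # Value of shipping each feature sooner.
    cycle_savings = ((cycle_days_before - cycle_days_after)
                     * features_per_quarter * cost_per_feature_day)
    # Manual regression effort replaced by automation.
    automation_savings = regression_hours_saved * hourly_rate
    return (cycle_savings + automation_savings) / team_cost_per_quarter

# All inputs are hypothetical placeholders.
roi = quality_team_roi(
    cycle_days_before=10, cycle_days_after=8,     # 2 days faster per feature
    features_per_quarter=30, cost_per_feature_day=2_000,
    regression_hours_saved=400, hourly_rate=75,
    team_cost_per_quarter=120_000,
)
print(f"Estimated ROI: {roi:.2f}x")
# prints: Estimated ROI: 1.25x
```

Even a rough model like this gives an analytical manager a handle to argue about, which is usually more productive than arguing about whether testing has value at all.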

A single statistic means little; the impact of a good testing team is seen across many different metrics in the development cycle. If you can start to lay this out, it will provide a positive mapping to build upon and guide the teams’ activities going forward.
