Testing is getting in the way of quality

Amazing talk here, from 2011:

https://youtu.be/yOrPuMCdVXA?si=0QaZGS8NYip4mvk9

My question is: what has changed in the last 13 years?


A few things, I think. Some organisations have used ways other than testing to mitigate risk: deploy and release strategies such as dark launches and feature toggles, plus observability, tracing, chaos engineering and the like. Cindy Sridharan’s blog is a deep source on these.
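
To make one of those concrete, here is a rough sketch of a percentage-based feature toggle of the sort used for dark launches; the flag name, rollout figure and hashing scheme are all invented for illustration rather than taken from any particular library:

```python
import hashlib

# Minimal percentage-based feature toggle, of the kind used for dark launches:
# the new code path is deployed but only exposed to a slice of users.
# Flag names and percentages here are invented for the example.
ROLLOUT = {"new_search": 10}  # flag -> percentage of users who see it

def is_enabled(flag: str, user_id: str) -> bool:
    """Deterministically bucket a user into 0-99 and compare to the rollout."""
    if flag not in ROLLOUT:
        return False
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < ROLLOUT[flag]

# Callers branch on the toggle; risk is limited by exposure, and the new path
# is watched via production telemetry rather than only pre-release testing.
if is_enabled("new_search", user_id="user-123"):
    results = "new search path"       # placeholder for the new behaviour
else:
    results = "existing search path"  # placeholder for the current behaviour
```

The idea is that risk gets managed by limiting exposure and watching production behaviour, rather than by ever more pre-release testing.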

However, the majority of organisations still use testing to mitigate risk, when it’s just one technique for doing so. Testers do their bit to propagate this too, mostly unknowingly I think.

To my mind, if you embraced multiple methods of mitigating risk then you would achieve better shippable quality, which testing alone just can’t do.


So, shift-right testing?


There were a few challenges with this talk and a few others like “no testers required”.

Companies like Google and Microsoft are not really representative of where most testers work; their infrastructure, tooling and access to a much broader community can be very different. Sure, there are lessons to learn, but it’s not a great idea to start testing like Google if you are not a Google-type company.

They also, for me, did not make clear what testing meant to them at the start of the presentations. You’ll notice a lot of mentions of testing artifacts and test cases, so a lot of the premise was potentially based on known risks and scripted testing.

When that’s the view of testing, I tend to agree with most of the points raised, but it’s only a subset of testing in my view and not so closely linked to how many people view testing these days.

There are some great points on the value being in the activity of testing rather than the artifacts; this has a lot of backing from many different groups of testers. This is the interesting part: some of the groups the talk was designed to challenge turn out to be on the same page.

It does not entirely miss a very different, fundamental view of testing (the risk discussions, exploration, discovery and investigation), but in some ways it flags this as separate from the testing it highlights as slowing down development, i.e. only the testing of knowns under the verification model, in order to get its point about waste across.

Those other things they moved to a group called specialists; in the presentation they called out security, accessibility and internationalization, so these specialisations were not the ones they were highlighting as getting in the way.

Those risk specialisations, though, along with a plethora of other risks, are what others fundamentally call testing in general.

So what you can do is agree with everything in the presentations, provided you grasp that they are likely talking mainly about the known-risk side of things and scripted testing.

But at the same time, the testing a lot of us actually do (the risk-based stuff, the investigations and the unknowns) continues as before.

Not much has changed: those different views remain prevalent, and there are still plenty of ideas there that could help teams improve, particularly if they are focused on verification-type testing.


I think the best thing it does is challenge the assumed orthodoxy of QA. I think that often the focus on rapid delivery can be lost amidst the testing process.

Users don’t care about test plans, test cases, or even defect reports. They care about getting their problems solved.

If your users don’t care about these things, why are many in QA so focused on them? It feels like a silo culture where the emphasis is on the goals of the silo, rather than the goals of the stakeholders.

I’d much rather a testing specialist worked with a developer to get defects removed as soon as they are identified, rather than generating reports and documentation. This is an alien and rejected concept to many. I remain curious about why the approach seems to have changed so little since 2011, including the failures.

No arguments from me on that part: in my experience, all scripted testing can be done very well by developers, and in most cases more efficiently.

Though I accept there are sometimes good reasons for automation engineering teams to do this, on the understanding that they lose a bit of efficiency.

It’s just not what my view of testing is about, though, and that’s what made the fundamental premise misleading.

I think that many struggle so much with testing that being able to look at quality is somewhat of a fantasy.

I have met very few testers who can even explain what they mean by “quality”, even though it’s their actual job. There are numerous definitions, the most common and toxic one being the ISTQB’s.

What has changed in the last 13 years? Software development practices have got worse, testing practices have got worse, project management has got worse and software quality has gone down the toilet. This is remarkable given that it was pretty poor to start with. And James Whittaker has left our profession and now runs his own brewery and pub.


There is one aspect here that is often overlooked, which is the customer’s tolerance for bugs. What James describes is absolutely true for direct-to-consumer software, where the power of an individual user is tiny.
In an enterprise context the power dynamics can be quite different, with customers caring a lot more about certain kinds of bugs and also having the clout to complain in an effective manner. If you have that kind of environment, then the risk profile changes and a software vendor needs to be more risk averse.
The other aspect of testing that often gets overlooked is the benefit of a second pair of eyes capable of challenging assumptions and asking questions. That doesn’t have to be a tester, but testers can bring a different worldview to the table.


What can be done to reverse this trend? What can be improved? What new practices can be introduced? What existing practices need to be adopted by more/all teams?

Do you have a reference for the ISTQB definition? What do you feel is so toxic?

Sometimes focusing on users and the quality of the work being done feels like going against the team, manager, or company (who, in their own minds, are focused on the same thing).

ISTQB on Quality:
The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations.

https://istqb-glossary.page/quality/

  • Burn ISTQB to the ground.
  • The testers in the top 2% to 25% should learn from those in the top 1%.
  • The bottom 75% of testers should go and do something else. There’s no need to replace them because their absence will not be noticed.
  • Learn about critical thinking, risk, value etc. Stop this endless fascination with the latest shiny tool.
  • Invest at least 10% of your time in personal development. Do it in your own time if the company won’t let you do it in their time (and look for a new job with a company that will).
  • Insist that your company gives you time off and pays for you to do valuable training courses, attend conferences, meet-ups etc. £2k a year should cover it.
  • Read the vast amount of fantastic stuff written over the last 30 years by Bach, Kaner, Bolton, Alan Richardson and Jerry Weinberg. James Whittaker is a Marmite character, but I like how he challenges orthodoxy and makes you think even if you don’t agree with him.
  • Be assertive. Don’t agree to do stupid things just because someone in authority says to do it. What do they know? Turns out they don’t know much in most cases.
  • Don’t drop into a groove of doing the same stuff in every sprint. Constantly think about the most valuable thing you could be doing.
  • Make risk visible to stakeholders. From a tester’s perspective, the definition of Done is not that everything works properly and there are no bugs. That’s what you have been pretending, but it ain’t so. The definition of Done is that they are not going to give you any more time to do testing in this sprint. That’s fine, it’s their choice. Now it’s your responsibility to tell them what works, what doesn’t and what risks remain.
  • Don’t work unpaid overtime. Testing is infinite, so you can never finish. If your company don’t value the extra hours of testing enough to pay for them, don’t do them. If the product ships with a load of bugs, maybe they will think about paying you overtime in future.
  • Developers, shut the hell up about testing unless we ask your opinion (hint - we won’t).
  • Product managers, listen to your customers. No one wants a new release of an important product (i.e. all the ones I use) every week, two weeks or even every month. We want stability instead of constantly wondering what new bug is going to bite us in the ass. 3 or 4 releases a year is plenty. If you’re developing unimportant products (e.g. social media), do whatever you want because we expect it to be shit and no one cares.

The ISTQB definition primarily focuses on the specified requirements, which are a tiny fraction of the actual requirements. This was always the case, but it is even more so now that requirements are captured as ambiguous stories, lacking any significant detail. When was the last time you saw a proper data dictionary?

The focus on verification of expected behaviours is a fundamental flaw. Testing should be an investigation to find out things you don’t know. Verifying expected behaviours is part of that. Revealing unexpected behaviours is also important, but the ISTQB definition (and indeed their entire testing methodology) does not address this.

The definition also mentions user needs and expectations, which is good. But there is no mention of value or all the other stakeholders.

My preferred definition is “Quality is value to some person who matters”. This takes into account all the stakeholders, not just the product owner and user. It also acknowledges that software can have value even if it doesn’t meet its specified requirements. It may even have value precisely because it doesn’t meet its specified requirements.

Testing is the responsibility of testers and quality is the responsibility of the whole team. I would say testing is not getting in the way of quality; it is contributing to it. People usually think that if software has been tested and QA has given sign-off, it has quality, but that’s not the case.

Suppose the client asked for a search box and gave only one week for the addition of this element. The PM accepted the requirement without consulting the team; however, the task needed more than one week. Since the whole team got just one week, they built the product with some functionality missing, QA gave the sign-off, and the client rejected the product. Can we still say that the product has quality? No. Who was at fault? The team tried to build the product within the given time frame but couldn’t succeed due to the complexity of the functionality; the client shared their requirement, but the PM couldn’t execute their task properly, which affected the overall quality of the product.
Testers did what they were asked to do in one week, and so did the developers, but when things are not clear it affects the overall quality of the product.
Similarly, if developers don’t do unit testing and integration testing, it will impact the testing process and the time allotted to testers for testing.
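
As a rough illustration of the developer-level checks meant here (the search function and its expected behaviour are invented for the example, not anything from the scenario above):

```python
# Hypothetical search helper: the function and its behaviour are invented
# for this example, not taken from any product discussed in the thread.
def search(items, query):
    """Return items whose text contains the query, case-insensitively."""
    q = query.strip().lower()
    if not q:
        return []
    return [item for item in items if q in item.lower()]


# Developer-owned unit tests (pytest style) that pin down the known behaviour,
# so exploratory testing time isn't spent rediscovering it later.
def test_search_matches_case_insensitively():
    assert search(["Red shoes", "Blue hat"], "RED") == ["Red shoes"]


def test_search_with_blank_query_returns_nothing():
    assert search(["Red shoes"], "   ") == []
```

Run with pytest; the point is that the knowns get pinned down by the developers, so the limited time allotted to testers isn’t spent rediscovering them.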

So everyone should do their task properly; only then is quality achieved, and testing contributes to it.

Agree that testing != quality. However, you said that testing is the responsibility of testers, but then spoke about developers doing testing. How do you square that circle?

I don’t like the assumption (“QA has given sign-off”) that experts in testing should be quality gatekeepers. Gates and gatekeeping are always bad. Always. Neither do I like the implied business, dev and test silos.

What you describe doesn’t feel like teamwork. Why was the PM not challenged?

Nope, I’ve already said that I consider these activities to belong to categories other than testing.

Otherwise, everything is testing, right? Building a feature is a test of whether or not people want that feature. It’s endless and unhelpful for those in testing roles or teams attempting to test software well.

I have heard plenty of ‘testing is dead/bad/in the way/about to be automated/LLM’d out of a job’ talks. They still get the attention, more than positive stories of testing being part of a range of activities to shine a light on quality.

Imagine the uproar if someone worked out that everything we did was to get feedback on whether it was the right thing or not.

Imagine.

Thankfully nobody has worked that out. We can focus all of our activities on supporting test silos.

So… For those dozing at the back:

  • Unit testing is not testing
  • Integration testing is not testing
  • Only the things that testers do count as testing

Glad we got that sorted.

But wait. What if someone worked out that it would be better to test all the way through the process? And what if that meant testing became an accountability rather than a role? And that meant that software could be delivered faster with higher quality?

Sounds awful, doesn’t it? Let’s protect testers at all costs.

Why wouldn’t you want to protect testers? Are we such lesser beings with little to offer?

I value the role of the tester on a development team; it appears that you do not, or value it significantly less than I do. That’s fine, you aren’t the first and won’t be the last. I think you wouldn’t have posted the link to the talk above if you did value the role.

You can have testers and team accountability for testing; I’ve been doing it my whole career. Some teams don’t need testers and never have, and that’s fine. In the real world where I exist, most teams aren’t there, though, and a good tester with the right attitude can help teams get closer to it. It certainly won’t happen on its own, and it doesn’t happen by not valuing the role of a tester.
