Amazing talk here, from 2011:
https://youtu.be/yOrPuMCdVXA?si=0QaZGS8NYip4mvk9
My question is: what has changed in the last 13 years?
A few things, I think. Some organisations have used ways other than testing to mitigate risk: deploy and release strategies such as dark launches and feature toggles, plus observability, tracing, chaos engineering and the like. Cindy Sridharan's blog is a deep source of these:
However, the majority of organisations still use testing to mitigate risk, when it's just one technique for doing so. Testers do their bit to propagate this too, mostly unknowingly I think.
To my mind, if you embraced multiple methods of mitigating risk, you would achieve better shippable quality, which testing alone just can't do.
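To make the dark-launch/feature-toggle idea above concrete, here is a minimal sketch (all names, flags and percentages are hypothetical, not from any specific product): a new code path ships dark, then is ramped up for a stable subset of users without a redeploy.

```python
import hashlib

# Hypothetical in-memory flag store; real systems keep this in a config
# service so it can change at runtime without a deploy.
FLAGS = {"new_search": {"enabled": True, "rollout_percent": 10}}

def is_enabled(flag_name: str, user_id: str) -> bool:
    """True if the flag is on for this user, with stable per-user bucketing."""
    flag = FLAGS.get(flag_name)
    if not flag or not flag["enabled"]:
        return False
    # Hash the user id into a 0-99 bucket so the same user always gets the
    # same answer while rollout_percent ramps from 0 to 100.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < flag["rollout_percent"]

def search(query: str, user_id: str) -> str:
    if is_enabled("new_search", user_id):
        return f"new-engine results for {query}"  # dark-launched path
    return f"legacy results for {query}"
```

The point of the bucketing is that risk is mitigated by exposure control rather than by testing alone: a bug in the new path hits a small, consistent slice of users and can be switched off instantly.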
So, shift-right testing?
There were a few challenges with this talk and a few others like "no testers required".
Companies like Google and Microsoft are not really representative of where most testers work; their infrastructure, tooling and access to a much broader community can be very different. Sure, there are lessons to learn, but it's not a great idea to start testing like Google if you are not a Google-type company.
They also, for me, did not make clear what testing was to them at the start of the presentations. You'll notice a lot of mentions of testing artifacts and test cases, so a lot of the premise was potentially based on known risks and scripted testing.
When that's the view of testing, I tend to agree with most of the points raised, but it's only a subset of testing in my view and not so closely linked to how many people view testing these days.
There are some great points on the value being in the activity of testing, not the artifacts; this has a lot of backing from many different groups of testers. This is an interesting part: some of the groups it was designed to challenge turn out to be on the same page.
It does not entirely miss a very different, fundamental view of testing (the risk discussions, exploration, discovery and investigations), but in some ways it flags this as separate from the testing it highlights as slowing down development, i.e. only the testing of knowns under the verification model, to get its point about waste across.
Those other things they moved to a group called specialists; in the presentation they called out security, accessibility and internationalization, so these specialisations were not the ones they were highlighting as getting in the way of testing.
Those risk specialisations, though, along with a plethora of other risks, are what others fundamentally call testing in general.
So you can agree with everything in the presentations provided you grasp that they are likely talking mainly about the known-risk side of things and scripted testing.
But at the same time, the testing a lot of us actually do, the risk-based stuff, the investigations and the unknowns, continues as before.
Not much has changed: those different views remain prevalent, and there are still a lot of ideas there that could help teams improve, particularly if they are focused on verification-type testing.
I think the best thing it does is challenge the assumed orthodoxy of QA. I think that often the focus on rapid delivery can be lost amidst the testing process.
Users don't care about test plans or test cases, or even defect reports. They care about getting their problems solved.
If your users don't care about these things, why are many in QA so focused on them? It feels like a silo culture where the emphasis is on the goals of the silo rather than the goals of the stakeholders.
I'd much rather a testing specialist worked with a developer to get defects removed as soon as they are identified, rather than generating reports and documentation. This is an alien and rejected concept to many. I remain curious about the approach seeming relatively unchanged since 2011, including the failures.
No arguments from me on that part: all scripted testing, in my experience, can be done very well by developers, and in most cases more efficiently.
Though I accept there are good reasons at times for automation engineer teams to do this, on the understanding that they lose a bit of efficiency.
It's just not what my view of testing is all about, though, and that's what made the fundamental premise misleading.
I think that many struggle so much with testing, that being able to look at quality is somewhat of a fantasy.
I have met very few testers who can even explain what they mean by "quality", even though it's their actual job. There are numerous definitions, the most common and toxic one being the ISTQB's.
What has changed in the last 13 years? Software development practices have got worse, testing practices have got worse, project management has got worse and software quality has gone down the toilet. This is remarkable given that it was pretty poor to start with. And James Whittaker has left our profession and now runs his own brewery and pub.
There is one aspect here that is often overlooked which is the customerâs tolerance for bugs. What James describes is absolutely true for direct to consumer software where the power of an individual user is tiny.
In an enterprise context the power dynamics can be quite different, with customers caring a lot more about certain kinds of bugs and also having the clout to complain effectively. In that kind of environment, the risk profile changes and a software vendor needs to be more risk averse.
The other aspect of testing that often gets overlooked is the benefit of a second pair of eyes capable of challenging assumptions and asking questions. That doesn't have to be a tester, but testers can bring a different worldview to the table.
What can be done to reverse this trend? What can be improved? What new practices can be introduced? What existing practices need to be adopted by more/all teams?
Do you have a reference for the ISTQB definition? What do you feel is so toxic?
Sometimes focusing on users and the quality of the work being done feels like going against the team, manager, or company (who in their mind focus as well on the same thing).
ISTQB on Quality:
The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations.
The ISTQB definition primarily focuses on the specified requirements, which are a tiny fraction of the actual requirements. This was always the case, but it is even more so now that requirements are captured as ambiguous stories, lacking any significant detail. When was the last time you saw a proper data dictionary?
The focus on verification of expected behaviours is a fundamental flaw. Testing should be an investigation to find out things you don't know. Verifying expected behaviours is part of that. Revealing unexpected behaviours is also important, but the ISTQB definition (and indeed their entire testing methodology) does not address this.
The definition also mentions user needs and expectations, which is good. But there is no mention of value or all the other stakeholders.
My preferred definition is "Quality is value to some person who matters". This takes into account all the stakeholders, not just the product owner and user. It also acknowledges that software can have value even if it doesn't meet its specified requirements. It may even have value precisely because it doesn't meet its specified requirements.
Testing is the responsibility of testers and quality is the responsibility of the whole team. I would say testing is not getting in the way of quality but contributing to it. People usually think that if software is tested and QA has given sign-off, it has quality, but that's not the case.
Suppose the client asked for a search box and gave only one week for it, and the PM accepted the requirement without consulting the team. The task needed more than a week, but since the whole team got just one week, they built the product with some functionality missing. QA gave the sign-off and the client rejected the product. Can we still say the product has quality? No. Who was at fault? The team tried to build the product within the given time frame but couldn't succeed due to the complexity of the functionality; the client shared their requirement, but the PM couldn't execute their task properly, which affected the overall quality of the product.
Testers did what they were asked to do in one week, and so did the devs, but when things are not clear, it affects the overall quality of the product.
Similarly, if devs don't do unit testing and integration testing, it will impact the testing process and the time allotted to testers.
So everyone should do their task properly; only then is quality achieved, and testing contributes to quality.
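As a small illustration of the developer-side testing mentioned above, here is a minimal sketch of a developer-written unit test (the function and its rules are invented for the example, not taken from the discussion): the kind of check that catches known risks before the work ever reaches a tester.

```python
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent, rejecting out-of-range inputs."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    # Happy path: expected behaviours are verified by the developer...
    assert apply_discount(100.0, 25) == 75.0
    assert apply_discount(100.0, 0) == 100.0
    # ...including the agreed error handling for invalid input.
    try:
        apply_discount(100.0, 150)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for percent > 100")
```

Checks like this are verification of knowns; the exploratory, risk-based investigation discussed elsewhere in the thread still sits on top of them.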
Agree that testing != quality. However, you said that testing is the responsibility of testers, but then spoke about devs doing testing. How do you square that circle?
I don't like the assumption ("QA has given sign-off") that experts in testing should be quality gatekeepers. Gates and gatekeeping are always bad. Always. Neither do I like the implied business, dev and test silos.
What you describe doesn't feel like teamwork. Why was the PM not challenged?
Nope, I've already said that I consider these activities to belong to categories other than testing.
Otherwise, everything is testing, right? Building a feature is a test of whether or not people want that feature. It's endless and unhelpful for those in testing roles or teams attempting to test software well.
I have heard plenty of "testing is dead/bad/in the way/about to be automated/LLM'd out of a job" talks. They still get more attention than positive stories of testing being part of a range of activities that shine a light on quality.
Imagine the uproar if someone worked out that everything we did was to get feedback on whether it was the right thing or not.
Imagine.
Thankfully nobody has worked that out. We can focus all of our activities on supporting test silos.
So⌠For those dozing at the back:
Glad we got that sorted.
But wait. What if someone worked out that it would be better to test all the way through the process? And what if that meant testing became an accountability rather than a role? And that meant that software could be delivered faster with higher quality?
Sounds awful, doesn't it? Let's protect testers at all costs.
Why wouldnât you want to protect testers? Are we such lesser beings with little to offer?
I value the role of the tester on a development team; it appears that you do not, or a significant amount less than I do. That's fine, you aren't the first and won't be the last. I think you wouldn't have posted the link to the talk above if you did value the role.
You can have testers and team accountability for testing; I've been doing it my whole career. Some teams don't need testers and never have, and that's fine. In the real world where I exist, most teams aren't there though, and a good tester with the right attitude can help teams get closer to it. It certainly won't happen on its own, and it doesn't happen by not valuing the role of a tester.