I ran into this in an online discussion recently, and it reminded me that I need to step back and clarify, and that online is not the best place for these deeper discussions. I thought I'd share my take, as it may reduce a bit of confusion for others; confusion can lead to unintentional conflict.
I was part of a no-tester model back in the 90s, and I am seeing some of those models re-surfacing, often revolving around robust acceptance test criteria. Some of the thinking goes: if all acceptance tests pass, then you have zero bugs, and since all your acceptance tests are clear, you can have 100% automation covered by developers as they build the product.
On the face of it, it's not a model that seems to benefit from having testers. If you go deeper and broader than the acceptance criteria and find something, it's something for the backlog and not a bug from this model's perspective. If you start cluttering that backlog with your testing, it can do more harm than good, so the no-tester idea can start to make sense.
I can see this model having benefits; there are a lot of good developer practices encouraged there, and it's likely to be a step forward for a lot of teams.
On the other hand, with my tester bias, part of me sees this as a step back to the 90s and a risk that we repeat that cycle: no testers, too many testers, and back to no testers, without evolving.
The zero bugs and 100% automation can also be hard to get my tester head around. How can I show the absence of all bugs? And let's say Apple releases a new phone model today that the team does not have access to; that 100% goes out the window too.
Understanding the model is key to removing the confusion; in this case it is based on the known acceptance criteria. I personally might not be a fan of the model, but it's that model on which they are basing their no-tester-required argument, and I sort of accept it holds water from that specific model's perspective.
It can still cause confusion, though, and potentially conflict.
A well-rounded model description you've put in front of us.
This is so annoying, I have experienced it many times.
But "zero bugs" isn't always the goal of it. And many people can interpret it differently (within the team or outside).
It can also be:
- that we covered most of what we knew about
- a bug that escapes is a minor/trivial one, and we should improve our initial analysis (NOT testing)
- any problem that appears as a bug with the part of the product we just released is a feature we haven't implemented, and that's fine
Oh, the number of times I went rogue and did actual testing and the number of friends and enemies I made due to this
Confusion and chaos were caused: you're testing outside the scope, you're not doing your job properly, you're looking too deep into the product, where did all these bugs come from (we need a process change: more automation, add metrics for automation coverage, more detailed specs, do BDD, stricter agile policies, quality gates, …), why haven't we caught these issues before, and so on.
I don't believe in perfect acceptance criteria. They are created by people, and people are fallible. I've been in too many three-amigos meetings where important points would have been forgotten if it hadn't been for the testers. I've tested too many stories where the seemingly OK ACs didn't cover all situations when put to the test.
Just as an example: a user with a permission can do something, a user without it can't. Seems simple enough. But then it turns out users can inherit permissions from groups or something like that. This is usually where I find the bugs.
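To make that concrete, here is a minimal, hypothetical sketch (the permission model and all names are invented for illustration) of how every AC-derived check can pass while the inherited-permission case still hides a bug:

```python
# Invented permission model, purely to illustrate the point above.

class Group:
    def __init__(self, permissions=None):
        self.permissions = set(permissions or [])

class User:
    def __init__(self, permissions=None, groups=None):
        self.permissions = set(permissions or [])
        self.groups = list(groups or [])

def can_export(user):
    # Matches the written AC: "user with the permission can export, user without it can't".
    # It never looks at permissions inherited from the user's groups.
    return "export" in user.permissions

# AC-derived checks: both pass, so the dashboard shows 100% green.
assert can_export(User(permissions=["export"]))
assert not can_export(User())

# The situation the ACs never mentioned: permission inherited via a group.
admins = Group(permissions=["export"])
member = User(groups=[admins])
print(can_export(member))  # False, but the business expects True: the hidden bug
```

The AC-derived tests are complete with respect to the criteria as written, just not with respect to how the product actually behaves.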
I add to this:
(Written) acceptance criteria are artifacts of communication.
What is really interesting is what the vision of the people (mostly the project manager) is, in their heads.
ACs are one tool to communicate that, but only one out of many. Mostly for memorizing the basics.
But for details you have to talk. Again, again and again.
Shared vision is the key concept here for me.
About what to achieve and what the actual state is. The latter is much of a tester's work. And the latter can influence the former, as concrete details may make it necessary to adjust the vision of what to achieve.
First off, do your acceptance criteria include "qualitative" aspects of the product, like performance, security, and reliability? If they really do, then you cannot easily claim to have that 100% automation of which we all dream. It really hangs on how much stakeholders care about various aspects of the product.
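To illustrate (the operation, the sample size, and the 50 ms budget are all made up), a performance criterion can be turned into an automated check, but deciding what counts as "fast enough", on which hardware and under which load, is exactly the judgement automation does not make for you:

```python
# Hypothetical example: a "qualitative" acceptance criterion expressed as an automated check.
# The operation under test and the 50 ms budget are invented; real targets are a stakeholder call.
import time
import statistics

def search_catalogue(query, catalogue):
    """Stand-in for the operation under test: a naive substring search."""
    return [item for item in catalogue if query in item]

def p95_latency(func, *args, samples=50):
    """Rough 95th percentile of wall-clock timings, in seconds."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        func(*args)
        timings.append(time.perf_counter() - start)
    return statistics.quantiles(timings, n=20)[-1]

if __name__ == "__main__":
    catalogue = [f"product-{i}" for i in range(100_000)]
    p95 = p95_latency(search_catalogue, "product-99", catalogue)
    # The check itself is automatable; deciding that 50 ms is "fast enough",
    # under which load and on which hardware, is a judgement the script cannot make.
    assert p95 < 0.05, f"p95 latency {p95 * 1000:.1f} ms exceeds the agreed 50 ms budget"
```

Even with a check like this green, the criterion only covers the scenarios and environments someone thought to encode.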
It's also, however, very important for engineering not to get too involved in guessing what those qualitative benchmarks should be. I guess I am arguing for the product team to actually do some UAT of their own? Because if they do, instead of allowing someone who is not "invested" to say it's secure enough to release because "I wrote the code and I'm never biased", then that frees up engineering to do what they do best. So I can see where zero testers comes from. But zero bugs requires the product ownership team to be on board, and to date, I've never seen that happen in my 15 years as a tester and 15 years prior as a coder.
Oi vey.
This fits squarely with my experience that QA is grossly misunderstood by most organizations, because so few QA disciplinarians have risen through the engineering ranks. This is because there is a bias toward development engineers in promotion; there is an obvious direct line to company profit in their work, whereas we in QA are instead an unpleasant cost to be mitigated. Truth be told, we in QA haven't done much to demonstrate the effect that healthy, active QA has on the bottom line.
But that is drifting off target and a soapbox I should step away from.
The thing is, in my experience, developers and Product Owners don't like writing acceptance criteria. It's taken me repeated efforts across multiple jobs over long durations to get this stuff (ACs) fully articulated. It's a reluctant activity, like a cat going for a swim: they can do it, but they hate it, and so they don't do it very well. Product wants to wave hands about and talk about how cool the thing will be and how it will lead to a wonderful place where money is printed. Developers think in terms of solving the problems and puzzles that Product just handed to them. Thus ACs are never complete. Tests derived strictly from those ACs are never complete. But everyone will feel good because 100% of those AC-derived tests pass. Until something not covered by those tests inevitably creates tech debt.
Tech debt increases because no one is down in the guts of the thing checking that everything works as intended and asking the silly questions no one thought to ask. This is why QA is necessary and continually evolving. It's no longer merely "huh, find bug, report bug". We are now spending more time raising issues like: how is the release going to work? How are the systems observed? How are they maintained? How do we manage production defect reports? And so on.
Sorry for the wall of text. I start a new gig today and I'm a little amped up.
The short version is yes, they trigger confusion. A lot of it. All parts of the model - all models, actually - require humans to anticipate the best, worst, and typical situations that can happen. We testers train ourselves to look for the worst-case scenarios - we're the only part of software teams that does this consistently, because it's part of our job description, as it were.
The whole discussion over acceptance criteria reminds me of the "What the customer really wanted" cartoon: What the customer REALLY wants | Monolithic.org (there are dozens of versions of this cartoon out there - just explore a bit).
The point is that everyone has built-in biases. Sales/marketing are biased towards the shiny. Developers and programmers are biased towards making stuff work. Testers and QA people are biased towards finding potential problems, and - often - looking for the simplest way to provide what the product owners have asked for that will meet the needs as written. And customers and users just want the thing to do what they need it to do.
I've found that if the problem-finders are left out of the loop, there's a higher chance of delivering something that doesn't do what the customer needs. After all, even in the leanest, most agile organization, the people who actually build the software are getting the information third-hand or possibly at a further remove. (First-hand is the customer telling someone (likely sales) what they want. Second-hand is sales telling product owners what the customer asked for. Third-hand is product owners telling development the acceptance criteria. There could be any number of other parties involved between those three stages.)
Given all of that, it's not surprising that without someone to ask the awkward, sometimes negative questions, it can get confusing.
If raising bugs is causing conflict rather than appreciation, then you have a bigger issue than the way of working, whether that's from the business or the developers.
It's one thing for the team to decide that a bug isn't worth fixing, but to be upset that it was found and documented demonstrates a real problem.
It's like describing Utopia. No such thing. All systems are inherently faulty.
Even in an ideal scenario, where you really did have a perfect system with zero bugs and 100% automation on a given date, by adding new features/upgrades/modifications you would introduce some bugs for sure.
On the other hand, I once heard one guy say that the best testing team he was in was the team without a single tester - but filled with senior developers who all knew all parts of the SDLC, including testing. So it can be done but even in that case you cannot go without bugs. Sometimes you even mislabel something as a bug simply by not knowing the full context.
The thing is that it doesn't come for free. Someone is doing the testing. Always. (Or there is no testing, and YOLO.) The time devs or product spend doing the testing work is time not spent developing or designing new features.
I should clarify that I am not going to bat for this model; I am just highlighting it so that when you see someone going to bat for it, you may get a better sense of where they stand on things.
Sometimes they will be so entrenched in this model that counter-arguments for good testing, and the value it could bring to the model, will be dismissed offhand.
If you enter the discussion with a mental model so different from this one, as a lot of testers will have, it could be a futile exercise; the discussion could end up frustrating and may even feel like a personal attack on your role and value.
Being aware of it, and stopping to consider whether their stance is based around this model, may give you the option to accept that they are on such a different wavelength that they won't listen or be open to different ideas, and that the apples and oranges might just go round and round.
If you see this happening, my own view is to step out of the discussion; your value will be better recognised elsewhere.
In a work role, or in an environment where views can be exchanged, taken on board, and listened to, these can actually be very good discussions, often with both parties going away with something new to add to their ideas.
They can be tough discussions, though, so they really need the positive environment that many companies and meetups provide.
Online discussions via posts, though, are very hit or miss, regardless of intent and effort.
Yes, someone always does the testing in the end. A bit on the joke side, this sounds like giving developers only one dimension, like the old stereotype "women should be churning out babies".
And on the serious side, I am obviously not rooting for such a model since I'm a QA myself; however, I am a big advocate of the team approach (Scrum theory is also based on that). Specifically, the team as a whole is responsible for everything, including testing. How this is divided inside the team is up to them.
That's why I also don't really like the term Quality Assurance in itself, since it's not just on us to be assuring quality (as if everyone else in the team is just producing bugs); it's a team effort.
I might've gone a bit too philosophical here; I just felt I should voice my opinion on the subject.
I agree on that.
There are always bugs lurking; you just have not discovered them so far.
I'm a tester in a Scrum team (the only one), and I see it as my responsibility to guide and push the testing. I'm the expert in this field.
Which does not mean that I exclude my coworkers and do everything by myself.
Often I work more as a coach and guide my supporting testers in their testing. E.g. we discuss together in advance what would make sense to test, and they come back after their interaction with the product to discuss what they found out and how to continue (often together with other team members).
I lean here towards the RST methodology: "Responsible Tester".
Testing is an open workshop which I maintain, to which I invite everyone to do their tasks, and where I help them with that. I also do some things by myself.
I prefer tester/testing over QA.
The question is: by what quality? Hopefully I have advocated well enough for how it would work to have at least one testing expert on every team.
Bad testing is less likely to discover risky bugs and does not give enough certainty about the actual state of the product.
Agreed! And vocabulary in the QA discipline is slippery. I prefer "Quality Analysis" over "Quality Assurance", as it opens the testing to everyone on the team and allows the Quality disciplinarian to engage in activities guiding and measuring those efforts.
Agreed. That's why I continue to believe, based on my experience, that there has to be some amount of human resource invested in the quality of the product. The activities those people engage in can vary from team to team, but the goal is the same. For an example of "bad testing", look at what happened with CD Projekt Red and Cyberpunk 2077. A large portion of QA was outsourced and then poorly managed by the outsourcer, resulting in low-value cruft defect reports, which then led CDPR to have a false understanding of the quality of the product. There are more failures in that, to be sure, but poor testing was a big contributor.
Very old video but still very valid. Too bad I have never heard of anyone utilising QA in such a way (Quality Assistance, as explained in the video). However, I like it a lot and it's close to my thinking.