Do you have any good reads regarding testers helping flesh out user story acceptance criteria?
I couldn’t think of any articles or blog posts off the top of my head so I went with advice based on my own experience:
“In my own experience, a lot of it is asking questions of others: how does X fit with Y? If we add Z do we have a DB table that links that to A? Over time then you start to learn the entire system front to back. Being new to the whole team is an even better place to be because you’ll see things others haven’t or they will have to explain them to you when they might never have had to do so before.”
This didn’t feel like a very complete answer to me. I thought it was okay but not fantastic.
Do you have any good reads for how testers can help with fleshing out acceptance criteria? Or what advice would you give to someone who wants to understand how testers can help flesh out user story acceptance criteria?
Could you clarify what you mean by user story acceptance criteria?
What’s the scope of it? Is it to be used by someone to decide whether the story is acceptable? Who is that person/team? Why would you care about their opinion?
Is it the development team’s guess at what would be acceptable to the client if they were given the product?
Why not build the criteria with the client/stakeholder - or whoever is paying for the software and expects the product features the story delivers? Interview them…see what’s acceptable to them…
Probably the greatest service testers can provide is to study the text for logical incongruity, vaguely or ambiguously defined statements and incompatibility with current business processes or even common sense.
In a good environment that understands acceptance criteria (that they are insufficient by themselves, that they are codified guidelines, and that they are not an abstract interface to checking tools as a replacement for thinking), I find that a few things help:
Understand the story. You are no good if you can’t decide on the value of your input.
Keep and refer to risk catalogues. What are the usual risks on your project? What’s gone wrong recently? Do they apply here?
Keep and refer to strategy models. I like the Heuristic Test Strategy Model (HTSM), or usually a cut-down version of it that’s more applicable to my project and team.
Look at the story BEFORE you discuss it in a group, wherever possible. Test the story, come up with risks ahead of time and establish questions you have. Then you won’t waste anyone’s time, plus you’ll look thorough and super smart.
Make testability a goal of a story. A story isn’t complete until it’s testable. If you need generated test data, or access to a third-party system, or an interface to hook in a test tool like an automation suite, or a log file for observability then you need to ask for it and make sure it gets done.
Apply critical thinking to the story, models and conversation. I like Huh? Really? So? And? for this.
Another tip is to use explicit models. Have a diagram with you or get someone to draw one, then you can test that. What does this box mean? What does this arrow mean? What if I remove this? What’s missing from this diagram? This helps explore people’s assumptions about what everyone else thinks and understands, and what nobody has thought of yet.
If you keep good notes and models handy you can act as a body of knowledge on product risk, testability, historic problems and potential catastrophe. Then people will be all like get a load of this person being all good and stuff.
We have the 3 amigos rule (meaning: a code developer, a business analyst/product owner and a tester) start the thinking process for acceptance criteria.
The tester often has a clear view on how to get things described unambiguously, simply and clearly, so the others really draw on the tester’s knowledge. Testers also think more about “negative” tests and often ask how we want to handle those exceptions, because these situations also need to be nailed down in the acceptance criteria.
So in my own instance (I can’t speak for the person who asked me this), the acceptance criteria would be a grouping of what is acceptable to:
User of the product
Customer of the product (person paying is not always the person using)
The mathematicians within our company (some things you should not be able to do mathematically/statistically and this is not always known by developers)
Regulatory bodies (in our case the FDA particularly)
Developers and testers on the team; often what the above people want needs to be designed in a way that is developable and testable
In my own case, the developers and testers sat with users, customers, mathematicians and people who knew the FDA guidelines to flesh out their desires for features. We would then break those down into smaller tasks (away from those stakeholders) with acceptance criteria that covered “acceptable to all of these people” and have a final review with those stakeholders. Once the feature was developed, the other stakeholders would compare it to what they had originally defined as acceptable.
Our product was incredibly complex, so acceptance criteria were time consuming and not all stakeholders’ time was easily available to craft the acceptance criteria together.
I would call acceptance criteria any kind of request related to what the product should/shouldn’t be, and should/shouldn’t do - from the stakeholder. If your built product matches all interpretations of that criteria, it might be accepted by or acceptable to that stakeholder.
From your description I read that you interpreted some people’s desires.
Then you made your own criteria that at least partially match your understanding of those people’s desires.
Then you develop and test based on those criteria.
Why would you call your criteria “acceptance criteria”?
Are you sure that you won’t build the wrong stuff if you follow your own criteria?
Why not get in contact with the stakeholders:
after each small piece of the product is built, run a demo/workshop
have weekly calls/meetings or even more often if you can
have a direct line of contact - calls, emails
in order to check with them what you might have missed, or clarify questions
and then adjust the criteria you know. Repeat often…even during the same sprint.
With the greatest respect to you, that’s an ideal world and in my particular situation, I was VERY far away from an ideal world.
What you describe above would be fantastic, great, ideal but I was not fortunate enough to be in that environment. I have acknowledged that in the original post. I’m looking for better descriptions and situations than the one I was in to use as examples for the person who originally asked me this question.
How do you folks then manage the process from AC through to the creation and execution of a test?
Do you describe the test(s) you have designed and get them reviewed before creation of the test? Do you visualise the tests with the team before implementation? Or do you design & implement prior to review?
We design the AC with a minimum of 3 people (dev, test, Product Owner); these are the blueprint for a high-level test case which will normally be automated, reviewed and also demonstrated in the agile review session.
Lots of good advice here! I particularly like what Chris, Joerg and Heather say. The best simple advice I’d add - for the Tester Amigo’s acceptance - is to study the user story and other artifacts with the question, “How would we test this?” in mind. If the answer is clear, the rest should be straightforward (for testing at least). If not, clarification or fleshing out may be needed.
I think this also really depends on what you use acceptance criteria for and who it’s for. I’ve had two different experiences:
One was in an agency, and we had AC that only contained the core value of the story. The only details there were what the client cared about, and everything else was in notes or comments on the story. This meant that, while I contributed and discussed AC, most of my input was to accompanying material. We found this worked when working with clients who were managing this work on top of a full time job and may not be majorly technical or may not care about the technical details, only the features.
AC in my current team is much more comprehensive. My team favours Given-When-Then (I am happy to use it but I find that sometimes the grammar can be awkward), and it covers a bit more than the bare essentials, often including specific error handling and things like that. While I am again often contributing via accompanying material rather than specific AC, I contribute a bit more here. The product owner is a full-time product owner for our team, so we have a lot more contact with them, and they’re much more hands on, and they have a good understanding of the system and its limitations, which means the AC have a different feel to them.
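For anyone who hasn’t worked with the format, here’s a very rough sketch of what I mean. The password-reset scenario, the helper function and the in-memory stubs below are all invented for illustration (they’re not from my real backlog); the point is only how a Given-When-Then criterion reads and how a test can mirror its structure:

```python
from dataclasses import dataclass

# A hypothetical acceptance criterion in Given-When-Then form:
#
#   Given a registered user who has forgotten their password
#   When they request a password reset for their email address
#   Then a reset link is emailed to that address

# Minimal in-memory stand-ins so the sketch runs on its own;
# a real team would exercise the actual application instead.
@dataclass
class Email:
    to: str
    subject: str

OUTBOX: list[Email] = []

def request_password_reset(email: str) -> None:
    """Stand-in for the feature under test."""
    OUTBOX.append(Email(to=email, subject="Your password reset link"))

def test_password_reset_emails_a_link():
    # Given a registered user who has forgotten their password
    user_email = "pat@example.com"

    # When they request a password reset for their email address
    request_password_reset(user_email)

    # Then a reset link is emailed to that address
    assert any(m.to == user_email and "reset" in m.subject.lower()
               for m in OUTBOX)
```

Because the Given/When/Then comments map one-to-one onto the criterion, awkward grammar in the criterion tends to show up as awkward structure in the test, which is partly why it’s worth scrutinising the wording up front.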
I like the contributions here from @kinofrost, @jogi, and the blog post from @gus has many valuable aspects that we have been practicing.
I encourage testers to participate and contribute to the review of AC during Three Amigos. I expect them to review the AC for clarity, validity, and value. If there is more than one understanding of the AC in that group, then defects can form. A shared understanding is vital. I also invite them to scrutinize the AC - especially when written in Given When Then format. As @gemma.hill1987 mentioned, the grammar is challenging but learning to read and write GWT is a valuable tool for a tester.
AC are valid when dependencies have been met and the team has everything they need to execute the story card. For example, if the story card requires a database that has not been built, then the story card is invalid and should be re-scheduled.
The value of AC is evaluated against the business goals and intents of the project. The card should align with them to be valuable. When the card does not demonstrate that value, ask questions. That’s what QA stands for: Question Asker.
Just to add something: the testers in the game often fill in the gaps around negative tests, where “negative” AC (what we know as invalid test cases) also have to be written. We ran into some of those missed AC when I was not in the game, because the mindset of a tester is often different from that of business analysts and code developers.