When should acceptance criteria be converted into automated tests?

Hi there, I have a question relating to my current place of work. We work in an Agile way, in two-week sprints, with the usual ceremonies such as release planning, daily stand-ups, three amigos, etc.

Specifically, as part of the three amigos we write up the acceptance criteria in Gherkin (Given/When/Then) syntax so that they can be converted into executable automated tests using SpecFlow at a later point in time.
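For anyone unfamiliar with the format, a Gherkin scenario coming out of a three-amigos session might look like the following. This is a hypothetical login example for illustration, not taken from any real feature file:

```gherkin
Feature: User login
  As a registered user
  I want to log in
  So that I can access my account

  Scenario: Successful login with valid credentials
    Given a registered user with username "andy" and password "s3cret"
    When the user submits the login form with those credentials
    Then the user is redirected to their account dashboard
```

Each Given/When/Then line later maps onto a step binding in the automation code, which is where the "convert to SpecFlow" work happens.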

My question is: as testers, should we be writing automation code for these tests straight after our three amigos session has taken place, but before the developer has written any code, essentially just as stubs? Or should we concentrate on exploratory (I don't like using the term "manual") testing first, once the code is ready, and only write the automated tests at the end of the sprint, so that the SpecFlow scenarios can then be added to the automation suite?
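For context on what "just as stubs" could look like: SpecFlow lets you write step bindings up front and mark them as pending until the feature code lands. A minimal sketch, with hypothetical step and class names (the injected `ScenarioContext` style; older SpecFlow code uses the static `ScenarioContext.Current.Pending()` instead):

```csharp
using TechTalk.SpecFlow;

namespace MyApp.Specs.Steps
{
    [Binding]
    public class LoginSteps
    {
        private readonly ScenarioContext _scenarioContext;

        public LoginSteps(ScenarioContext scenarioContext)
        {
            _scenarioContext = scenarioContext;
        }

        [Given(@"a registered user with username ""(.*)"" and password ""(.*)""")]
        public void GivenARegisteredUser(string username, string password)
        {
            // Stub: marks the scenario as pending until the feature exists
            _scenarioContext.Pending();
        }

        [When(@"the user submits the login form with those credentials")]
        public void WhenTheUserSubmitsTheLoginForm()
        {
            _scenarioContext.Pending();
        }

        [Then(@"the user is redirected to their account dashboard")]
        public void ThenTheUserIsRedirected()
        {
            _scenarioContext.Pending();
        }
    }
}
```

Pending steps show up as inconclusive rather than failed in the test run, so the scenario can sit in the suite without going red before the developer's code is ready.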

A reason for asking is that I'd like advice from testers who may have been in a similar position, where QA resource is quite light. While we'd like to automate what we can, and are under pressure to do so, automation isn't a silver bullet, and carrying out exploratory testing first would at least allow us to proceed to UAT. At the end of the sprint we could then automate our SpecFlow scenarios.

Many thanks,

Andy

My personal experience is that if you try to automate as early as possible in an Agile environment you can waste valuable time, because within an iteration or sprint you're working against a fast-moving target.
Also ask yourself what purpose you're automating for. In a lot of cases it is to build up a regression set, more than anything else.

So for that reason we decided, and I would advise, to emphasise manual testing during the sprint and automate afterwards.


Thanks for the response, Peet. I'm inclined to agree with you on this one, actually.


The trick is to try to make each first test-case execution a repeatable automation step. Unless you have a strong automation infrastructure and process in place (the current holes I'm digging myself into), this has the downside of making the first execution quite a bit longer, because you're trying to solve two problems (how do I test? how do I automate?) at the same time, instead of one.

I would be with Uncle Bob (in “The Programmer’s Oath”) on this one:
“3 - I will produce, with each release, a quick, sure, and repeatable proof that every element of the code works as it should.”

If the changes are generally well covered by non-business tests (unit, integration and component) and few bugs escape, one could postpone the “repeatable proof of work” until after the production code. Otherwise, I would suggest that programmers take care of delivering the automated tests together with the production code.

Additionally, I think that either way, it is good to remember that timeboxed iterations are not micro-waterfalls: each user story (or equivalent) should be independent, and therefore should be validated in full as soon as possible.

Leaving a number of independent stories integrated without automated validation for two weeks can be very risky, both functionally and process-wise (an unsatisfied Definition of Done), generating new tech debt.
