How to Optimise Development Flow in a Scrum Team?

Imagine this: you’re the only tester on a team of approximately 10 devs (I’ve been there, so it’s pretty easy for me to imagine!), so your workload is quite high. Hiring another tester is not currently possible. The approach to development is “mini-waterfall”:

devs develop features during the sprint and assign the tickets to me to test before deploying the feature to production (it’s a web platform).

As the only tester, you may find features getting stuck in testing while the developers gradually pick up new tickets to develop and pass them over to you.

A member of our Slack group is in exactly this situation and is looking for help:

Any advice for optimizing the development flow in a Scrum team?

The team has acceptance criteria,

we even have Gherkin-style scenarios as acceptance criteria, so that’s pretty straightforward. I think the devs are currently not testing their feature after it’s been deployed to our staging environment, so that’s one thing I want to get them doing.

What advice would you offer this person?


This is why I prefer kanban over scrum in the vast majority of situations. I’d recommend you shift to it and have your Scrum Master do some simple testing (it may be the most useful thing they can do). A growing pile of testing tasks will soon signal that there isn’t enough testing capacity.

But even within scrum there are things you can do:

  1. Include the tester’s effort estimation in sprint planning. When you see your capacity is full, call for a stop to adding new features unless there’s someone who can help you. Make this very clear.
  2. Coach developers in how to test, and explain to them clearly that quality is not only the tester’s job. In a healthy team, every member should feel responsible for it. Seek management support if needed.
  3. There are plenty of ways developers can reduce the tester’s workload even without doing any testing themselves. (Testing a set of functions that contains 50 defects can take twice as long as testing the same functions with only 5.) For example: checking corner cases during development, better code reviews, more and better unit tests, preparing the test environment, developing testing tools and frameworks, and writing scripts to reproduce difficult-to-reproduce defects…
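To make the unit-test point above concrete, here is a minimal sketch (in Python, with an invented `parse_quantity` helper; the function and its rules are purely illustrative) of the kind of corner-case tests a developer can write cheaply, sparing the tester from finding these defects by hand:

```python
# Hypothetical helper a developer might ship alongside a feature.
def parse_quantity(text):
    """Parse a user-entered quantity; return an int or raise ValueError."""
    value = int(text.strip())
    if value < 0:
        raise ValueError("quantity must be non-negative")
    return value


# Corner-case unit tests: a few minutes for the developer,
# but each one is a defect the tester never has to report.
def test_strips_whitespace():
    assert parse_quantity("  7 ") == 7

def test_zero_is_allowed():
    assert parse_quantity("0") == 0

def test_negative_is_rejected():
    try:
        parse_quantity("-1")
        assert False, "expected ValueError"
    except ValueError:
        pass
```

Each defect caught at this level never enters the tester’s queue, which is exactly the workload reduction the list describes.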

So, in short: become a team, not just a group of people working on the same stuff.


This is something I’ve always struggled with myself in teams. That’s perhaps another question entirely, but I’m wondering how you approach this?

This is something I can relate to.
I would suggest the following to help:

  • get testing estimates into the numbers when sprint planning
  • use the three-amigos process on each user story to make sure everyone understands what is being developed, so that it meets the business requirements and the level of testing can be agreed. Explore edge-case ‘what if’ scenarios so the BA has a chance to agree the expected behaviour and the dev has a chance to make the design robust.
  • don’t wait until the dev has moved the item into test before having a look at it - get an early look by collaborating with the dev.

1. Testing must be included as part of the Definition of Done
Engage the team during planning and clearly outline the effort to test. Bring previous examples of tickets that took very little development effort but had a high test effort. The goal is to help the team understand that completing the testing cycle is also part of their velocity. When the test effort is formally accounted for, the conversation can then shift to how “we” as a team can complete our sprint.

2. Stop starting, Start Finishing
Get developers engaged in testing, whether manual or automated. If they are delivering automation to support testing, give them the high-priority scenarios you need covered as part of your test plan. You can then focus on augmenting that by manually testing the areas they are unable to complete. The goal here is that they don’t keep picking up new work just because they are waiting on QA to finish testing - that will keep you in the red. The team needs to work together to complete the testing so that the sprint finishes on time.
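One minimal way to hand developers the high-priority scenarios is to keep the test plan as data and split it by priority. A sketch in Python (the scenarios, priorities, and function names are invented for illustration, not from the original post):

```python
# A toy test plan: which scenarios exist, how important they are,
# and whether automation already covers them.
test_plan = [
    {"scenario": "login with valid credentials", "priority": "high", "automated": False},
    {"scenario": "checkout applies discount code", "priority": "high", "automated": False},
    {"scenario": "profile page tooltip wording", "priority": "low", "automated": False},
]

def scenarios_for_devs(plan):
    """High-priority, not-yet-automated scenarios to delegate to developers."""
    return [t["scenario"] for t in plan
            if t["priority"] == "high" and not t["automated"]]

def scenarios_for_manual_pass(plan):
    """What the tester covers manually in the meantime."""
    return [t["scenario"] for t in plan if t["priority"] != "high"]
```

The design point is simply that the split is explicit and visible to the whole team, so “waiting on QA” becomes “here is the testing work anyone can pick up”.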


I would suggest the tester become more of a testing coach than a tester. At a ratio of 10 to 1, if the idea is that the tester must do all the testing, you most likely have one of two scenarios: either the testing is so shallow it could easily be done by anyone, or you are the single greatest bottleneck in the team, blocking a lot of production value. Neither is fun for the tester.

So what is a testing coach? The main idea is that instead of focusing on doing the testing yourself, you teach the entire team to be part of the testing. A few tools you can use to get there:

  • Shake and Bake. No idea why this name was chosen, but the concept is simple: sit with the developer at their computer before they push their changes and test with them. It’s an excellent opportunity to find things early, address them right away, and share your expertise and reasoning so they can replicate it for the next feature.
  • Testing Tours. A great tool to help people look at their application from different angles, giving broader coverage for little effort. It’s also feature-agnostic, so you don’t have to repeat the work with each person.
  • Test Fest. If you for some reason have to “test everything at the end”: buy some snacks and create two leaderboards, one for issues found and one for issues fixed. Buy two prizes, gather everyone in the team, and test the product. The team member who finds the most gets one prize; the one who fixes the most gets the other.
  • Pairing. Pair with the developers in a power pair where the tester provides the expertise of “what should we test?” and the developer implements it in unit and integration tests. This way the tester (or the developer) can do a few one-time tests for what isn’t already covered by unit tests, and the tester knows what extra scope they need to add.

Also, remove the notion of Development Done and Testing Done, and introduce the idea of Feature Done (testing included). This also removes the split between Developer and Tester: you are both developers, in the sense that you both develop a product. If you don’t already have them, introduce a Definition of Ready and a Definition of Done. If you do have them, make sure nothing is Done until it is actually ready to give to the customer.

Another transition I suggest you think about is what is presented in the Pretotyping manifesto: “build the right it before you build it right”. There is a lot one team member can do during the inception of a feature to dramatically reduce production time and the likelihood of errors - something more akin to “testing the idea” rather than testing the product.


The main thing is that testing effort has to be counted and planned. The exact way can vary with tooling and preferences. Here’s an example of how I’ve introduced this:

  1. Testing effort is estimated in hours. If a sprint is 2 weeks, the tester nominally has 80h, minus time for meetings, preparation for meetings, communication, switching between tasks, etc. Let’s say 60 hours of pure work. It also makes sense to allocate a buffer for unexpected urgent stuff, say another 10h. So the tester has 50h for testing work.

  2. During the backlog refinement session the tester clarifies all the details, so that they know not only how a feature should work and what exactly to test, but also how to test it (usual stuff, but worth mentioning; otherwise the estimates won’t be correct). Either in the same session or during the planning meeting, when the developers give their estimates, the tester gives theirs (in my case, in hours):

  • Feature A = 12h
  • Feature B = 6h
  3. At planning, the scrum team selects which features to include in the sprint. They might use their effort estimation (story points, hours, parrots, whatever) to understand how much they can take on. But you also do your own calculation, and if you see that your capacity is 100% used, you ask to stop. If the team decides to include more, they have to decide how it will be tested. That’s it.
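The capacity arithmetic above can be sketched in a few lines of Python. The numbers are the example’s own (2-week sprint, ~20h of overhead, 10h buffer, Feature A = 12h, Feature B = 6h); the `can_take` helper is an invented illustration, not part of any scrum tooling:

```python
# Tester capacity for a 2-week sprint, per the example above.
SPRINT_HOURS = 80     # 2 weeks of nominal working time
OVERHEAD_HOURS = 20   # meetings, preparation, communication, task switching
BUFFER_HOURS = 10     # reserve for unexpected urgent work

capacity = SPRINT_HOURS - OVERHEAD_HOURS - BUFFER_HOURS  # 50h of testing time

# Test estimates gathered during refinement/planning.
estimates = {"Feature A": 12, "Feature B": 6}

def can_take(current_estimates, extra_hours, capacity=capacity):
    """Would adding `extra_hours` of testing still fit within the sprint?"""
    return sum(current_estimates.values()) + extra_hours <= capacity
```

When `can_take` returns `False`, that is the moment described in step 3: you ask the team to stop, or to decide how the extra features will be tested.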

This approach is not very collaborative and goes against the principles of scrum (then again, if testing is only the tester’s responsibility, it isn’t really scrum anyway), but it is your last line of defense. I’d recommend starting with coaching developers in testing, or another collaborative approach.

Hope this helps :slight_smile:


Thank you all so much, lots to ponder here :grin:

A 10:1 dev-to-tester ratio sounds to me like a token-tester situation, there just to tick a box. Don’t be that token - influence how the development team (of which you are a member) works.

One solution would be to define testing tasks and associate them with the user story tickets during ticket creation or backlog refinement.

Once the coding work on a ticket is finished, the testing task(s) can be picked up by the next person available to start a new task. If that’s the sole tester, lovely; if it’s a dev who secretly wants to be a tester, that’s cool too. If it’s a coder who can’t test to save their life, or believes testing is some form of draconian punishment, they’ll need some mentoring and nurturing to get up to speed. In the latter case you will lose momentum, but assisting them should benefit the team long term, as they’ll start to see the beauty of testing and become a better tester by the next sprint.

As others have stated, it’s important to ensure that testing is factored into the effort estimation for each ticket. Outlining the test tasks on the tickets will make your estimates more accurate and help avoid sprint overloading.


Great point! It’s often assumed in companies and teams that development and testing are separate teams, when in fact they are all part of the development of the product.

I am thinking about this case myself. What I have done so far, in short, is:

  • shifted myself into the position of a “QA Advocate”, for lack of a better word
  • preach/implement test-driven processes
  • testing is done by everyone: if I cannot do something, another dev has to pick up the task of testing the feature
  • clean and testable code is key
  • a feature/bugfix is not merged until someone has checked it; basically, the ticket never moves into a separate testing phase, and the PR is not closed until it has been tested