How does your team manage concurrent development and testing within a sprint when the number of testers is half that of developers?

We’re aiming to establish a more predictable workflow for our teams by breaking down backlog items into manageable pieces. This approach is intended to foster better collaboration between developers and testers as they work together or in tandem on these smaller tasks. In many of our scrum teams, the number of testers is half or even a third of the developers, which I believe is a common scenario in other organizations as well. If your organization faces a similar situation, how do you manage concurrent development and testing?

1 Like

Morning Phoebe :wave: .

In previous teams I’ve been a part of, I’ve been the only tester in a team of 4 or more developers. We didn’t have tasks that only I could pick up. Often I might pick up the tasks for implementing API or UI checks, but anybody in the team could take them. Additionally, I would take part in three amigos sessions, pairing with developers to help introduce tests, features, and exploratory testing, among many other things. I’m unsure if this is what you were referring to, but I think if you create separate tasks for developers and testers then you are creating a bottleneck.
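To give a flavour of the kind of API check task anybody on the team could pick up, here is a minimal sketch in Python using only the standard library. The `/health` endpoint and the `"status"` field are hypothetical placeholders, not something from a real project:

```python
# Minimal API check sketch. The endpoint URL and the expected JSON
# shape ("status": "ok") are hypothetical placeholders.
import json
import urllib.request


def body_is_healthy(body: dict) -> bool:
    """Validate the response body against the fields the team agreed on."""
    return body.get("status") == "ok"


def check_health(base_url: str) -> bool:
    """Fetch the (hypothetical) health endpoint and validate its response."""
    with urllib.request.urlopen(f"{base_url}/health") as resp:
        if resp.status != 200:
            return False
        return body_is_healthy(json.loads(resp.read()))
```

The point is that the validation logic (`body_is_healthy`) is plain enough that a developer or a tester can write, review, or extend it.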

With regards to smaller tickets in general, this is a good idea. Things can sit around for a while when they are large, and requirements can change in the meantime. It’s a good idea to break things down as small as possible so that you can implement and test quickly, shortening feedback loops. Additionally, when working in big teams, WIP limits might be something you could try if you haven’t already. They encourage teams to collaborate and get things done together rather than each picking up lots of things at one time.

I’m not sure if you’ve seen this book, but I highly recommend it: Agile Testing Condensed: A Brief Introduction - Agile Testing. Chapter 11 is available to view online for free: https://agiletester.ca/wp-content/uploads/2021/04/Chapter-11-A-Tester-s-New-Role.pdf

3 Likes

For over a year I was 1 tester to 8 devs, and I’m now 1 tester to 4 devs.

One thing is that testers can often prepare in advance (reading specifications, discussions, preparing data and tools) while nothing usable is available yet. I wrote about that here, where I also advocate against a test column on boards.
Organizationally, I prefer sub-tasks over pushing a single ticket back and forth. Or find your own way of showing that parallel work is happening (lists or comments on the ticket?). Sadly, Jira at least does not allow multiple assignees.
IMO the most important thing is to TALK to each other, the team and management. The relevant people should know that multiple people are working on a ticket in parallel.

Another thing: testing can also start with “unfinished” versions, on feature branches. Discuss with the devs what makes sense to test and what should be ignored for now. Where responsibly doable, I suggest not waiting until the ticket is fully developed.
Development and testing can be done in parallel, exchanging with each other; they don’t need to be done sequentially.

At the extreme, more than once I paired on testing with a dev. They shared their screen, showed me the local state of their application, and interacted with it while I guided their testing live: “Could you try X?”, “What about Y?”, “How important would Z be?” and so on.
Most bugs we found, the dev fixed instantly and rebuilt the application so we could test the fix too.
As I’m also able to build and run our application on my computer, in another session I shared my screen with a dev and interacted with the application while we discussed things. The dev made fixes and pushed them to their feature branch; I pulled them and rebuilt the application.

One more thing I do is share testing with the devs via the Responsible Tester method. Devs do the main work of testing while I supervise and advise them. An important thing here for me is a shared document of what is planned to be tested and what they found out.

  • A simple example: the dev notes what they would test (i.e. makes a test plan); we discuss and adjust it. Then they interact with the product and note their findings (you could call it a test session). Afterwards we discuss what they found (e.g. bugs, overlooked details, …) and whether further testing is needed. It always ends with a debrief with me. Ultimately I’m responsible for testing and guide the devs; I just don’t do every task myself.

And I did not use Xray or any other test management tool. Either I keep my notes/plans/reports directly on the ticket, or I use one Confluence page per ticket/story to record anything related to testing.
Jira has a known bug: with concurrent editing of descriptions, the last person to save overwrites anything written by others, with no warning and no real concurrent editing like in Confluence.

Do you have further questions?

2 Likes

We would create a test plan and follow our testing strategies as laid out in it; basically, we would allocate tasks for each day and each person at the start of each sprint.
The project consisted of web as well as mobile applications, and the team consisted of 2 FE developers, 2 backend developers, 2 mobile developers, 1 project lead (who would code only for complex features) and two testers: me and a senior QA.
We create the test plan at the start of each sprint and follow it throughout; in it, we primarily define the following:

  1. Order of testing - we divided releases into two categories, R1 & R2: R1 on the first Friday evening of the sprint and R2 on the second. Once a build is received, we start with smoke testing, then sanity testing, then begin testing the tickets.
  2. Division of tasks - who will test which module; if I pick the web application, the senior QA picks mobile.
  3. Which task on which day - we have 5 days, so for each day we divide the tasks: creating test cases, performing API testing, performing smoke & sanity, and so on.
  4. Release details - when the developers will release R1 & R2, then deploy the build to UAT, and then further deploy it to prod.
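The “smoke, then sanity, then tickets” ordering above can be sketched as a tiny release-check runner. Everything here is illustrative: the phase names come from the post, but the function and the status strings are hypothetical:

```python
# Sketch of the release ordering: smoke first, then sanity, then the
# per-ticket checks. Each check is a callable returning True on pass.
# Function name and status strings are hypothetical.

def run_release_checks(smoke, sanity, ticket_checks):
    """Run the phases in order; reject the build as soon as smoke fails."""
    if not all(check() for check in smoke):
        return "build rejected"  # inform stakeholders: testing will be delayed
    if not all(check() for check in sanity):
        return "sanity failed"
    failed = [name for name, check in ticket_checks if not check()]
    return "release ready" if not failed else f"bugs in: {', '.join(failed)}"
```

For example, `run_release_checks([lambda: False], [], [])` short-circuits to `"build rejected"` without running anything else, which mirrors rejecting a build that fails smoke on the Friday evening release.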

We would share this test plan with everyone, including the team and the PM, so that everyone is aware of what we have to do throughout the sprint. If there is a delay in a release, a build is rejected, or there are too many bugs in a release, we inform the stakeholders that the build was not up to the mark and that it will delay the testing process.

With such clarity and transparency, we ensure that the whole team is aware of the testing process and strategies.

Sole QA working with a team of 8 devs here :raising_hand_man:
Yes, it’s way easier to split tasks into smaller chunks than to make changes in bigger chunks. In my experience, bigger chunks have always led to longer bug-fix cycles, while with smaller chunks you can identify process-related issues early on.

Splitting up large tasks and collaboration between testers and developers both sound like excellent strategies to me. Both incorporate testing earlier (i.e. not waiting until a large chunk of development work is done before testers see it), which will hopefully reduce the overall testing time needed, so the testers won’t get overwhelmed by work.

In terms of development and testing in a sprint, this touches on something that doesn’t quite make sense to me. Ideally sprints consist of new work that is all completed within the two weeks, but even with elements of concurrent working, there will always be more dev work at the beginning of a task and more test work at the end. So what do the testers do to fill their time in the first day(s) of the sprint, and what do devs do once all their work is finished?
In my company, that down time is spent on test planning and documentation respectively, and unfinished work gets carried into the next sprint anyway, but it does seem to me like a fundamental flaw in the idea of a sprint.

The issue is that the team’s QA resource is a bottleneck, due to that resource being limited/insufficient. The answer is therefore to increase QA resources. I would do it in a way that may feel unconventional to some.

Make quality the responsibility of everyone. This means that developers must be more involved in testing and the “tester” becomes a quality coach. The ideal venue for this would be to perform mobbing, where the mob is responsible for a story from todo to done.

If the concern is that developers aren’t great at identifying tests, they get to observe and learn from the testing expert sitting with them and talking about what needs to be done. And sometimes demonstrating it, by doing. You can also do a full pre-code test identification session.

Also, split stories down into quite small chunks - say half a day’s work - and then adopt CI. Ideally this would later be accompanied by CD.

The real advantage of this is that collaboration within the team is massively enhanced, and you also don’t get the problems caused by large amounts of work in progress.

1 Like

Half or a third as many testers as developers seems quite high, compared to my own experiences :slightly_smiling_face:

In this situation, I think there are a few important things to aim for:

  • Collaboration with developers and other team members
    • Just because there are dedicated testers on the team, doesn’t mean that others can’t or shouldn’t also be performing testing activities. Work different roles and types of testing into your strategy, and make sure everyone understands what’s expected of them
  • A shift-left / continuous testing approach
    • If you start testing before development / writing code even starts, you can reduce the effort required at later stages, so that it’s not a case of having a huge amount to do only once something’s been coded
  • Optimisation of processes and timings
    • Some stories / work items take longer to develop than others, which means that the flow of each item through the SDLC usually has different timings. You can utilise this to do different kinds of testing at different points of the process / sprint. Furthermore, if you find yourself waiting for more information, or for someone to action something, etc., then you can use this time for test management activities, such as test planning and preparing automated checks
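On that last point, one way to prepare an automated check before the feature exists is to write it up front and keep it skipped until a testable build arrives. This sketch uses Python’s standard `unittest` skip mechanism; the feature flag, test name, and `apply_discount` helper are all hypothetical:

```python
# Sketch: a check written during refinement, before the feature is
# implemented, and skipped until it lands in a testable build.
# FEATURE_DEPLOYED, the test name, and apply_discount are hypothetical.
import unittest

FEATURE_DEPLOYED = False  # flip to True once the build containing the feature arrives


class DiscountRuleChecks(unittest.TestCase):
    @unittest.skipUnless(FEATURE_DEPLOYED, "feature not yet in a testable build")
    def test_discount_applied_over_threshold(self):
        # Expected behaviour agreed with the team during refinement.
        self.assertEqual(apply_discount(120), 108)  # hypothetical helper
```

Because the test body never runs while skipped, it can reference code that doesn’t exist yet; when the build lands, flipping the flag turns the prepared expectation into a live check.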