Tickets coming over too late in a sprint

I have formed a new QA team in a company that has been developing code for several years, and we now have QA members in all teams. The problem we seem to be having is that tickets from the developers come over late in the sprint, not giving the testers enough time to test. The long-term goal is to bolster the automation tests we have and have QA test work before it gets merged into develop and deployed to staging, so we know that anything deployed to staging has had at least some testing. For now, though, we need an interim solution.

In previous projects I made sure that high-risk tickets came over early in the sprint, that no big tickets came through on the last day of the sprint, and that only minor changes happened on the last day so QA could close off the sprint. This seemed to work quite well.

My feeling is that the development team in my current company is not used to having a tester embedded in the team and is used to developing right up to the absolute deadline.

Is there anything else the QA team could do to limit the spike at the end of the sprint?

  • Why do the teams prefer to use sprints?
  • Why do testers prefer to work after developers are “finished” rather than alongside them during development?

It’s a problem everywhere.
My example: two-week sprints. Friday is sprint planning day, and the board closes after lunch. So anything that is not in the deployed environment by Thursday night will not get tested; tough luck, the work carries over into the next sprint, and the points on the board are useless. Velocity is impacted. And even better, the bugs I find on Friday morning… well. Smaller and early-testable sprint tasks help, but not all software works that way. So the team needs to give me stuff a week early, and the deployment script needs to be in running order.

At the root of this sits the question: Definition of done.


In one company they had the following ways to increase velocity:

  • slice the stories. E.g. a story for a new type of order can be split into four stories: create, read, update, and delete. Have a manual procedure in place to correct things.
  • if the story is still too big, slice it again. E.g. most orders are from businesses, so we start with the story for creating an order from a business. A subscription order can be sliced down to a group subscription order, which in turn can be sliced down to an annual group subscription order.
  • attend the refinement meetings. They were planned in a one-hour slot right after every daily standup.
  • if stories are discussed for too long during refinement, hold informal reviews of a single story before the refinement meeting. That way the feedback can be used to reduce the size and/or complexity of the story.
  • prepare tests while waiting. E.g. select a test tool, prepare test data.
  • reduce the WIP (work in progress). E.g. limit every column to three tickets. Developers then also test, and throughput is higher than in a situation where a column contains more tickets than there are developers.
  • NB: a small story can be deployed, but it might not provide enough value to the customer on its own.

Other ideas are:

  • first implement and test the basic flow; test the alternative flows later. E.g. test the valid order flow first, then focus on the invalid order flows.
  • agree on naming conventions. E.g. that the ids of buttons are stable.
  • do code reviews for testability. E.g. check that the ids of web GUI elements are present.
  • provide draft screens to the testers and let them write scripts to interact with these screens.
  • start pair programming as a tester.
  • as a tester, assist the developer with testing before they commit the code.
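The point about stable ids pays off directly in automation. As a minimal sketch (the HTML snippet and id names here are hypothetical, and any locator-based tool works the same way), a test that locates elements by an agreed, stable id survives markup changes that would break locators tied to styling or structure:

```python
# Sketch: collect element ids from a page so tests can locate controls
# by a stable, agreed-upon id rather than by brittle markup details.
from html.parser import HTMLParser

class IdFinder(HTMLParser):
    """Collect the id attribute of every element in a page."""
    def __init__(self):
        super().__init__()
        self.ids = set()

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name == "id":
                self.ids.add(value)

# Hypothetical page fragment: the button has an agreed stable id,
# while the div only has an auto-generated class name.
page = '<button id="submit-order">Submit</button><div class="x9f">…</div>'
finder = IdFinder()
finder.feed(page)

# A test asserts on the stable id; restyling the page won't break it.
assert "submit-order" in finder.ids
```

The same principle applies whether the team uses Selenium, Playwright, or anything else: once the naming convention is agreed, testers can script against draft screens before the real implementation lands.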

What are the testers doing while the coding is being done? Are they figuring out their tests and what they will cover in exploratory testing?
Find ways for QA and dev to work in parallel.
One good thing to do once the dev work is done is a “rapid test”, where the developer demos the new work to the PO and QA, and QA can ask questions and see if there are any other scenarios to think about. Then, when the work moves over to test execution, QA knows to focus on the exploratory side, since the positive path of the acceptance criteria already works. Hope that makes sense! Let us know what you decide to do and if you see any improvement?

There will be many people (if not everyone) in the community that can relate to this problem, unfortunately.

There are different ways to approach this, but I’ll share what worked for me.

Make the team feel your pain
Having everything thrown over the wall in the hope that something will stick is a horrible process to endure, and if/when this happens, you need to make the team aware there and then that it's unacceptable. Spread the testing out to other people across the team (the ones that didn't dev it, that is) in order to get things across the line by the end of the sprint. I've found that as soon as people have had to experience it for themselves, they gain more empathy.

Get delivery on your side and conduct a team reset
Share your concerns with your delivery manager, because if you want to change processes, you need an ally. If you go in there on your own, even if you're right, you'll struggle; you need someone to back you up. Get everyone on the same page in a session where they can air their grievances, and you can start getting the ball rolling.

Don’t change everything all at once
Pick one problem and get it resolved before moving on to the next. As a first step, I'd suggest getting tickets broken down so you don't get massive chunks in one big go, but it's entirely up to you.

Suggest continuous delivery instead of sprints
There are many benefits to this, and I won't list them all here. In terms of your problem, it'll help work flow through continuously, and you'll be able to work in parallel much more easily.

I feel like I just dumped my brain on a keyboard so please let me know if that didn’t make sense!
