Issue/Defect Management (Agile - In Sprint)

How to manage Issues/Defects relating to the User Story being tested?

Following on from a previous post, also titled “Issue/Defect Management”, which asked a broader waterfall/agile question, I’d like to sanity-check our current Agile process with a larger audience.

We operate a 2-week sprint cycle, not Kanban. A User Story (US) moves across the sprint board from “In Progress” to “Team QA” (= code review) to “Ready For Test”.

During the test cycle, if an issue with the implementation directly related to the US is found, we discuss it with the developer, then detail the issue seen and the steps to replicate it in the comments of the US, and finally set the US back to “In Progress”, where the developer continues to work on it.

  • If we find a separate, unrelated bug, we raise a bug ticket, which is prioritised by the PO for the backlog or the current sprint.

In Jira we can track that a US has moved left on the board (back from “Ready For Test”) rather than right, so we can capture metrics on issues found if we need to.
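For anyone wanting to automate that metric, here’s a minimal sketch of the counting logic. The data shape mirrors what Jira’s REST API returns when you expand an issue’s changelog, but the sample entries below are hypothetical, using the column names from this thread:

```python
# Sketch: count "left moves" (pushbacks) from "Ready For Test" back to
# "In Progress", given Jira-style changelog entries. In practice the
# entries would come from the Jira REST API with the changelog expanded;
# the sample data here is hypothetical.

def count_pushbacks(changelog, from_status="Ready For Test", to_status="In Progress"):
    """Count status transitions that move an issue left on the board."""
    return sum(
        1
        for entry in changelog
        if entry["field"] == "status"
        and entry["fromString"] == from_status
        and entry["toString"] == to_status
    )

# Hypothetical changelog for one user story:
changelog = [
    {"field": "status", "fromString": "In Progress", "toString": "Team QA"},
    {"field": "status", "fromString": "Team QA", "toString": "Ready For Test"},
    {"field": "status", "fromString": "Ready For Test", "toString": "In Progress"},
    {"field": "status", "fromString": "In Progress", "toString": "Ready For Test"},
]

print(count_pushbacks(changelog))  # prints 1
```

Summing this per sprint would give the issues-found metric without any extra ticket-raising.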

We find this a much faster process than raising a separate US/Bug for every issue, which would then need to be prioritised by the PO and either brought into the sprint or put on the backlog (which would then block the current US in this sprint).

We also feel it follows a similar model to other US QA steps, such as Unit Test or Code Review, where I’m sure no one would raise a bug for a code review issue or a unit test failure?

What say ye? :wink:


As with processes in general, whatever works best for you is the best, and it sounds like you have thought about your process. I have worked in teams that had a similar approach. In those teams we had all of these hand-overs between code -> code review -> test -> release -> done, and in most cases we found that all the moving of things around just to provide a status for a task was more administration than it was worth. A concept I like, which we implemented more often than not, is the “Definition of xxxx” pattern.

Basically, a checklist to see whether you can move a user story. The two main ones are the Definition of Ready (what needs to be done before the team can start work on the story) and the Definition of Done (what needs to be done before the story can be considered done). The latter has, more often than not, been the cause of a lot of learning. Each definition is a checklist, e.g. unit tests updated, code review performed, all outstanding bugs reported, and so on. So instead of having a lot of columns on the board, you have the status in the story itself, via the DoR/DoD checklists.

Regarding reporting bugs or not: I personally like to report as few bugs as possible, but that works best when the time between a developer implementing a thing and a tester testing it is short. I would aim for 1 hour. When your feedback loop is counted in days or weeks you need to report them, because there will be occasions where you find a thing and the developers will not have time to fix it in the sprint, so it needs to be tracked so it doesn’t get lost. It sounds like you have a sweet spot there, where you report the non-story-related bugs and work with the related ones as comments. :slight_smile:


Thanks for your feedback @ola.sundin

Yep, from test’s perspective, testing a US once it arrives in “Ready For Test” is pretty much our highest-priority task. We aim, as much as possible, not to be a bottleneck in a US’s journey.
Practically all other tasks can be shelved, and feedback usually reaches the developers within 30 minutes with the above approach.

We would normally move a US out of “our” sprint stage anyway, usually for PO review, so moving it back to “In Progress” is not an extra step for us.
We would also normally detail the testing performed during QA, so adding a failure scenario is an equivalent amount of work.

(As a side note, when I was scrum master at my previous company, I found that DoD/DoR were often ignored unless you included them in every ticket and referred to them constantly!
To start with we just added a link to a central DoD doc to every US, but the team never checked against the doc to see if a story was “Ready” or “Done”. Maybe it was just my team! :wink: )


Hello @brianannett!

I like the approach and practice something similar. I think it works very well where the methodology is an Agile flavor. In that sense, we may learn something new at the end of a sprint which is not always a defect. That something new arises out of the “eyes on” nature of delivering working software frequently.

Build great products on small successes!



We used to just comment problems in the Jira tickets and send them back to dev, but all the things wrong with one story can then get lost in the comments. It becomes unclear what has been fixed and what is still to be done. Our devs like the Bug Sub-tasks that we raise against the original ticket: they are bugs that are part of the ticket and don’t need PO input, as they describe things like missed ACs. If one of those Sub-Bugs proves to be a separate issue, we can always move it out and make it a normal bug that needs prioritization.


Hey Brian.

Agreed that unrelated issues should go to the backlog / through a triage process. Regarding related bugs on the ticket:

I’ve worked in the past using a similar strategy of pushing the main story ticket (US) back from QA to To Do and adding a comment of what the issue is. I found that this only really worked well for me if I found 1 big issue. E.g. push the story ticket back and add a comment that the page is not loading at all and I can’t continue testing.

If you pick up a ticket and find 10 bugs, putting these in a comment became too tricky. The developer would often miss one of the 10 bugs. Then, if it didn’t pass QA again, we’d end up pushing it back a second time and adding a new comment, with some of the old issues still open and some new ones, which gets messy and makes it confusing for devs to know what needs fixing.

Our current strategy involves creating a bug sub-task ticket for every issue that comes back from QA. We keep the main story ticket in QA until we have finished all testing and bug logging, then push the main story ticket back to To Do as well. This means that devs can tell the status of our testing: if they see bugs in To Do on the board but the parent story ticket is still in QA, they know we aren’t finished testing and should wait for the parent ticket to come back before starting on the bugs. This also allows us as QAs to go back and amend or add detail to the first bug tickets we logged, if we uncover more information further into the testing process.

But the biggest benefit for us of doing this is bug tracking. We pull monthly reports from Jira on the number of bugs logged vs story points delivered. This has helped us drive down bugs in our team.

We also use labels in Jira on bug sub-tasks to help us group bugs on an ad-hoc basis (e.g. edge case, AC not met, design not matching, etc.), which we also pull reports on to identify problem areas and adjust our dev processes (e.g. making overlaying the design part of the dev DoD due to the number of design bugs we were finding).
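As an illustration of that kind of ad-hoc grouping, here is a minimal Python sketch. The issue keys and label names are hypothetical sample data; in practice the rows would come from a Jira export or its REST API:

```python
# Sketch: group bug sub-tasks by label to spot problem areas.
# The keys and labels below are hypothetical sample data.
from collections import Counter

bugs = [
    {"key": "PROJ-101", "labels": ["edge-case"]},
    {"key": "PROJ-102", "labels": ["design-mismatch"]},
    {"key": "PROJ-103", "labels": ["design-mismatch", "ac-not-met"]},
    {"key": "PROJ-104", "labels": ["ac-not-met"]},
]

# Tally every label across all bug sub-tasks.
by_label = Counter(label for bug in bugs for label in bug["labels"])

# Report labels from most to least frequent.
for label, count in by_label.most_common():
    print(f"{label}: {count}")
```

Fed with a real export, a report like this makes it easy to see, say, a cluster of design-mismatch bugs that justifies a DoD change.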

We also still track QA pushback on the main story ticket (an incremental number that bumps up every time a story ticket goes from QA -> To Do), which gives an overall view of how QA is doing (as you’ve mentioned you do).

Having all bugs as sub-tasks also gives us a clear view on the sprint board of how we are actually doing. Seeing a ticket with 10 bugs under it with 2 days left on the sprint is a clear indication that action might be needed. Part of our team DoD is also that all of these related bugs need to be fixed for the ticket to be released (even if that means we fail the sprint).

Overall, logging lots of bugs is time-consuming, but we get more benefit, and a clearer understanding across the team of what needs to be done and how we can improve, than we would from storing everything on one ticket. P.S. we also use TextExpander to help us write bugs faster using a template.

Hope you find the best solution for your team!


Hey Robbie
Thankfully, I think our product must be simpler, as we see many fewer bugs than this.
It’s very, very unusual for us to see a user story with multiple iterations of bug/fix.

I thought I was having a tough day until I read your post! :wink:

Where can I apply? :wink:

In Agile testing, the PO provides the details of the functionality to be developed, which is broken down into epics and user stories. Acceptance criteria are then defined per user story. The story is assigned to one or more members of the pod.

The developers and testers work together on a single story. As the developer starts writing the code, the testers should start developing the test scenarios. It is important that testers discuss the scenarios with the developers, to ensure the developers write code that handles those scenarios.

As development progresses, the tester should test the code and let the developer know if they find any issues. The developer then fixes the issues and the tester retests. If any clarifications are required, dev/test consult the PO, get the clarification, and then complete the user story.

Once all the dev/test work is completed for a user story, it is moved to Done and the team starts working on the next user story.

However, things don’t always go according to plan. We may encounter issues which cannot be resolved by the developers immediately, for one reason or another.

If the issue is that the development doesn’t meet the acceptance criteria, the user story is not moved to Done and the team cannot take credit for that story’s points. The story is then moved to the backlog and worked on in an upcoming sprint as per the PO’s prioritisation. But what about instances where the user story meets all the acceptance criteria and there is still an issue in the application?

In theory there would be no defects if the team followed the Scrum framework perfectly. But in practice there are multiple scenarios which end up as defects: a failed scenario for which the requirement is missing, a production issue, a Sev 1 blocker caused by environment instability, and so on. And what if there is an issue unrelated to any of the user stories in the sprint, or an issue in production? These scenarios are difficult to track, since they have no relation to the user stories. In these cases, the testers need to log a defect and provide as much detail as possible.


It is imperative to log defects in order to understand the trend of issues seen during development. This gives an overall picture of the health of the product and makes it easier to prevent defects by analysing the trends of the issues seen. That said, teams shouldn’t spend too much time on defect management, as a working product should be the highest priority.