Lost Bugs - who's responsible for re-homing these...?

Having just found 222 ‘lost’ bugs dating back several years, all of which had been deferred and labelled ‘on-hold’ at the time of testing, I’m implementing a new process to prevent this situation from recurring.

The trouble I face is that this is a grey area that seems to differ from project to project.

What is the general consensus? Should bugs placed on hold at the time of testing be managed solely by the PM or should the responsibility lie with Test or Dev to ensure these are chased up post-release?

I don’t think there’s a right or wrong answer but I’d be grateful for your experience.


You need time in a project for sweeping up after releases. If there’s no time for going over old bugs then they just accumulate in whatever tool you use, unloved and forgotten about.

I think it’s a combined responsibility. Leaving them increases your technical debt, so find a responsible manager and make some space in your project plans. Triage them as a team, fix the important stuff, park the unimportant stuff (never to be fixed) and do a ‘maintenance release’.

This is an area where testers can show their value - these sorts of bugs can bite you in the backside later on if left to fester.


I have never been in a team where ‘losing’ bugs was not an issue. I typically offer a suggestion which may seem extreme, and I have never been able to try it myself (but I know people who have).
It goes a little something like this: Either fix logged bugs now or report them as “won’t fix”. That means (essentially) that all of your 222 ‘lost’ bugs should be marked as won’t fix.
If the project can live with the bugs, then the fix isn’t necessary. That is, it takes more effort to fix it than it is worth. If the fix adds value, then it will be fixed immediately.

There is another possible option with new bugs (the 222 should still be tossed to the side). That is, assign them to a specific sprint/project/time-frame immediately so the team knows that there is a fix planned, but not right away.

This attitude comes from my first official testing job, where my product manager would routinely give me a list of hundreds of issues (mostly from before my time) and ask me for a progress report on them. I would spend days, sometimes weeks, re-testing these issues to find that out of those hundreds, maybe one might still be a problem. The rest were either fixed under the radar, accidentally fixed when someone refactored the code for another feature, so minor or rare that they didn’t remove value from the project as a whole, not valid issues in the first place (testing errors), and so on.

So the PM spent hours (or sometimes days) looking through dead-issues. The test team spent days (sometimes weeks) filtering the dead issues. The programmers spent hours answering the test team’s questions about their recollection of the issues.

And no value was added.


Thanks Brian - an interesting concept and a radical solution. I can certainly relate to your final point that the PM tends to drive what is ultimately an exercise with little value. The developer(s) are asked to work on these in addition to their current workload, and this often means a delay in turnaround as the priority is low (understandably so). In addition, the client has possibly not noticed there are any issues, as most deferred bugs are low-risk.

From a test perspective, however, we need to see closure, if only to avoid duplicate bugs being raised at the next round of testing.

I will adopt a stricter approach and hold a post-release review to ensure the PM, Dev and Test are in agreement as to how to handle these bug types, and to make sure those that fit the criteria can be closed as ‘won’t fix’.

Many thanks again


Like Brian suggested, there is a reason nobody has fixed those bugs yet. Most probably they are not important enough to be on anybody’s agenda.

Here is a short intro to what I would do knowing what you have said so far. The course of action might change if I knew more about your environment.

Why are you worried about those bugs? The number of bugs doesn’t mean anything unless you measure their impact. You could have 100 bugs that result in £1k of losses a day, and one bug that results in £100k a day.
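To make that point concrete, here is a toy sketch (in Python, with entirely made-up bug IDs and figures) of what ranking by estimated daily cost rather than by count might look like:

```python
# Toy illustration: rank bugs by estimated daily cost, not by how many there are.
# All IDs and figures below are invented for the example.
bugs = [
    {"id": "BUG-101", "est_daily_cost_gbp": 1_000},
    {"id": "BUG-102", "est_daily_cost_gbp": 100_000},
    {"id": "BUG-103", "est_daily_cost_gbp": 50},
]

# Highest estimated impact first: this is the order worth triaging in.
by_impact = sorted(bugs, key=lambda b: b["est_daily_cost_gbp"], reverse=True)
for b in by_impact:
    print(b["id"], b["est_daily_cost_gbp"])
```

One expensive bug sorts above a pile of cheap ones, which is the whole argument: counting 222 open bugs tells you nothing until each has even a rough impact estimate attached.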

I do not think there is any grey area: either those bugs cause problems serious enough to be scheduled for the coming iterations, or they do not. If the team runs out of things to do and is idle, you can go back through that list to see if there is anything worth picking up; otherwise I would not worry about it. If there are any important bugs in there and you are afraid they will be lost, don’t be: if they are really important, somebody will raise them again (and again), so do not worry.