How to track legacy (or non-priority) bugs if not in the product backlog?

Hello everyone!

I have been reading some threads/articles online, but I would deeply appreciate some feedback based on my particular case.

The context: a complex domain, historically no QA, a large and old legacy codebase plus new microservice-based code. Scrum is not used as intended, only some of its notions/terms. I am the only QA, newly hired, on a team of 12 developers.

The problem: while testing specific features in the scope of the current sprint, I often find bugs in the legacy code, and no approach that I actually like comes to mind for tracking them. Our product backlog in JIRA is not used according to Scrum: we add only the features/bugs that are to go into the next sprint, and that’s it. This managerial decision is not up for discussion.

So far, I can only think of:

  1. Creating a separate backlog for legacy bugs (the main con: it will become a cemetery).
  2. Adding them to the product backlog in JIRA but immediately labeling them with a “won’t fix” label/status so that they are not visible in the backlog.
  3. When creating test reports for certain features, attaching a list of the bugs found (this feels like the last century to me, with the risk of ending up with several Google Docs containing bug reports for the same module and missing an important issue).
  4. Trying to convince the team to always discuss the bugs found with the developers concerned: the bugs they deem important enough to work on in this/the next sprint are added to JIRA, and the rest are simply discarded and never tracked.

Thank you again!

  1. Always add all new bugs to the current or next sprint, even if they are in the legacy code, minor, non-priority, etc. (if you believe they have to be fixed). Devs or stakeholders may move them somewhere else, but at least they will see them.
  2. Use a labeling system, create a tech-debt backlog, and build dashboards available to the team that show counts and graphs to display the trend.
  3. Try to find measurable arguments for why a bug has to be fixed, e.g. too many customer support reports from users (spending resources on them is expensive), heavy usage (e.g. 3%+ of users rely on these “legacy” parts), or other arguments showing that keeping these bugs open is more expensive or more harmful to the company’s reputation than fixing them; introduce some quality KPIs.
  4. Periodically review this backlog to keep the list of issues relevant, and try to make this activity transparent to the whole team.
  5. Regularly notify stakeholders about this backlog and the importance of fixing existing bugs.
  6. Try to communicate with the team and suggest compromises, such as spending 5–10% of every sprint on fixing old bugs, or dedicating every third sprint (two weeks) to fixing existing bugs from the backlog.
  7. If nothing works and you don’t have the resources to push it forward, then just close all old bugs without verifying them after, say, 6 months in the backlog. It won’t solve the problem, but at least you won’t have a huge backlog. Every time you stumble upon bugs in the legacy code, create them again and proceed as in the first point.
  8. Think about the case for refactoring the legacy code, because it will help fix all the existing bugs there :slight_smile: Again, it might be that the team spends too much time whenever they need to add features or integrate new ones with legacy code that is full of bugs. Usually in such cases the process is painful, expensive, and time-consuming for the whole team, and it obviously affects efficiency and release cycles. So refactoring might be a solution to many problems, including your backlog.

You can create an automated check that exposes the bug.
Instead of failing the build, it sends an email/message to the developer about the possible problem.

This way people will be continuously warned about the issue.

Continuous, explicit, and specific alerting.
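One possible shape for such a “warn, don’t fail” check is a minimal sketch like the following. Everything here is hypothetical (`soft_check`, the message format, the inline example values are not from any real project): the check is evaluated like a normal assertion, but a failure produces a notification instead of raising, so the build stays green while the known legacy bug keeps generating an explicit alert.

```python
# A "warn, don't fail" check for known legacy bugs: evaluate the condition,
# but on failure call a notifier instead of raising and breaking the build.
# `notify` defaults to print; in practice you would wire it to email/Slack.

def soft_check(description, condition, notify=print):
    """Evaluate a known-bug check; alert instead of failing the build."""
    if condition:
        return True
    notify(f"KNOWN ISSUE still present: {description}")
    return False

# Hypothetical example: a legacy bug we track but do not block releases on.
legacy_total = 99  # value produced by the buggy legacy code path
soft_check("legacy invoice total off by one", legacy_total == 100)
```

Run as part of the regular pipeline, each execution re-alerts the team while the bug exists, and the alert simply stops once someone fixes it, which also answers the “is this still reproducible?” question for free.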


While I like the idea…

It’s too easy to ignore emails or to set up a rule that automatically deletes certain mails.



Welcome to the most awesome software quality community on earth, @hellendd.

My opinion only, but unless you can get agreement to fix a bug before the next release, you are better off closing it and forgetting about it. Adding a bug to yet another reminder list is like having two backlogs, and ultimately that is poison to team velocity. Backlog items are like promissory notes, a promise that a thing will be fixed at some point in the future, and because of that they are a productivity poison. They hang over your head and get in the way during backlog-grooming sessions. And because they are a “promise”, your support team will keep waiting for them, as will your sales team. In vain they will wait.


Besides the tester, the others don’t care about any of the discarded issues.
The business isn’t affected by them according to the Product manager and the team.
So why spend all this extra effort?

I’ve worked in a place where we did what you mentioned for open backlog issues. For about 3 weeks every 3 months, a tester would triage thousands of open backlog issues. Even for open issues that was a lot of wasted time. We adapted and changed our approach after about a year: reporting fewer trivial issues and deleting/rejecting issues faster. I don’t see it being worth it for rejected issues, unless something new in the app changes that increases the impact of an old bug.


Doing that for rejected issues is a nice way to destroy your relationships with many people and to get the automation completely ignored from then on.

In my experience, at some point with failing automation, everyone else would say that only the testers should receive this report: “We only care about the overall pass/fail.”

How to track legacy (or non-priority) bugs if not in the product backlog?
As long as you do your job of informing the product manager of the issues you noticed and do the bug advocacy as you see fit, you can move on. It’s their job to add them to the sprint or the backlog, or to ignore them.


Yep, basically like this thread Severity vs Priority vs Urgency vs Impact of Software Bugs - #4 by


I agree with you; basically, it was one of my points (7) :slight_smile: I just provided different options because situations differ and people may have different reasons and motivations. I’ve been in situations like the one you described. I’ve worked for a big corporation on a project with a huge backlog of “old/legacy” bugs: the business wasn’t interested in fixing them, some devs weren’t either, and others were indifferent, but QA there was more like a service, and my direct manager wanted these bugs not merely closed but actually fixed. The company forced some QA KPIs on all teams and products, so I had to look for a balance between two different streams of goals and my personal goals in that company :sweat_smile:


Thank you @shad0wpuppet and @ipstefan!

@ipstefan So, were you mostly reporting fewer new issues or rejecting more already-opened ones, or a balanced combination of both? Indeed, I would not like to invest my (and others’) time in something that is not valuable to customers. At the same time, I am not quite confident in my own memory and am afraid that, without tracking and the ability to look up defects found in the past, I might start reporting the same issues again and again.

@shad0wpuppet Just thinking aloud about your points.

  1. Is not feasible at the moment.
  2. Are you basically talking here about a separate backlog? I was actually thinking of applying labels, or in some other way doing a sort of triage based on who the end users of the tested module are (internal, free tier, paid tier with major/minor customers).
  3. Yes, I agree that it must be done in the case of important bugs.
  4. and 5. Well, not quite applicable.
  6. Have already been discussing such options with them :slight_smile:
  7. An interesting idea about setting this sort of deadline.
  8. We are already doing refactoring. It is the goal.

Thank you for the idea; it could be considered when we approach automation in a more systematic manner.

Thank you so much, @conrad.braam, for the welcome and for your reply! I appreciate your opinion. What do you think about situations where you report some bugs in a feature and the team lead’s response is “it is not our immediate priority now, but we will definitely revisit and work on this module later”? “Later” is uncertain; the bugs cannot go into the backlog because they are not going into the next sprint, but I cannot just drop them because I was more or less promised they would be taken into consideration, just later. And when this situation repeats itself several times, you start to wonder what to do with all those unsorted bug reports.


It is easier to do as you say but I am afraid (in my specific situation) if I just stop caring they will stop caring as well.

Yes, Elena, these things are easier said than done. And sadly, it takes years of broken promises to break the lifetime habit of “hoarding” all the bugs. I recall a point about 10 years ago; I’d only been a tester for about 5–6 years at that time:

Our release manager took me aside and explained that he was only ready to ship a release when all P1 bugs (we used priorities 1/2/3 for all bugs, and 4 was won’t-fix) were fixed or downgraded to P2, and when the number of open P2 bugs stopped increasing. He did not give a fig about our P3 or P4 bugs. Not a care at all. The penny dropped for me.

I finally clocked that the prioritising of bugs was really a big horse-trading affair, and the only bugs people look at are P1 and P2. Now, prioritising bugs is an entirely separate topic of its own, and most Jira board users do NOT actually prioritise all their bugs or even give every bug a criticality. Teams can only decide what to work on in the next sprint using some kind of evidence of harm, and change is constant. The reason is simple: only the high-priority bugs will ever get fixed; the pressure for new features and new things is too great. Old bugs are better off being closed; then see if someone either re-opens them or raises them anew and manages to argue them up to that P1 or P2 level. Basically, having open bugs in the background increases the mental load for the team, and hyper-productive teams that truly call themselves “agile” are only agile because they look only at what is ahead, not what is behind. Implement a zero-bug policy - Peter Hilton

As I said, Elena, it’s my opinion, but as a tester it’s your job to find new risks, not to curate the old ones. Curating (keeping alive) old bugs is really the product owner’s job; our skills are wasted on that, as are our testing tools and training budgets. The helpdesk or support team can go ahead and curate those old bugs; they can even try to re-open them or raise them afresh, and nothing stops them if they have new evidence. And real evidence of harm to the business is what drives fix actions. Above all, be sure you practise some kind of cleanliness routine: keep re-evaluating your processes, and discard processes or steps that don’t give results, since they are just eating your time. Let the business care about what the business cares about; I’m pretty sure they don’t have a mission statement that says “we will fix all bugs”. What I can suggest, however, is that you float the idea with your team of killing off all bugs older than 12 months; you might be surprised how many new friends you make.


Thank you again, Conrad, both for sharing your experience and insights and for the link on the zero-bug policy; I had read some articles on it before, but this one is better. They might like the idea (though, I am afraid, more for its discarding aspect than for the “fix the bugs first and then move on with the features” one).
Well, we do not have any problem with bugs more than 12 months old; we simply do not have any. The problem is rather the opposite: bugs may remain tracked in personal notes, direct messages, Slack channels and so on, with no guarantee that they won’t be forgotten in the rush of the working process (but that is another story!)


Oh, people writing notes instead of adding them to the bug tracker is in some cases caused by a metrics-driven culture, where bugs are used to beat the developers with, to compare teams, or to drive performance reviews.
The other trouble with not logging “small” bugs, as opposed to logging them and quickly triaging them as won’t-fix, is that it makes it impossible to search for previously rejected “small” bugs. If I want to raise a bug, I can then find the rejected one, along with the reason it was rejected, and save everyone some time. Good bug-description writing is key to making this work.

This is a management problem. Not a process problem and not a quality problem.

Working around the elephant in the room will not remove said elephant; it will just slow you down. Not only are you not permitted a product backlog, but you suggest that you are the only person on the team who cares about quality. Both are normalised dysfunction.

These are organisational defects, not software defects. You cannot be successful whilst they remain.