I've been thinking about how to prioritise bugs in the backlog. How do you go about it?
The short answer is, I don't.
I avoid planning like the plague.
I will, however, give my opinion about the risks of the issue.
Is the issue something that will break the product?
Is it something that will break people's perceptions about the quality of the product?
Is it something that, while it seems diabolical, doesn't really have much effect on the product or the perception of the product? (Some edge cases might fall here)
…
Then the people who make decisions may act on my observations.
Usually, the action is directly related to my recommendation, especially after Iāve built up some trust in the team.
When I joined my present company some four years ago, I was a little surprised that bugs weren't assessed for severity or impact, the way they had been everywhere else I'd worked; but I got used to it.
A show-stopper is usually fairly obvious to everyone. I do sometimes put a comment in the bug report to the effect that "If we triaged bugs, I'd rate this as a 3" (or 4), usually to tell the dev that this particular bug isn't, in my opinion, a huge priority to fix.
Then again, we talk to our devs.
Sometimes I find the struggle is around how you get minor UI issues fixed, like spelling mistakes or misalignments. Although such issues are only cosmetic, they do damage the perception of the product.
Nice, I get the feeling your team is very proactive and collaborative? Am I right in that assessment?
Wherever I've worked, we've always had well-defined Priority and Severity for defects, although in my current role, the more "agile" the process the team follows, the less these are adhered to. Some teams just fix every defect as it's raised.
Others have triage and priority sessions with the wider team, rather than prioritising as testers alone. Having the devs, architect, PO and any other technical stakeholders involved can help ensure the right fixes are prioritised.
For me, if the defect has a direct impact on the customer or on financials, then these should be prioritised for fixing ASAP. Anything else should be scheduled within the next delivery iteration.
Also, any defects which don't get fixed within 2-3 delivery iterations are evidently not that important and can be closed off.
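That rule could be sketched as a tiny triage helper. This is only an illustration: the field names and the three-iteration cutoff are assumptions, not anything the thread prescribes.

```python
# Hypothetical sketch of the scheduling rule described above.
# Field names and the 3-iteration cutoff are illustrative assumptions.

def schedule(defect: dict) -> str:
    """Decide where a defect should land in the plan."""
    if defect["customer_impact"] or defect["financial_impact"]:
        return "fix ASAP"
    if defect["iterations_open"] >= 3:
        return "close - evidently not important"
    return "next delivery iteration"

print(schedule({"customer_impact": True,
                "financial_impact": False,
                "iterations_open": 0}))
```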
Yes Melissa, you're right about our team!
You wrote earlier about getting minor UI issues fixed. We introduced a section in the bug report that describes the type of bug in terms of the quality aspect the bug impacts, such as accessibility, functionality, impressionability, performance, security, testability or usability. That way, we can concentrate devs' time on making bugfixes around specific aspects.
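As a rough illustration of how such a field can be used when planning a focused bugfix push (the bug data here is entirely made up):

```python
# Made-up bug reports, each tagged with the quality aspect it impacts.
bugs = [
    {"id": 101, "title": "Form not keyboard-navigable", "aspect": "accessibility"},
    {"id": 102, "title": "Logo misaligned on mobile", "aspect": "impressionability"},
    {"id": 103, "title": "Search times out on large projects", "aspect": "performance"},
]

def by_aspect(reports: list, aspect: str) -> list:
    """Filter bug reports down to a single quality aspect."""
    return [b for b in reports if b["aspect"] == aspect]

print(by_aspect(bugs, "impressionability"))  # only bug 102 remains
```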
I like the impressionability one! This is one that I am looking for.
When prioritizing bugs, I sort them into three categories:
a) High
b) Medium
c) Low
High is when the bug breaks major functionality, i.e. the functionality does not work as expected, or anything more severe.
Medium is when the major functionality works but there are issues in cases around it - rare cases or occurrences that arise only when a user behaves in an unusual manner.
Low is when the functionality works fine but there are minor issues, such as design glitches. Low-priority bugs are fixed only when the developers don't have major tasks.
There are factors like the development schedule that I consider when prioritizing. For example, functional issues are given the highest priority, whereas I report minor issues when developers are less occupied. However, if it is a production-related cosmetic issue, I have the developers fix it ASAP.
In the defects database, there are usually fields called "Severity" and "Priority" (note that they are different!). Depending on the customer's needs, you could take up the ones that are important to them. Note that their needs will vary from time to time - sometimes they want the high-"Severity" defects fixed, and sometimes the high-"Priority" ones.
It depends on your situation, really. Things like priority and severity are useful for framing bugs.
In agile teams I prefer a bug being treated as a conversation starter, similar to user stories. If something gets found, then an ad-hoc chat (triage) to decide what happens next is the best approach. If it's a quick fix, turn it around as soon as you can and don't waste time with bug reports. If it can't be fixed easily or quickly, then the team can agree next steps.
It helps if the team has a common understanding of the types of issues and principles for fixing them.
For example:
- Typos get fixed with the story, or, if found afterwards, fixed with the next story in that area.
- Aim to fix things as part of the story
- Know what a "high" bug looks like. Use previous bugs as references. Same goes for other classifications.
If you're not working closely with the team, some of that is really difficult. But having examples and guidelines has helped me in the past.
If it's an obvious change I submit a PR. Obviously this doesn't work for everyone or in all cases, but it has worked for me a few times.
An option I learnt in some regulated industries is to do a risk analysis by assigning values to three different variables, referring to Production:
- Severity = the damage the issue would cause
- Probability = the likelihood of the issue to appear
- Detectability = how long it would take for someone to realise the issue exists. Its scale is the opposite of the other two: an issue that is easily detected is less critical than one that is hard to detect but exists in the background and might cause problems for a long time.
It is often used for user requirements, but it can be used for defects as well; although it took some time to fill in the three variables for each issue, it gave a very accurate prioritisation.
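A minimal sketch of that kind of scoring, assuming 1-5 scales and a simple multiplied score (similar to an FMEA risk priority number). The scales, field names and example defects are illustrative assumptions, not the exact scheme described above.

```python
def risk_priority(severity: int, probability: int, detectability: int) -> int:
    """Multiply three 1-5 scores into a single risk number.

    Detectability uses the inverted scale described above:
    1 = easily detected, 5 = hard to detect, so hidden issues score higher.
    """
    for score in (severity, probability, detectability):
        if not 1 <= score <= 5:
            raise ValueError("each score must be between 1 and 5")
    return severity * probability * detectability

# Illustrative defects, sorted riskiest first.
defects = [
    ("typo on help page", risk_priority(1, 5, 1)),        # score 5
    ("silent data corruption", risk_priority(4, 3, 5)),   # score 60
]
defects.sort(key=lambda d: d[1], reverse=True)
print(defects)
```

Multiplying rather than adding makes a defect that is bad on all three axes stand out sharply from one that is bad on only one.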
While this appears to be a perfectly scientific method for prioritising the bugs to be fixed, the world of customers is strange, and they have their own unique ways of identifying which ones they want fixed first! So it is always better to check with the customer as part of the regular conversations.
In my case, I would check with our product owner, as I do not have regular direct contact with our customers. Also, we need to make a judgement as a team first: if we can fix all the bugs found quickly, we don't need to check with customers on their priority; we just get them fixed.
Yes, of course, the Product Owner is the interface to the customer, and that's implied.
Regarding prioritization, this whole question was raised only because the questioner found it difficult to prioritize (potentially because of the volume of defects to be looked at), so the scenario of fixing all the defects before reaching out to the customer does not arise in their situation.
Yes, this is an approach that works in regulated industries, where everything is absolutely defined, there is no room for interpretation, and where regulatory agencies' feedback is at least as important as customers'.
I guess there is no one-size-fits-all solution for this topic.
The detectability one is very interesting - do you consider it part of probability, or is it a separate score altogether?
IMHO detectability is a great metric if you can calculate it, and it is different from probability.
Think of a web app that is failing to save any data to a backend API, say half the time.
This is an annoyance if it fails loud and you know it failed to save. Say some red error text presented to the user. So 50% probability, but 100% detectable.
Now consider if it fails silently and the only way you know is when you try to read the data back and it's missing or out of date. Still 50% probability, but much lower detectability. And a more serious bug.
Obviously both want fixing, but I would prioritise the undetectable one ahead of the one where an error was displayed and the user could take action.
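To make that comparison concrete, here is a toy scoring of those two bugs, reusing an inverted detection scale (1 = failure is obvious, 5 = failure is silent). The formula and numbers are illustrative assumptions, not measurements.

```python
def risk_score(probability: float, detection: int) -> float:
    """Failure probability times how hard the failure is to notice."""
    return probability * detection

loud = risk_score(probability=0.5, detection=1)    # red error text: user sees it
silent = risk_score(probability=0.5, detection=5)  # data quietly lost

# Same 50% probability, but the silent failure ranks as the more serious bug.
print(loud, silent)  # 0.5 2.5
```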
It's sometimes hard to distinguish probability from detectability, because it seems that the more likely an issue is to occur, the more detectable it is - but that's not always the case.
For example, imagine that there are some calculations in the software that should be done using all the decimals available, but by mistake someone created the calculation rounding to just one decimal. The calculation is done incorrectly every time, so there is a 100% probability of occurrence, but it won't be detected until it causes a visible error - low detectability.
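That kind of rounding mistake is easy to reproduce. A hypothetical sketch (the rate and amounts are made up):

```python
RATE = 0.12345  # the intended full-precision rate

def interest_correct(principal: float) -> float:
    return principal * RATE

def interest_buggy(principal: float) -> float:
    # The mistake: rounding the rate to one decimal place.
    return principal * round(RATE, 1)

# The buggy version is wrong on every single call (100% probability),
# yet nothing fails visibly until someone cross-checks the figures.
print(interest_correct(10_000))  # about 1234.5
print(interest_buggy(10_000))    # about 1000.0
```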
Remember too that it is a way of doing risk assessment for requirements, and in this case it was re-used for bugs. It has some complexity, but with real issues it is usually quite a good approach, especially when you want to avoid "subjectivity".
Editing to add: in fact, something similar is what happened with Ariane 5… low-detectability issues might be really dangerous.