I'm looking for suggestions on how best to triage bugs that are reported by people in my company. Currently, they just report things in Slack as text messages, but I want to create a form for them to fill out that adds more structure. I'm not sure what I should ask for in that form apart from some of the obvious things like "What steps do I need to follow to reproduce this?" or "What happened?"
I have read some articles about severity levels and Chrome extensions for automatically recording logs or screenshots, but I would love to hear how different people actually triage bugs when they're reported.
Are there any specific data points that you rely on for prioritising the bugs reported to you?
Thank you all in advance for sharing your thoughts.
I can only speak from my own experience, so I apologize if this doesn't help in your situation.
First is categorization: Time, Money, Resources, or any combination of the three.
Typically, if only one of these is the focus, a 1-3 severity scale is good, but for all three a 1-5 scale is recommended.
Standardization is vital: keep a list of all the questions you ask, pick the most frequent ones, and put them in a logical order. Reporters then need to answer these to log the ticket.
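The "standard questions in a logical order, answered before the ticket is logged" idea can be sketched as a tiny validator. The specific questions below are illustrative assumptions, not a canonical list:

```python
# Assumed standard question list, in a logical order; swap in whatever
# questions your reporters get asked most often.
REQUIRED_QUESTIONS = [
    "What were you trying to do?",
    "What steps reproduce the problem?",
    "What happened?",
    "What did you expect to happen?",
    "How often does it happen?",
]

def validate_report(answers):
    """Return the questions still missing a non-empty answer.

    The ticket can only be logged once this list is empty.
    `answers` maps question text to the reporter's answer.
    """
    return [q for q in REQUIRED_QUESTIONS
            if not answers.get(q, "").strip()]
```

A form tool would enforce this for you, but even in Slack a bot (or a human) can bounce a report back with the unanswered questions.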
Research, research, research; don't ever stop.
Find a ticketing system within your budget. If none fits, the cheapest option I have seen is a shared Excel sheet across the company that logs each bug, numbers it, and lists the files involved.
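The shared-spreadsheet fallback above can be approximated with a plain CSV file. This is a minimal sketch; the file name and column names are assumptions:

```python
import csv
import os

LOG_FILE = "bug_log.csv"  # assumed name for the shared log file
FIELDS = ["id", "summary", "steps", "severity", "files_involved"]

def log_bug(summary, steps, severity, files_involved):
    """Append a numbered bug entry to the shared CSV log and return its ID."""
    new_file = not os.path.exists(LOG_FILE)
    next_id = 1
    if not new_file:
        # Next ID = number of existing data rows + 1
        with open(LOG_FILE, newline="") as f:
            next_id = sum(1 for _ in csv.DictReader(f)) + 1
    with open(LOG_FILE, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "id": next_id,
            "summary": summary,
            "steps": steps,
            "severity": severity,
            "files_involved": ";".join(files_involved),
        })
    return next_id
```

A real shared spreadsheet adds concurrent editing and visibility, which this single-writer sketch does not attempt; it only shows the data shape.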
Again I apologize if this doesn't help your situation, but I hope it helps.
What's your reasoning for wanting to create a form? What problem are you trying to solve? (Are you getting too many low-quality / vague internal reports? This is certainly a problem I've seen before.)
From my experience, the more structure you introduce, the more barriers you create to actually having people report the issues. ("I've found something that might be a problem… but I don't have time to fill out these ten fields right now.") And I'd be wary of letting reporters (internal or external) select severity for themselves, as they're not impartial.
All of which is to say… if the current Slack method is working for you, maybe that's okay? But if you're looking for the bare minimum requirements to improve the situation, then "please include a screenshot or video with all bug reports" is often good enough. (There are lots of good plugins for Slack such as Loom which can make this almost effortless for people.)
The challenge is when you receive 10 valid reports from an external source but only accept and fix 5 of them.
How did you make that priority call and decide those 5 bugs are important to fix, while the remaining ones are "good to know" but not something for you to action?
What factors do you take into consideration? E.g. priority = customer impact + impact on core functionality + number of environments impacted. What does this equation look like?
If you had to make a priority call on 60 bugs received externally, what information, besides the bug report itself, would be useful to enable you to make that call quicker?
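One way to read the additive formula hinted at above is as a weighted score used to sort a batch of incoming reports. The factor scales and weights below are illustrative assumptions, not an established scheme:

```python
def priority_score(customer_impact, core_impact, envs_impacted,
                   w_customer=1.0, w_core=1.0, w_env=1.0):
    """Additive priority score: higher means triage sooner.

    Each factor is assumed to be rated 0-3; the default weights treat
    the factors as equally important, which is itself an assumption.
    """
    return (w_customer * customer_impact
            + w_core * core_impact
            + w_env * envs_impacted)

def triage_order(bugs):
    """Sort reported bugs so the highest-scoring one is triaged first.

    `bugs` is a list of (name, customer_impact, core_impact, envs_impacted).
    """
    return sorted(bugs,
                  key=lambda b: priority_score(b[1], b[2], b[3]),
                  reverse=True)
```

The point is not the exact arithmetic but that asking reporters for these data points up front lets the sort happen without a round-trip conversation per bug.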
Oh, brilliant question. Welcome to the club.
Slack is not a bug tracker. A developer cannot action a Slack conversation, so whatever tracker you use needs to allow you to apply a severity or a priority, or both. Use Slack to gather info only.
Don't involve the external people in priority discussions at all; it's a time waster. When logging an issue I advise against using both fields, and against high-granularity scores: LOW/MED/HIGH is all you need. Five levels take incrementally more cognitive load and slow down scoring compared to three. Some places actually use the priority field as a severity field, and less mature organizations think priority is interchangeable with severity, or are simply confused.
So…
This week we ran a dogfooding session and got a lot of bugs from people. For every new report, we thanked the reporter and either raised a ticket or pasted the existing bug number next to the conversation, and encouraged them to continue raising more issues. Just the act of encouraging people to keep raising issues, even duplicates, was a tactical change. It meant we got far more bugs, but it also meant that duplicates merely pushed the priority of a bug higher, and sometimes gave us more data.
It's important that people external to your team understand that all reports will be tracked in a database, even minor ones. Even if they can never read the tickets, they need to know that there is a system, and that issue ownership does not fall to them.
Dev and Test are required only to size defects and describe workarounds and impacts. It's not the developer's job or the tester's job to assign bugs on their own. Deciding what to fix next is a job for the product owner to get involved in together with the team. You cannot, and do not want to, prioritize bugs in a Slack thread; that is a recipe for disaster. Sorry if this sounds like an overt opinion; it's just the way I write based on my context, and yours will differ.
Hi Conrad, I agree with you completely: what to fix is not a decision an external source can take… I think the question is not "what do you need fixed" or "what do you need to fix first".
The question is how the results can be presented, or what data can be provided with a bug, to intelligently tell you what you should be looking at first, so the triage you do to prioritise and fix something can happen quicker.
I really appreciate your reply @conrad.connected (and respect your overt opinion). Thanks for sharing your thoughts with me. I like your approach of encouraging more bug reports and using the duplicates as a confirmation signal that can push the priority higher. Thank you
So generally I like to stick to the 3x3 grid for a quick risk analysis. It's not perfect, but it can help reported issues become actionable, since they'll be organized. It can be found here: Risk Analysis, about a quarter of the way down the page. It's a 3x3 grid that can be applied to identify the frequency of the issue and the severity of the issue. This way I can make the case to the business for whether something should be prioritized: "Hey, this is severe, clients are affected on all fronts and it kills the app/workflow, this needs to be fixed right now" versus "Oh, it's affecting a small number of users and there's a workaround for them", so maybe no pressing need to swarm a fix. We're never going to fix all the bugs, and this approach allows me to quickly make judgements and recommendations. The more severe or frequent, the higher the priority.
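The 3x3 grid described above can be written down as a simple lookup table. The exact cell assignments below are assumptions (the linked article's grid may place the boundaries differently); the shape of the idea is what matters:

```python
LEVELS = ("LOW", "MED", "HIGH")

# Assumed cell values: priority rises with both frequency and severity.
# Keys are (frequency, severity); values are the recommended priority.
GRID = {
    ("LOW", "LOW"): "LOW",   ("LOW", "MED"): "LOW",   ("LOW", "HIGH"): "MED",
    ("MED", "LOW"): "LOW",   ("MED", "MED"): "MED",   ("MED", "HIGH"): "HIGH",
    ("HIGH", "LOW"): "MED",  ("HIGH", "MED"): "HIGH", ("HIGH", "HIGH"): "HIGH",
}

def grid_priority(frequency, severity):
    """Map an issue's frequency and severity onto a recommended priority."""
    if frequency not in LEVELS or severity not in LEVELS:
        raise ValueError("frequency and severity must be LOW, MED, or HIGH")
    return GRID[(frequency, severity)]
```

Because the grid is just data, a team can argue about (and version-control) the cell values separately from the triage process itself.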
On the opposite side of defect prioritization heuristics is what I call the low-hanging fruit: defects that are cheap/quick to fix, cheap to verify, AND have a low risk of injecting a fresh bug. Because their priority is often low, they can stay at the bottom of a backlog until a junior developer joins. These are the bugs most likely to build up UX debt in your product, so aim to drag at least some low-risk defects into every sprint. Sometimes these are the small bits of polish that get you over the "quality perception" bar in a release.
Very true! It's part of why I said it's not a perfect tool, since there truly isn't a perfect way; the context of any given situation can change the answer or action item. You definitely don't want a bunch of workarounds, because the end users will just stop using your service. I just really like this 3x3 grid for quickly focusing the conversation and making it digestible.
We evaluate the bugs reported internally & externally the same way. So whenever somebody reports a bug we'll find a reproduction & assess the impact, risk & severity and give it a priority.
If the information isn't clear we'll ask for more details and screenshots (preferably gifs/recordings).
If it's still unclear we'll sit together with that person to find what's wrong.
So if you are interested in creating a form, I would suggest having reporters use ScreenToGif (or any gif/recording tool) and upload what they were doing. With just this, you're covered 95% of the time.