In our current reporting process, we share QA-discovered bugs on a regular basis, such as monthly and yearly reports. We categorise them based on total count, and further break them down by bug severity, product, and ‘Bug Type.’
The existing bug categories include:
Accessibility
Compatibility
Functionality
Performance
Regression
Security
Spelling
UI
I’ve noticed that regression is rarely utilised, and many bugs tend to be classified under Functionality. I’m curious about the bug categories you use to classify and report issues. How do you break down and organise bugs in your reporting?
Can't bugs have multiple categories? In my previous gig we used Azure DevOps, which allows for tagging, and tags are queryable. We would tag bugs or defects in useful ways: Type, Area, etc.
As for Regression, the general consensus in my company was that it was a defect that had already been guarded against by past test activity, whether automated or manual tests.
So a bug or defect report might be categorized like so in the tagging:
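To make the tagging idea concrete, here is a minimal sketch in Python. The `Bug` class, the example titles, and the `Type:`/`Area:` tag convention are all hypothetical, stand-ins for whatever your tracker (Azure DevOps or otherwise) stores; the point is just that free-form tags can carry multiple dimensions at once and remain queryable:

```python
from dataclasses import dataclass, field

@dataclass
class Bug:
    """A defect report carrying free-form, queryable tags."""
    id: int
    title: str
    tags: set[str] = field(default_factory=set)

# Hypothetical defects, each tagged along more than one dimension.
bugs = [
    Bug(1, "Checkout button unresponsive", {"Type:Functionality", "Area:Checkout"}),
    Bug(2, "Login broken after 2.3 release", {"Type:Regression", "Area:Auth"}),
    Bug(3, "Slow search results", {"Type:Performance", "Type:Regression", "Area:Search"}),
]

def by_tag(items, tag):
    """Return all bugs carrying the given tag."""
    return [b for b in items if tag in b.tags]

# A bug can be both a Regression (how it was caught) and a
# Performance issue (what kind of problem it is).
print([b.id for b in by_tag(bugs, "Type:Regression")])  # [2, 3]
```

Because each tag is independent, "Regression" stops competing with categories like Functionality or Performance for a single slot, which is exactly the multiple-categories question above.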
While all the other categories somehow describe the type of problem or its cause, regression to me describes something organisational about your testing.
Regression means you tested and found the bug; to me it says nothing about the cause.
These are two different dimensions.
I don't care about this, and I haven't seen any manager who does, either.
It can be a waste of time, unless you have a problem or you intend to do something with that category/type that will help fix more of the important issues.
Generally, I’ve experienced these types:
binned now or after a few months,
stashed with a link to the feature they affect, or
I have used defect-report classification in my reporting metrics to execs and management.
I was able to demonstrate trends and identify product areas that were experiencing higher or lower defect rates. Combined with similar data from production issue reports, this helped show where attention was needed to "tighten things up", which all falls under your "having a problem you intend to do something about".
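The trend reporting described above can be sketched with nothing more than the standard library. The month/area records below are invented for illustration; in practice they would come from exported defect and production-incident reports:

```python
from collections import Counter

# Hypothetical defect records: (month, product area, severity).
defects = [
    ("2024-01", "Checkout", "High"),
    ("2024-01", "Checkout", "Low"),
    ("2024-01", "Search",   "Low"),
    ("2024-02", "Checkout", "High"),
    ("2024-02", "Checkout", "Medium"),
    ("2024-02", "Auth",     "Medium"),
]

# Counting defects per (month, area) surfaces where rates rise or fall,
# which is the kind of trend an exec report can act on.
trend = Counter((month, area) for month, area, _severity in defects)
for (month, area), count in sorted(trend.items()):
    print(f"{month}  {area:10s} {count}")
```

A persistently high or rising count for one area is the signal that it may need "tightening up"; a flat, low count suggests the categorisation effort is not buying much there.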
I do agree, though, that unless the information results in action, it is busy work that doesn't accomplish much.
This seems like an interesting approach. I was thinking of this but wasn't sure whether I should discuss it with my line manager. This kind of categorising helps reduce work and reveals what types of bugs are usually found.
I think this approach also helps us create another oracle we can use during our testing efforts, particularly within a time-boxed exploratory testing session, where an area is explored to reveal helpful information about risks.
Over time it's an oracle that grows and can start to inform us: which areas of the code/application/interface/API/design tend to be more susceptible to bugs/unknowns/risks?