Does it really make sense to manage bug reports as backlog items, on non-development teams?
I’m a QA Manager using SAFe. One of my ARTs consists of several non-development scrum teams. They own products but outsource all development work to vendors. However, some of the teams have sprint stories to test the product once they receive new versions from the vendor, though this is more of a re-test or sanity test because the vendor is responsible for the primary testing. Nevertheless, bugs are routinely found, requiring the vendor to fix them and send a patch. The scrum team is inclined to log bugs in a spreadsheet. At first, my anti-pattern alarm was going off. But after spending time wrapping my head around their context, I’m not so sure.
On dev teams, it’s intuitive to estimate work around bug fixes because the bug fixing and testing will be performed by the team. But if that work is not performed by the team, logging bug reports (e.g., as Jira issues) seems a little weird. Let’s not do it merely because dev teams do it.
I would love to hear from people with similar experiences.
My anti-pattern alarm is going off about needing a second round of testing and the lack of trust that implies. But clearly the distrust is well-placed, if finding bugs is a normal event. I’d be querying how we can make the vendor’s testing better (or changing vendors), rather than accepting your predicament as a given.
On the assumption that you are stuck in a bad situation that you cannot change, I think that a sprint backlog should consist of the work that the scrum team is doing. So having issues in the sprint backlog that represent the test work being done is reasonable. I agree that logging bugs in a spreadsheet is not the best process. A better approach might be to use a defect tracking system (DTS) and give your vendor access to it.
The alarm bells you are hearing are because of a dysfunctional situation and process. If you can fix that, that’s your best course of action. If not, shared access to the DTS is probably the best way forward. If you’re forced into progressing the same story across multiple teams, at least limit the gap between the teams as much as possible. If there can be actual collaboration, all the better.
I’m confused. If you’re outsourcing all development, what exactly are your “non-development scrum teams” doing? It sounds like they’re doing more than testing, and the whole arrangement seems really odd to me. It also seems odd to be using a heavy framework like SAFe in that context.
The question you should be asking is: what benefit do the teams see in logging bugs in the backlog? They might just be doing it because the process says so; if that’s the case, scrap it, as it sounds like doing agile rather than being agile. If they do see benefits, e.g. seeing all outstanding work in one place, or being able to prioritise the bugs alongside other work items, it might be worth keeping. But I think you need to ask the question.
If they are putting bugs in the backlog and also sending them in a spreadsheet, that is waste. Hence the idea of giving the developers access to the DTS. However, a better approach would be working in such a way that bugs aren’t created in the first place.
This agile team is accountable for getting software deployed to retail stores. Their Sprint Backlog has work for staging the software in labs, running functional and integration tests, working with the vendor on new features or bugs, writing technical documentation and training materials for the retail store associates, and helping the stores with UAT, installation, and training.
Thanks for sharing, Eric. Good to learn about your situation.
I’ve not had experience with a similar situation, yet I’m attempting to step into the shoes of your non-development scrum teams.
It seems they need some form of basic tracking to give them confidence that they’ve noted things down. So perhaps whatever works for them is fine. Do they tend to revisit the spreadsheet and check things off when they have things to retest/sanity check? Do they categorise items to spot patterns over time?
Either way, perhaps it’s fine they note things down. I wonder if they need to demonstrate work to other managers or senior folks, and the spreadsheet is one way to say “hey look, we found these things, how might we better work with the vendor to reduce them over time?” Perhaps it’s a way to put pressure on the commercial contract details with the vendor, i.e. the buggier the software, the less we pay (I’ve exaggerated for effect).
What happens when you ask them about their spreadsheets?
Yep, it’s a different situation given the article describes a development team, yet maybe there’s something in the following:
The idea behind this policy is that you do not have a backlog of open bugs. This means that when a bug is raised you either commit to fixing it right now or you close it as a “Won’t Fix”.
Downsides of a Zero Bug policy:
An open bug backlog can be a useful source of information for a new developer on a team.
People could change their behaviour and stop raising bugs
“Won’t Fix” is misunderstood as final
The benefits of a Zero Bug policy are plentiful.
Everyone knows how many open bugs there are
It can be a psychological relief to the team to not have a bug backlog to sift through
No lengthy triage meetings
Improved communication between teams and bug reporters
The Zero Bug Policy experiment is great. I love it!
This team I’m talking about already tried logging bugs in Jira. They found they didn’t like it because they don’t estimate the fixing work. They just end up nickel-and-diming themselves with administrivia, pushing around all the bug reports and having to track them through the statuses, etc. If they find 10 bugs, that’s 10 stupid Jira tickets that have to get pushed around.
That complaint resonates with me. I am of the camp (similar to the zero bug policy) where one need not log bug reports unless the bug is found in production. I think testers and developers probably find upwards of, say, 20 bugs while building a new feature (Story). It would be silly to log 20 bug reports. Much more efficient, it seems to me, to just fix the bugs and/or comment in the Story until fixed.
I mean, if a developer is testing their own work, I would not ask them to log bug reports.
A zero bug policy stops making sense when the business requirements or drivers that told us not to fix something that looks wrong (a minor bug) aren’t shared with testers later. So when a newbie starts on the team, they end up raising the same bugs again, and even if they did search for old duplicates, they won’t find them 90% of the time anyway, because they are often new to the terminology.
Basically, you end up trading the zero-bug dream for having one person on the team who has to memorise all of the past decisions. But that’s not the question, is it?
Consider an environment where bugs are just logged as a “wishlist” in Excel, because the fixing is done externally and Jira would not make sense, since closing them cannot and will never occur. You are, effectively, writing a test case database full of negative test cases, which is why it’s very expensive: generally, any non-happy-path test cases are costly to execute. If you think of your spreadsheet as a list of regression tests, it might help, because you can start to coalesce those test cases into each other and thus test faster, instead of thinking of them as individual bugs all the time, or even bothering to duplicate them into Jira as well, since they are really becoming acceptance criteria.
I recommend raising them as bug/incident tickets in Jira, even if the development/testing has been performed offshore.
It may turn out that they have already been raised by those teams in their own system but were perhaps deprioritised. If in doubt, get it raised.
People have already alluded to this in previous comments: the metric could be used to reduce vendor rates if the product is not in an acceptable state.
Great point, @conrad.braam, in your last paragraph. I agree, they’re like expensive test cases. Although they would close eventually…assuming they get fixed in a vendor’s patch.
Re: logging deferred bugs for history/learning. In my experience, new testers won’t use that resource. It’s too tedious, and I don’t think I would want them to, either. “Don’t raise a bug until you check to see if it was raised in the past” seems like a pretty slow and demoralizing way to work. There may even be value in new testers repeatedly raising previously deferred bugs. Perhaps the landscape has changed and this time we don’t defer it. Or perhaps we get annoyed to the point where we increase testability so as not to confuse testers.
Asking people to check for duplicate bugs is a convenient “source of friction”. The problem with asking people to search the old bugs first is that it produces friction, and most friction is going to be bad friction. Not always, but just often enough.
For people not following where @ejacobson and I are going: it’s when an old bug that we decided not to fix 5 years ago suddenly becomes a very real bug, either due to an environment change, some context change, or even a random regression. At that point, a fresh duplicate bug report will surface something that the team might otherwise have ignored. That’s why I am a fan of raising bugs and then marking duplicates, even if it costs the team more time than having the individual look for the old bug and make a costly assumption not to re-raise it with fresh context. The team can then own marking the new bug as a “Won’t Fix”, so the friction is still there, but it has moved to a place that gives us all some shared learning.