Prioritising results of a test from an external source

Continuing the discussion from How do you evaluate or prioritise bugs that are reported to you from external team members in your company?:

The challenge: say you received 10 valid results from an external source but only accepted & fixed 5 of them.

  1. How did you make that priority call & decide that those 5 bugs were important to fix, while the remaining ones were ‘good to know’ but not something for you to action?

  2. What factors do you take into consideration, e.g. priority = customer impact + impact on core functionality + number of environments impacted ← what does this equation look like?

  3. If you had to make a priority call on 60 bugs received externally, besides the bug report itself, what other information would help you make that priority call quicker?


Are these 60 bugs from a crowd testing company, by any chance?

I ask because with my crowd testing hat on, it’s usual that the client reviews everything, rejects some of them, accepts some but marks them as “will not fix”, then computes a priority for the rest based on the likelihood of encountering the bug x the impact of the bug. But the point is, they’re all evaluated like any other bug.
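If it helps to make that concrete, here’s a minimal sketch of the triage flow I’m describing, in Python. The `Bug` fields, the 1–5 scales and the status labels are all illustrative assumptions, not any crowd testing platform’s actual API:

```python
from dataclasses import dataclass

@dataclass
class Bug:
    title: str
    likelihood: int  # assumed scale: 1 (rarely hit) .. 5 (almost every user hits it)
    impact: int      # assumed scale: 1 (cosmetic) .. 5 (blocks core functionality)
    status: str      # "accepted", "rejected" or "wont_fix"

def prioritise(bugs):
    # Every bug gets reviewed; only the accepted ones get a priority score.
    accepted = [b for b in bugs if b.status == "accepted"]
    # priority = likelihood of encountering the bug x the impact of the bug
    return sorted(accepted, key=lambda b: b.likelihood * b.impact, reverse=True)

reports = [
    Bug("Checkout crashes on payment", likelihood=4, impact=5, status="accepted"),
    Bug("Typo in footer", likelihood=5, impact=1, status="wont_fix"),
    Bug("Login slow on poor connections", likelihood=2, impact=3, status="accepted"),
]

for bug in prioritise(reports):
    print(bug.likelihood * bug.impact, bug.title)
```

The point being that rejection and “will not fix” happen as separate review decisions before scoring, not instead of it.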

Hi @jon_thompson, yes, they’re from a crowd testing company.

So you are saying all the bugs are valuable, so they are all looked at in order to make a call on priority = likelihood of encountering the bug x the impact of the bug.

& this gives a priority for everything that gets reported, and the secondary step is to determine what to fix & what to leave for now?

Yes, they’re just like any other user-reported bug, except they should be better documented and reproducible. At uTest, for example, the test team leader triages all reported issues, and only sends on those that meet their strict evidence rules. They may even reproduce them to make sure they’re good enough to send. And, of course, they remove duplicate bugs!

Most of the time this is an experience call where you can assess potential impact on the business.

For example, I’ll scan customer feedback for potential improvement opportunities; very few will result in direct action, but some will. My experience and understanding of the business value allows me to filter fairly well in most cases.

I’m not really a fan of crowd-sourced users, due to the high footprint per unit of value returned, and it still requires an experienced person to filter things fast. Most things I could say no to in about 5 seconds, but then something might pique my interest and I’d take a longer look.

That look will normally land on one of: clear bug, needs fixing; nice to have but not now; needs further investigation; generates a question for the team; or simply “this is interesting, I’ll have a chat with the developers about it”. I need good insight into the business risk to do this, but often that comes from similar products and projects.

Here is an interesting thing: when I am doing the testing I go through very similar steps and end up with the exact same options (clear bug, needs fixing; nice to have but not now; needs further investigation; generates a question for the team; or simply “this is interesting, I’ll have a chat with the developers about it”). In reality I usually have more questions and interesting risk-clarification chats than clear bugs in any given session.

That is perfectly normal, so companies that only pay for bugs fixed either really misunderstand what testing is about or are knowingly manipulating it to cut costs.

On a day-to-day basis, the questions and points of interest coming out of testing are often much more valuable than the bugs themselves.

Hi @jon_thompson, when you say “impact of the bug”, do you mean:

  1. the impact of the bug on the customer,
  2. the impact of the bug on the core functionality of the app,

or something else?