Testing Dashboard

I’d like to create a testing dashboard that helps answer the question of “how are we feeling about the release?”

I stumbled across this example and was curious if others do something similar and what it looks like on their teams: Low-Tech Testing Dashboard - Satisfice, Inc.

Thanks!
Morgan

Oh boy, where to start with this :slight_smile:
I’ve tried a lot of things over the years to get a view of where we are with a release and how "good" it is.
Here are some points that come back every time:

  • Who are your stakeholders?
    Business people need a different view than IT people :slight_smile:
  • What are the gating requirements for your release?
    We actually made a Definition of Done (DoD) for our release.
  • Percentages alone don’t tell you much; absolute figures do, but you probably want to show both.
  • Use a simple RAG (red/amber/green) status for your different test levels.
  • Don’t forget your non-functional testing.
  • KISS is good :slight_smile: Make sure a random person on the street understands your concerns with the release.
  • People like colors and graphs … especially business people.
  • Make sure your dashboard allows for proactive steering. By this I mean you should be able to spot trends early, not only at the actual release moment …

I hope these already help. In general I keep two views of a release dashboard.

A simple one showing the different features we are releasing, each with its RAG status per test level: functional testing, acceptance testing, and non-functional testing, which I split into regression, performance, and security. This view has no figures or percentages.

A very detailed view with all the different numbers you can imagine: number of test cases, number of defects, number of deployments, etc. This view serves as input for the simple one.

Both are shared with my stakeholders and I’ll leave it up to them to choose which one they want to see. Most of the time they go with the simple one and ask me for more details if needed.
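
To make that a bit more concrete, here is a rough sketch in Python of how the detailed numbers could roll up into the simple RAG view. The thresholds, feature names, and data shape are invented for the example, not a description of my actual tooling.

```python
# Rough sketch: roll detailed test figures up into a RAG status per feature
# and test level. Thresholds and data are illustrative only.

def rag_status(passed, failed, open_blocking_defects):
    """Turn raw figures for one feature/test level into a RAG colour."""
    total = passed + failed
    if total == 0 or open_blocking_defects > 0:
        return "RED"            # nothing run yet, or a blocker is still open
    pass_rate = passed / total
    if pass_rate >= 0.95:
        return "GREEN"
    if pass_rate >= 0.80:
        return "AMBER"
    return "RED"

# The detailed view: raw numbers per feature and test level (made-up data).
detailed = {
    ("Feature X", "Functional"):     {"passed": 48, "failed": 2, "open_blocking_defects": 0},
    ("Feature X", "Non-functional"): {"passed": 10, "failed": 5, "open_blocking_defects": 1},
    ("Feature Y", "Acceptance"):     {"passed": 20, "failed": 1, "open_blocking_defects": 0},
}

# The simple view: one colour per feature and test level, no figures shown.
simple_view = {key: rag_status(**figures) for key, figures in detailed.items()}

for (feature, level), colour in simple_view.items():
    print(f"{feature:<10} {level:<15} {colour}")
```

The point is that the raw figures stay in the detailed view, and only the resulting colour per feature and test level shows up in the simple one.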

Hope this helps a bit :slight_smile: If not, feel free to ask.

1 Like

The audience for this is just the immediate dev team (manager, Scrum Master, PO, developers).

I totally agree with the KISS approach - I’m really trying to keep this SUPER simple (otherwise no one will give a crap or use it). I’m the only QA person in-house - I’m happy to educate, but I do want this to be easily understood by non-QA.

My goal is this: have an at-a-glance way for others on the dev team (it’s a rather LARGE team) to see a high-level view of the current “assessed” product quality. I have one item I’m struggling to put into words; maybe you can help me figure that out…

Question to be answered with this chart: If we were to release today, how are we feeling about what we KNOW on the product quality?

Here’s where I’m at so far - I’ve pared it down from 6 columns to 4 and it looks like so (excuse the cruddy table, I’m not sure how to format it here!)…

| Product | Testing Type | Quality Assessment | Comments |
| --- | --- | --- | --- |
| Product A | Functional | :slight_smile: | Passed Testing |
| Product A | Non-Functional | :frowning: | Ticket #355 - Cancel button missing on new screen |
What I don’t like about this is Testing Type, because it’s not widely understood. I almost broke it down by individual test type, but that’s a TON of granularity for a team that just wants the big picture. What I’d like to include instead is Area Tested, or something to indicate the different testing requirements that have to be met. For example, I could have one for Functionality (which includes system, integration, end-to-end, stress testing, etc.) and one for UI (which includes the design requirements, usability, etc.). But I’m struggling with the wording… I looked at Bug classification for a bit but can’t quite put my finger on what I’m looking for.

:thinking: Thoughts? Thanks!

Phew, big challenge ahead :slight_smile:
Well, I made a heat map here as well. It breaks the overall application down into different products and applies a simple RAG status to each product based on criteria like test coverage, number of defects, and failed vs. passed test cases. All of that sits behind the heat map and isn’t visible, though. I’ll try to post a picture later. If the breakdown between Functional testing and UI works for you, go for it; try it and adapt later on.
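
Roughly the idea in code form, as a Python sketch: each criterion gets its own colour, and the product takes the worst of them. The products, figures, and thresholds below are invented examples; the real criteria sit in a spreadsheet behind the heat map.

```python
# Heat-map sketch: one colour per criterion, the product takes the worst one.
# Products, figures and thresholds are invented examples.

ORDER = {"GREEN": 0, "AMBER": 1, "RED": 2}

def colour_coverage(pct):          # % of planned tests actually executed
    return "GREEN" if pct >= 90 else "AMBER" if pct >= 70 else "RED"

def colour_defects(open_count):    # open defects logged against the product
    return "GREEN" if open_count == 0 else "AMBER" if open_count <= 5 else "RED"

def colour_pass_rate(passed, failed):
    rate = passed / (passed + failed) if (passed + failed) else 0.0
    return "GREEN" if rate >= 0.95 else "AMBER" if rate >= 0.80 else "RED"

products = {
    "Product A": {"coverage": 95, "open_defects": 1, "passed": 120, "failed": 3},
    "Product B": {"coverage": 60, "open_defects": 8, "passed": 40,  "failed": 12},
}

for name, m in products.items():
    cells = {
        "coverage":  colour_coverage(m["coverage"]),
        "defects":   colour_defects(m["open_defects"]),
        "pass rate": colour_pass_rate(m["passed"], m["failed"]),
    }
    overall = max(cells.values(), key=ORDER.get)   # worst colour wins
    print(f"{name}: {overall}  {cells}")
```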

Defect categories: well, you can go funky here or keep it very simple.
Very simple is two categories:
Release blocking
Non Release blocking

Funky:
Blocking
Partially blocking
Non blocking - disturbing
Cosmetic

I would keep it simple in your case and go with the release-blocking / non-release-blocking option.
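
If you ever do want the funkier categories for analysis, you can keep them internally and collapse them for the dashboard. A tiny Python sketch of what I mean (the mapping itself is just an example to agree with your team):

```python
# Collapse a granular defect scheme into the two buckets shown on the dashboard.
# Which categories count as release blocking is only an example mapping.

RELEASE_BLOCKING = {"Blocking", "Partially blocking"}

def release_bucket(category):
    return "Release blocking" if category in RELEASE_BLOCKING else "Non release blocking"

defects = [
    {"id": "D-101", "category": "Blocking"},
    {"id": "D-102", "category": "Cosmetic"},
    {"id": "D-103", "category": "Non blocking - disturbing"},
]

for defect in defects:
    print(defect["id"], "->", release_bucket(defect["category"]))
```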

Granularity is nice for analyzing but a nightmare for simple reporting. :slight_smile:

Hope this helps a bit more. If not, we should look for a different way of exchanging ideas that goes a bit faster :smiley:

1 Like
  • Who’s asking that?
  • How often do they ask?
  • Do they need only the feeling or more? If more, what exactly would they like to have?
  • How often is the release normally?
  • Who’s in charge of doing the release?
  • Are all the things in the release regularly tested/testable? By whom, when, and how? Are you aware of that?
  • What does a release usually mean? Is it the master code base?
  • Does the release have some extra process that is required to be followed?
  • Are you in the tester role when asked about the feeling of the release? Or a different one? (I’ve been in these roles: release manager, developer, tester, test manager, observer, business representative, support/monitoring)
  • What are your relationship and level of communication with the Product Manager/Owner and the release manager before and during the release?
2 Likes