Increasing visibility and closing gaps with the Quality Radar (workshop)?

Hey all, I’m looking at @nicola’s Quality Radar, and wanted to get some input on how you’ve used it in your projects / organisations.

  • Did you run a workshop, or “fill it out” yourself / informally; how did that go?
  • What do your stickies and assessments look like, and what do they tell you?
  • Did the radar help to make things visible, and how?
  • What action points came out of it, and how did you come up with them?
  • Anything else you’d like to share?

For context, I’m looking for ways to make our quality and testing efforts more visible, and to identify the improvement areas that would have the most meaningful impact. I don’t want us to do “stuff” just to do stuff, but to really reveal where improvements are needed. If you’ve achieved this in ways other than using the Quality Radar, please share those experiences too.

Thanks in advance.

I have read lots of articles recently promoting shift-right and testing in production. But for products where actual money is involved, like in my organization, which offers loans running from 5 to 20 years depending on the amount, annual income, FICO score, etc., I don’t think it’s feasible to test with real money, and on top of that, convincing stakeholders would also be a difficult task.

So I see the limitations of concepts and processes like shift-right for such products.

However, that doesn’t stop us from working on the quality of the product and enhancing the existing quality. There are many other ways to frame our process around quality, like brainstorming, regular communication, user analysis, competitor analysis, module breakdown, microservice architecture, A/B testing, heatmaps, etc.

Hi Cassandra (@cassandrahl) , I’ve run Quality Radar sessions at our organisation after seeing @cakehurstryan’s excellent talk on the subject at BCS SIGiST (I think). Here’s my perspective:

  • Did you run a workshop, or “fill it out” yourself / informally; how did that go?

I ran it in a workshop-style format across multiple sessions. The way that played out was:

  • 2 hours (2 x 1 hour sessions) - Introduction to the Radar, dot voting on the items on it, and discussion around the items that people had either never encountered before or were unsure about.
  • 2 hours (2 x 1 hour sessions) - Discussion of the priority of the items, first by Quadrant and then across the whole radar.

I would have preferred to run it in a single workshop session, but when I ran it the schedule just wouldn’t allow the whole team to be busy for an entire morning. Without all the starting and stopping, I think it would fit into a morning (i.e. 3 hours with short breaks between activities). The team reacted well, especially to Quality practices they had never encountered before.

  • What do your stickies and assessments look like, and what do they tell you?

Our dot vote options were: “Doing”, “Do more”, “New”, “Do less”, “Questions” and “Doesn’t apply”. After voting, each Quadrant ended up with a spread of these dots across its stickies.

Based on the dot voting we concluded there were practices that were better embedded (Spikes, Proof of Concept), less well embedded (Design Reviews, Risk Analysis of Stories, Risk Storming), and some that were either entirely new to team members or that they had questions about (anything with yellow or pink dots). We used the rough proportion of each dot vote type to draw conclusions for each sticky.
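If it helps to see that tallying written out, here’s a minimal sketch of how the “proportion of dot vote types per sticky” reasoning could work in code. The stickies, votes and thresholds below are entirely made up for illustration; they aren’t our actual board data:

```python
from collections import Counter

# Hypothetical dot votes per sticky; in practice these sat on the
# collaborative board rather than in any exported data structure.
votes = {
    "Spikes":          ["Doing", "Doing", "Doing", "Do more"],
    "Design Reviews":  ["Do more", "Do more", "Doing", "Questions"],
    "Risk Storming":   ["New", "New", "Questions", "Do more"],
    "Crowd Testing":   ["Doesn't apply", "Doesn't apply", "New"],
}

def classify(sticky_votes):
    """Bucket a practice by the rough proportion of each dot type.
    The 50% thresholds are illustrative, not the ones we used."""
    counts = Counter(sticky_votes)
    total = len(sticky_votes)
    if (counts["New"] + counts["Questions"]) / total >= 0.5:
        return "new / needs discussion"
    if counts["Doesn't apply"] / total >= 0.5:
        return "doesn't apply"
    if counts["Doing"] / total >= 0.5:
        return "well embedded"
    return "less embedded (candidate for 'Do more')"

for sticky, sticky_votes in votes.items():
    print(f"{sticky}: {classify(sticky_votes)}")
```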

  • Did the radar help to make things visible, and how?

Yes, it brought visibility to practices we were all aware of but that weren’t well embedded in team practice, as well as highlighting practices that some people had never heard of. It also clarified the team’s shared understanding of which practices applied to their work and which did not.

  • What action points came out of it, and how did you come up with them?

Anything that had been voted “Do more” was ranked by priority/importance against the other stickies in its Quadrant, and then an item was selected from the top items across all the Quadrants to be worked on. We did the ranking by position within a column on the collaborative board.

Most of the good discussion had already happened during the dot voting sessions, so I will change the format of this part next time I run it. I would probably take the options from each Quadrant, do an offline vote using something like the Ranking question type on MS Forms, and only have a full discussion about the final result.
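For combining the individual rankings afterwards, something simple like averaging each item’s rank position would probably do. A rough sketch, with made-up responses rather than a real Forms export:

```python
from statistics import mean

# Hypothetical ranking responses (1 = highest priority); the list-of-lists
# structure is assumed for illustration, not MS Forms' export format.
responses = [
    ["Risk Storming", "Design Reviews", "Risk Analysis of Stories"],
    ["Design Reviews", "Risk Storming", "Risk Analysis of Stories"],
    ["Risk Storming", "Risk Analysis of Stories", "Design Reviews"],
]

# Collect each item's rank positions across all responses.
ranks = {}
for response in responses:
    for position, item in enumerate(response, start=1):
        ranks.setdefault(item, []).append(position)

# Lower mean rank = higher priority; this ordering would seed the
# final discussion rather than decide it outright.
for item, positions in sorted(ranks.items(), key=lambda kv: mean(kv[1])):
    print(f"{item}: mean rank {mean(positions):.2f}")
```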

  • Anything else you’d like to share?

A summary and takeaways/changes:

  • The Quality Radar approach seemed to drive Quality visibility and identify the most useful improvement/change.
  • I will be running it again with a new team soon.
  • I want to try running it in a single (morning) workshop, as the stopping and starting adds to the time taken.
  • This time I will carry out the initial prioritisation individually through a Form, as the best discussion seems to happen during and after dot voting, and during the final selection.

I am more than happy to go into (even more) detail if you want to message me.

Hi Jon,

Thanks for your detailed and thoughtful response. I’d love to learn more from you. I will DM you on Slack.

Cassandra