Hi Cassandra (@cassandrahl), I’ve run Quality Radar sessions at our organisation after seeing @cakehurstryan’s excellent talk on the subject at BCS SIGiST (I think). Here’s my perspective:
- Did you run a workshop, or “fill it out” yourself / informally; how did that go?
I ran it in a workshop-style format across multiple sessions. The way that played out was:
- 2 hours (2 x 1 hour sessions) - Introduction to the Radar, dot voting on the items on it, and discussion of the items that people had either never encountered before or were unsure about.
- 2 hours (2 x 1 hour sessions) - Discussion of the priority of the items first by Quadrant and then across the whole of the radar.
I would prefer to run it in a single workshop session, but when I ran it the schedule just wouldn’t allow the whole team to be busy for a whole morning. Without all the starting and stopping I think it should fit into a morning (i.e. 3 hours with short breaks between activities). The team reacted well, especially to Quality practices they had never encountered before.
- What do your stickies and assessments look like, and what do they tell you?
Our dot vote options were: “Doing”, “Do more”, “New”, “Do less”, “Questions” and “Doesn’t apply”. A Quadrant, after voting, looked like this:
Based on the dot voting we concluded there were practices that were better embedded (Spikes, Proof of Concept), practices that were less well embedded (Design Reviews, Risk Analysis of Stories, Risk Storming), and some that were either entirely new to team members or that they had questions about (anything with yellow or pink dots). We used the general proportion of the different dot-vote types to draw a conclusion for each sticky.
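If you wanted to make that “general proportion” step a bit more mechanical, here’s a minimal sketch of the idea. The stickies and tallies are entirely made up for illustration; classifying by the dominant vote type is my assumption, not a prescribed part of the Quality Radar:

```python
from collections import Counter

# Hypothetical dot-vote tallies per sticky; the categories match our board:
# "Doing", "Do more", "New", "Do less", "Questions", "Doesn't apply".
votes = {
    "Spikes": ["Doing"] * 6 + ["Do more"],
    "Risk Storming": ["Do more"] * 4 + ["New"] * 2,
    "Mutation Testing": ["New"] * 5 + ["Questions"] * 2,
}

def classify(sticky_votes):
    """Summarise a sticky by its dominant vote type and that type's share."""
    counts = Counter(sticky_votes)
    top, n = counts.most_common(1)[0]
    return top, round(n / len(sticky_votes), 2)

for sticky, v in votes.items():
    label, share = classify(v)
    print(f"{sticky}: {label} ({share:.0%} of votes)")
```

In practice we eyeballed the board rather than counting, but a tally like this could help when running the radar remotely.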
- Did the radar help to make things visible, and how?
Yes, it brought visibility to practices we were all aware of but that weren’t well embedded in team practice, as well as highlighting practices that people may never have heard of. It also clarified the team’s shared understanding of which practices applied to their work and which did not.
- What action points came out of it, and how did you come up with them?
Anything that had been voted “Do more” was ranked by priority/importance against the other stickies in its Quadrant. An item was then selected from the top items across all the Quadrants to be worked on. We did the ranking by position within a column on the collaborative board.
Most of the good discussion had already been had during the dot voting sessions, so I will change the format of this step next time I run it. I would probably take the options from each Quadrant, run an offline vote using something like the Ranking question type in MS Forms, and only have a full discussion about the final result.
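For the offline ranking step, a simple Borda count is one way to turn individual rankings into a single priority order. This is just a sketch of that idea: the option names and responses below are invented, and it doesn’t read a real MS Forms export (you’d feed in each respondent’s ordered list, best first):

```python
# Hypothetical responses to a ranking question: each inner list is one
# respondent's preferred order, best first.
responses = [
    ["Design Reviews", "Risk Storming", "Risk Analysis of Stories"],
    ["Risk Storming", "Design Reviews", "Risk Analysis of Stories"],
    ["Design Reviews", "Risk Analysis of Stories", "Risk Storming"],
]

def borda_scores(responses):
    """Aggregate rankings with a Borda count: in a list of n options,
    first place scores n-1 points and last place scores 0."""
    scores = {}
    for ranking in responses:
        n = len(ranking)
        for position, option in enumerate(ranking):
            scores[option] = scores.get(option, 0) + (n - 1 - position)
    # Highest total score = the team's overall priority.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

for option, score in borda_scores(responses):
    print(option, score)
```

However the aggregation is done, I’d still keep the final discussion about the result, since that’s where the value seems to come from.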
- Anything else you’d like to share?
A summary and takeaways/changes:
- The Quality Radar approach seemed to drive Quality visibility and identify the most useful improvement/change.
- I will be running it again with a new team soon.
- I want to try running it in a single (morning) workshop, as stopping and starting adds to the overall time taken.
- This time I will carry out the initial prioritisation individually through a Form, as the best discussion seems to happen during and after dot voting, and during the final selection.
I am more than happy to go into (even more) detail if you want to message me.