Masterclass Further Discussion: Error: 'Quality' Is Not Defined

Tonight, @georgiabloyce will be joining us for a masterclass focusing on how we can define and measure quality on our teams.

If we don't get to your questions on the night, we'll add them to the thread below for Georgia to answer later. If you'd like to continue the conversation from the webinar, this thread is an excellent place to do that :grin: Share resources and follow-up success stories from your learnings here!

Of course, the recording of the masterclass will be available to everyone in the Masterclass section of our website afterwards.

@georgiabloyce mentioned being a "Test Owner" - does anyone have more info on that? Cheers

We have a product that is definitely not in a mature stage… Do you think this process will fit it as well?

Georgia mentioned this article from @kwesi_peterson

Questions we didn't get to

  1. @gwendiagram: What did you use to measure code complexity?
  2. @gwendiagram: What happens to teams that arenā€™t meeting their quality metrics?
  3. Thomas Rinke: How difficult was it to align on the desired level of quality? Can you share an example where desired levels were very different and how you handled this?
  4. Elena: When one of the quality metrics goes red, do you act on it right away? How does the process go?
  5. Matylda: What does your typical work day as a quality champion look like? Now that everything is defined, what do you do (apart from checking the metrics)? :slight_smile:
  6. @testnsolve: How did you align all your teammates with quality standards and practices?
  7. Thomas Rinke: If I understood it correctly you suggested that gamification can be dangerous (optimizing for the metric and not optimizing for the goal). Have you found examples where gamification helped, e.g. by getting an initial movement?
  8. Jaya: What tool did you use to get the RAG reports?
  9. Barry E.: Have you tried to measure your current quality level against the user experience of your product?
  10. @furtz: "#crashes in production"?? You must have started from a really low base if this is one of your metrics?

Hey @jesper! I think it might be similar to a test manager role at some companies… We call it Test Owner because of the focus it puts on responsibility for product delivery, not just on management. The gist is that I'm responsible for ensuring the testing done on the product is effective. So, as well as developing the people doing the testing, I'm also responsible for delivering and improving our overall test strategy for the product. Hope that makes sense!

Hi @ifat_sharifi, really good question! We started when we were at a relatively stable state in terms of building the product, and it was valuable for us to shift our focus to quality.
I haven't tried this with a less mature product, but I can imagine it working. In my opinion, it's never too early to start thinking about your quality level and aims, and if you do start thinking about it whilst you're still very much building the product, that's when you have real power and freedom to build quality in!
I'd love to know how it goes if you do experiment with this!

We have a couple of tools that measure cyclomatic complexity and cognitive complexity respectively - I don't actually know what they are (the product teams implemented the tooling!) but will try to find out for you.
We're actually still refining this measure, trying to work out how we identify essential complexity versus unnecessary complexity, and how we reflect expected changes in complexity (i.e., adding new features) in our targets. It's (probably predictably!) turned out not to be the simplest thing to measure, despite us initially hoping it'd be a quick pseudo-measure for developability.
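
If it helps to see what that kind of measurement looks like in practice, here's a minimal sketch using the Python radon library - to be clear, I don't know that this is what our product teams use, and the sample function and threshold are made up for illustration:

```python
# A minimal sketch of measuring cyclomatic complexity in Python with the
# radon library (pip install radon). Not necessarily our teams' tooling -
# just to show the kind of number these tools produce.
from radon.complexity import cc_visit

SOURCE = '''
def shipping_cost(order):
    if order.express:
        cost = 10
    elif order.weight > 20:
        cost = 8
    else:
        cost = 5
    if order.total > 100:
        cost = 0
    return cost
'''

# cc_visit parses the source and returns one result per function/class,
# each with a .complexity score (1 + the number of decision points).
for block in cc_visit(SOURCE):
    status = "ok" if block.complexity <= 10 else "review"  # illustrative threshold
    print(f"{block.name}: complexity {block.complexity} ({status})")
```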

Hopefully, they use that information to inspire them to improve quality where they can! The results are not intended to be attributed to teams, because we recognise that teams can't change quality without leadership engagement. I certainly hope that the narrative we send out with each quarterly report makes clear that this is to help us all understand quality, and not to point fingers at anyone.

Pretty difficult – it took iteration and us asking a lot of difficult questions about whether or not we were happy with the level we were at.
An example of our desired level being very different to our current level is accessibility – at the time we started measuring quality, we were also reconsidering our quality bar in this area (realising we wanted to do better!). This meant that initially, we set ourselves very high targets that we weren't anywhere near achieving. As we came to understand our target better, and the deficit between it and where we were, we settled on a roadmap to get there over time and changed our targets to match that roadmap (i.e., measuring against intermediate targets, whilst increasing these steadily). This worked much better for telling us how we were doing against our roadmap, and it was also a lot less daunting/demoralising.
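
To make the intermediate-targets idea concrete, here's a tiny illustrative sketch - the figures are made up, not our real accessibility numbers:

```python
# Illustrative only: ramp the quarterly target linearly from today's level
# towards the end goal, so each report grades us against the roadmap rather
# than against a final target we can't reach yet.
def quarterly_targets(current: float, goal: float, quarters: int) -> list[float]:
    step = (goal - current) / quarters
    return [round(current + step * q, 3) for q in range(1, quarters + 1)]

# e.g. 40% of elements labelled today, aiming for 95% over six quarters:
print(quarterly_targets(0.40, 0.95, 6))
# [0.492, 0.583, 0.675, 0.767, 0.858, 0.95]
```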

We have a conversation after the report is circulated and highlight metrics that have notably decreased, including those that have gone red. We always look into why a metric has gone red right away; often, there's something specific that's happened to make it so (like a new release regressing quality in a particular way), and we've found it's relatively easy to react to and repair that soon after the event. However, sometimes, for more complex issues, our investigation results in a plan to respond, and this is prioritised alongside other work rather than resolved with immediate action!
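
If it helps to picture the mechanics, here's a hypothetical sketch of how a metric might be classified red/amber/green and flagged as notably decreased - the thresholds and metric values are purely illustrative, not our actual process:

```python
# Hypothetical RAG (red/amber/green) classification for a quality metric.
# The amber band, the "notable decrease" rule, and the metric values are
# illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    target: float    # desired level, e.g. 0.95 = "95% of elements labelled"
    current: float   # this quarter's measurement
    previous: float  # last quarter's measurement

def rag_status(m: Metric, amber_band: float = 0.10) -> str:
    """Green at/above target, amber in the band just below it, red otherwise."""
    if m.current >= m.target:
        return "green"
    if m.current >= m.target * (1 - amber_band):
        return "amber"
    return "red"

def notably_decreased(m: Metric, drop: float = 0.05) -> bool:
    """Flag metrics that fell by more than `drop` since the last report."""
    return (m.previous - m.current) > drop

for m in [
    Metric("labelled UI elements", target=0.95, current=0.97, previous=0.96),
    Metric("crash-free sessions", target=0.99, current=0.85, previous=0.98),
]:
    note = " - notable decrease, investigate now" if notably_decreased(m) else ""
    print(f"{m.name}: {rag_status(m)}{note}")
```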

For us, quality champion is a role but not someone's whole job. As I mentioned during the live Q&A, lots of our quality champions are people who hold relevant roles across the teams – product managers, our UX team, a security expert, the head of product customer support. Becoming the quality champion was often just a formalisation of things they were already doing!

Hi @testnsolve! By this question, I think you're asking how I got the team on board with the overall process? There were two key ways I did this, which varied mostly depending on the person's role. The first was to talk to them about what I was hoping to achieve with the process, explaining the problems I was observing and the successes I was imagining! The second was to just do and then share! I found that every time I published something (be it the quality definition or the quality report), people were curious enough, and invested enough in the success of the product, to be interested in understanding how it had been created and what it meant for them going forward! This made it really easy to align the whole team with our practices.
I'm sorry if I've misunderstood your question – please feel free to follow up if so and I'll be happy to try again!

Ha! Very good question – yes, I think with the example of unlabelled elements I gave in the talk, it did have an overall positive impact on the team. It highlighted how easy complying with some standard accessibility requirements was. With the team making an effort to deliver on that metric, it did help to set expectations within the team for achieving other accessibility requirements.

Microsoft Publisher (#represent). It was a bit fiddly to create the template initially, and originally I intended to move to something purpose-built and perhaps somewhat automated once I'd done a proof of concept. However, once I had a template, it's not been difficult to update each quarter, so that hasn't been a priority just yet. I'd love to hear if anyone knows of any tools that would work well for this!

If I understand the question, you're asking whether we've tried to validate the measurements we're taking against the opinions our users hold? If so, yes, absolutely! We did try to integrate user experience into our metrics as much as possible when we first created them, by making use of various direct feedback mechanisms we already had in place. However, this is something I'm actively looking to improve on through post-validation of the reports, because I think it's critical to the success of this process that your measurements (and targets!) match your users' opinions.

Considering our current quality level wasn't part of the metrics-defining process – as I talked through during the presentation, we came up with our metrics directly from our definition of quality, before we began to consider our current performance. Your metrics should be things that you care about and that represent the quality aspect well; that doesn't imply anything about your initial position. It's not a problem at all to have a metric that you do well in, if that accurately depicts your quality level; quite the opposite! It's great to show your team that you're doing well, and to monitor it to ensure you don't unknowingly falter.

Thanks for all the great questions! :relaxed: I'd love to continue discussing if anyone has any follow-up questions/thoughts, or has any feedback after trying something from the presentation. Thanks again for your time and for giving me the opportunity to share my ideas with you!
