We have a couple of tools that measure cyclomatic complexity and cognitive complexity respectively – I don’t actually know what they are (the product teams implemented the tooling!) but will try to find out for you.
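In the meantime, for anyone curious: cyclomatic complexity is conventionally defined as the number of independent paths through a function, computed as decision points + 1. This is a toy illustration of that counting rule (not the actual tooling mentioned above – the function name and branch list here are my own simplification), written as a small Python AST walk:

```python
# Toy cyclomatic complexity counter: decision points + 1.
# This is a simplified sketch, not any team's production tooling.
import ast

def cyclomatic_complexity(source: str) -> int:
    """Return decision points + 1 for a snippet of Python source."""
    tree = ast.parse(source)
    decisions = 0
    for node in ast.walk(tree):
        # Each of these constructs introduces an extra path through the code.
        if isinstance(node, (ast.If, ast.For, ast.While,
                             ast.IfExp, ast.ExceptHandler)):
            decisions += 1
        elif isinstance(node, ast.BoolOp):
            # 'a and b and c' adds len(values) - 1 short-circuit branches.
            decisions += len(node.values) - 1
    return decisions + 1

snippet = """
def grade(score):
    if score >= 90:
        return "A"
    elif score >= 70:
        return "B"
    return "C"
"""
print(cyclomatic_complexity(snippet))  # 3: two branch points + 1
```

Cognitive complexity is a related but distinct measure that additionally penalises nesting, on the grounds that deeply nested branches are harder for humans to follow than sequential ones.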
We’re actually still refining this measure, trying to work out how we identify essential complexity versus unnecessary complexity, and how we reflect expected changes in complexity (i.e., adding new features) in our targets. It’s (probably predictably!) turned out not to be the simplest thing to measure, despite us initially hoping it’d be a quick pseudo-measure for developability.
Hopefully, they use that information to inspire them to improve quality where they can! The results are not intended to be attributed to teams, because we recognise that teams can’t change quality without leadership engagement. I certainly hope that the narrative we send out with each quarterly report makes clear that this is to help us all understand quality, and not to point fingers at anyone.
Pretty difficult – it took iteration, and a lot of hard questions about whether we were happy with the level we were at.
An example of our desired level being very different to our current level is accessibility – at the time we started measuring quality, we were also reconsidering our quality bar in this area (realising we wanted to do better!). This meant that initially, we set ourselves very high targets that we were nowhere near achieving. As we came to understand our target better, and the deficit between it and where we were, we settled on a roadmap to get there over time, and changed our targets to match that roadmap (i.e., measuring against intermediate targets, whilst increasing these steadily). This did a much better job of telling us how we were doing against our roadmap, and it was also a lot less daunting/demoralising.
We have a conversation after the report is circulated and highlight metrics that have notably decreased, including those that have gone red. We always look into why it’s gone red right away; often with things that go red, there’s something that’s happened to make it so (like a new release has regressed quality in a specific way) and we’ve found it’s relatively easy to react to and repair that soon after the event. However, sometimes, for more complex issues, our investigation results in a plan to respond, and this is prioritised alongside other work, rather than resolved with immediate action!
For us, quality champion is a role but not someone’s whole job. As I mentioned during the live Q&A, lots of our quality champions are people who hold relevant roles across the teams – product managers, our UX team, a security expert, the head of product customer support. Becoming the quality champion was often just a formalisation of things they were already doing!
Hi @testnsolve! By this question, I think you’re asking how I got the team on board with the overall process? There were two key ways I did this, which varied mostly depending on the person’s role. The first was to talk to them about what I was hoping to achieve with the process, explaining the problems I was observing and the successes I was imagining! The second was to just do and then share! I found that every time I published something (be it the quality definition or the quality report), people were curious enough, and invested enough in the success of the product, to be interested in understanding how it had been created and what it meant for them going forward! This made it really easy to align the whole team with our practices.
I’m sorry if I’ve misunderstood your question – please feel free to follow up if so and I’ll be happy to try again!
Ha! Very good question – yes, I think with the example of unlabelled elements I gave in the talk, it did have an overall positive impact on the team. It highlighted how easy it was to comply with some standard accessibility requirements. With the team making an effort to deliver on that metric, it did help to set expectations within the team for achieving other accessibility requirements.
Microsoft Publisher (#represent). It was a bit fiddly to create the template initially, and I originally intended to move to something purpose-built, and perhaps somewhat automated, once I’d done a proof of concept. However, now that I have a template, it hasn’t been difficult to update each quarter, so that hasn’t been a priority just yet. I’d love to hear if anyone knows of any tools that would work well for this!
If I understand the question, you’re asking whether we’ve tried to validate the measurements we’re taking against the opinions that our users hold? If so, yes, absolutely! When we first created our metrics, we tried to integrate user experience into them as much as possible by making use of various direct feedback mechanisms we already had in place. However, this is something I’m actively looking to improve on through post-validation of the reports, because I think it’s critical to the success of this process to ensure your measurements (and targets!) match your users’ opinions.
Considering our current quality level wasn’t part of the metrics-defining process – as I talked through during the presentation, we came up with our metrics directly from our definition of quality, before we began to consider our current performance. Your metrics should be things that you care about and that represent the quality aspect well; that doesn’t imply anything about your initial position. It’s not a problem at all to have a metric that you do well in, if that accurately depicts your quality level; quite the opposite! It’s great to show your team that you’re doing well and to monitor it to ensure you don’t unknowingly falter.
Thanks for all the great questions! I’d love to continue discussing if anyone has any follow-up questions/thoughts or has any feedback after trying something from the presentation. Thanks again for your time and for giving me the opportunity to share my ideas with you!