We have a couple of tools that measure cyclomatic complexity and cognitive complexity respectively - I don't actually know which tools they are (the product teams implemented the tooling!) but will try to find out for you.
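For anyone curious what a cyclomatic-complexity check can look like in practice, here's a minimal sketch using Python and the radon library (radon is an assumption for illustration, not necessarily what our product teams actually run):

```python
# Minimal sketch: scoring cyclomatic complexity with the radon library.
# Illustrative only - this may not be what our product teams actually use.
from radon.complexity import cc_visit, cc_rank

source = '''
def classify_order(order):
    if order.total > 100:
        if order.is_vip:
            return "priority"
        return "standard"
    return "low"
'''

for block in cc_visit(source):
    # Each decision point adds to the cyclomatic complexity; cc_rank maps it to a grade.
    print(f"{block.name}: complexity={block.complexity}, rank={cc_rank(block.complexity)}")
```

Cognitive complexity is a related but distinct measure (popularised by SonarSource) that also penalises nesting and breaks in linear flow, which is why the two are often tracked side by side.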
We're actually still refining this measure, trying to work out how we identify essential complexity versus unnecessary complexity, and how we reflect expected changes in complexity (e.g., adding new features) in our targets. It's (probably predictably!) turned out not to be the simplest thing to measure, despite us initially hoping it'd be a quick pseudo-measure for developability.
Hopefully, they use that information to inspire them to improve quality where they can! The results are not intended to be attributed to teams, because we recognise that teams can't change quality without leadership engagement. I certainly hope that the narrative we send out with each quarterly report makes clear that this is to help us all understand quality, and not to point fingers at anyone.
Pretty difficult: it took iteration and us asking a lot of difficult questions about whether or not we were happy with the level we were at.
An example of our desired level being very different to our current level is accessibility: at the time we started measuring quality, we were also reconsidering our quality bar in this area (realising we wanted to do better!). This meant that initially we set ourselves very high targets that we weren't anywhere near achieving. As we came to understand our target better, and the deficit between it and where we were, we actually settled on a roadmap to get there over time, and changed our targets to match that roadmap (i.e., measuring against intermediate targets, whilst increasing these steadily). This worked much better for telling us how we were doing against our roadmap, and was also a lot less daunting and demoralising.
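Purely as an illustration of the intermediate-target idea (the metric, quarters, and numbers below are invented, not our actual roadmap), the reporting logic can be as simple as comparing against the target for the current quarter rather than the end goal:

```python
# Hypothetical ramp of intermediate targets - metric, quarters and numbers are invented.
quarterly_targets = {
    "Q1": 60,  # % of screens passing our accessibility checks
    "Q2": 70,
    "Q3": 80,
    "Q4": 90,  # the eventual quality bar we want to reach
}

def on_track(quarter: str, measured: float) -> bool:
    """Report against this quarter's target, not the final goal."""
    return measured >= quarterly_targets[quarter]

print(on_track("Q2", 72))  # True - ahead of the intermediate target, even if short of 90
```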
We have a conversation after the report is circulated and highlight metrics that have notably decreased, including those that have gone red. We always look into why something has gone red right away; often there's a specific event behind it (like a new release regressing quality in a particular way), and we've found it's relatively easy to react to and repair that soon after the event. However, for more complex issues, our investigation sometimes results in a plan to respond, and this is prioritised alongside other work rather than resolved with immediate action!
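As an aside, "going red" is ultimately just a threshold comparison against the target. A sketch of red/amber/green bucketing might look something like this (the amber margin here is invented for the example, not a threshold we actually use):

```python
# Illustrative red/amber/green bucketing for a higher-is-better metric.
# The amber margin is invented for the example.
def rag_status(measured: float, target: float, amber_margin: float = 0.1) -> str:
    """Green if the target is met, amber if within 10% of it, red otherwise."""
    if measured >= target:
        return "green"
    if measured >= target * (1 - amber_margin):
        return "amber"
    return "red"

print(rag_status(measured=78, target=90))  # 'red' - time to work out what changed
```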
For us, quality champion is a role but not someone's whole job. As I mentioned during the live Q&A, lots of our quality champions are people who hold relevant roles across the teams: product managers, our UX team, a security expert, the head of product customer support. Becoming the quality champion was often just a formalisation of things they were already doing!
Hi @testnsolve! By this question, I think you're asking how I got the team on board with the overall process? There were two key ways I did this, which varied mostly depending on the person's role. The first was to talk to them about what I was hoping to achieve with the process, explaining the problems I was observing and the successes I was imagining! The second was to just do and then share! I found that every time I published something (be it the quality definition or the quality report), people were curious and invested enough in the success of the product to be interested in understanding how it had been created and what it meant for them going forward! This made it really easy to align the whole team with our practices.
I'm sorry if I've misunderstood your question; please feel free to follow up if so and I'll be happy to try again!
Ha! Very good question: yes, I think with the example of unlabelled elements I gave in the talk, it did have an overall positive impact on the team. It highlighted how easy complying with some standard accessibility requirements was. And with the team making an effort to deliver on that metric, it helped set expectations for achieving other accessibility requirements too.
Microsoft Publisher (#represent). It was a bit fiddly to create the template initially, and I originally intended to move to something purpose-built and perhaps somewhat automated once I'd done a proof of concept. However, now that I have the template, it hasn't been difficult to update each quarter, so that hasn't been a priority just yet. I'd love to hear if anyone knows of any tools that would work well for this!
If I understand the question, you're asking whether we've tried to validate the measurements we're taking against the opinions our users hold? If so, yes, absolutely! We did try to integrate user experience into our metrics as much as possible when we first created them, by making use of various direct feedback mechanisms we already had in place. However, this is something I'm actively looking to improve on through post-validation of the reports, because I think it's critical to the success of this process to ensure your measurements (and targets!) line up with your users' opinions.
Considering our current quality level wasn't part of the metrics-defining process; as I talked through during the presentation, we came up with our metrics directly from our definition of quality, before we began to consider our current performance. Your metrics should be things that you care about and that represent the quality aspect well; that doesn't imply anything about your initial position. It's not a problem at all to have a metric that you do well in, if that accurately depicts your quality level; quite the opposite! It's great to show your team that you're doing well and to monitor it to ensure you don't unknowingly falter.
Thanks for all the great questions!
I'd love to continue the discussion if anyone has follow-up questions or thoughts, or any feedback after trying something from the presentation. Thanks again for your time and for giving me the opportunity to share my ideas with you!