Help me (Richard) with my TestBash Reflection

I’m joining in with my reflection on TestBash UK and answering the following questions:

  1. What TestBash talk/workshop/activity inspired you to try something at your workplace?
    Answer: Richard Adams’ “Let’s Go Threat Modelling” really inspired me and came at the perfect moment, as I was just starting to research threat modelling. I’ve since given a talk to my company’s testing community about it, and I’m in the process of planning some game-based threat modelling sessions with my team.

  2. What have you tried at your workplace so far and what have you discovered in the process? Please share actual examples.
    Answer: Another really inspirational talk was Vernon and Stuart’s “What is Quality Coaching” talk. There were so many lightbulb moments during that session, from WAIT (Why Am I Talking? — which I do way too often, but a lot less since that talk) to reflections on coaching versus mentoring and how to properly focus and listen. It’s made a big difference in my interactions with my team and, if I’m being completely honest, at home too, in the way I listen more to my children and mentor rather than instruct them. I have a lot to thank them for.

  3. What are you stuck on? What’s holding you back from moving forward with implementing your learnings? What sort of help do you think you need?
    Answer: I attended a reporting workshop, hoping to get an answer to which metrics are really worth reporting and hold the most value. However, that’s such a tricky question, and it wasn’t the main focus of the workshop, so I didn’t really get an answer. If anyone has any great ideas on this, please feel free to get in touch. :slight_smile:


I’m nowhere near having answers for which metrics we should be reporting, but I do have two questions that might help when judging a metric:

  1. Does the metric really measure what we want to know? Especially when we are measuring something non-numeric or part of a complex system, like quality, it is tempting to latch onto the things we can quantify instead of what we actually want to assess qualitatively. Then we fall into the trap of the metric (e.g. a low number of defects on tickets) hiding what’s really happening (we assume this means good code quality, when it could just as easily mean poor testing).
  2. Can people cheat the metric, and what impact would that have? If I know my performance is measured in a certain way, I might be tempted to tweak what I’m doing so that my score is boosted. I think it’s a fundamental character trait of testers to want to manipulate such systems. But are those tweaks useful for achieving the actual goal, or will setting the metric lead me to bend over backwards to score well instead of doing my job? For example, if I am rewarded for the number of genuine bugs reported, I might split bugs into lots of separate cases, even though this makes it harder for the developers to work on them.