How do you set measurable goals in QA when quality is hard to quantify?

I’m a manual tester transitioning into automation and QA engineering. At my new company, part of our learning and performance strategy involves setting measurable goals over a few months. We’re encouraged to propose our own, but I’m struggling with how to define success in a role where quality is nuanced.

For example, completing a set number of work items isn’t always meaningful, since project scope varies widely. Similarly, tracking bugs found feels misleading: uncovering one critical issue in senior dev work can outweigh ten minor bugs elsewhere.

Has anyone worked in a company with a similar ethos? How have you approached goal-setting in QA when impact is hard to measure?

2 Likes

How about this

A sprint-by-sprint analysis of quality:

  1. How many bugs of each type were reported:
    1. Production (by the team)
    2. Production (by the users)
    3. Missing use case or design implementation (new feature development)
    4. Regression (a logical clash with legacy stuff)
  2. The above can be linked/grouped under quality initiatives that the team must follow to produce a better product or improve it

When the quantity decreases, initiatives are being met. Who made that happen? You did!
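The sprint-by-sprint tally above can be sketched in a few lines. A minimal example, assuming bug reports can be exported as (sprint, category) pairs; the sprint names and category labels here are invented, so adapt them to whatever your tracker actually exports:

```python
from collections import Counter

# Hypothetical export from a bug tracker: (sprint, category) pairs.
# The category labels mirror the four groups listed above.
bugs = [
    ("Sprint 1", "production (team)"),
    ("Sprint 1", "production (users)"),
    ("Sprint 1", "missing use case"),
    ("Sprint 2", "production (users)"),
    ("Sprint 2", "regression"),
]

# Count bugs per (sprint, category) so the trend is visible sprint to sprint.
counts = Counter(bugs)

for (sprint, category), n in sorted(counts.items()):
    print(f"{sprint}: {category} = {n}")
```

Watching these per-category counts fall across sprints is the signal that the linked quality initiatives are working.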

2 Likes

For me, I take a step back and use a technique I got from a Stephen Covey audiobook (slightly adapted).

So imagine you’re leaving your organisation, and there’s a presentation attended by your boss, a business leader, a colleague you work closely with, and a customer. (Change the roles to suit what matters to you :laughing:)

Each of them stands up and talks about their experience of working with you. What would you want them to be saying about you on that day for you to know you had achieved everything you wanted and had the impact you dreamed of?

Write those down, and they will build your personal working mission statement… and maybe beyond work.
Now, when you set your goals, ask: what goal, achievable within the review period, can set you on the path of that mission?

3 Likes

Setting your own goals with support from the company to achieve them is often an ideal scenario.

Think about things you feel you could improve at, then try to align the value of those things with your company’s goals, and ideally your customers’ goals too.

Since you are picking up more automation, you could set a target number of courses to complete, for example, or research to do in automation, or perhaps a goal of giving a presentation on what you have learned and want to share on the automation front. If you have a project where you are doing automation, you can set some goals on that: say you will have an automated health check up and running that is easily maintainable, or pick a risk such as accessibility that you will become skilled in and add coverage for, where that may not have been done before.

Also consider how they can help you achieve those goals: paid training and courses, books ordered, and, importantly, time allocated for self-development. As you develop, this will naturally feed through into real value on projects, but I would not focus on that for now. Focus on you and your value, and see if it can align with company values.

2 Likes

I am glad to say I never worked in a company with a similar ethos, and I don’t run my company that way. I do place a strong emphasis on personal development, but this is rarely goal-oriented - the objective is simply to keep learning.

Having measurable goals seems particularly silly unless they are things like passing certain exams or certifications. Even those are of little value - I haven’t felt the need to do any exams or certifications in the last 45 years, and I don’t encourage my staff to do so either (although our most junior tester likes to collect every certification possible).

All testing metrics are bogus
If they are talking about measurable goals in the context of your work, that would be a fool’s errand because there are no statistically valid metrics in software testing. No test case is equivalent to any other test case in any way, such as the time it takes to write or to execute or its value. No bug is equivalent to any other bug in any way.

Since these things are not equivalent, it is not valid to count them, let alone perform any other analysis on them. Of course people do, and the ISTQB advocate doing so, but that doesn’t mean it’s valid - it isn’t.

What really matters?
Your company or boss ought to identify what matters to them, so you can focus on achieving it. Their primary goal may be high quality at all costs. Or it might be meeting release dates regardless of quality. Or it might be maximising sign-ups or sales, or minimising complaints or helpdesk requests.

It doesn’t make sense for a tester to be focused on maximising quality across an application if the management are primarily interested in meeting a release schedule and onboarding new users (perhaps because their bonus depends on these factors). This might sound like anathema to a tester, but it’s an important part of the context in which you’re working.

5 Likes

Hi Jonny,

Congrats, first of all, for thinking to set goals and plan ahead. First things first: I don’t think you need to “transition” from being a manual tester to an “automation tester”. That would only add unseen pressure. You are a Tester, a Quality expert, regardless of manual or automation.

Finding bugs is sadly considered only a bare minimum or “part of the job” these days. Still, take pride in it, as not everyone can do it! :slight_smile:

Now the goals part. First of all, it’s very important to discuss with your Line Manager:

  • What are their expectations of you? (They sit with the execs.)
  • What does the business expect of a Quality expert in your company?

Setting goals becomes easy once you have a clear picture of expectations; without knowing them, all effort is a wild goose chase. Take a step back and reflect on what interests you and what will be beneficial for the company/business in QE and Test Automation.

Wishing you all the success! :slight_smile:

2 Likes

I agree with much of @steve.green’s criticism of goal setting. There is an alternative to setting goals, which is to listen to the Voice of the Process and work out how to improve. A tool I have used to do this is a Process Behaviour chart. I have a talk for MoT about process behaviour charts: Enhance your performance tests and more with process behaviour charts. These charts can be used to analyse observational data, such as the metrics suggested in this discussion. The book “Understanding Variation: The Key to Managing Chaos” by Donald J. Wheeler is a great resource for gaining a better understanding of process behaviour charts.
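To make the idea concrete, here is a minimal sketch of how the limits for an XmR (process behaviour) chart are computed, using the standard constants from Wheeler’s work (2.66 for the individuals chart, 3.268 for the moving-range chart). The weekly bug counts are invented example data:

```python
def xmr_limits(values):
    """Compute XmR (process behaviour) chart limits for a series of observations.

    Returns the centre line, the upper/lower natural process limits for the
    individual values, and the upper limit for the moving ranges. Points
    outside these limits signal exceptional variation worth investigating.
    """
    mean_x = sum(values) / len(values)
    # Moving ranges: absolute difference between each consecutive pair.
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mean_mr = sum(moving_ranges) / len(moving_ranges)
    return {
        "centre": mean_x,
        "upper_limit": mean_x + 2.66 * mean_mr,  # individuals-chart limits
        "lower_limit": mean_x - 2.66 * mean_mr,
        "range_limit": 3.268 * mean_mr,          # moving-range chart upper limit
    }

# Invented example: weekly counts of production bug reports.
weekly_bugs = [4, 6, 5, 7, 3, 5, 6, 4]
limits = xmr_limits(weekly_bugs)
```

A count falling above `upper_limit` is a signal to look for a special cause, rather than a reason to set a numeric target; the chart tells you when the process has genuinely changed.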

2 Likes

I have bugs categorized into 3 types: obvious, workflow, and edge.

I normally measure how many obvious ones are reaching production. This helps me dissect what is important to measure overall.

2 Likes

Thank you for your reply, we don’t actually work in sprints at my company. Instead, we release on a weekly, fortnightly, or sometimes monthly basis, with new projects and bug fixes bundled into those releases.

At the moment, we don’t really do any structured analysis of the problems that lead to fixes. So I guess my first step is to start asking why a fix was raised: was it found by Support, discovered in Testing, or reported by a Customer? From there, perhaps I distinguish between software that’s genuinely broken versus software that isn’t meeting customer expectations.

I know that’s a simplified example, but since I’m working in a small team, I’m starting from scratch in terms of how we analyse our performance. That’s great, though: you’ve given me a starting point and something I can adapt for our way of working. Once I have some data, I can start creating goals, such as reducing issues found in production. It does make me wonder, though: what if we suddenly get a fastidious customer who spends a lot of time hunting for minor bugs? Those increases wouldn’t be an accurate reflection of how well our testing team is doing.

Thank you for that reply. I’ve just purchased How to Develop Your Personal Mission Statement from Audible. I’ll take a listen. I love the idea of stepping back and getting to the heart of what I’m trying to do. It makes sense that if I create work goals that align with my personal values and mission, I’ll feel more invested in them and more motivated to achieve them.

I’ve had an attempt at a mission statement… hmmm… this is difficult to read back because this isn’t me, but I’d like it to be.

“I strive for excellence and integrity, making pragmatic decisions that balance business needs with quality. I am organised, engaged, and a problem-solver with a keen eye for detail. I bring positivity, kindness, and humour to my work, ensuring that collaboration is both effective and enjoyable.”

1 Like

One thing that helped me when I was in the same spot was shifting the goals away from raw output and toward clarity, reliability, and repeatability. Counting bugs or tickets always felt like chasing the wrong metric, so I started framing goals around things that directly improved the team’s ability to build and test with confidence.

A few examples that worked well:

• reducing flakiness in a specific area of the automation suite
• tightening the feedback loop for a feature by improving test coverage or test data setups
• improving traceability between requirements, tests, and outcomes so work items were easier to review
• documenting recurring issues or edge cases so onboarding and handoffs improved

Tools helped too. When we started tracking test runs and patterns more cleanly (we used Tuskr for that along with Qase earlier on), it became easier to set goals that were tied to stability rather than generic ticket counts. Seeing trends in failures or gaps over time gave me enough insight to propose goals that actually mattered, like reducing regressions in a module or speeding up review cycles.
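The flakiness goal in particular lends itself to a simple measurement: scan a run history for tests that both pass and fail on the same code. A rough sketch, with invented test names and run data (any test-management tool’s export of per-run pass/fail results would do):

```python
from collections import defaultdict

def flaky_tests(runs):
    """Flag tests whose recorded runs contain both passes and failures.

    `runs` is a list of (test_name, passed) pairs. A test counts as flaky
    when its failure rate is strictly between 0 and 1: it sometimes passes
    and sometimes fails without the code under test changing.
    """
    outcomes = defaultdict(list)
    for name, passed in runs:
        outcomes[name].append(passed)
    return {
        name: results.count(False) / len(results)
        for name, results in outcomes.items()
        if 0 < results.count(False) < len(results)
    }

# Invented run history: "checkout" is intermittent, "login" always passes,
# and "search" always fails (broken, not flaky).
history = [
    ("checkout", True), ("checkout", False), ("checkout", True),
    ("login", True), ("login", True),
    ("search", False), ("search", False),
]
flaky = flaky_tests(history)
```

A goal like “reduce the flaky rate in the checkout suite below 5%” is then grounded in data you can re-measure at review time.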

I was at a talk the other day where they talked about using user stories for setting growth and goals, and it was interesting. The structure we were given was a little more than the usual “As a… I want to… in order to…”, as it included timings and more context. Our group settled on this:

  • As a test engineer with:
    • Some Javascript experience
    • 3 hours a week to spend on learning
  • I want
    • within 3 months
    • to be able to write automated tests using Playwright for acceptance criteria
  • In order to participate in additional testing activities for a user story

To form this we were prompted to think about how much time we had and what would be achievable by when… plus why we were doing it, as two of the three of us thought “because it’s the thing to do”, which isn’t great.

When it comes to metrics, the best advice I was given was to focus on the story. What do you want to achieve? Then think about how you’ll know if you can do it. It doesn’t need to be hard quantitative data. It can be qualitative, such as feedback from 1:1s, retros, etc.

Side note: I was a quality engineer doing sod all automation. There’s a lot more to quality & testing than the best means of executing a particular type of tests.

2 Likes

This is really helpful, thank you. Yes, that’s great advice, assessing my value and what I can offer, then aligning that with the company’s goals and aspirations, sounds like a winning approach. I like the idea of setting goals to research the feasibility of moving into different areas and presenting my ideas. It makes complete sense to incorporate courses into this; I’ll pick out some relevant MOT courses and suggest completing them within a set timeframe. I hadn’t even considered books. I generally have a few on the go, so agreeing to read them as part of a structured goal, perhaps with a presentation of what I’ve learned and how we can utilise that new knowledge, sounds like it could work well.

Thank you for your response. I agree, there are so many factors that it seems impossible to measure performance through testing metrics. I can see some value in setting goals and conducting reviews; it’s good to have something to aim for, and that structure can encourage more proactive thinking. Having dedicated time to invest in pursuing proactive ideas with your boss also seems worthwhile.

I’m quite excited by the opportunity to incorporate that structure into what I’m doing. I’m realising that setting goals for personal development is the most useful approach. I take your point about testing metrics, and I wonder if anyone has successfully managed to record metrics and make tangible use of that data.

I agree that understanding what is important to my boss and what the company wants is essential. I’m asking these questions ahead of discussions with my Line Manager so I’m prepared and able to offer constructive ideas. I’ve been doing manual testing for twenty years, coming from a support background, so I’ve missed the academic side of software testing.

I’m going to attempt some MOT qualifications, because I’m interested to see how best practices compare to the more ad hoc approaches I’ve used for software testing over the years. I wonder how far off the mark I was.

Just thinking, I’ve been hearing more about how we’ll be using AI tools in the future. It seems we’ll really need to put effort into producing higher-quality documentation that can be used by AI for internal and customer research and problem-solving. I ought to incorporate that into my goals, perhaps by comparing documentation created now with future versions and demonstrating how it has improved and become AI-ready.

If you want to read about someone who recorded and used metrics extensively, you can do no better than to read Capers Jones’s books. He was certainly successful in terms of getting well paid consultancy gigs.

Whether he was successful in terms of delivering value to his clients is open to question. I suspect his clients were the kind of people who “manage by numbers”, as is common in America, and who often preside over catastrophic failures because they are measuring the wrong things and making the wrong decisions as a result. I have first-hand experience of this, having worked for four American companies. For your own sanity, I suggest you never work for one.

Personally, I have not found a single statistically valid metric in any of Capers Jones’s writings. That’s not to say there aren’t any, but if there are, they’re well hidden. I find his entire approach utterly absurd, but I encourage you to read it just so you know there are people out there perpetuating this stuff. But for heaven’s sake don’t implement any of it.

Qualifications
I haven’t done any of the MOT qualifications, so I can’t comment on them. However, I adhere to the view that there are no best practices - see https://context-driven-testing.com.

Ad hoc testing
It’s often asserted that testing either follows “best practices” (as per the appalling ISTQB) or it’s ad hoc (which is intended as a criticism). This is a false dichotomy. Some of us who specialise in exploratory testing have developed methodologies that allow for a great deal of exploration and investigation while providing the necessary level of planning, tracking, reporting etc., so we know what we’ve done, what we haven’t done and what the remaining risks are. The bugs we find are usually important and reproducible. Examples include:

  • James and Jon Bach developed session-based testing, in which exploratory testing is time boxed to enable planning and tracking of resources.
  • James Bach and Michael Bolton developed their RST methodology.
  • I developed a methodology that was inspired by their work, but which is significantly different.

These all provide a “third way” that is more effective and efficient than either a “best practice” or ad hoc approach.

You can get all the session-based testing material free, including a tool that supports it. I don’t like the tool, so we developed an equivalent using Excel spreadsheets and VBA (no, you can’t have it because it’s very hacky, but you can easily develop your own). You can still do the RST course, which I recommend - I put all my team on it. I have a four-day course, but it would require an obscenely large financial inducement for me to deliver it again.

2 Likes

Thank you.
I appreciate your positivity and encouragement.
I’m more than happy to stop referring to myself as a manual tester, re‑inventing myself as a Quality Expert is motivating in itself. It helps me frame what I’m doing and the kinds of goals and achievements I should be aiming for.
I’m looking forward to meeting with my line management as a Quality Expert and talking about how the company would like to use my expertise.

I think “measurable goals” often gets us thinking about numbers and metrics. But a goal can also be a simple done-or-not-done one; I don’t think “measurable” has to mean countable.

For example, two of the goals from my last review were to get the UI automated checks running in the pipeline, and to conduct at least two mentoring sessions with devs on quality and testing topics. These are simple, “did you do it or not?” goals, which serve a specific purpose (increasing fast feedback and increasing knowledge / skills within the team), as opposed to something like number of cases executed or bugs reported, which, as you quite rightly said, don’t really say or add much.

Other goals could be to complete a certification or training course, to automate a particular flow, or to formalise and document a test strategy. Non-arbitrary, objective goals; no counting involved.

2 Likes

This is really interesting. I’m hoping my professional membership sponsorship will come through early next year, so I’ll make a note to check out your talk. I suppose the idea of using Process Behaviour Charts could be applied to monitoring and improving a wide range of practices. I’ll look to purchase the book as well, though as usual I’m struggling to find an affordable copy.

1 Like

That’s interesting. It’s not something I’ve been doing, but it makes sense to analyse issues found in production and classify them. It seems like a natural way to spark discussions about quality, tracing the reason for a failure as far back as it goes, even to when the customer first mentioned their requirement to commercial. It’s a nice way to shift things left. I guess you’re saying to use this extra analysis to identify weak areas, then create goals to strengthen them, rather than using those figures directly as a goal. I like it.

@checkout_champion Yes, Process Behaviour Charts can be used to improve a wide range of practices. They were originally developed to help improve the quality of telephones. Please let me know if I can help.

1 Like