Since I joined my company, the product has become really stable, to the point that customer-reported bugs now come in only 3-4 times a year. Earlier, we had one every third week!
I understand that the performance of a tester isn’t based entirely on the number of bugs they catch or the bugs they miss. But are there any other metrics?
I have dealt with very buggy software for most of my career, which means I’ve spent a lot of time in back-and-forth loops over bug fixes and improperly implemented use cases. Could this be imposter syndrome?
Even when you are working as a solo tester, there are people around you: the developers, the PM, etc. Ask them for feedback. Talk to the developers and find out what they think about the bugs you raise. Talk to the PM and stakeholders and try to learn what end users think of the product or project you released. Unless you communicate, no one will come and tell you what flaws are in your work.
Self-review your work, track your own growth, and consider what you have learned so far in the role and what you still expect to learn there.
This is non-trivial, especially if you are comparing yourself with developers.
Developers are recognised for what they add.
Testers are recognised for what they remove.
It’s not easy recognising the absence of something.
I think that focusing on defects or test artifacts is not aligned with the business goal, which is the delivery of high-quality software. The best way of doing that is not to identify defects after they have been written, but before they have been written, or as they are being written. Essentially, this means a shift-left approach.
Are you supporting the requirements elicitation process and making an active contribution to it?
Are you pairing or mobbing with developers to prevent defects as they are being written? Much has been made of synchronous code review within these activities, but testing can benefit too.
Whether solo or with a team, you should always consider all aspects of your testing work when trying to evaluate your performance: test coverage, bug detection and resolution, testing efficiency, automation (if any), feedback loops and communication, user experience, etc. One simple example of turning these into a number is sketched below.

I think that working as the only tester raises many challenges, not only the assessment of your work. The work, tools, procedures and so on need to be adjusted to the fact that the team is… well, just one person. We had a post about it as part of our "Sailing with Testers" series (Sailing with Testers (Part XI) | Software Test Management | Testuff). It’s interesting to learn from our users about the differences between their work when in a group of testers and when working as the only tester in the company.
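To make one of those aspects concrete: defect detection percentage (DDP) is one common way to turn "bug detection" into something you can track over time. A minimal sketch, with made-up counts standing in for whatever your own bug tracker actually reports:

```python
# Purely illustrative numbers -- swap in counts from your own bug tracker.
bugs_found_in_testing = 42      # defects caught before release
bugs_reported_by_customers = 4  # defects that escaped to production

# Defect Detection Percentage (DDP): share of all known defects caught before release.
ddp = bugs_found_in_testing / (bugs_found_in_testing + bugs_reported_by_customers)

# The escape rate is simply the complement.
escape_rate = 1 - ddp

print(f"DDP: {ddp:.1%}, escape rate: {escape_rate:.1%}")
# -> DDP: 91.3%, escape rate: 8.7%
```

The absolute number matters less than the trend: fewer escapes release over release is a reasonable signal, even for a team of one.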
This is a really good question. I think the element of being a sole tester only really comes into it in that you may not have other people around you who know as much about the role and what a good tester really looks like. So feedback from other disciplines could be biased, especially if they’re resistant to supporting quality engineering and testing.
Something you could think about is collecting concrete examples of times you’ve added value. What good outcomes would not have been achieved, had you not been there? What bad outcomes could there have been if you weren’t there? Of course, we can’t give a guaranteed answer, but that kind of framing can help others to understand the benefits of your contribution.