The only tester on my team - how would you develop the QA of the software?

Hello everyone

I was lucky to be selected to join a startup company a little while back.

I'm the only tester on the team.

The only issue is that the work is not consistent, and I'm trying to understand what I can do to contribute regularly to the team's software.

Currently I'm just testing new features on dev builds, and that's it. I'd like to do more than that.

If you were in a new company where no testing existed, what would you do in my situation?
I'll try to give as much information as I can without revealing the company.

In my previous role, whenever a new build was released, we would run a set of test cases.
Usually half of the test case set, covering just the important test cases; the full set took longer to run but was more thorough.
In some of my other roles I also did light smoke testing each day to make sure the software was still functional, since other APIs would sometimes knock the service down.
I also wrote test cases, but they took up a lot of my time. So in this role I could streamline them to just the title of the test case, no steps, and a checkbox for whether it passed.
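To make the "title plus checkbox" idea concrete, here is a minimal sketch in Python; the check names and structure are my own illustration, not a prescribed format:

```python
# A minimal sketch of a lightweight test checklist: just a title and a
# pass/fail flag per check, no detailed steps. The items are hypothetical.
checklist = {
    "Login with valid credentials": None,   # None = not yet run
    "Upload a profile picture": None,
    "Search returns results": None,
}

def record(title, passed):
    """Mark a checklist item as passed (True) or failed (False)."""
    checklist[title] = passed

def summary():
    """Return counts of (passed, failed, not-yet-run) items."""
    passed = sum(1 for v in checklist.values() if v is True)
    failed = sum(1 for v in checklist.values() if v is False)
    pending = sum(1 for v in checklist.values() if v is None)
    return passed, failed, pending

record("Login with valid credentials", True)
record("Search returns results", False)
print(summary())  # (1, 1, 1)
```

Even something this small gives you a per-build record of what was checked and what failed, without the overhead of writing out steps.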


PS: Chris makes a good initial point below; please read his reply first, then come back to this one.

Here are some ideas in the form of questions:

  • How early are you involved? Can you step in ahead of things? E.g. discussing with clients the idea of implementing something, being part of business analysis and technical brainstorming, learning to make sense of diagrams or mockups, etc. Get curious and learn fast about anything technical, business, domain, product, project, or resource related so that others see your value and invite you in early; I was part of more technical/business things than any developer was.
  • Is the product globalized, internationalized, localized, or translated? Do you see potential or existing problems with that? I was helping development and business cope with this missing skill or technical issues related to it;
  • Do you have access to the product code? Can you read through it? Inspect it - might reveal interesting things; I was doing lots of reviews and code inspections;
  • Is your company using external software? How good is it? Explore and experiment with it; it might surprise you; I’ve been testing for years, without it being my responsibility, some external services that the product I was responsible for was using, where I found hundreds of issues;
  • Do you socialize? Do you hear the pain and frustration of your colleagues from design, business, technical analysts, product managers, developers, clients or stakeholders, or other teams and departments that criticize the company or product? Keep your ears open and use that information to see risk, spot potential trouble, or recheck the behavior of something. Take people to lunch or coffee and chat with them; they will open up even if not asked directly.
  • Do you know your competitors? Similar companies or products: do they do something better or different? Is your product consistent with the general trend, and if not, should it be, or is it a disruption that makes sense?
  • How’s your toolbox? Can you think of ways to reach more interesting information? Can you think of a tool that can be built to help with revealing something? In each company, my toolbox was close to 50 apps, addons, scripts, and programs;
  • Test data, test builds, test environment: can you take a commit and build your own package, deploying it in a custom test environment? try to figure out how you could do it; Do you have enough test data ready at any given time? Can you think of ways to make a collection or auto-generate extra data?
  • How are similar business startups going or how did they die? Why did they fail, can you provide some useful evidence or information that would help your current startup?
  • Logs, statistics, analytics, user info: is the product code/API dumping logs? Filter through those and look for errors and why they happen. If the product is released, does it have any sort of analytics? Consider adding something and going through it from time to time. Do you really know the users? Can you reach some of them, study or shadow them, question or interview them, read forums or technical support tickets? Find out what's bugging them, translate their pain, and bring it back to the team.
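As one concrete illustration of the log-filtering point above, here is a hedged sketch that counts distinct error messages so the most frequent failures surface first; the log format and regex are assumptions, not any real product's format:

```python
import re
from collections import Counter

# Scan log lines for ERROR/FATAL entries and tally distinct messages.
# The pattern below assumes a hypothetical "LEVEL: message" log layout.
ERROR_RE = re.compile(r"\b(ERROR|FATAL)\b[: ]+(.*)")

def top_errors(lines, n=5):
    """Return the n most frequent error messages as (message, count) pairs."""
    counts = Counter()
    for line in lines:
        m = ERROR_RE.search(line)
        if m:
            counts[m.group(2).strip()] += 1
    return counts.most_common(n)

sample = [
    "2024-01-01 INFO service started",
    "2024-01-01 ERROR: connection refused",
    "2024-01-02 ERROR: connection refused",
    "2024-01-02 FATAL: disk full",
]
print(top_errors(sample))  # [('connection refused', 2), ('disk full', 1)]
```

Running something like this over a day's logs turns a wall of text into a short list of candidate risks worth investigating.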

Test cases are used to communicate the execution of a test strategy to testers. I’m not a fan of them because they’re very expensive and have a lot of limitations.

Before you do anything read @ipstefan 's answer above, and take into consideration the effects of the context you’re working in, because it defines the value of whatever you do next. How your company works, who you work with, how you work with them and so on are all important, amongst myriads of other things, so take some time to think about what you’re doing, who your company is, what your industry is and who your customers are. It’ll pay off when you’re making decisions about risk later.

As the only tester you’re developing the strategy (all the ideas that guide your test design) and the execution (test design), and you’re the tester. So instead of making yourself test cases consider another approach to developing a strategy.

One way to do this is to map out the product, consider the risks you might want to test for and make a list of tasks that you’d want to do to test it. I split my testing up into test sessions^, which are timed explorations of the product trying to fulfil a charter^. A charter is a short explanation of what the session is for, like “test the upload feature” or “test the upload feature with different file formats. JPEG and PNG are required, but also try types of invalid file even if they have valid file extensions”.
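If it helps to see the session/charter idea as a structure, here is a minimal sketch; the field names are my own, not a standard:

```python
from dataclasses import dataclass, field

# A hedged sketch of a session record, following the session/charter terms
# described above. The fields and example content are illustrative only.
@dataclass
class Session:
    charter: str              # short statement of what the session is for
    minutes: int = 60         # time box for the exploration
    notes: list = field(default_factory=list)
    bugs: list = field(default_factory=list)

    def note(self, text):
        """Record an observation made during the session."""
        self.notes.append(text)

s = Session("Test the upload feature with different file formats", minutes=90)
s.note("PNG upload works; JPEG over 10 MB fails silently")
s.bugs.append("Silent failure on large JPEG uploads")
print(len(s.notes), len(s.bugs))  # 1 1
```

Whether you keep this in code, a spreadsheet, or plain text matters less than the habit: one charter, one time box, one set of notes per session.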

So you could do a recon^ session (or many) for a charter like “Explore to discover functions and assess risks”. You go through the product and make notes on what there is to test, any test ideas that come to mind, risks you want to mitigate, questions you have, test data you might need and so on. You’re not looking to find problems (although you might), you’re looking to understand what this product is and what it can do.

Then you might do some capability sessions, where you’re trying to see if the product can do what it indicates that it should be capable of. This is to see if the product can actually perform.

Then you might do some reliability sessions, where you’re trying to see if the product can handle difficult inputs, harsh conditions, weird configurations and long-term use.

You will think of many things while you test, and you need to decide what is worth your time. Think about risk, and the impact of a potential problem. You might get distracted and go off-charter, which is okay. You may think of things that don’t change much, are shallow fact checks and need to be repeated a lot, and this could go into an automatic checking tool (“automation”).
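The "automatic checking tool" idea can start very small: a list of named functions that each return pass/fail. The checks below are placeholders standing in for real product probes, not a real product's behaviour:

```python
# A hedged sketch of automating shallow, frequently repeated fact checks.
# Each check is a function returning True/False; real checks would query
# the product (these hypothetical ones just use local stand-in values).
def check_homepage_title():
    page_title = "Acme Dashboard"   # in reality: fetched from the product
    return page_title.startswith("Acme")

def check_currency_format():
    return f"{1234.5:,.2f}" == "1,234.50"

CHECKS = [check_homepage_title, check_currency_format]

def run_checks():
    """Run every check and return (all results, names of failed checks)."""
    results = {c.__name__: c() for c in CHECKS}
    failed = [name for name, ok in results.items() if not ok]
    return results, failed

results, failed = run_checks()
print(failed)  # [] when every shallow check passes
```

Run something like this on a schedule or in CI, and your session time stays free for the deeper, exploratory work that can't be automated.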

You can store the notes for these sessions as evidence of your activities or for reports if you need to. You can always include screenshots, supporting documents, video, recorded gif files, notes from other sessions, whatever you need to remind yourself of what you did or tell others.

At the end of a session you get all the questions answered, investigate and raise any bugs you find, and communicate any project issues you think need communicating.

In this way you’re building a common-sense risk-based diverse strategy, making the best use of limited resources, and you have documented charters to refer to later. You build different types of coverage and allow yourself to be guided by risk. It’s easy to try, approachable, dynamic, flexible and cheaper than case writing.

Also communicate with the people you directly work with. See how they like to work, what they want from a bug report, and so on. Build credibility as a capable and supportive person and that will pay for itself many times over.

Best of luck!

^ These are the terms I tend to use. Use these or your own, whatever works best for you.


Yes! You have an amazing opportunity to contribute to a healthy culture of quality from the beginning of the company’s history! Rally your coworkers around the idea that everybody is working toward quality. Ask questions that bring people into the discussion and let them know you care about their definition of quality. Get yourself invited to the earliest meetings possible, preferably when the team is deciding what they want to make, and haven’t even started designing it yet. You can ask questions like: “What is the most important thing this feature is supposed to accomplish?” “What is it not supposed to do?” “Who will be using this feature? Is anyone not supposed to have access to this feature?”

As you get answers to those kinds of questions at the beginning of the process, keep them handy so you can return to them later. Remember that the answers will probably change over the course of the project to a certain extent. As the work progresses, you will be able to add another question to each of the ones above: “How will we verify this behavior?” This will help your team think about testability. Ask this question as early as possible, and as often as you need the answer, remembering that the answer may change and that others may not have thought about it yet.


Hiya @swaltz!

That’s awesome that you’ve joined an organisation and now get to build testing and contribute. I’ve written about / talked about steps to get started and how to sell testing into projects (see links below). For me, setting out a conversation of expectations for testing (especially if people haven’t worked with a tester before) is important.