How to improve testing when the process is unclear

I recently started working as a software tester, and I’m finding the testing approach within the team a bit confusing.

We are often told to “explore” the feature, but there is no discussion of how we should test it; the main focus seems to be on how many bugs we find. Also, when one tester finishes testing a feature, it is sometimes assigned to another tester to find scenarios that might have been missed, which I think could be avoided.

I’ve also noticed that developers are not performing unit testing. Instead, they rely on testers to create test cases and then test based on those. I would like to understand whether this is the correct way, and if developers are testing based on our test cases, shouldn’t there be more focus on test coverage?


To me, the main focus for now should be the definition of entry and exit criteria: when can we start testing, and how much testing is enough?

Also, is your team in close cooperation with the devs? Why are they relying on testers “that much”? And why do testers themselves go for another iteration of testing instead of working in pairs? It sounds like the workload distribution is a bit blurry within your organization. It would be great for you to find out whether it’s more a case of “we always worked like this” or whether “something” specific has led to this arrangement.

There is no single “correct way”; there are thousands, and teams have to find the one that produces “enough” quality as output. Have you already raised this topic with your coworkers?


If you can create the discussions suggested by @Rod, that’d be ideal.

If you meet resistance to having this discussion, then in my experience the best way to induce it is to create measures that lead to it.

For example, in your case, you could measure:

  • the ratio of bugs found per minute of manual testing, comparing the first and second rounds. If it is dramatically lower for the second round, then people might question the second round’s relevance.
  • the ratio of bugs found per work item / user story / epic, comparing items that have a clear happy path, DoD, or acceptance scenarios against those that don’t. If it is dramatically lower for the clear work items, then people might question the absence of test plans elsewhere.
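To make these measures concrete, here is a minimal sketch of how the two ratios could be computed. All numbers, field names, and item IDs below are made up for illustration; you would feed in your own data from your bug tracker.

```python
# Hypothetical data: bugs found and minutes spent per testing round.
rounds = {
    "first_round":  {"bugs_found": 14, "minutes_tested": 240},
    "second_round": {"bugs_found": 2,  "minutes_tested": 180},
}

for name, r in rounds.items():
    rate = r["bugs_found"] / r["minutes_tested"]
    print(f"{name}: {rate:.3f} bugs per minute of manual testing")

# Hypothetical data: bugs per work item, split by whether the item
# had clear acceptance criteria / a defined happy path.
work_items = [
    {"id": "US-101", "clear_criteria": True,  "bugs": 1},
    {"id": "US-102", "clear_criteria": True,  "bugs": 0},
    {"id": "US-103", "clear_criteria": False, "bugs": 4},
    {"id": "US-104", "clear_criteria": False, "bugs": 3},
]

for clear in (True, False):
    group = [w["bugs"] for w in work_items if w["clear_criteria"] is clear]
    avg = sum(group) / len(group)
    print(f"clear_criteria={clear}: {avg:.1f} bugs per work item")
```

If the second round’s rate is a small fraction of the first’s, or if unclear work items consistently leak several times more bugs than clear ones, those numbers give you something concrete to bring to the discussion.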

Also, who is pushing for the work? Do you have Product Owners or Product Managers? They should be the ones coming up with the bare minimum, i.e. the user story, the happy path, and some edge cases. Translating this into testing is then quite easy.

Welcome!

Do you mean you’re looking to be told how to test the feature, or that there is no discussion about what’s important in the feature, or what it’s supposed to do and why that matters?

I don’t think that you need the former at all. You already know how to evaluate something you’re given. If you’re looking for ideas to fill in the gaps of your testing there are lists you can use to help, like the HTSM, to improve how you look at the software and give you various techniques for exploration.

The latter is a bit of a problem. You can still test without this information, but your testing will be better and more efficient if you have a better understanding of it. You can do a recon session just to learn about the software and see what testable surfaces it has, think of ideas, note useful tools you could use, questions you have, and so on. You can find information in other oracles like user guides and development documents. Build up an idea of what’s complex, what matters and to whom, and who expects what from the software at both a functional level and a conceptual one - does it solve the problem they pay it to solve?

Given that you’re handed the software and told to test it (which, by the way, sounds like an absolute dream scenario to me), it may be that it’s your responsibility not only to choose your approach and techniques and manage your time, but to come up with questions on what you need to know and take those questions to a live oracle (a person who knows the answers, probably).

Another way to get more information is to ask questions at kick-offs, attend design discussions, and liaise with your product owner equivalent. You can also ask others who know the software better, or those who wrote it, so you can get some idea of what various people think you should do.

the main focus seems to be on how many bugs we find.

Well that’s probably a waste of time. The bugs you find depend on how many bugs there are, and how much you split a bug up into smaller bugs. Put simply - who cares? The idea is that you can get this information to people who can do something about it. Counting the bug reports is, basically, crazy for that sort of evaluation. Bugs aren’t really countable because they are non-fungible; they’re different sizes, different complexities, take different amounts of investigation effort, have different kinds of impacts of different severities on different people. If you found 10 spelling errors on one page that could be one bug report, if you found 1 on 10 pages over 10 days that could be 10 reports, and if you found 1 bug that corrupts the hard drive the spelling errors would begin to look pretty silly by comparison.

I’ve also noticed that developers are not performing unit testing. Instead, they rely on testers to create test cases and then test based on those. I would like to understand whether this is the correct way, and if developers are testing based on our test cases, shouldn’t there be more focus on test coverage?

What kind of test cases? I think that just telling a developer that you think it’s a good idea that you check X, Y and Z is sufficient. They’re intelligent people. In the book sense, anyway. Usually. So this should definitely not be highly or prematurely formalised into case documents or anything. That would be an affront to the value of human life. But sure, having some input into the testing sounds like a positive thing. Do the devs like it? Or do they resent it? Do they communicate better with the testers because they’re involved early? It might be valuable enough to cover the cost.

“Correct” here sort of implies “standard”, and there are no standards (anyone saying otherwise is selling something). I’m not going to say it’s wrong - I mean it’d sort of work, and might be great in context. Developers making unit tests and testers exploring beyond basic capability is more common, because devs are smart enough to do capability testing on their own work under their own power. I suppose it could be a way to try to empower testers to aid the test effort and understand the coverage, but that can also be achieved by getting the devs and testers talking to each other, and getting them involved early. Getting a team that communicates and shares that information can save a lot of time and effort, but it does mean getting nerds to be social and empathetic and that might take a different kind of effort. Whoever’s decision it was to do it this way should be able to defend the decision for you and get you up to speed.

I wish you luck with your endeavours. I know you’ll make the best of it. Being given the freedom to control your own testing, and the trust to do so, is a precious gift and a great honour.


Welcome to the MOT.

As you have noticed by now, this is the place for learning, and you are in a job where you have a bit of room to learn.

I would suggest the following:

  • Make tiny notes of things you see; identify and ask lots of questions in your mind about what goes where, what is old software, and what is new software. Focus on the new, and keep the old for later explorations.
  • Work out exactly which questions are useful to ask right now, then ask them early so you can build understanding. Use these to form a plan for future explorations.
  • Do not commit to anything, and do not try to change cultures. Build trust first, then build in process change once you have allies. You are already given a few trust points just because of your role, but you cannot do everything all at once, so keep working with purpose.

I get the impression you do already understand why you got hired, and see some of your unique value to the team. So maybe you are an agent for change: you have definitely identified communication as a key area for the whole team (coders and testers) to work on together. Remember, nothing is right, but also nothing is wrong. The “best” path is a path that is continuously moving, and a “grow, then measure, then change” mindset is your best tool.


First of all, sorry for the late reply, and thank you for taking the time to share your thoughtful feedback.

@Rod - One of the main reasons is that the product itself is quite complex, and there are many scenarios related to the product which are not explicitly defined in the documentation. So, after reading your feedback, I would like to try implementing the suggestions you’ve mentioned, @christophe and @conrad.braam. And lastly, @kinofrost - thanks for the example; it will help me understand whether there is a need for a second round of testing.


Good to read. Your product can be complex, but it can be supported by a simple structure. Make your life easy. Also invest in the documentation: improve it, empower it, and test it too.


Hi @viraj61968,
That’s a sharp observation, and you’re right to question it.
Exploratory testing is useful, but if the team is only measuring quality by bug counts, it misses the bigger picture.
Developers should own unit testing to catch issues early, while QA ensures broader coverage across workflows, integration, and risk areas. Passing features between testers to “see what was missed” usually points to unclear coverage upfront. More focus should be on test coverage and collaboration rather than just bug counts. As QA, you add real value by encouraging conversations around coverage, risk areas, and responsibilities, so testing becomes a shared effort instead of a handoff game.
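As a rough illustration of what developer-owned unit testing looks like, here is a minimal Python sketch in pytest style. The `apply_discount` function and all of its behaviour are hypothetical, not from the thread; the point is that a few small checks living next to the code catch the basic and boundary cases long before a tester sees the feature.

```python
def apply_discount(price: float, percent: float) -> float:
    """Hypothetical production function: reduce a price by a percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Developer-owned unit tests: small, fast, and written alongside the code.

def test_basic_discount():
    # Happy path: 10% off 200.0 is 180.0.
    assert apply_discount(200.0, 10) == 180.0

def test_boundaries():
    # Edge cases: zero discount and full discount.
    assert apply_discount(50.0, 0) == 50.0
    assert apply_discount(50.0, 100) == 0.0

def test_rejects_out_of_range():
    # Invalid input is rejected here, before QA ever exercises the feature.
    try:
        apply_discount(50.0, 150)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for percent > 100")
```

With checks like these in place, QA time is freed up for the integration, workflow, and risk-based testing mentioned above instead of re-finding basic capability bugs.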

@viraj61968 You are asking good questions. I remember finding myself in this position. I think that the problem will take a while to solve. Continual learning will help. I would recommend building learning into your work. You can do this by working in a Plan-Do-Study-Act cycle. Plan what you intend to do, do what you planned, then study the effect of what you did and act on what you learn by taking that into planning a new cycle. I wrote this blog post that gives some examples of using the plan-do-study-act cycle, and has links to resources: Using plan-do-study-act to improve testing – TestAndAnalysis
