No Test Cases.... What can you do?

So here is a challenge I have posed in interviews before, and it has drawn some interesting answers. It's an attempt to get people thinking through an approach…

So you’re added to an existing project as the sole tester: all previous testers have left, and the test case repository has been lost. We need to be as confident in the next release as we were in previous ones. What would be your approach to testing the next release effectively?


My approach is to have a mix of

  • test cases for complicated or error-prone situations;
  • checklists (of the kind also used in aviation);
  • exploratory testing.

Depending on time and effort the ratio of test cases, checklists, and exploratory testing will vary.

Maybe test charters are still available.


I am presuming that the product & developers’ artifacts would be available. If they are, then I could come up with test artifacts afresh.

If the product and developers’ artifacts are also lost, then I would start interviewing the stakeholders to get to know the product thoroughly, and then come up with the test artifacts.

And then, start exploring the product.


In the short term, this is as good an excuse as any to spend some time doing exploratory testing: learn the product, identify the oracles, and get a feel for where the product and team are in terms of processes, documentation, etc.

More long term:

  • I would not attempt to recreate the test cases
  • Improve automation to cover the critical user journeys and requirements (and thus get that to serve the role of formal test cases)
  • Do some spelunking (talking to project folks, devs who’ve been around a while, end users, whatever) to figure out what critical user journeys aren’t covered by automation.
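The "automation covering critical user journeys" idea above can be sketched minimally. Everything below is hypothetical: the journey steps are toy stand-ins for real UI or API calls, not anything from the actual product.

```python
# A minimal, hypothetical sketch of encoding a critical user journey as an
# automated check. The cart steps are toy stand-ins for real UI/API calls.
def run_journey(steps):
    """Run each (description, check) step in order; stop at the first failure."""
    for description, check in steps:
        if not check():
            return f"FAILED at: {description}"
    return "PASSED"

cart = []
journey = [
    # append returns None, so the tuple trick yields True after the side effect
    ("add item to cart", lambda: (cart.append("widget"), True)[1]),
    ("cart holds exactly one item", lambda: len(cart) == 1),
]
print(run_journey(journey))  # PASSED
```

The point of a structure like this is that each journey doubles as living documentation: the step descriptions record what "critical" means for this product, which is exactly the knowledge that was lost with the test case repository.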

I’ll assume that the system is “simple” & one does not need highly specialized domain knowledge to understand it. In such situations, I do the following:

  • Talk to testers, product owners, devs etc. to figure out the system & the most important user flows.
  • Do exploratory testing & document the behavior of the system to understand how it should work. Regularly check my understanding with the team.

PS - This question might be a fun exercise, but I wonder if it’s unlikely to occur in the real world. I’d ask the interviewers if I’ll be working on a project with such problems, and try to figure out whether the testing practices in the company are generally poor. I wonder how companies can let such problems occur in the first place.


It may be a slightly embellished version of a real situation I found myself in (all testing IP was owned by a 3rd-party resourcing company who were let go, and I was the first permanent person in), but it’s all about understanding how the candidate would apply their problem-solving and testing ability to make the best of the situation, which everyone’s answer here does.


@sjprior - thanks for sharing your experience. It’s a big red flag if a company is overly dependent on a 3rd party who can (presumably) take away the company’s IP without serious consequences.

Personally, I wonder if I should see such a job as a challenge and take it to learn, or look for another job instead. Maybe that could be another question on the forum (No tests, how to decide if I should still take the job?). Btw, if you don’t mind, was your team successful in setting up the tests?


Yes, I worked to bring in the right people; we took ownership of the product testing within a couple of months and were able to provide greater confidence than what was there before. It was tough, but really worthwhile.


Lots of ways test cases effectively go away.

I’ve seen it at least twice in my career. The first case was a software acquisition and there was no formal testing, so figuring out how to get confidence in releases required figuring out a testing strategy.

The other time I recall seeing it was when a new dev team took up a product that hadn’t been touched in 18 months, looked at the existing tests, decided they were garbage, and just deleted all of them, forcing themselves to come up with a new test strategy.


Not exactly aimed at the proposed problem, but related to it, there is a 12-part blog post series from Michael Bolton called “Breaking the Test Case Addiction”. It’s a good read.


I have some questions about jobs & companies which have no tests.

1 - Which factors could prevent me from salvaging the situation? How to spot those factors in the interview or in the first few days on the job (and quit without wasting too much time)?

2 - Given the challenges, should one ask for much more than market rate wages?

3 - Are these companies generally startups? Are/were they run by grossly incompetent people? This one is probably hard to answer.

PS - @ernie Looking at your acquisition example, I wonder if one should also make testing practices a criterion in acquiring a company. Maybe it could show you the challenges and also serve as a bargaining chip in the negotiations.

Interesting question -
There are two sides to the coin for me: 1) exploratory test to your heart’s content; 2) collaboration - asking team members what, in their experience, is most important to look at. You’re bound to make mistakes and then quickly learn from them.


1 - Which factors could prevent me from salvaging the situation? How to spot those factors in the interview or in the first few days on the job (and quit without wasting too much time)?

Asking about the state of testing, where things are at, and what they think success looks like in 6 months, a year, etc can give lots of clues as to whether they’re trying to fix things, open to change, etc.

2 - Given the challenges, should one ask for much more than market rate wages?

I don’t think a lack of test cases is a particularly unique situation, and it wouldn’t impact my salary negotiations.

3 - Are these companies generally startups? Are/were they run by grossly incompetent people? This one is probably hard to answer.

Usually it’s just a matter of organic growth and churn, and not emphasizing test from the beginning. Like I said, a lack of test cases, not having any formal QA, etc., is not unusual at all. I tend to actually like these kinds of environments, where the job is more about building out a culture of test. There are a lot of opportunities here and so many different ways to approach things. The companies with mature practices, where the bulk of your day might be writing automation, are boring to me.

I wonder if one should also make testing practices a criterion in acquiring a company. Maybe it could show you the challenges and also serve as a bargaining chip in the negotiations.

Sure, if you’re a C-level/director/etc, an owner, etc, you should do your due diligence when you’re acquiring a new code base and/or dev team and try and figure out what the lift is going to be to incorporate them, but it’s hard to do. As an individual contributor, I don’t have much impact at that level.


The question is pretty loaded.
I’d split it into a few components.

  1. Existing project
  2. Sole tester
  3. All previous testers have left
  4. ‘Test case repository’
  5. Test case repository has been lost
  6. ‘We’ need to ‘ensure’ confidence in the release
  7. Confidence has to be the same as previous ones
  8. Approach to test the next release effectively

Then I’d start to question the meaning of each.

  1. If there’s an existing project and I know nothing I need to catch up with the knowledge right?
    Do I have time to learn anything?
    Do I have resources?
    Do I need additional external/separate training?
    Would I have someone to guide me or am I alone?
    Do I have access to anything or have to work with limited access/resources? Etc…
Would a manager be confident in a new person, who knows nothing about anything, telling him about the gaps and risks in what the other teams and IT members have been developing for years, and in what they have built in the last x days/weeks for the next release?

  2. ‘Sole tester’, although useful to know, would be more helpful with context.
    Was there a sole tester before as well, and how did he handle things? Did anything change in the product/project or product development team? Were there 50 testers working on it before? It could be a sole tester on a project with no developers, or on a product released once a year by a team of 20 devs working full time on it. Is this sole tester assigned to a single project/team, or does he have others to support as well? Does he have priorities? Does he get help from others, or from scripts/tools built by devs? Does he have to build or run a test environment, and how much does that take? Is test data and/or state setup available, or does he have to manipulate it somehow or script something?

  3. Why did the previous testers leave? Is there something wrong with the management, the company, the product, the project, the respect or salary they get, the impossible or wasteful demands?

  4. ‘Test case repository’ can mean different things: bad management, a highly complex or regulated environment, beginner testers, time-wasting, heavy control from higher management, high dependency on obsolete testing strategies (from 30 years ago), no responsibility for the testing work, a focus on artifacts instead of information. What are the relevant points I should be taking from this?

  5. A lost ‘test case repository’ can happen for a few reasons. The testers got pissed off and deleted it? The manager thought it a better idea to get rid of it, as it slowed down testing? The tests were external and the company didn’t own them? There was a mistake by someone? How is the lost test case repository relevant?

  6. ‘We need to ensure confidence in the release.’ Now I have doubts about who ‘we’ is, and what its relation is to the software tester. Software testers are information providers, not insurers. I suspect that the releases are made by a release board or a manager. The information about the release is collected from multiple parties, including the software tester. The release manager can decide to do whatever he wants with the information about the product status and its risks. Is the company expecting the software tester to be in the role of a release manager as well, so that he manages the risk and quality of a product release?

  7. I am not aware of the previous release confidence. And I wouldn’t compare it unless everything is equal: previous mistakes, developers, teams, features, times and timing, management, stakeholders, quality criteria, release management, product state, all things testing related, etc…
    If you do want to compare, please lay down the criteria so that I know by what I am judged so that I can plan to reach and equal the previous release.

  8. Approach to test the next release effectively?
    So far I have not been given much information on the context. So knowing nothing I can’t tell you anything.
    But I would start by identifying information on stakeholders, business, IT, company, product, project, manager that I have to work with, access, hardware, software available, product/project data, documentation, timelines, milestones, plans and wishes, code, product status, environments and availability of product on them, testability, location and resources, specific mission for testing and release, agreement and feedback with the dev team on strategy, find out what matters most for users and stakeholders(quality characteristics), etc…


@ernie & @sjprior - I posted a follow up question here - Follow up question - Should you take a project which has no test cases?. I’d appreciate it if you could share your thoughts. Others are also welcome. Thank you.

Test cases are an integral part of software testing for software quality assurance companies, since it ensures that all customer requirements are covered while testing.

There are certain situations wherein test cases are lost or never existed, and joining such a product as a new resource can be very challenging. The following actions can be taken in such situations:

  1. Look/inquire for wireframes or any documents (if any) and use them to get an understanding of the application.
  2. Meet with the Product Owners and get an understanding of the crucial areas of the application: those frequently used by customers, or those that generate revenue.
  3. Explore the application and query the Devs, Product Owner, and Stakeholders.
  4. Document your findings and share them with the team, or keep them in a shared repository to prevent loss of data in the future.
  5. After identifying all the areas, start test case creation, maintaining Smoke, Sanity, and then Regression scenarios in their respective suites.
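Step 5 above amounts to tagging each recovered scenario with the suites it belongs to. A minimal sketch of that idea, assuming a single shared repository of scenarios (all scenario names here are illustrative, not from any real product):

```python
# Hypothetical sketch: tagging recovered scenarios so that Smoke, Sanity and
# Regression suites can each be selected from one shared repository.
SCENARIOS = {
    "login succeeds with valid credentials": {"smoke", "sanity", "regression"},
    "checkout total matches cart contents":  {"smoke", "regression"},
    "password reset email is sent":          {"sanity", "regression"},
    "report export handles empty data":      {"regression"},
}

def select(suite):
    """Return scenario names tagged with the given suite, alphabetically."""
    return sorted(name for name, tags in SCENARIOS.items() if suite in tags)

print(select("smoke"))
# ['checkout total matches cart contents', 'login succeeds with valid credentials']
```

Test frameworks offer the same idea natively (e.g. markers or tags that let you run only the "smoke" subset), so in practice you would lean on those rather than roll your own registry.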

Test cases are one way of doing test planning, which may be appropriate in only some testing situations.

Michael Bolton has a 12-post series called Breaking the Test Case Addiction - I suggest the reading.

@joaofarias is the continuum not much more context-dependent than the simple diagram you shared shows? I often find myself coming back to Michael Bolton’s writing about the addiction topic.

I use most new features getting delivered (you get a feature in every agile sprint, don’t you?) as a way to explore that delivery and to create pure scripts that assist me in exploring more deeply, as well as to build the armoury of CI/CD checks. I am a fan of automation in general, so for me, if I ever started a job where the testers had all been let go, I would start out building on both ends of such a continuum. I would (in my mind) be working in alternating-side steps, zipping left and right in zigzags, mostly on the right, and gradually broadening.
Perhaps because most of my automation roles have involved toolstacks I have part-created, I’ve made sure the tools support exploration techniques. You don’t dive into a jungle without a machete, do you? Exploring without tools is not really coming prepared at all.

The importance of which area of the continuum is definitely context-dependent.
And one should adapt to the context, as you did.
Tools that enhance exploration are fundamental.

My point was about the quote from @vishaldutt, particularly

it ensures that all customer requirements are covered while testing

Particularly because the open question of the thread asks

What would be your approach to being able to test the next release effectively?

Claiming that one has covered all customer requirements because all scripts have been exercised, by a human or by a machine, is misleading - even if the customer has written and signed off on them - given that there are such things as tacit vs. explicit knowledge, and system behavior that emerges from the interaction of its components (which is very difficult to predict).


Don’t forget, most any product will have a few oracles in the form of

  • a bugs database
  • a support tech who has a brain you want to pick, because they know what to guard against most
  • a product owner who will focus your test exploration, if they can vocalize the key product “values” well