Follow up question - Should you take a project or job which has no test cases?

Here is a question which asks how we would test something when we don’t have test cases - No Test Cases… What can you do? Some of the answers include examples of realistic scenarios which lead to a lack of test cases.

In that post, some users seem to like such environments, or can at least handle them well. But I am sure that not every company without test cases can be saved. So, what are the things that will slow you down significantly OR prevent you from saving such a company? If you know that, then you could at least avoid taking such jobs, or make better estimates of the time you’d need to fix things.

Here is one example which I think will at least slow you down significantly. You have to test a non-trivial system built on APIs, databases (DBs) and message queues (MQs). These APIs, DBs and MQs are consumed by other systems, not by a UI, so there is no UI through which you can easily get a feel for the system. The job requires special domain knowledge which you cannot acquire through intuition, past experience, logic or random courses on the internet, and your company has given you little training or information about the domain. There is little documentation about the systems, and the automated tests (if any) are not very clear.

The business processes are complex, and understanding them will require talking to multiple people, many of whom don’t have a clear understanding of the system at their level (product owner - ok!, developers - great!, testers - yuck!). The testing culture and tools suck. For example, some testers have little curiosity or time to understand the basics of the system and business process, as long as they see the expected results in the backend. There is a tendency to focus only on the small part of the system covered by their own stories, without seeing the role that part plays in the bigger system.

Is it really worth wasting so much of your time running around for answers, often for simple things? If you are a “junior” or “intermediate” tester, is it worth spending your time on such projects? You could instead be learning better things at a better company, rather than spending time reverse engineering systems. Should you take such projects/jobs only after you become an “expert”, i.e. have plenty of time (and financial freedom) to experiment?

What risks do you see that could prevent you from saving such companies?

1 Like

I’d love such a project. There’s so much to explore and improve; the only thing that would make it terrible is if the people you work with are a-holes.

As for the fictional project you describe - some products are so complicated that it truly is like that. I know some people who work for ASML, and this is their reality. You can only know a tiny part of the system, and you have to be OK with that, because it’s so complex that no one has the bigger picture.

As for no test cases, I always doubt that. You mean, no written-down test cases? Because surely there are a lot of assumptions in people’s heads! You say “Is it really worth wasting too much of your time on running around for answers, often for simple things?” And I say: THIS is what I became a tester for, I LIVE for this shit. Test cases are boring, man; spending time with people finding the right answer, that is what we are there for! You are not reverse engineering systems, you are improving understanding and gathering information about risk.

I really don’t get why so many testers are so hung up on test cases. Test cases are stupid and boring; they are often accepted as “the truth” when there could in fact be so much wrong with them. They are a false sense of security, a gateway to moving away from critical thinking, a gateway to making humans act like robots.

I’d ask: If a tester prefers a project with test cases, why did they become a tester? What do they think our role is there for?


Admission: I love test cases. I’m looking at a sheet of 98 test cases right now; half of them have a “p” next to them, the other half have an “s” next to them, there is one “f” and a few blanks. So, I love captured test cases - they are my safety net and a reminder to check things. But I love to do them in reverse: I do a test session, and then at the end of the day I fill in the sheet and use it to plan my next session based on where the blanks are grouped. With each release I am perfecting a “speedrun” through the product, touching as much as possible in a straight line while trying to visit as many interesting areas as I can. I love my test list, but it’s the last step of my testing.
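That sheet-driven planning can even be tallied in code. A minimal sketch, purely illustrative - the statuses mirror the p/s/f/blank scheme above, but the area and case names are made up:

```python
from collections import Counter

# Hypothetical sheet: case name -> status ("p" passed, "s" skipped,
# "f" failed, "" not yet visited), mirroring the paper sheet.
sheet = {
    "login/basic": "p", "login/sso": "s", "export/csv": "",
    "export/pdf": "", "search/fuzzy": "f", "search/exact": "p",
}

def next_session_targets(sheet):
    """Return areas that still have blank cases, most blanks first."""
    blanks = Counter(name.split("/")[0]
                     for name, status in sheet.items() if status == "")
    return [area for area, _ in blanks.most_common()]

print(next_session_targets(sheet))  # → ['export']
```

The point is only that the sheet is a planning aid filled in *after* sessions, so "where are the blanks grouped?" becomes the next charter.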

Raghu, there is a great “blind testing” technique for a new product that you should use on all new joiners. Grab them on their first day at the company, take a few basic test cases and rewrite them onto a piece of paper, giving just enough detail to accomplish a few user stories - no more than 3 or 4. Then observe how they learn to use the product from zero, and how they find OOB (out-of-box) failures and UX bugs that none of the testers who have been there for years have reported. This experiment is good evidence that people can profitably test a complex product with no formal test cases.

Testing, as @maaike.brinkhof points out, is about identifying risk. And because risk is always moving, especially in a complex system, a list of tests that doesn’t move is never going to find it.


I’m in the same boat as @maaike.brinkhof - the hypothetical role as described is my current role (plus throw in Mongo and a cloud migration), and it’s the kind of role I really enjoy: learning and exploring, while pushing on bits and nudging things here and there to build out a culture of quality.

In fact, I think the ability to chase down these answers and understand the system is one of the traits that set senior engineers apart from more junior folks. They have enough technical context to make good guesses about how things work, strong enough skills to verify those guesses, and the testing expertise to analyse the risk and come up with ways to test it.

I think it’s going to be hard to tell from interviews whether a job is worth taking. If you’re lucky, there’ll be some glaring red flags that warn you off, but in most cases it’s going to sound like there are challenges and a job to be done.

The way this post and the previous one are phrased sounds like they’re framed in the context of a contractor. I’ve always been an employee, so the constraints are likely different, but in all honesty I have no idea how a testing contractor’s success or failure is measured. I’m guessing there’s a wide variance, with some contractors brought in just to supplement existing roles, others brought in to be thought leaders, etc.


I think I did not express my question clearly. I am not saying that test cases are the only way to do testing or the only source of truth. But if I am working with an existing system which is complex and uncommon, then I’d like to have at least a few good, basic test cases to help me build an understanding of the system.

Here is an extreme, crazy question. If test cases are stupid, boring, unimportant etc., then why don’t we stop making them? Alternatively, why don’t we write some test cases and then routinely throw them all away (every year or so) and start from scratch, just to keep things interesting? My point is that a few good test cases can be the starting point for my testing work - but test cases are not the only tool I’ll use.

Btw, is ASML the semiconductor company?


That sounds like fun to me. Are there any exercises which could show us how that approach works? If not, I am thinking of creating such exercises myself and trying them in a real situation, similar to the one in my example.

PS -
Btw, that link on complex systems is too abstract, and most people would not be able to understand it easily (see the point about proto-accidents, for example). Luckily, I have seen famous real-world cases in which problems in code, hardware and human factors combined to cause life-threatening problems. It was not enough to just fix the code and hardware; they also had to change the culture and processes to make sure the problems never happened again. So, I was able to understand many of the abstract points in that link.


Lots of people have stopped. Michael Bolton wrote a long series of blog posts on “Breaking the Test Case Addiction”.

At the company where I am, the automated checks serve as most of the documentation of our “test cases”, and for regression-type stuff that’s better done manually, we generally rely on pretty sparse checklists.

For our stories, we generally trust that our developers and testers have a pretty good shared context of the problem domain, and if not, that they’re communicating to discuss and reach sensible decisions, very much in the agile “working software over comprehensive documentation” model.
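To make "checks as documentation" concrete, here is a small sketch. Everything in it is invented for illustration - `parse_discount` and the discount codes are hypothetical stand-ins for real product code - but it shows how descriptive test names and assertions can read like a spec:

```python
def parse_discount(code: str) -> float:
    """Toy implementation, only so the checks below are runnable."""
    table = {"WELCOME10": 0.10, "VIP20": 0.20}
    return table.get(code.strip().upper(), 0.0)

# The test names state the intended behaviour, so a newcomer can read
# the suite instead of hunting for a written spec.
def test_known_codes_map_to_their_published_rates():
    assert parse_discount("WELCOME10") == 0.10
    assert parse_discount("VIP20") == 0.20

def test_unknown_codes_give_no_discount_rather_than_erroring():
    assert parse_discount("NOPE") == 0.0

def test_codes_are_case_and_whitespace_insensitive():
    assert parse_discount("  welcome10 ") == 0.10
```

When the checks are named and grouped like this, "where are the test cases?" has a reasonable answer: in the suite, next to the code they describe.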


I like that too, provided the tests work well and are expressed clearly. Unfortunately, it becomes hard to understand and rely on automated tests if the test code quality is bad, or if the tests are built with some low-code or no-code tool. It gets worse if there is no basic documentation of at least the important systems.

I’d like to learn about how testing can be done with few or zero test cases. Do you know of any recorded video courses which give a brief overview of that approach?


Exploratory Testing…
Please read the book “Explore It!” and you’ll learn how it can work.

No Test Cases != unstructured testing.


In the webinar “Teaching and Coaching Exploratory Testing”, Maaret Pyhäjärvi shows how you can combine exploratory testing and test automation:


@maaike.brinkhof - I read the entire book many months ago. Unfortunately, I did not practice it much (maybe 2 or 3 times, and I found 1 important bug), so I remember almost nothing. Btw, are there any good, practical examples (maybe 3-5) of exploratory testing which show how the pros do it?

Reading this book actually changed my course as a tester. Up until this book, I had been assuming I had to learn loads of techniques and focus on better automation. I had wanted to spend less time maintaining automated suites, and this book was a course correction for me. Automating is fun, but making exploring fun helps.


@conrad.braam - Btw, did you apply the things in the book directly to work, or did you do some prep work before that? By prep work, I mean looking at some non-trivial examples (perhaps like the video shared by Han Lim) or trying things out on other software. One could try the book’s recommendations directly at work and discover things on their own, but it doesn’t hurt to learn from others’ experience first.

@raghu Actually, I took a longish road to get to exploratory testing. I went to lots of small local meetups, got to see people doing it in various ways, and then got my hands on a copy of the Explore It! book. The first time you do this, you set a mini-goal and give yourself 1 hour. My first mini-goals were to explore the UX and the OOB experiences; these are often quite easy if you are new to a product.
I experimented with making notes in notepad.exe, then notes on paper, and even in Excel; I find paper works best. I have even experimented with using testbuddy, which is a pretty good notes app for this kind of thing. The trick is not to want to find bugs, but to want to explore all the possible “watering holes” for bugs. This typically means you have to know where the bugs might be hiding, but it also means you need to start with a broad remit. Keep the exploratory sessions short - that is the best way to make sure you have taken notes and are making progress.

I think exploratory API testing is hard to do, and for this you are going to want to get comfortable with your tools first. I don’t have a lot of API exploring experience, but from my little past experience it’s pretty similar to what Maaret is showing. I think the smart part of API exploring is being careful not to religiously try to cover all the boundary cases and so on, because you are exploring. You ideally want to explore with a charter - the charter is the mini-goal I mentioned. If your mini-goal was, for example, to find out whether “API requests in Java are easier than API requests in Python”, and you discovered a Python module that makes it easy, stop and start a brand new session once you hit your goal.

Another thing I do is mix up my day; exploratory testing all day is not feasible with my workplace demands. Session-based testing is a thing you need to learn how to do - I use a countdown timer on my phone. But to be honest, it’s taken me years to get confident at exploring naturally.
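The discipline above - one charter, a fixed time box, timestamped notes - is simple enough to sketch in code. This is a made-up illustration (the charter text and note contents are invented), not a real tool:

```python
import time

class Session:
    """One session-based-testing time box: a charter plus dated notes."""

    def __init__(self, charter, minutes):
        self.charter = charter
        self.ends_at = time.monotonic() + minutes * 60
        self.notes = []

    def remaining(self):
        """Seconds left in the time box (0 when it has expired)."""
        return max(0.0, self.ends_at - time.monotonic())

    def note(self, text):
        if self.remaining() == 0:
            raise RuntimeError("time box expired - start a new session")
        self.notes.append((time.strftime("%H:%M:%S"), text))

session = Session("Explore the export API's error responses", minutes=60)
session.note("404 body is HTML, not JSON - possible bug")
session.note("rate limit header missing on 429")
print(len(session.notes))  # → 2
```

The countdown-timer-on-the-phone version does the same job; the point is that the charter and the hard stop are what keep the exploration structured.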


I’m not sure how the presence or absence of test cases would make much of a difference in the example you describe. My guess is that in such a scenario, if there were existing test cases, they would be confusing, outdated, and likely not very useful. For one, I would expect developers to be more likely to keep design documentation up to date than test cases, so the state of the documentation you mentioned suggests they probably aren’t doing either. Also, if the existing testers are unmotivated, they probably aren’t maintaining the test cases either.

That’s been my experience on a couple of projects where we picked up a legacy product to make changes. I once actually deleted the whole “testing” folder of a project with millions of lines of code, because it was so out of date that starting over was easier. I would say I was a fairly novice developer/tester on both of those projects, and in both cases it turned out OK. We learned as we went, made mistakes, fixed them, discovered weird behaviors, and ultimately made the product much better.

I would also highly recommend the Explore It! book and Bolton’s “breaking the test case addiction” blog series that others have already mentioned.


Explore It! is fantastic and full of great heuristics for testing. I fully agree: no test cases != unstructured testing. Testers give ET its structure, and I think some people want to say that exploratory testing is unstructured because maybe they don’t understand its structure.
Structure comes from the overall mission of the testing project, from chartering, from test design heuristics, from project constraints, from business risks and so on.
I really believe that all testing is exploratory, at least to some degree. Even if we have a highly scripted procedure (like a test case), we are still humans: we still make observations, we still follow up on our intuition and feelings, we go off-script when investigating bugs. All of this is exploration.
Session based test management is a great way to formalise testing, because it focuses on the activity of testing, not on test artefacts.


This is on my ‘to read’ list, and I understand that in theory MY testing will become much better if I take a good approach to exploratory testing… however, which companies actually trust their testers enough to start a project without tediously documented test steps and twenty-page Test Plans?!


@danuk If your company lives in a world that is moving forward faster than it can handle, your job will never be boring. I tend not to accept jobs at companies that never innovate, so I don’t know anything different. It thus goes without saying that very few people in the places I have worked “read” test reports. Old test scripts and reports tend to tell us only about the past.

A lot of companies will want two things: automation, so that all the fancy virtualization kit they bought actually pays for itself, and teamwork. Teamwork is about knowing how to go on bug hunts that scout ahead of your team, finding bugs before the team does. It’s risky business, but knowing exactly what context a team is in, and then scouting for bugs where the code churn is highest, is always going to find the best bugs. Knowing how to balance your time between regression “checks”, some automation maintenance, and scouting is what Elisabeth’s book taught me.

Finding bugs that save your team time makes you a hero; the boring stuff, like stopping bugs from coming back later, is all that test reports and automation frameworks are good for. The latter is still worthwhile, but it’s a matter of “hygiene”.


When a company expects test steps and a long test plan at the start of a project, it means they don’t take testing seriously. The stakeholders at that company will never read the test plan, and they will complain when 99% of the test steps are discarded or re-written because whoever designed them did not have enough information about the product at the time.
The question to ask is “Is it helping?”, and if the answer is no, then ask “Is it necessary?”. If the answer is again no, then documenting these things is a choice, not a requirement.
I don’t hate test cases, or detailed test plans, or low-level test steps, IF THEY ARE NECESSARY. If they are not, they are a waste of time and contribute to perpetuating the false idea that they are always needed.

@danuk Take a look at this article as well, it has helped me a lot.