Test cases vs. no test cases

Imagine a fairly typical startup environment (generalized):
You're building a product from scratch. Lots of experiments and PoCs. You're trying to get an MVP out for some users, beta first, then more. It's an agile setup, but you don't have the needed infra yet (no test envs, no monitoring, etc.). Everyone's building features simultaneously. Many microservices. Some 3rd-party integrations. Processes are fresh and not really established. There's a loose mix of Scrum and Kanban. You're building and learning a lot as you go. Goals are vague, deadlines are tight as usual. The team is understaffed. Some people joined recently. Few requirements are documented. Basic system analysis and market research were done (draft stuff). Almost no unit tests, zero automation.

Pretty standard, I would say. Nothing critical or special here; I've seen projects like this many times. I'm sure many of you have too.

So, here’s my question:
Do you usually have detailed test cases in this kind of environment? Would you prefer to have them? Why?
Are they really crucial at this phase? Worth the time to write and maintain?
If not test cases, then what works best in your opinion and from your experience?
I don’t want to look at this from a reporting or micromanagement angle. I care about real value for the team and product quality.

Let's assume there are people on the team who have experience in environments like this.
I know the answer is always "it depends", but I intentionally used a generalized example because it matches a lot of real-world projects I know. So both general thoughts and specific examples are welcome here.

I know my answer and my reasons, but I'll hold off for now to avoid biasing the discussion. I want to hear real thoughts from experienced professionals.
And this question isn't just for QAs; it's for anyone who cares about quality and QA in software development.

5 Likes

Do you usually have detailed test cases in this kind of environment? Would you prefer to have them? Why?

No. I don’t use them unless I have to.

Are they really crucial at this phase? Worth the time to write and maintain?

No. Nobody has been able to convince me that detailed written test cases as the default mechanism of describing, communicating or executing a test performance (or for new-hire training) are better, in any context, than not doing that.

If not test cases, then what works best in your opinion and from your experience?

That is context-dependent and depends a lot on your team and what you're trying to achieve. If you have a team that can test independently, then all you need to do is give them the resources they need, the time scales and the right support, and let them get going. If you have people who need more direction, then slightly more detailed charters that give PURPOSE over process work better: you state what you want to be tested or included, but not a step-by-step script of how you expect them to behave, which results in narrow, shallow fact-checking.

It will depend on the product, the quality standard, the team, the business, the users… but generally speaking I am against over-formalisation and premature formalisation of all kinds. We do not live in a world where writing everything down in detail is an acceptable cost. It is too slow, too error-prone, too narrow, too shallow, too expensive to do and too expensive to maintain, and unless you're working on some ancient piece of equipment that hasn't changed since the 70s because it just works, your product and project will change enough that maintaining the suite becomes a cost you cannot keep up with.

Let the computers do the fact-checking work; let the humans do the learning.

I don’t want to look at this from a reporting or micromanagement angle. I care about real value for the team and product quality.

I think they're the same thing. Reporting is how you get the information you've learned into the hands of the people who need it, in a way they can use; it's critical to the value of testing and to the perceived quality of the product. Micromanagement is crippling to the team, and allowing people to do what we hire them for is not only good for time and profit, but also for motivation, confidence and process improvement.

5 Likes

The problem with a startup is that there often is no tester, and if there is, you are all alone.
In a startup environment it's usually fast-paced, as you mentioned, trying to get that MVP out.

And so much will change; writing scenarios for things that change so often usually has no ROI/benefit.

I believe not; here, documentation is more important than test scenarios imho (at least in keeping it up to date).

2 Likes

Do you usually have detailed test cases in this kind of environment?
No, in my experience, in this type of environment, any shortcut that can be used will be used.

Would you prefer to have them? Why?
My preference would be to automate them. Then it would be clear what was being tested. Detailed, documented test cases can add value to the process: they ensure that the process is repeatable regardless of who executes the tests, your customers might want to see them, and they support training. However, it depends on what you mean by detailed.

Are they really crucial at this phase? Worth the time to write and maintain?
I know you said that you don’t like ‘that depends’, but it does. If you plan to reuse the test case, I would suggest that you need something; again, my preference would be to automate. However, remember that automation is not the solution to everything. I would say if you want detailed test cases and you decide not to develop detailed test cases, treat it as technical debt.

If not test cases, then what works best in your opinion and from your experience?
Automated test cases.

Even with experienced people, if you do not have test cases that clearly define configuration and data, for example, it is likely that two different testers will execute the tests in different ways, potentially getting different results. This can cause confusion, especially when you are about to release and one tester says there is no issue and the other says there is.

In my experience, commercial customers will, at some stage, usually on the back of a problem, ask to see the test cases, to judge for themselves how good your testing is. In these cases, it is worth having something to fall back on.
Again, it depends on what you mean by detailed. If it takes 1 day to develop a feature and 3 days to develop the test case, I would investigate what is happening.

In my experience, you should have something. First, ensure that the team understands the features and the product; this helps reduce the need for lots of detail. Second, at a minimum, I expect to see what the test case is trying to accomplish, the data to be used, and the configuration. I would also suggest using a tool to manage your test cases.
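To make that concrete (purely my own sketch, with a made-up pricing function and discount value rather than anything from this thread), a test whose purpose, data and configuration all live in code might look like this in pytest:

```python
import pytest

# Hypothetical pricing function, inlined so the sketch runs on its own.
def order_total(items, discount_pct):
    subtotal = sum(qty * price for qty, price in items)
    return round(subtotal * (1 - discount_pct / 100), 2)

# Configuration and data are explicit, so any tester (or CI) runs the same check.
WELCOME_DISCOUNT_PCT = 10    # configuration: promo assumed active in the test env
ORDER_ITEMS = [(2, 10.00)]   # data: 2 units at 10.00 each

def test_welcome_discount_applied_to_order_total():
    # Purpose: a welcome-discounted order totals correctly.
    assert order_total(ORDER_ITEMS, WELCOME_DISCOUNT_PCT) == pytest.approx(18.00)
```

The point is only that the "accomplish / data / configuration" trio can be captured in a form that is executed rather than read.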

2 Likes

I’m drifting more and more towards the view that test cases are for developers and not testers.

Whilst I'd like to see developers do a level of testing, and maybe even a direct-to-automation test case approach (which I believe would also be of value in this MVP model), it's not usually my place to tell the developers how to do their job, even if it's important that I'm aware of what they are doing.

In reality I have seen a mixed picture on this: some developers will naturally write test coverage regardless of development model and goals; others sometimes need nudging.

This bit is a curveball though: "Almost no unit tests, zero automation." I'm not so keen on compensating for weaker development practices, even in an MVP; you could start a longer-term pattern of doing the developers' job, with long-term harm.

For an MVP, from a tester's viewpoint, I'd say I usually take a fast and curious approach, optimising for finding the important things within a limited budget, timeframe and constraints. If I know developers are doing less testing than standard, I'll up that curiosity element a notch or two.

2 Likes

My focus would be on increasing the testing being performed, not the format. Testing can take many formats; test cases, unit tests and test automation are just a few of them.

I would explore the company/cultural reasons for this: the market for the solution, the product's position in that market, the mean time to repair, and the market and company appetite for quality.
The setup above would, I guess, be fine in a niche market with few competitors and low quality expectations.

2 Likes

Been there.
Wrote and managed test cases in TestRail. Later just deleted everything.
As people here have written, startup products change so quickly that it feels like you're a mathematician working for NASA whose work instantly becomes obsolete (a Hidden Figures reference).
Then there's some corner-cutting too: botched, patched-up work for the sake of making things work enough to show something to the investors. They might put aside some edge cases in the race against time.

But what’s important is to have concrete documentation so that it’s easier to track where the team came from.

2 Likes

Hi, I would focus on the acceptance criteria: well written, SMART-defined… if that exercise is done, then tests can be based on it. I'm sure MVP development will benefit from it. The key is to keep the discussion going with the business. Concerning the technical aspects, volumes, performance, security, etc. should be known in order to choose the right architecture. So again, keep communicating with the business, not only to find out what it is now, but also what their expectations for the future are.
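As a rough sketch of the "tests based on acceptance criteria" idea (a hypothetical registration service and an AC of my own invention, nothing from this thread), a well-formed AC can map almost one-to-one onto an automated check:

```python
import pytest

# Minimal in-memory stand-in for a registration service (hypothetical,
# only here to show the shape of an AC-driven check).
class DuplicateEmailError(Exception):
    pass

class AccountService:
    def __init__(self):
        self._emails = set()

    def register(self, email: str) -> None:
        if email in self._emails:
            raise DuplicateEmailError(email)
        self._emails.add(email)

def test_registration_rejects_duplicate_email():
    # AC: "A user cannot register with an email address that is already in use."
    service = AccountService()
    service.register("existing@example.com")
    with pytest.raises(DuplicateEmailError):
        service.register("existing@example.com")
```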

4 Likes

I'd say it all depends on the customer's expectations. When our customers expect software to be delivered fully working and they know what's coming, then we need a traceable QA process. So, a mature product.

However, when we've been producing an MVP or PoC, we've usually got in front of the customer and said, "We want to show you something and get your opinion." That's rare for us, as we mainly deal with mature products, so QA can sometimes get into a little conflict with product teams who assume we can't adapt to a different set of objectives.
Once we explained we could, then as soon as the dev team had something demonstrable, we would give them a time-boxed exploratory session as a team using the PQIP methodology. No bugs, no test cases, just feedback before they put it in front of a customer. They can decide from our feedback whether to tweak it or just explain any flaws to the customer.
From my experience, the likely outcome of these MVPs, after a customer has seen one, is that the idea ends right there, or we rip it up and, based on the feedback, develop it further with a stronger architecture. It's very rare that development just continues.

2 Likes

I wouldn’t take a job that required using a test case management tool like TestRail. Maybe there’s a context in which they are not a time-wasting, mind-numbing catastrophe, but if there is, it’s a context in which I don’t want to work.

With regard to “concrete documentation”, I haven’t seen any in about 15 years across the hundreds of clients we have worked for. That all disappeared with the advent of agile development. The documentation of requirements is now so vague as to be useless for testing.

My approach is to ignore the requirements until I have done some exploratory testing, after which I can go through the requirements and tick them off. Usually, all the documented requirements are met but the exploratory testing found a heap of bugs that could not be found by testing against the requirements.

2 Likes

If you're referring to how terrible the tool is, then yeah, it's the reason we stopped using it :sweat_smile:
But we weren't forced to work with it; we just wanted to try out the whole "test case management" thing.

As for the documentation part, I work on a product, so that's why we have that "concrete documentation" practice in place. In a one-project-after-another scenario, I too found myself doing more exploratory and less documentation-reliant testing.

I really like your points and approach, and I agree with you on them :slight_smile:

All test case management tools are the same in this respect. They are all based around the naïve assumption (cruelly perpetrated on the testing community by ISTQB) that all the tests can be derived from the documented requirements. It all looks very pretty, with traceability from each requirement to the tests that verify it, and it generates all the bogus metrics that management love.

However, there are numerous problems, including the enormous documentation overhead, especially when requirements change. It also fosters the false idea that testing can be complete (which management loves to hear), and it puts blinkers on the testers, preventing them from even thinking about all the other tests they could do.

On several occasions, testers have told me they were not permitted to raise bugs that did not derive from documented requirements. In fact, it usually wasn’t even possible to do so because the tool required that a requirement existed in order to attach a test to it.

The bottom line is that these tools don’t just encourage you to do bad testing, they force you to.

2 Likes

I'd be less opposed to test cases for a startup than for a larger org, largely because feeling the need to define test cases for manual tests is a red flag that your org has "less than desirable quality practices", IMO.

I think having tests as living documentation of behaviour can be a very positive thing, but ideally those are largely automated.
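As a loose illustration of what that living documentation can look like (a hypothetical password rule and test names of my own, not anything from this post), the behaviour is stated in the test names themselves and CI keeps them honest as the code changes:

```python
import re

# Hypothetical password rule, inlined so the example runs on its own.
def password_is_acceptable(password: str) -> bool:
    return len(password) >= 12 and bool(re.search(r"\d", password))

# Each test name states a behaviour; together they read like the policy's docs.
def test_passwords_shorter_than_12_characters_are_rejected():
    assert not password_is_acceptable("short1")

def test_passwords_without_a_digit_are_rejected():
    assert not password_is_acceptable("longenoughpassword")

def test_long_passwords_containing_a_digit_are_accepted():
    assert password_is_acceptable("longenoughpassword1")
```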

For me, I'd want to work for a company where I'm focusing on what is relevant for the story. My hands-on testing may touch old ground, but that's not the majority of my effort; importantly, I'm focused on how the software is working and what I'm finding and discovering, rather than some words someone wrote four years ago.

In your scenario the issue is the lack of requirements and automation. Don't replace that with thousands of manual tests that will rot. A great starting point is ensuring that teams are mapping code changes to user stories / bugs etc. If you're not documenting how everything should work, you can at least allude to why, and a simple user story number that someone can look up to see the ACs ain't hard…

… Assuming you have stories, ACs and all that goodness… if not, start there.

1 Like

It's worth noting that I've been in an environment where we abandoned requirement docs and had sod-all automation, yet behaviour was understood because we collaborated, sharing domain knowledge; worst case, we'd look at the code, see who implemented it, and get their explanation of why they did it that way. And then we knew.

1 Like

My point being that if you can understand why code is behaving a certain way, it is more powerful than a massive requirements doc or year-old test cases.

2 Likes

My thoughts around test cases are that their main purpose is to codify knowledge.

There are situations where test cases are necessary, but in general I believe they are inefficient containers for knowledge.

I wrote something about it some time ago:

1 Like