Imagine a fairly typical startup environment (generalized):
You’re building a product from scratch: lots of experiments and PoCs. You’re trying to get an MVP out to some users, beta first, then more. It’s an agile setup, but you don’t have the needed infrastructure yet (no test environments, no monitoring, etc.). Everyone’s building features simultaneously. Many microservices, some third-party integrations. Processes are fresh and not really established; there’s a loose mix of Scrum and Kanban. You’re building and learning a lot as you go. Goals are vague, deadlines are tight as usual. The team is understaffed, and some people joined recently. Few requirements are documented. Basic system analysis and market research were done, but only at a draft level. Almost no unit tests, zero test automation.
Pretty standard, I’d say. Nothing critical or special here; I’ve seen projects like this many times, and I’m sure many of you have too.
So, here’s my question:
Do you usually have detailed test cases in this kind of environment? Would you prefer to have them? Why?
Are they really crucial at this phase? Are they worth the time to write and maintain?
If not test cases, then what works best in your opinion and from your experience?
I don’t want to look at this from a reporting or micromanagement angle; I care about real value for the team and for product quality.
Let’s assume the team includes people who have experience in environments like this.
I know the answer is always “it depends,” but I intentionally used a generalized example because it matches a lot of real-world projects I’ve seen. So both general thoughts and specific examples are welcome.
I know my own answer and my reasons, but I’ll hold off on sharing them for now to avoid biasing the discussion. I want to hear real thoughts from experienced professionals.
And this question isn’t just for QAs; it’s for anyone who cares about quality and QA in software development.