What is your software testing process?

I’d love to hear people’s thoughts on how they approach testing of things!

Do you have any processes that you like to follow?
Or processes that you have to follow?
Are there any models that you find work well for your situation?

If someone asks you ‘how should we be testing this’…what do you do/say?


For testing of a reasonably large piece of new development work, I like to follow a process something like this:

  1. Perform exploratory testing for a relatively short time (say an hour or two), making notes on areas I know (or suspect) to be important/risky, or which look … erm … unpolished
  2. Literally don’t even think about the app/required testing for a while (probably the rest of that day). The next day, go back to the same areas and deepen the initial/previous analysis by reading documentation (if any exists), talking with developers/business people, or just more exploratory testing/app usage
  3. Begin to sketch out/improve test scenarios/cases, thinking about risks and test methodologies
  4. Probably repeat from step 2 until I feel I’ve achieved a good level of test readiness and it seems worthwhile to have my test cases reviewed by someone with a fresh set of eyes
  5. Depending on the test case review, either proceed with creating/improving lower-level test cases or revise the test scenarios. In any event, I’ll use a similar flow to the above

I tend to repeat the above from time to time when testing is quiet, just in case we missed anything. When repeating the analysis, I also include information on any bugs found so far.

In terms of having to follow processes, in my experience that seems to depend on the company size. Typically there seems to be a positive correlation between company size and the number of processes, the biggest of which are the processes for requesting deployments, for regression testing, and for raising bugs. In terms of the creative side of testing (risk analyses and test case creation), I’ve not experienced much process in large companies, except perhaps for test case reviews, which are something I like to do regardless of company size.

In terms of models, nothing I can think of, though test heuristics (if these count as models) are something I am increasingly aware of, and I think I will use them for testing new work going forward.

If someone asks “how should we be testing this?” I would reply “there are probably no absolute right answers. What do you think yourself? Would it help if we look at/think about it together?”


I would like to use more mind maps, Wardley maps, and Cynefin, but I have to use processes inspired by the V-model and total quality management. I am well aware that I work in a large IT consultancy that delivers all kinds of IT solutions - some with CI/CD, others not so much (mainframe, desktop). IT for retail is one thing and IT systems for medicine factories is something else. As my “situation” often changes, the model changes accordingly. … Practices appropriate to one project would be criminally negligent in another.

If someone asks me “how should we be testing this” - I ask what’s in the contract, what is promised/mandatory? And secondly: What do we want to know besides that?


I like to have an understanding of the business’s concerns: What about the feature/thing do they care most about? What scares them? Do they have a vision of how this particular thing is supposed to help them accomplish the organization’s mission?

Once I’m clear(ish) with that, or I’m able to draw reasonable assumptions, then I’ll look to some conversations with the developers, hopefully via Three Amigos. I want to understand what they’ve covered with automated testing, and I want as clear a picture as possible for dependencies, data flows, and edge cases.

With all that in place then I can start exploring to fill in any gaps, and potentially adding automated tests along the way.

That’s all sort of high-level and hand-wavy, but it’s generally how I try to approach things.


I think Mind Maps are a wonderful way to lay out ideas for individual test sessions, or even broader test approaches!


Will use these in the coming work week! :smiley:

As SW development is an engineering discipline oriented towards problem solving, it should be taken in the context of the business domain and the project’s purpose and constraints.
So first of all we must understand what the problem is, what the proposed solution is, and how the product developed under this project intends to solve the problem.
Understand the project’s purpose, constraints, and type (full-scale development, modification of an existing product, auxiliary SW development), and the contract requirements, including the standards under which the SW should be developed, such as ISO, DOD, CMMI …
The business domain, project purpose/constraints, and governing standards will have a heavy influence on the development process, and therefore also on the testing process.
After completing the preparations, don’t forget to talk with all the stakeholders and understand their needs and requirements.

Testing process:

  1. Requirements Review (requirements are the basis for the test process): review for completeness, consistency, and contradictions
  2. Establish a Test Plan (test responsibility, test environment, resources, schedule, test levels, defect management and reporting)
  3. Develop a Traceability Matrix (Requirements – Test Cases)
  4. Develop Test Cases
  5. Test Execution (including some exploratory testing: learn the system, gain confidence about problematic areas)
  6. Review defects and initiate corrective activity
  7. Perform regression testing when needed
  8. Define and establish criteria for when to stop and exit testing
  9. Write the Test Report
    In Embedded RT Systems in the real world, exhaustive testing is impossible: there are an infinite number of tests one can run, but the budget and schedule are finite (limited and restricted). So test smarter, not harder: implement Risk Based Testing (RBT) and tailor your test process to the context (domain, purpose, requirements, project type).
    If you tailor the process to the context/constraints and implement RBT, you can get answers to the key questions that concern you at the beginning of the test process: When can I stop testing? What is the risk of not testing a certain condition or chunk of the code?
  • What test conditions should I cover?

  • How much effort should I spend on each test condition?

  • Which test conditions do I cover first, and which can wait until the end?
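The RBT idea above - scoring test conditions and deciding which to run first and which can wait - can be sketched as a small prioritisation routine. This is a minimal illustration under assumed data, not any real tool or standard: the requirement IDs, condition names, and the likelihood × impact scoring model are all made up for the example; a real project would calibrate these with stakeholders.

```python
# Minimal risk-based test prioritisation sketch.
# Each test condition gets a risk score = likelihood * impact; we run the
# highest-risk conditions first and defer the rest when the budget runs out.
# The `requirement` field gives a crude traceability link back to requirements.

from dataclasses import dataclass


@dataclass
class TestCondition:
    requirement: str   # which requirement this condition covers (traceability)
    name: str
    likelihood: int    # 1 (rare) .. 5 (almost certain)
    impact: int        # 1 (minor) .. 5 (critical)

    @property
    def risk(self) -> int:
        return self.likelihood * self.impact


def prioritise(conditions, budget):
    """Return (to_run, deferred): highest-risk first, up to `budget` tests."""
    ranked = sorted(conditions, key=lambda c: c.risk, reverse=True)
    return ranked[:budget], ranked[budget:]


# Hypothetical conditions for the example.
conditions = [
    TestCondition("REQ-1", "login with valid credentials", 5, 5),
    TestCondition("REQ-1", "login lockout after failures", 3, 4),
    TestCondition("REQ-2", "report renders with no data", 2, 2),
    TestCondition("REQ-3", "payment declined path", 4, 5),
]

to_run, deferred = prioritise(conditions, budget=3)
for c in to_run:
    print(f"{c.requirement}: {c.name} (risk {c.risk})")
```

The same structure also answers “what is the risk of not testing X?”: whatever lands in `deferred` carries a known, recorded risk score rather than being silently skipped.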


Hello @chdoron!

I looked at your list of tasks and wondered how many of those included collaboration with developers and business analysts. In my experience, the collaboration has been much more important than some of the tasks listed.



If you read my post more carefully, you would notice that at the very beginning I mentioned that we need to talk with the stakeholders. That includes much wider collaboration than you are suggesting, not just talking with the developers (developers are only one kind of stakeholder; we usually have more than one stakeholder in our projects). Don’t confuse collaboration with time sharing.
None of our projects is a one-man project, and we are not a one-man company, so without talking to and collaborating with the other teams involved in the project’s development you can’t solve any problem.
I have listed here activities that produce artifacts in the test process, and none of them can be created without collaboration. The same goes in life: you can’t advance if you are not collaborating. But if all your effort is spent on collaboration, who will do the work and produce all the artifacts required to complete the test process?
So I suggest taking a balanced approach; collaboration alone will not finish the tasks I listed (though you think collaboration is more important than task execution).


Hello @chdoron!

I saw that stakeholders were mentioned but thought of them as business managers or product owners rather than project team members. Thanks for the clarification!

My apologies for implying that collaboration alone can complete tasks. When I read the list provided as a process, it gave me the impression that a tester was working in an isolated manner. I wanted to explore how testers in your organization engaged other roles in the project, and how those roles influence test planning.

As one example and in my opinion, collaboration is more important than reviewing requirements. In this sense, a tester, developer, and analyst have an opportunity to review requirements for clarity and understanding (I believe what you listed is necessary as well). When resolved to a story card level, the tester, analyst, and developer should have a single understanding of what is getting built. Testers should drive that conversation to facilitate a single understanding. That would be an example of collaboration.
While requirements provide some basis for testing, test planning would be incomplete if a tester depended solely on requirements. So I believe, as you do, testers should have a good understanding of the business domain. I also believe they should have an understanding of the technical implementation so they can articulate system or product risks and explore those risks in depth.