Novel testing tools, terms, and techniques

Hey folks,

I’m currently working on a new talk to give an overview of novel (or just not generally known) testing tools, terms, and techniques. My collection so far contains

  • approval testing,
  • property-based testing,
  • fuzzing,
  • mutation testing,
  • exploratory testing,
  • ensemble testing.
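Some of these fit in a few lines. For instance, a hand-rolled sketch of property-based testing (real libraries like Hypothesis add input shrinking and smarter generators; everything here is invented just to show the idea): instead of fixed examples, you assert a *property* that must hold for every input, then throw lots of random inputs at it.

```python
# Hand-rolled property-based test sketch. Real tools (e.g. Hypothesis
# for Python) do this properly; this only illustrates the core idea.

import random
from collections import Counter

def check_sorted_properties(xs):
    ys = sorted(xs)
    # Property 1: the output is ordered.
    assert all(a <= b for a, b in zip(ys, ys[1:]))
    # Property 2: the output is a permutation of the input.
    assert Counter(ys) == Counter(xs)

random.seed(0)  # reproducible runs
for _ in range(200):
    n = random.randint(0, 50)
    check_sorted_properties([random.randint(-100, 100) for _ in range(n)])
print("200 random cases passed")
```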

Is there something missing? Would you like to hear such a talk?


Chaos testing is maybe not generally well known.
Personas are generally known, but those with accessibility traits not so much.
Quantification for measuring usability.
Operational Acceptance Testing for hardware.
Hope these are useful, and it does sound like it could be an interesting talk!


These are great suggestions. Thanks a lot @adystokes :+1:

Evil testing - deliberately attempting to make the software break. This can include anything from trying to enter text in numeric fields to trying to hack the site. I usually save this until I’m reasonably sure the basic functionality is solid.


Nice one, @mkutz.

I think the following aren’t talked about enough:

  1. Bias
  2. Risks
  3. Oracles
  4. Heuristics
  5. The skill of asking questions

Here are some in my mind:

  • Epistemology

  • Mental Models

  • Problem Solving

  • Game Theory


Interesting! I’m thinking this is related to red teaming & general security testing.
That whole area totally slipped my mind so far.

You mean to be aware of cognitive biases?

Risk assessment techniques!! Damn, totally forgot about these so far.

In my mind these are rather basics and nothing novel. But you’re definitely right that they are being underused.

I know what a heuristic is, but am unsure what you mean here. Can you elaborate a bit?

Things like five whys and recognizing vague terms? I like that a lot. Totally ignored that part so far.

Thanks @simon_tomes. Love these suggestions!

That’s definitely novel to me. It seems to be quite a general term. Can you provide some sources on how it can be applied to testing? That would help me a lot.

Thanks for these, but I’ll probably not use them in the talk, as they are quite broad and probably deserve a talk each. I could work with more concrete examples that apply them to testing, though.

My idea for the talk is to take roughly 3 minutes per term/tool/technique to provide enough information for people to judge if they can apply this to their problem and want to invest time to research the topic.

But again, thanks a lot for the ideas!


There is some red teaming and general security testing, certainly. There’s also testing to determine where the limits of the software fall, such as whether there’s a mismatch between allowed field lengths on the user interface and on the back end (which, to be fair, only matters if the UI allows more than the back end does). Not to mention finding out how gracefully the software handles unexpected failures.



Totally agree, yet I just think it’s one of those things where some oracles are discussed as if they were the only oracle, such as a requirements document.

Sure thing. There’s a huge opportunity for testing folks to realise the power of heuristics and to use them in their day-to-day testing activities. I think some are just used without considering what they mean, or the power of combining heuristics to spark test ideas. For example, testers I’ve worked with use CRUD (Create Read Update Delete) all the time and probably do so without thinking of it as CRUD. And when we think of it as CRUD, we can combine it with others. For example, let’s combine “Never and Always” with “Delete” and “Update”: I should never be able to delete someone else’s embarrassing photo of me, but I can always update the tag so others can’t see it!
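That Never/Always × Delete/Update combination could even be sketched as a check against a toy service. PhotoService and all its rules below are invented purely to show a heuristic turning into a test idea:

```python
# Toy sketch of the Never/Always x Delete/Update combination.
# PhotoService and its behaviour are made up for illustration only.

class PhotoService:
    def __init__(self):
        self.photos = {}  # photo_id -> {"owner": str, "tags": dict}

    def add_photo(self, photo_id, owner, tagged):
        self.photos[photo_id] = {
            "owner": owner,
            "tags": {person: "visible" for person in tagged},
        }

    def delete(self, photo_id, actor):
        # Never: only the photo's owner may delete it.
        if self.photos[photo_id]["owner"] != actor:
            raise PermissionError("only the owner can delete a photo")
        del self.photos[photo_id]

    def update_tag(self, photo_id, actor, visibility):
        # Always: a tagged person may update their own tag.
        tags = self.photos[photo_id]["tags"]
        if actor not in tags:
            raise PermissionError("not tagged in this photo")
        tags[actor] = visibility

svc = PhotoService()
svc.add_photo("party.jpg", owner="alice", tagged=["bob"])

try:
    svc.delete("party.jpg", actor="bob")      # never: must be blocked
except PermissionError as e:
    print("blocked:", e)

svc.update_tag("party.jpg", "bob", "hidden")  # always: must succeed
print(svc.photos["party.jpg"]["tags"]["bob"])
```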

Hope this extra helps. Always happy to chat more whenever. :slightly_smiling_face:

@sebastian_solidwork and I have been trying to popularize the term “semi-automated testing”, which can mean things like:

  • Using automated test code to get the system to the state you’re interested in, then taking over from there.
  • Combining automation code with the human brain to get the best of both, such as by automatically generating a bunch of screenshots (e.g. of translated UIs) and then having a human apply a “blink test” to the results.
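The first idea might look like this as a minimal sketch. Cart and seed_cart are invented stand-ins for a real app and its driver (in practice this would be a Selenium or Playwright script, an API client, etc.):

```python
# Sketch of "automate the setup, human takes over". Everything here
# is a hypothetical stand-in for a real application under test.

from dataclasses import dataclass, field

@dataclass
class Cart:
    items: list = field(default_factory=list)

def seed_cart(n_items: int) -> Cart:
    """Automated part: skip the boring clicks and build a full cart."""
    cart = Cart()
    for i in range(n_items):
        cart.items.append(f"item-{i}")
    return cart

if __name__ == "__main__":
    cart = seed_cart(50)
    # Human part starts here: explore checkout manually with 50 items,
    # a state nobody would click together by hand more than once.
    print(f"Cart ready with {len(cart.items)} items - over to you.")
```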

I guess it depends on what you mean by not generally known.

There’s a lot that, to me, people don’t seem to take to. Some people seem to believe that automation really is a kind of testing, rather than a tool. Some people believe in best practices or KPIs. Some people think that it’s sensible to count test cases or bug reports like they’re fungible. Subjective nature of quality, ignorance about the difficulty of metrology, reliance on “manual test cases”, basically everything I’ve written about more than five times.

New Stuff
If you want me to say something I don’t think I’ve talked about before…

James Bach has the Steeplechase Heuristic, where extreme data can test the limits of one input, but that data could also be used to test an input downstream that it would ordinarily not reach. There’s also galumphing: low-cost, unnecessary variety that can sometimes yield new information.

I came up with things like Post Festum changes, where new changes or fixes to a system are done with new perspectives and missing context, which introduces new problems. My example is a system designed to add users with an employee ID - the default is 00000 and must be changed. Then the system has a new feature to add many users at a time from a list of names, but because the developers forgot about how user IDs work the code injected default employee IDs into all of the users. Post Festum means “after the fact”, as in the time passed and things forgotten since the original development. It literally means “after the feast”, as in the party’s already over.

I came up with something called the Motion Sickness Heuristic for testing workflows with loops or forward-and-backward flows, such as a browser’s back button on a website. You provide unusual or extreme input, then move back and forward through the system. You can find new paths and branches of a flow, unusual behaviour for data, or idempotency issues for functions that work on that data. You can find problems with data persistence across states, data on unused path branches, mandatory input constraints, all sorts. It challenges the idea that users will always go forwards in a flow, and that downstream functions or data never end up upstream.
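One class of bug that back-and-forward walk can surface is a non-idempotent step getting re-applied when a user revisits it. A tiny invented example (apply_discount stands in for any "apply" step in a flow):

```python
# Invented example of a non-idempotent step: going back and
# resubmitting re-applies the operation to already-processed data.

def apply_discount(price: float) -> float:
    return round(price * 0.9, 2)  # 10% off -- but 10% off of *what*?

once = apply_discount(100.0)   # first pass through the flow
twice = apply_discount(once)   # back button pressed, step resubmitted

# An idempotent step would satisfy f(f(x)) == f(x); this one doesn't,
# so revisiting the step silently changes the order total.
assert once != twice
print(once, twice)
```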

I carry an idea I call “state abuse” where you find a way that a nice user (or a nasty one) interacts with a system in a state that’s not expecting it. It probably exists under another name. Browsers are great for this because anything with a web front end can be left in a state while you manipulate the data from somewhere else, like a new tab. A simple example is saving edits for a user you already deleted. Stuff sometimes gets developed with the assumption that a user reached a state through a known workflow, so the state follows certain expectations, and subverting that can find problems. The key is to look for interfaces that CRUD data and consider how to play them off each other.
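The deleted-user example could be sketched like this. UserStore and its methods are made up; in real life the two "handles" would be two browser tabs talking to the same backend:

```python
# Toy sketch of "state abuse": two handles to the same store played
# off against each other. UserStore is invented for illustration.

class UserStore:
    def __init__(self):
        self._users = {}

    def create(self, user_id, name):
        self._users[user_id] = {"name": name}

    def delete(self, user_id):
        self._users.pop(user_id, None)

    def save_edit(self, user_id, name):
        # The stale edit form assumes the user still exists; a robust
        # backend should fail gracefully here, not crash or corrupt data.
        if user_id not in self._users:
            raise KeyError(f"user {user_id} no longer exists")
        self._users[user_id]["name"] = name

store = UserStore()
store.create(42, "Alice")          # tab A opens the edit form for user 42
store.delete(42)                   # tab B deletes user 42 meanwhile
try:
    store.save_edit(42, "Alicia")  # tab A submits the stale form
except KeyError as e:
    print("caught:", e)
```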

Not a lot of people know about Allpairs and PerlClip.

I rarely talk about games that help with understanding testing skills, like Set, Zendo, and Concept.

I don’t hear about Huh? Really? So? And? much any more, but I always find that useful.

Would I Like To Hear A Talk?
I suppose my wish to see such a talk would be motivated by learning something new that I can use to improve my testing, combined with seeing something that other people haven’t done so many times before. Talks generally put me off when they’re about something insanely specific, are tangential to testing, would have made a better blog post, are selling something, or come from a place of philosophic and scientific ignorance. So if you could find something new to me (I don’t know how you’ll determine the novelty of what you collect), talk about it in a way that would be helpful to testing (what a tool achieves, what a term educates about, what a technique can do to find information for people that matter), and avoid confusing tools and processes for learning and communication, I’d be into it. I should say that if your goal is a large audience rather than me watching, then my perspective may not be as useful.


Oh, I love that. Trying to do that in my company as well.
I think I also read about “Automation in Testing”, which is probably the same thing, right?


Thanks a lot. I’d put a lot of things you mentioned under the general term heuristics as @simon_tomes already suggested. I really like your specific examples! Helps me a lot to prioritize.

The motivation for the talk is mainly that I realized that I know a lot of these things as I get to play a lot with them and am able to do some research on the job. Not many others seem to get that kind of time. So I have something to share with these rather busy folks to help them decide what to look into to solve their specific problems.
So the talk will probably be a broad overview of things that are novel in the sense that they aren’t in some ISTQB course, or that are only recently becoming more relevant or popular. Each thing will get 3 to 5 minutes only, so I won’t be able to explain very complex things and won’t go into a lot of detail on anything. Basically I’ll try to explain how things work, what they are good for, and what you will need to apply them to your problem.


What I love about this approach is that it could spark many things in many people. It could easily lead them to want to go investigate further and see what’s possible — just like you’ve done in your career, @mkutz.


You are welcome! That phrase is another I prefer to use.

I would not say that semi-automation is a specific concept, but more of a “catch phrase” to advocate for automation (and development in general) in testing beyond the typical standalone E2E “test automation” on a CI server.
Automation is a tool for testers, not a replacement, and by that it offers a wide range of often unused possibilities.


Race conditions. Generating a system load that resembles real-world volumes and simultaneous requests, then checking that all transactions remain stable and leave the data in a valid, stable state.
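A minimal sketch of that idea, using an invented toy Bank: hammer a shared resource from several threads at once, then assert that the system’s invariant (total balance conserved) still holds afterwards.

```python
# Concurrency load sketch against an invented toy Bank: many threads
# fire simultaneous transfers, then we check the data ended up in a
# valid, stable state (no money created or destroyed).

import threading
import random

class Bank:
    def __init__(self, accounts):
        self.accounts = dict(accounts)
        self._lock = threading.Lock()

    def transfer(self, src, dst, amount):
        # The lock is the fix; remove it to watch the invariant
        # break under load (eventually).
        with self._lock:
            if self.accounts[src] >= amount:
                self.accounts[src] -= amount
                self.accounts[dst] += amount

bank = Bank({"a": 1000, "b": 1000})

def hammer():
    # Each thread fires a burst of small, simultaneous transfers.
    for _ in range(1000):
        src, dst = random.sample(["a", "b"], 2)
        bank.transfer(src, dst, random.randint(1, 10))

threads = [threading.Thread(target=hammer) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Invariant: money moves around, but the total never changes.
total = sum(bank.accounts.values())
assert total == 2000, f"invariant broken: {total}"
print("total balance conserved:", total)
```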