Novel
I guess it depends on what you mean by not generally known.
There’s a lot that people don’t seem to take to. Some believe that automation really is a kind of testing rather than a tool. Some believe in best practices or KPIs. Some think it’s sensible to count test cases or bug reports as if they were fungible. Then there’s the subjective nature of quality, ignorance about the difficulty of metrology, reliance on “manual test cases”. Basically, everything I’ve written about more than five times.
New Stuff
If you want me to say something I don’t think I’ve talked about before…
James Bach has the Steeplechase Heuristic: extreme data can test the limits of one input, but that same data can also be carried along to test an input downstream that it would ordinarily never reach. He also has Galumphing: adding low-cost, unnecessary variety to your actions, which can sometimes yield new information.
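A minimal sketch of the Steeplechase idea, with hypothetical function names: the front gate clamps extreme input, but a downstream function that assumes the gate was passed receives the raw data when we enter the flow at a later point.

```python
# Hypothetical two-stage flow: entry_form() is the validated front gate,
# downstream_report() assumes its input already passed that gate.
def entry_form(text):
    return text[:50]  # front gate: truncates extreme input to 50 chars

def downstream_report(text):
    # Assumes text already passed the front gate, so never > 50 chars.
    assert len(text) <= 50, "downstream got data it was never meant to see"
    return f"report({len(text)} chars)"

extreme = "A" * 10_000
print(downstream_report(entry_form(extreme)))   # normal path: fine
try:
    downstream_report(extreme)                  # jumping the hurdle
except AssertionError as e:
    print("found a limit:", e)
```

The interesting cases are the ones where no assertion guards the downstream function at all, and the extreme data quietly corrupts something.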
I came up with things like Post Festum changes, where new changes or fixes to a system are made with new perspectives and missing context, which introduces new problems. My example is a system designed to add users with an employee ID; the default is 00000 and must be changed. Later the system gained a feature to add many users at a time from a list of names, but because the developers had forgotten how user IDs worked, the code injected the default employee ID into every new user. Post Festum means “after the fact”, as in the time that has passed and the things forgotten since the original development. It literally means “after the feast”: the party’s already over.
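The employee ID example above can be sketched as code. This is an illustrative reconstruction, not the real system: the later bulk-add feature bypasses a constraint the original single-add feature enforced.

```python
# Hypothetical sketch of a Post Festum defect: a later feature is written
# without the context that shaped the original one.
DEFAULT_EMPLOYEE_ID = "00000"

class UserStore:
    def __init__(self):
        self.users = {}

    def add_user(self, name, employee_id):
        # Original feature: the default ID must be changed.
        if employee_id == DEFAULT_EMPLOYEE_ID:
            raise ValueError("employee ID must be changed from the default")
        self.users[name] = employee_id

    def bulk_add(self, names):
        # Later feature, written after the fact: every imported user
        # silently gets the default ID, bypassing the original constraint.
        for name in names:
            self.users[name] = DEFAULT_EMPLOYEE_ID

store = UserStore()
store.bulk_add(["ada", "grace"])
print(store.users)  # every imported user ends up with ID "00000"
```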
I came up with something called the Motion Sickness Heuristic, for testing workflows with loops or forward and backward flows, such as the back button in a browser. You provide unusual or extreme input, then move back and forward through the system. You can find new paths and branches of a flow, unusual behaviour for data, or idempotency issues in functions that work on that data. You can find problems with data persistence across states, data on unused path branches, mandatory input constraints, all sorts. It challenges the assumptions that users always move forward through a flow and that downstream functions or data never end up upstream.
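A minimal sketch of the heuristic against a hypothetical two-step wizard: enter extreme input, then oscillate back and forward and check the data survives every transition. A real target would be a browser session or a multi-page form, not this toy.

```python
# Toy wizard standing in for any flow with forward/back navigation.
class Wizard:
    def __init__(self):
        self.step = 1
        self.data = {}

    def enter(self, field, value):
        self.data[field] = value

    def forward(self):
        self.step += 1

    def back(self):
        self.step -= 1

extreme = "\u200b" * 10_000  # ten thousand zero-width chars as extreme input
w = Wizard()
w.enter("name", extreme)
for _ in range(5):           # induce "motion sickness": oscillate the flow
    w.forward()
    w.back()
    # persistence check: the extreme data must survive each round trip
    assert w.data["name"] == extreme, "data lost while moving back/forward"
```

In a real system the assertion would be a comparison against what the page actually redisplays, which is where truncation, re-encoding and idempotency problems show up.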
I carry an idea I call “state abuse”, where you find a way that a nice user (or a nasty one) interacts with a system in a state that isn’t expecting it. It probably exists under another name. Browsers are great for this, because anything with a web front end can be left in one state while you manipulate its data from somewhere else, like a new tab. A simple example is saving edits to a user you already deleted. Systems sometimes get developed on the assumption that a user reached a state through a known workflow, so the state carries certain expectations, and subverting that can find problems. The key is to look for interfaces that CRUD the same data and consider how to play them off against each other.
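The deleted-user example can be sketched as two handles on the same backing store, one acting on stale state. All the names here are hypothetical.

```python
# One shared store, two interfaces ("tabs") that CRUD it.
users = {"u1": {"name": "Ada"}}

def delete_user(user_id):
    users.pop(user_id, None)

def save_edit(user_id, changes):
    # Assumes the caller reached the edit form through the known workflow,
    # so the user "must" still exist -- the assumption being abused.
    users.setdefault(user_id, {}).update(changes)

# Tab A deletes the user; the stale edit form in tab B then saves.
delete_user("u1")
save_edit("u1", {"name": "Grace"})
print(users)  # the "deleted" user has been resurrected
```

Whether the right behaviour is an error, a silent no-op, or a resurrection is exactly the kind of question this heuristic surfaces.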
Not a lot of people know about Allpairs and Perlclip.
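For anyone who hasn’t met Allpairs: the idea is to pick a small set of test rows that covers every pair of values across any two parameters, instead of the full cross product. This is a tiny greedy sketch of that idea, not Allpairs itself, and the parameters are illustrative.

```python
from itertools import combinations, product

# Illustrative parameters; a real run would use your own.
params = {
    "browser": ["Firefox", "Chrome"],
    "os": ["Windows", "macOS", "Linux"],
    "locale": ["en", "fr"],
}

def pairs_of(row):
    # every value pair this row covers, keyed by parameter position
    return {(i, row[i], j, row[j]) for i, j in combinations(range(len(row)), 2)}

candidates = list(product(*params.values()))
uncovered = set().union(*(pairs_of(row) for row in candidates))

tests = []
while uncovered:
    # greedily take the row covering the most still-uncovered pairs
    best = max(candidates, key=lambda row: len(pairs_of(row) & uncovered))
    tests.append(best)
    uncovered -= pairs_of(best)

print(f"{len(tests)} rows cover all pairs, versus {len(candidates)} for the full product")
```

The real tool does more (and its output format is its own), but the reduction from the full product is the point.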
I rarely talk about games that help with understanding testing skills, like Set, Zendo and Concept.
I don’t hear about “Huh? Really? So? And?” much any more, but I always find it useful.
Would I Like To Hear A Talk?
I suppose my wish to see such a talk would be motivated by learning something new that I can use to improve my testing, combined with seeing something that other people haven’t done so many times before. Talks generally put me off when they’re about something insanely specific, are tangential to testing, would have made a better blog post, are selling something, or come from a place of philosophic and scientific ignorance. So if you could find something new to me (I don’t know how you’ll determine the novelty of what you collect), talk about it in a way that would be helpful to testing (what a tool achieves, what a term educates about, what a technique can do to find information for people who matter), and avoid confusing tools and processes with learning and communication, I’d be into it. I should say that if your goal is a large audience rather than me watching, then my perspective may not be as useful.