Live Blog Testbash Germany: When to Say No to Automation by Jack Taylor

I’m pleased that Vern is introducing 99-second talks between the talks. My fingers can take a break!

And we’re off, with Jack Taylor from the FinTech industry. I’m interested in hearing what he has to say. He lives in London and works in Brighton, and did a graduate scheme in FinTech to get his master’s. He went through a few roles before becoming a tester in 2015. He’s also into metal and rock, plays in a band (cool pic!) and he likes cats. I now like him and am willing to learn :wink:

We’re starting at the beginning of the journey. No one knew what they were really doing, and he started out doing manual testing. Once the word automation came in, “the modernisation program” was designed (that sounds terrifying!). The idea was to track all of the metrics in the world per automation script and calculate how much money was being saved (a part of me just died). In every script, you had to track how much effort you were putting into the script and how much you’d saved. (I see a problem with this approach but I can’t quite put my finger on it ….). So Jack learned Selenium and started writing tests willy-nilly without any kind of plan. He ticked off the basic functionality of all their apps. The numbers in the tool looked good. There was green. But nothing had actually been tested properly, and the defects came in.

He’d been checking things without any purpose (great gif of a dog looking under the hood of a car!). He’d only been checking – not exploring. And exploring can’t be automated, because it’s non-linear interaction.

An example from a date picker he was testing. The script was basic: check the dates, submit, and assert on the next page. Then he explored with a charter that instructed him to use a variety of inputs: he used the keyboard, used the back button, hacked the URL, refreshed the page, and used invalid dates.

Just by manually typing instead of selecting from the date picker, he found a problem. He could simply add that test case to the test base, but adding many of these kinds of things to your scripts creates a huge amount of overhead. And even if you decide to do that, there are still the things you haven’t thought of yet. YOU CAN’T WRITE SCRIPTS FOR THINGS YOU DON’T KNOW (my caps. Also, pelicans!).
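
To make the contrast concrete: here’s a small sketch of the idea, using a made-up pure-Python date validator standing in for the app under test (the function name and formats are my invention, not from the talk). The scripted happy-path check passes, while a charter of “typed, weird inputs” exercises behaviour the script never touches.

```python
from datetime import datetime

def parse_booking_date(raw: str) -> datetime:
    """Hypothetical date-field validator, standing in for the app under test."""
    return datetime.strptime(raw, "%d/%m/%Y")

# The scripted check: one known-good date, assert, done.
assert parse_booking_date("24/12/2023").year == 2023

# The exploratory charter: a variety of typed inputs the date
# picker widget would never produce on its own.
odd_inputs = ["31/02/2023", "2023-12-24", "24/13/2023", "", "tomorrow"]
failures = []
for raw in odd_inputs:
    try:
        parse_booking_date(raw)
    except ValueError:
        failures.append(raw)  # rejected, as we'd hope

print(f"{len(failures)} of {len(odd_inputs)} odd inputs were rejected")
```

The point isn’t the code itself – it’s that every one of those odd inputs is a question you only think to ask while exploring, and each one you script afterwards adds maintenance overhead.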

Jack realises that what he should have done at the beginning of the modernisation program was to analyse the applications based on heuristics, oracles and mnemonics. He should have decided how to test each application using this information. And he should have tried to meet leadership goals while also keeping test quality.

Some definitions:

Heuristics are experiences or points of reference to give us a baseline from which to explore and learn.

Oracles help us see whether something is right.

Mnemonics are ways of remembering things.

He’s introducing the HICCUPPS heuristic now.

H – History: it should be consistent with previous versions of the product

I – Image: it should be consistent with the organisation’s standards

C – Comparable products: it should be consistent with them

C – Claims: it should behave as people say it will

U – User desires: is it meeting them?

P – Product (missed his description of this, but it’s googleable)

P – Purpose (missed his description of this, but it’s googleable)

S – Statutes: it should comply with them (for example for private data)

To determine risk, he’s introducing RCRCRC from Karen Johnson.

I like this one too:

  • Recent: new stuff might not work
  • Core: essential functions
  • Risk: what would cause the biggest problem if it broke
  • Configuration: anything dependent on environment settings
  • Repaired: bug fixes can introduce new issues
  • Chronic: stuff we always break.
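
If you wanted to use RCRCRC to order a backlog of charters, a crude sketch might look like this (the charters and tags here are entirely my own invention, just to show the shape of the idea): tag each charter with the dimensions that apply and explore the ones that hit the most dimensions first.

```python
# A hypothetical backlog of test charters, each tagged with the
# RCRCRC dimensions that apply (names invented for illustration).
charters = {
    "new payment flow":    {"recent", "core", "risk"},
    "login page":          {"core"},
    "patched date picker": {"repaired", "chronic"},
    "footer links":        set(),
}

# Explore first whatever hits the most risk dimensions.
priority = sorted(charters, key=lambda c: len(charters[c]), reverse=True)
print(priority)
```

Counting tags is obviously a blunt instrument – in practice you’d weigh the dimensions with your own judgment – but it shows how the mnemonic turns “what should I explore?” into a sortable question.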

Richard Bradshaw also gave him tips for automation: When creating a regression suite, it’s important to consider what tech you’re testing and who will use and maintain it (oh yes! Consider your audience for your automation!). You should also ask yourself what regressions you’re actually testing against, what the testability and automatability are like, and what layers you need to test at to get the required information. You should be aware of your timeframe and whether you have space to explore and learn tools. You should know what other testing is in place and also what risks you’re trying to check – behaviour? Visual? FX?

Jack is now suggesting fighting back using metrics. People thought that exploration was lazy – until he showed defects found by exploring versus defects found by the automation suites. You can also use low-tech dashboards (example from Nancy Kelln). You can have functional areas, test progress, and the results in something with colours. Apparently good for managers :wink: (I try to be different!).
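
A low-tech dashboard really can be this low-tech – here’s a throwaway sketch in the spirit of the idea, with invented functional areas and states (none of this data is from the talk):

```python
# One row per functional area, a colour word per state (invented data).
COLOURS = {"done": "green", "in progress": "yellow", "blocked": "red"}

areas = [
    ("payments",   "done"),
    ("onboarding", "in progress"),
    ("reporting",  "blocked"),
]

lines = [f"{name:<12} {COLOURS[state]:>7}  ({state})" for name, state in areas]
print("\n".join(lines))
```

The real thing is usually a whiteboard or a spreadsheet, which is exactly the point: the value is in the at-a-glance colours, not the tooling.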

All in all, it’s important to deal with problems before tools. Don’t just create a Selenium suite because you know Selenium. Always choose the best tool for the job.

His final tips:

  • The nightmare headline game from Elisabeth Hendrickson
  • Using personas (shout out to Cassandra!)
    • He likes using Sarah Connor, who hates tech and is computer illiterate!

Despite being quite new to the field, Jack is noticing how much he is learning, and he recommends keeping up to date with tech so you can maximise your arsenal!

His conclusion is that automation plays a large role – and yet it has limitations. Each system is unique and requires a multi-dimensional approach to testing. Constant exploration is a must-have for successful testing. Make sure you consider problems over tools, and deliver stats to leaders – but without sacrificing quality.