I’ve lost count of how many test artefacts I’ve used in my career. From test plans, test scripts, test strategy docs, spreadsheets, slides, Confluence pages, Swagger docs, mind maps and bug reports in countless different tools to exploratory testing charters, more spreadsheets and more!
I love how these artefacts can often have an intention that conflicts with reality.
I’d love to hear a story from you about one specific test artefact that you’ve used in your career.
What is the test artefact? Ideally, share a specific example with context.
What was its intended use/purpose?
How was it actually used?
I’ll start
I once created/used a regression testing checklist called the Release-o-Matic-3000.
The Release-o-Matic-3000 aimed to remind testers what to check during regression testing before a release. It also included some sanity checks to run after the release.
We used it as a team to run the important checks for our two-week release cycle. It worked well. It was simple. Yet it was also fallible, as we’d often become complacent about what was in it. Over time we’d skip parts if we were convinced no recent changes affected a certain area, and the spreadsheet made it clear what we’d skipped. Yet it was only really for our QA team; no one else would look at it. The rest of the team would just ask “Are we ready to release?” and we’d look at the sheet and say “Yep”. But that’s another story! At that point in time, not everyone owned quality, or felt as responsible for it, as our QA team did. Don’t @ me!
A story that comes up at work from time to time, as a cautionary tale about producing things that you think have value but actually don’t.
A colleague once put a baking recipe in the middle of a test report to prove that no one actually read the thing after it was delivered, and that producing it was therefore just unnecessary work.
Reminds me of Van Halen’s “No brown M&M’s” clause in their concert performance contracts. If there were brown M&M’s in the green room, they would know that the contract was not carefully read, and there could be safety concerns that the venue might not have addressed.
I once included weekly shoulder massages for all the testers as a line item in my test lab budget and it got approved. Sadly, my subsequent requests to book the massage sessions were rejected despite the budget having been approved.
I created video recordings for numerous projects where it was not possible to make screen recordings, such as:
We had a client who built weird DOOH (digital out-of-home) installations. One was a suite of games for a touchscreen 12 feet wide and 8 feet high. When you were close enough to use it, you could only see a small fraction of the display. We set up a video recorder to capture the whole screen area and all the hand/screen interactions, and analysed the video after each test to see if anything had happened outside our field of view. We could also investigate things that happened too fast to assess while doing the test, which often happens with games.
I was investigating a bug that was causing data loss due to incorrect synchronisation between a server and web-based clients. It appeared to be intermittent, but very few bugs truly are. I set up a camera to record the screens of the server and a couple of clients. Analysis of the recordings allowed me to identify a one-second “window of opportunity” during which a certain action would cause the incorrect synchronisation and data loss. It also provided irrefutable proof to the developers and management.
We were testing an auction website that had some complex functionality during the countdown stage. We needed to write manual scripts for multiple concurrent user inputs with precise timings, even for exploratory testing, and while we were applying those inputs there wasn’t time to assess what was happening on all the monitors - it was a mad frenzy of activity for about ten seconds. I arranged four monitors in a 2x2 matrix and set up a camera to record them so we could review what had happened afterwards.