Ask Me Anything: Testing Essentials

Tonight we’ll be joined by the legend, @danashby, for an Ask Me Anything session all about the essentials of software testing.

We’ll add any resources mentioned during the session to this thread. We’ll also add any questions we don’t get to during the session :wink:

If you miss the live session, a recording will be available on the Ministry of Testing website once we’ve edited and added captions.


A question Vernon and Dan wanted to come back to

Hi Dan, what sources of information do you rate for “best practice” information? For example, I’ve been looking into quality metrics. The Accelerate book points towards rollbacks and away from code coverage, but currently it’s my only non-opinion-based source. Do you have any others?

Questions we didn’t get to

  1. Paul Coles: There was a question on The Club the other day. What do you think of not being able to link cases to requirements until you’ve written all the tests? Do you think this is the best way, or do you do it as you go along? Would we even need to link requirements to tests, outside of waterfall-type projects? (I think this is the Club thread Paul is talking about: Requirements Coverage on Tests)
  2. @lgibbs: What is the ultimate goal of software testing?
  3. Alexander Orlovsky: Testing is important, but how do you sell it? What are your major selling pitches?
  4. @major: Are there any Security Testing tools (Paid and Open Source) that you would recommend for both new & experienced Testers?
  5. @akshayagupta: What are the best practices for setting up native mobile app test automation strategies and tooling? Shift left, stubbing strategy, testing without a backend, whether to choose a tool like Appium or native tools, non-functional testing, etc.

Resources Mentioned

Find Dan on Twitter

Free software testing essentials training

And the Club area for that training course

A whole list of places you can practice testing in a safe space on products you have permission to test :wink: Products and sites to practice testing on

Drive (the book Dan mentioned) Drive | Daniel H. Pink

Accelerate book

Leading quality book

The art of business value book

Some talks about metrics

Fiona Charles hosted an EXCELLENT session about Critical Thinking recently, one to watch back Testing Ask Me Anything - Critical Thinking - Fiona Charles | MoT


Awesome! Yesterday was so much fun. Thanks for everything!
It was my first AMA, but I don’t know why I was nervous - I had a blast!
(Sorry for going off on so many tangents though :sweat_smile:)

These are all great questions. I’m glad we posted them on here so I can continue answering them.

I’ll answer them in separate replies below over the course of the next few days :slight_smile:


So, first question from Paul Coles…

Being completely honest, I’m trying to move away from test cases, pushing these checks towards automation. Test cases assess quality from the correctness perspective. They rely on explicit expectations: we form steps or actions that manipulate the software in a specific way, so that we can then assert the software meets those expectations - the output is pass or fail.
We can do all of this with automated tools. For the time it takes to create a test case (writing out the steps and restating the expectation within that specific test case artefact), with modern automation tools it can take just the same length of time to script the check too - especially if you link the script directly to any acceptance criteria within the requirement artefact. And of course, automation scripts need to be maintained, as a test case would be too.
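To make that concrete, here’s a minimal sketch of a manual test case turned into an automated check. The `LoginPage` class, its methods, and the credentials are all invented stand-ins for whatever page object, API client, or tool your team actually uses - the point is just that the explicit expectation from the test case becomes an assertion with a pass/fail output.

```python
# Hypothetical example: the test case "a registered user who logs in lands on
# the dashboard" rewritten as an automated check. LoginPage is a stand-in for
# a real page object or API client, not any specific tool's API.

class LoginPage:
    """Minimal fake of a login flow, for illustration only."""

    def __init__(self):
        self.current_page = "login"

    def log_in(self, username, password):
        # A real implementation would drive a browser or call an API here.
        if username == "alice" and password == "secret":
            self.current_page = "dashboard"
        return self.current_page


def test_registered_user_lands_on_dashboard():
    page = LoginPage()
    # The explicit expectation from the test case, asserted as pass/fail.
    assert page.log_in("alice", "secret") == "dashboard"
```

Run under pytest (or any runner), the check either passes or fails, just like the test case would - but it can now run on every commit.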

With regards to linking testing artefacts to the requirements, I’m a fan of doing so - and of doing it early in the development process to give visibility of the state of the testing too. That might be test cases (if you still use them), test charters for exploratory testing, or automation scripts (tools like Jira enable you to link code from GitHub or Bitbucket really easily to a user story). Results should ideally be traceable within the requirement too.
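One lightweight way to get that traceability at the script level is to tag each automated check with the ID of the requirement it covers, so a report can group results by requirement. This is a hedged sketch, not any particular tool’s feature - the decorator, attribute name, and `STORY-123` ID are all invented for illustration:

```python
# Hypothetical sketch: attach a requirement ID to each automated check so a
# simple report can trace results back to the requirement artefact.
# The decorator name, attribute, and "STORY-123" ID are illustrative only.

def requirement(req_id):
    """Decorator that records which requirement a check covers."""
    def decorator(fn):
        fn.requirement_id = req_id
        return fn
    return decorator


@requirement("STORY-123")
def test_user_can_reset_password():
    # Placeholder for the real check.
    assert True
```

A small reporting script (or a pytest plugin hook, if you use pytest markers instead) could then read `requirement_id` off each test and publish pass/fail results per requirement - the kind of traceability auditors typically ask for.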

In my current context at Ada Health, this is essential for compliance reasons - auditors need to see traceability between the requirement artefacts and the testing artefacts, along with the testing results. We are Agile too; the development methodology doesn’t make a difference in the eyes of the auditors.


Question 2 from Louise!

I love this question! It’s one that many people might struggle with, and certainly most of us would overthink it. I’m going to try to keep my answer simple, and I’ll break it into three parts: the purpose of testing, the goals and objectives of testing, and the impact of testing. (And I’m probably going to go off on tangents with this, which will make my answer soooo not simple. Haha)

The purpose of testing, in my opinion, is the simple part - testing is assessing quality; its purpose is to assess the quality of what we are testing. And we test many things as testers on software projects - we obviously test software, but we also test the idea of the software, the designs (architecture, UI, UX, and code design) of the software, the requirement artefacts, and our development processes and ways of working within our teams. Each of these things has a “level” of quality that we can assess and uncover information about.

Testing has different goals and objectives too. Goals might revolve around the various activities of testing and around improving our testing (e.g. goals around structuring our testing, reporting our testing, or improving our testing flow). Objectives relate to the outcome of our testing - its purpose: sharing the information from our assessment of quality in different ways to help stakeholders (including our teams and ourselves) reach an intersubjective consensus on the quality of the thing we have tested, so that an informed decision can be made about whether its quality is good enough or needs to be improved.

Now, impact takes this to the next level, and it’s super important, so I really wanted to include it in this answer. Impact is how the outcomes of meeting an objective actually affect people. In our case, this is how the outcomes of sharing information about our assessment of quality affect people - be it ourselves, our team, our business, our customers and, of course, our users. The impact of our testing also goes hand in hand with the outcomes of the decisions made from our information. BUT how we choose to inform about our assessment of quality - even the words we choose when reporting on our testing - has an impact on the outcome of those decisions. We need to think about that…

Sorry for another really long answer… turned out that maybe it’s not so simple after all! :sweat_smile:


The recording is now live!