Most of the time, when we do automation, we compare the expected result with the actual result: there is a set of pre-defined steps, and after executing those steps we get certain results, which we verify against the expected results with the help of various assertions.
But in all of this we miss the exploration, the probing, and the critical thinking that actually shape test design, test strategy, etc.
So, if we have a checklist of expected results and focus only on matching them against actual results, are we doing testing or just checking … because testing goes beyond checking. And if so, is it okay to use the word “testing” alongside automation?
It is a test, and one would hope not the only test. But every test needs exploration, and a regression test (which is what automated tests fall under) is just a checklist of “does X, Y, Z still work”.
Automation can be either. It can be used in an exploratory manner to learn about a system rather than to check for expected results, but very few people do that. Those who do always seem to be testers - I have never heard developers talk about it. The vast majority of people just use automation for mundane, checklist-based checking.
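To make the contrast concrete, here is a rough TypeScript sketch; the /search endpoint and the expected result count are invented for illustration:

```ts
import assert from "node:assert";

// Checklist-style automation: a fixed input and a hard-wired expectation.
async function checkSearch(): Promise<void> {
  const res = await fetch("https://example.test/search?q=shoes");
  const body = await res.json();
  assert.equal(res.status, 200);          // pass/fail against a known expectation
  assert.equal(body.results.length, 10);  // hard-wired expected result
}

// Exploratory automation: varied inputs and no verdict - just observe and
// log, so a human can spot surprises worth investigating further.
async function exploreSearch(): Promise<void> {
  const probes = ["", "a".repeat(10_000), "'; DROP TABLE--", "🦆", "  shoes  "];
  for (const q of probes) {
    const res = await fetch(`https://example.test/search?q=${encodeURIComponent(q)}`);
    console.log(JSON.stringify(q.slice(0, 20)), res.status, res.headers.get("content-type"));
  }
}
```

The first function can only confirm or deny what we already expected; the second produces observations that still need a human to interpret.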
If your automation is just verifying expected behaviours, it’s checking, not testing. Bach and Bolton have written at great length on the topic.
Depends on which school / group of people you ask. Different people have different takes on this.
As I go with Rapid Software Testing, I say basically yes.
But don’t forget that it’s not “testing VS checking”. Michael and James started with an article under that title in 2009, but they published a new one in 2013. Since then they consider checking one activity within testing, among many others. Testing is the craft in general. And I agree with that.
You may read more here.
Testing in general, as the activity of a critically thinking person, can’t be automated. But different tasks within it, like checking, can be.
I typically call that “check automation”, although automation can also be used for far more than just checking in order to support the testing.
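For instance, here is a tiny invented sketch of automation that checks nothing, but still supports testing by generating varied data for a human to explore with:

```ts
// Non-checking automation: generate diverse test users as raw material for
// exploration. No assertions; the tester judges what the system does with them.
type User = { name: string; locale: string; age: number };

function randomUser(): User {
  const locales = ["en-US", "de-DE", "ja-JP", "ar-EG"];
  return {
    name: "user-" + Math.random().toString(36).slice(2, 8),
    locale: locales[Math.floor(Math.random() * locales.length)],
    age: Math.floor(Math.random() * 120),
  };
}

for (let i = 0; i < 5; i++) console.log(randomUser());
```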
It used to be checking, but with ChatGPT’s capabilities I nowadays do more exploratory stuff directly in “automation code”, both Cypress to test GUIs and Perl to test APIs. It’s so easy to do what-ifs.
So there’s a “grey zone” between manual and automation for me nowadays.
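This rough sketch, with an invented /orders endpoint, is the kind of what-if I mean: take a known-good payload, perturb one field at a time, and just look at what comes back instead of asserting a fixed answer.

```ts
// What-if probing of a hypothetical /orders API: no expected results,
// only observations of how each perturbation is handled.
const base = { item: "book", qty: 1, coupon: null };
const whatIfs: Array<[string, unknown]> = [
  ["qty", 0], ["qty", -1], ["qty", 1e9], ["coupon", ""], ["item", null],
];

for (const [field, value] of whatIfs) {
  const payload = { ...base, [field]: value };
  const res = await fetch("https://example.test/orders", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(payload),
  });
  console.log(`${field}=${JSON.stringify(value)} ->`, res.status, await res.text());
}
```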
If you put in specific fuzzy data and expect a specific output => checking.
If you put “any” fuzzy data in and want to observe openly how the software reacts to it => testing.
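A minimal sketch of the difference (parseAge is an invented example function):

```ts
import assert from "node:assert";

function parseAge(input: string): number {
  const n = Number(input);
  if (!Number.isInteger(n) || n < 0 || n > 150) throw new RangeError("invalid age");
  return n;
}

// Checking: one specific malformed input, one specific expected reaction.
assert.throws(() => parseAge("-5"), RangeError); // fixed expectation => a check

// Testing-ish: throw "any" garbage at it and observe openly; only surprises
// (unexpected successes or unexpected error types) are logged for a human.
for (let i = 0; i < 1000; i++) {
  const junk = String.fromCharCode(
    ...Array.from({ length: 8 }, () => Math.floor(Math.random() * 0xffff)));
  try {
    console.log(JSON.stringify(junk), "->", parseAge(junk));
  } catch (e) {
    if (!(e instanceof RangeError)) console.log(JSON.stringify(junk), "->", e);
  }
}
```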
My point is that even with so-called fuzzy data that doesn’t align with interface definitions etc., you can be in the realm of checking when you expect specific (fuzzy) data to result in a specific behavior of the program. Some call these unhappy paths; I call it error handling. You can check that with automation.
I say: no matter which type of data and expected program behavior you have, as long as you have fixed expectations (at the most extreme, “hard-wired” as coded checks), it’s checking.
When you approach it more openly, not judging outcomes as either correct or wrong in the first place, you are testing.
And maybe, once you have specific invalid data and check for specific error handling, it isn’t fuzzing anymore?