I recently saw a blog post called 41 Definitions of Software Testing. While reading it, one missing word really struck me: risks.
I didn't see any mention of risks in the 41 definitions. I personally found the definition from the Software Testing Clinic to be a more well-rounded one. Having recently taken RST with @mb1, I don't think I could write a post about this without mentioning his blog post Testing is…
I'd like to discuss our own definitions of software testing in a safe space where we might refine them as we discuss.
How do you yourself define software testing? Does it differ from any of the suggestions above?
I do like the way MB used the language "among other things". In my experience, a role in a team cannot be defined by a simple "this role is" statement. Not testing, not programming, not managing. Further, most job adverts which say "testing" miss the mark about what they really need in a tester.
So, I'll begin with things that aren't in either list:
Testing is, among other things, helping your team to identify the risks of releasing your product in its current state.
Testing is, among other things, helping your team to understand the risks of releasing your product in its current state.
Testing can be (not always is), among other things, acting as an advocate for our customers.
The basic definition I tend to use is "the discovery and investigation of risk".
I like this one because it fits well with early testing of ideas, with whole-team collaboration in discovery and investigation, and above all with the general "activity" nature of testing.
It also fits with my preference for testers who wear a sort of highly technical Sherlock Holmes hat.
I do, though, occasionally work with test missions that don't lean entirely towards that definition, but I have not yet found it valuable to have one definition of testing that covers all possible testing missions.
I attempted to put some thoughts into words (with the help of a freelance writer) in this blog post.
I think everyone should develop their own pitch for:
Explaining software testing to a 5-year-old, a 15-year-old, a product manager, a software engineer, and a colleague. Kinda like how this video tries to explain cryptocurrency on 5 different levels.
At the risk of digging up an old thread, I'll try to refer back to the original question.
I've been thinking about the essence of "testing", and I think the core of it is really "seeking answers to questions".
Sometimes questions are asked explicitly, sometimes we must state them ourselves. Sometimes we must provide a thorough answer supported by various data (and we must prove that our supporting arguments are correct as well!), sometimes a quick chat is enough. Sometimes it takes us weeks to find an answer, sometimes we have it in a matter of minutes. Sometimes we must answer the questions that are meant, not merely the ones asked. Sometimes we must explain why our answer is sufficient. Sometimes we must work with clients to help them understand the consequences of their questions and the effort required to answer them, and to help them refine their questions. And sometimes we fail to find the answer we were looking for.
Yes, such a definition is not specific enough. Many other professionals seek answers as well - scientists immediately spring to mind - and yet they are not testers. There are many questions that testers have no interest in answering. We work within - broadly defined - IT, which limits the scope of our work in some way. And finally, there should probably be something about how our work impacts other people and the products we test.
Sometimes I feel that if we want to know the "real truth" about testing, all substantive definitions are doomed to fail and should be abandoned. If we "really" want to understand what testing means, we must define it through the usage of the word in the community of word-users, and recognize that its meaning changes and shifts, that it is elusive and sometimes self-contradictory. But then such definitions, even if they are charming to some, aren't really helpful.
I have tested a lot of software over the years and from my experience, I would define software testing as a method to check whether the actual software product matches expected requirements and to ensure that the software product is defect-free.
Besides all of the above, I describe software testing as:
"it's not just pushing buttons"
You get a new piece of software/hardware that no one has ever seen, and you get the chance to break it down, nuke the server with performance tests, hack it, and criticize someone's work (in a proper way, of course)… and you get paid for it.
I would also like to add that it's change management; in many legacy settings, quality assurance is still largely lacking.