I was reading an article on design principles, and of course it made me wonder if anything like that existed for the software testing world.
I scratched my head and couldn't recall anything like this, and then made the mistake of googling "software testing principles" before crumbling into software tester shame.
So please help me out here, are there any principles for software testing, and if so, what would they be? Can we create our own community sourced principles of software testing? What exists out there on this topic?
I would think that one of the principles should be around competence and intention.
As information providers using a mix of spoken, written and tool-based methods, we need to make sure that we are competent to use these platforms in order to do our role.
We need to make sure that the information we are giving is as truthful and complete as possible. None of this "throwing it over the wall" because it is too hard, too time-consuming, or too whatever to finish giving the complete story around our findings.
If we doubt our intentions, then we should be honest about this in our comms. I wanted more time to work on something recently, but there was no solid reason for it other than a slight unease about coverage. The best I could come up with when asking the team for the time was "...um, 20 years in testing, and please can I have the time?" They not only gave it to me but pitched in too, to see if they could find the "gut bug", and we did. It was a weird edge case, triggered by a tiny piece of corrupt data, but it was there. The conclusion was probability low but impact massive, so it was pegged for fixing in the next release.
Part of the competence has to be the ability to do something if we can, and the ability to research/train if we can't. Testers who say yes all the time cause issues. If you can think, "Well, I can't do this now, but it took me x time to learn to do y, which is similar", and communicate this, you are being the best tester you can be in terms of communicating your competence.
There have been so many half-hearted attempts to define principles of software testing (none mentioned, none forgotten), and even some whole-hearted ones. I'm not sure I like any of them - not even for inspiration.
After all, testing is a practical act: performing tests that matter for people who matter. Ish. Not something principled.
On the other hand, I'll try this one, which I'll call the software testing principle of transcendental relation:
I cannot know software until I test it.
(Yeah, good old Immanuel Kant is somehow hiding himself in that.)
The 7 Principles of Testing, taken from Foundations of Software Testing by Dorothy Graham, Erik van Veenendaal, Isabel Evans, and Rex Black
1. Testing shows the presence of defects
Testing shows defects are present but DOES NOT prove there are no defects. It reduces the probability of undiscovered defects remaining in software but does not prove correctness.
2. Exhaustive Testing is Impossible
Testing all combinations of inputs and preconditions is not feasible (save for trivial cases). Risk analysis and priorities should be used to focus testing efforts instead of attempting exhaustive testing.
3. Early Testing
Testing activities should be started as early as possible in the software life cycle, and be focused on defined objectives.
4. Defect Clustering
Testing effort should be focused proportionally to the expected, and later observed, defect density of modules. A small number of modules usually contains most of the defects discovered during pre-release testing, or is responsible for most of the operational failures.
5. Pesticide Paradox
If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new defects. To overcome the "pesticide paradox", test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system to find potentially more defects.
6. Testing is context dependent
Testing is done differently in different contexts. For example, safety-critical software is tested differently from an e-commerce site.
7. Absence-of-errors fallacy
Finding and fixing defects does not help if the system built is unusable and does not fulfill the users' needs and expectations.
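Principle 2 ("Exhaustive Testing is Impossible") can be made concrete with a quick back-of-the-envelope calculation. The figures below are illustrative assumptions (not from the book): even a trivial function of two 32-bit integers has an input space that no test suite could ever cover, which is why risk analysis has to guide test selection.

```python
# Illustration of why exhaustive testing is impossible: count the
# input space of a trivial function taking two 32-bit integers.

SECONDS_PER_YEAR = 60 * 60 * 24 * 365

combinations = 2**32 * 2**32          # every (a, b) pair of 32-bit values
tests_per_second = 1_000_000_000      # a very optimistic 1 billion tests/sec

years_needed = combinations / tests_per_second / SECONDS_PER_YEAR

print(f"{combinations:.3e} input combinations")
print(f"~{years_needed:.0f} years at {tests_per_second:,} tests/second")
# roughly 585 years - and that's for one trivial two-argument function
```

Real inputs (strings, files, timing, state) are far worse, so the practical move is exactly what the principle says: prioritise by risk rather than chase completeness.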
My answer is "the Google search appears to be working as designed", which is to say it is returning principles... but, as @rosie may be getting at, are they helpful, are they relevant?
That said, the developer principles are arguably not so great either, except at a very high level. For example, DRY (Don't Repeat Yourself)... knowing your code "should be DRY", does that principle alone give you techniques to avoid repeating code? Maybe you have to dig a bit deeper to find specific techniques? Perhaps, for example, only then would you know to refactor your code into a single generic method, meaning you can remove several bespoke methods which essentially repeat the same code?
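As a sketch of that refactoring step (all the function and field names here are invented for illustration), two bespoke methods that differ only in which field they total can collapse into one generic method:

```python
# Hypothetical DRY refactoring: two bespoke functions that differ
# only in the field they sum over.

# Before: repeated logic in each bespoke method.
def total_price(items):
    return sum(item["price"] for item in items)

def total_weight(items):
    return sum(item["weight"] for item in items)

# After: one generic method, parameterised on the field name,
# replaces both bespoke methods above.
def total(items, field):
    """Sum a numeric field across a list of item dicts."""
    return sum(item[field] for item in items)

items = [{"price": 10, "weight": 2}, {"price": 5, "weight": 1}]
print(total(items, "price"))   # 15
print(total(items, "weight"))  # 3
```

The principle names the smell; it's the technique (extract the common logic, parameterise the difference) that actually removes the repetition, which is the "dig a bit deeper" part.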
So maybe a slightly different question is: what principles do, or should, we have in software testing that Google isn't returning, and maybe we can list them here?
Also, if we think of a principle more as a pithy little nugget, then some of the chapter titles in James Bach's Lessons Learned in Software Testing (co-authored with Cem Kaner and Bret Pettichord) may stimulate further reading/investigation (see the pages in Amazon's Look Inside tool for all the chapter titles). When I was starting out in testing I'd like to have known a principle like the one in a chapter title: "you will discover things which will 'bug' someone whose opinion matters". Cute!
My first Test Manager back in the mid/late 1990s had "Seven Rules of Software Testing". She had come to software testing from infrastructure project management, where she was responsible for integrated hardware and software testing on a very, very big project of national significance (a big power station on the coast of North Wales).
Sadly, with this being before the Internet was a Thing for most people, these seven rules have never appeared anywhere online (unless anyone out there knows differently), and I've lost the two sheets of A4 they were printed on. I only remember two:
"A software tester is a devious person who devotes their time to destroying something that a kinder, gentler soul has spent their life creating"
and, possibly more relevant (and something I've found much more useful!):
"A software tester should always be sufficiently open-minded to add new tests to their test plan. They should never be open-minded enough to remove a test from their test plan." (I always assumed that the rider to that second one referred to pressure from devs or management to limit test scope.)
I suspect there was some slightly less than professional baggage in my test manager's previous working relationships with her predominantly male colleagues, so there could well have been some self-defence in that first statement.
I like your "Discovering the unexpected..." line...
The seven principles in the ISTQB Foundation (referenced by Kelsey in an earlier reply) were a real eye-opener for me, as I stumbled into software testing before that was even a thing (in 2003). I had been testing in all the wrong ways for 5 years, so it was a real relief to learn these when I switched jobs and got certified.
I always use these when educating our own new testers who have not taken the course/certification or have no experience with testing. We hire some of our testers from the health domain, as we also need that knowledge.
I wouldn't so much call them "Principles" as "Qualities" or "Attributes" or "Abilities".
And here they are:
Ability to spot a deviation from an established expectation (against the requirements)
Attention to detail
Keeping the big picture of the application's usability in mind while validating and verifying the software at hand (domain expertise)
These attributes, I would say, are what constitute a great tester. If these are present, other skill sets (soft skills, automation, etc.) can be learned and augmented.