What misconceptions should senior testers be educated about that they may have picked up during their career?

Perhaps obviously, this question was triggered by one @mwinteringham asked → What misconceptions are junior testers taught early on around testing and quality?


"manual testing vs automated testing" :smiling_imp:

I think some of the points from the junior thread apply to older testers as well. (I intentionally did not use "senior" or "experienced".)
I see more than a few older testers stuck at the Expert Beginner stage (scroll some pages down):


What BDD really is (i.e. not Cucumber) and how to do it with a pen and paper.


I'm having a fascinating/frustrating experience at work that I think applies here. They aren't test engineers, but it's things I've heard from testers.

I think it is best summed up with two quick points.

  1. Automation doesn't solve everything.
  2. Being gatekeepers of quality isn't a good thing.

Curious about "how to do it with a pen and paper".
Doesn't BDD require the creation of an executable specification (the "driven development" part of the acronym)?

No, just a specification. BDD is having conversations around concrete, formalized example scenarios so as to agree a shared understanding of how the app should behave.

Turning those scenarios into an automated test and using that to drive development is ATDD.
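To make that split concrete, here is a minimal sketch (the loyalty-points domain, scenario text, and all function names are invented for illustration). The scenario text alone is the BDD artefact you can produce with pen and paper; binding each line to executing code, as below, is the ATDD step:

```python
# The pen-and-paper artefact: a concrete, formalized example scenario.
# (Hypothetical domain: a loyalty-points account.)
SCENARIO = """
Given a customer with 90 loyalty points
When the customer earns 25 points from a purchase
Then the customer has 115 loyalty points
"""

class Account:
    def __init__(self, points):
        self.points = points

    def earn(self, points):
        self.points += points

def run_scenario(text):
    """The ATDD step: bind each scenario line to code and execute it."""
    account = None
    for line in text.strip().splitlines():
        keyword, _, rest = line.partition(" ")
        numbers = [int(w) for w in rest.split() if w.isdigit()]
        if keyword == "Given":
            account = Account(numbers[0])        # set up initial state
        elif keyword == "When":
            account.earn(numbers[0])             # perform the action
        elif keyword == "Then":
            assert account.points == numbers[0]  # check the outcome
    return account.points

print(run_scenario(SCENARIO))  # → 115
```

The conversation around the scenario text can happen, and deliver most of its value, before any of the binding code exists.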


If BDD doesn't imply executable specifications, how can the scenarios drive the development?

Additionally, the BDD in Action book has a chapter named "From examples to executable specifications". Dan North's Intro to BDD article is all about executable specifications, and he also said BDD is simply TDD from a different perspective (I assume we agree TDD is about executable specifications).

Most people drive their development from non-executable specifications. I've done word-document-driven development more times than I'd care to remember.

It's not cheap to set up the tooling necessary for building effective executable specifications. It's also risky - it fails a lot, and a lot of the tooling sucks.

On most projects I've joined, the infrastructure isn't there from the start. I will always try to introduce it gradually, but before I do I will still try to introduce people to BDD with pen and paper, and shift the mindset from "have an idea → throw it over the wall for developers to implement" to "have an idea → bat it back and forth via formalized examples".

Ok. You talked about the process of moving people towards BDD.
But BDD itself, the practice definition, is about executable specifications.

"BDD with pen and paper", as the qualifier itself implies, is something different from BDD,
since the examples don't drive the development; the developer's interpretation of the examples drives the development.

As a variation of TDD, BDD is similar to double-entry bookkeeping: if you change one side, it becomes incompatible with the other. With "BDD with pen and paper", you can change either side without anything in your system saying there is a problem.

I would be careful in using this expression: it contains the phrase "Driven Development", whereas the practice doesn't implement this connection.

Ok. You talked about the process of moving people towards BDD.

No. We are doing it from day 1. Automation is what comes later, not BDD.

There is a benefit in doing BDD around an executable specification, but 90% of the benefits come from how it shapes communication with people. It's 100% about a mindset shift in the approach to developing specifications.

the practice definition, is about executable specifications.

No, it's not:

Although BDD is principally an idea about how software development should be managed by both business interests and technical insight, the practice of BDD does assume the use of specialized software tools to support the development process.

A quick jump to tooling actually usually ruins any attempt at BDD. The executable spec DSL must match how the stakeholders express themselves and integrate seamlessly with test automation tools to work.

In practice I've seen this fail more often than not, especially when the tool is Gherkin-based.

As a variation of TDD

BDD was inspired by TDD, but it's not a variant of it. ATDD is the variant of TDD that uses executable specs.


I can, and will, go further. "Executable specs" are lies that at one end prove a misunderstanding between the role of computerised checking and the human understanding of implicature - the metaphorical nature of the affordances of language versus the abstraction of computer languages down to mechanical processes - and at the other end sell the idea to non-testers that testing can be automated, giving false impressions of the high capability of tools and the low capability of testers. Even when it works, it's still failing.

Can you go into some more detail about what you mean by this?

I have the impression that you may be saying that it doesn't make manual/exploratory QA redundant (which I fully agree with), but are you saying something stronger than that?

I have the impression that you may be saying that it doesn't make manual/exploratory QA redundant

In a way, I am. In my namespace that's all just testing, for which many tools are used, coded checks being one. That being said, I consider BDD a separate concern from testing in many ways - it's a development approach to guide building software. I don't think of it as testing; I think of it as using checks to help steer development towards some desired value. As soon as you start using it to test the product you're stepping into a different arena with different goals and risks, and you need to be thinking about serving a test strategy - to be able to defend the cost of writing, running, maintaining, investigating and reporting that check as a responsible tester.

are you saying something stronger than that?

Yes. Human language and computer language are fundamentally different, and to pretend that one means or infers the other can be both useful and dangerous. The further away from an expert you take an executable spec, the more misinformation you deliver. We shouldn't trust flavour text in such a simplistic way.

We write checks that attempt to describe a complex idea full of implicature and tacit understanding - obviously technically incorrect and incomplete but acceptable and useful for the goal of guiding development. Then we try to reverse the flow by using the descriptions to describe the checks, which I believe to be a bigger and much more important mistake. We make checks a subset of the testing of a described specification, okay, but we cannot pretend that the described specification is fulfilled by the code underneath. To use it as a communication tool outside of those with enough tacit understanding of the weaknesses is reckless and irresponsible. It also breeds the idea that code can perform descriptions of testing, which is fundamentally incorrect in terms of code, and insulting and dehumanising in terms of testing.

Other ideas are tied up in this misunderstanding as well. The belief that testing can be automated, that checking is sufficient, that "executable specs" are really specs that are executable, that BDD automatically performs useful testing beyond its scope; all helping to bury abstraction loss under convenience and degrade the perception of the craft of testing.
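The gap being described, between a business-readable description and what the check actually compares, can be made concrete with a contrived sketch (all names invented for illustration). The description promises a report is saved; the coded check only compares a return value, so it passes even though nothing was persisted:

```python
# Business-readable description: "Then the user's report is saved"
# The coded check below only verifies the return value; it says nothing
# about whether the report was actually persisted anywhere.

saved_reports = []  # stands in for real storage


def save_report(report):
    # Deliberate bug for illustration: the report is never written to
    # storage, yet the function still signals success.
    return "ok"


result = save_report({"id": 1})
assert result == "ok"            # the coded check passes...
assert len(saved_reports) == 0   # ...while the described behaviour never happened
```

Anyone reading only the description would reasonably conclude the saving behaviour is verified; the abstraction loss is invisible from the business-readable side.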

Yes. Human language and computer language are fundamentally different, and to pretend that one means or infers the other can be both useful and dangerous.

Well, it can, as all languages can; it's just easier to screw up this communication with the English language because it defaults to vague and verbose.

The value in the back-and-forth aspect of BDD is often in filling in those gaps. This is where a tester's acumen can be valuable before a line of code is written - they are often good at spotting the details left out of the spec.

BDD certainly works much better if your communication centers around a DSL that doesn't suffer from this issue. I call this the domain appropriate scenario language.
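"Domain appropriate scenario language" is the poster's own term; one plausible reading is an internal DSL that borrows domain vocabulary directly rather than generic Given/When/Then phrasing. A hypothetical sketch (the basket, the checkout function, and the discount rule are all invented):

```python
# A hypothetical internal scenario DSL: the scenario is ordinary code,
# but reads in domain vocabulary. Basket, checkout and the 10%-over-50
# discount rule are invented for illustration.

class Basket:
    def __init__(self):
        self.prices = []

    def containing(self, *prices):
        self.prices.extend(prices)
        return self

    def total(self):
        return sum(self.prices)


def checkout(basket):
    # Invented domain rule: 10% discount on totals over 50.
    total = basket.total()
    return round(total * 0.9, 2) if total > 50 else total


# The scenario reads close to how a stakeholder might phrase it:
basket = Basket().containing(40, 15)
assert checkout(basket) == 49.5  # discount applied, total was 55
```

Because the scenario is plain code in domain terms, there is no separate natural-language layer to drift away from what the check actually does.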


it defaults to vague and verbose

This is a bit of an aside, but it's almost constructed from a system of vagueries. English, and I imagine all human language, is built from what Guy Deutscher calls "a reef of dead metaphors", using concrete terms to make references to the abstract. Time is a common example, such as "a short time passed", despite the fact that time is not of a physical length, nor moves in a way that it can physically pass. Every word then becomes a worn-down version of its historical meaning, affordances on affordances like animals decaying down into chalk, until its metaphorical nature is folded into everyday speech. Even the word metaphor is from the Ancient Greek, meaning to transfer or transport, from μετά (meta = with, across, after) + φέρω (phero = I bear, carry). In English we take it to mean a transfer of meaning, but in Greece you can do metaphors between your bank accounts.

My actual point is that computer language describes a mechanical and deterministic endpoint - the way electricity is moved on the physical level, whereas human language is interpreted as part of its communication. One speaks to metal, the other speaks to people. Another way to phrase the concept would be to ask who is trying to understand the language, a deterministic machine or a particular human being, and everything that comes with that.

Coders know that computer language syntax is a shorthand for abstractions down to the metal, but if we take descriptions of them and use them to communicate outside of that domain of knowledge we present the idea that the "tests" are really testing, and really doing what they say they do, which is actually our clumsy way of describing deterministic-interpreted systems with human-interpreted communication.

they are often good at spotting the details left out of the spec.

I don't think that BDD is testing, and the use of its checking artefacts in a wider test strategy involves a lot more communication than the business-readable test implies. Testers use specifications for reference, but obviously no system is fully described by specifications, so testers work outside of written specs a lot of the time - the unwritten specs and infinite tacit possibilities ("program should not wipe user's hard drive"). I'm happy to help at the design stage to build better specs so we have fewer surprises later, and I'll give feedback to people doing BDD, but honestly, as a tester, specifications are much more powerful as human ideas than computerised ones. I can use them to build models, influence my testing and direct my risk assessments. Knowing that there's a check out there that can tell me one particular example a human-built system says can work is useful, but it doesn't address the larger part of the problem - will it work?

If by "filling in those gaps" you mean educating business people on the limitations of checking, then I'm not certain that it's working in a general sense, nor necessary in a specific one. My role as a tester is to communicate and report what people need to know, in a responsible way, to people who matter, and I'm not sure we need business-readable BDD descriptions to communicate in that way; I think that's creating work that creates problems, and I don't know what it's supposed to be for. As for other communication, it's still tricky - when we say that the spec represents even some checking, we make an assumption. If we say that it represents testing, that's a much bigger assumption. Assumptions are where bugs live. After all, we assumed it would work when we built it - we didn't build it to not work. Making those assumptions (or more than we need to) of a system that is supposed to make fast, small, accurate, pseudo-repeatable comparisons seems to me to be a little reckless…

BDD certainly works much better if your communication centers around a DSL that doesn't suffer from this issue. I call this the domain appropriate scenario language.

… which is why I like your domain appropriate scenario language. It is a significant improvement, absolutely no doubt, and I'm glad to see it. It's just interesting that the further away from business-readable we get, the lesser the problem becomes. My problem is with the gap between the "business readable" description and the actual events that take place. Business people won't read the code; they'll read the description and assume that the code "tests" for that eventuality. A lot is tacitly communicated by this, or more accurately it's left unsaid - not everyone understands the difference between whether something can work, and whether it will work under various circumstances, for extended periods of time, under many different pressures and challenges. It assumes the correctness of the code - the parser, check code, DB and interface interactions and so on. It doesn't mention the abstraction losses that we pay for the privilege of coded checks. It doesn't explain the difference between testing and checking to people who may conflate the two (people assume written test cases are fungible with an automated equivalent, for example).

Stuff that a good tester and good automator knows well - tacit information not reported; in this case in favour of a business-readable format. That's dangerous. We're potentially promoting testing's obsolescence through omission, advertising it as equivalent to shallow checking, and I think we have to take better responsibility for our reporting. I think BDD is a great idea, but to tie it into automatic checking and let our business-reading test clients believe that we've achieved an important part of our test strategy as a result is wrong.

So I say leverage from the spec to the code with all your might, and use the checks as traffic cones to steer your development toward stated behaviours, just be very careful of mapping the other way around - the spec does not describe the checks, it merely influences their construction, and to state otherwise is not without consequence.
