Ostrich Effect - Featured TestSphere Card

One hundred cards. One hundred Test-related concepts.
Here on The Club, we’ll feature a card from the TestSphere deck every month for people to write their stories about.

I challenge you:
Take a few minutes to think about your experiences with the featured card.

What bugs have you found that are related? Which ones have you missed?
How have you tackled testing for this concept?
What made it difficult, or easier?
What have you learned? What can others learn from your experience?

Take one of those experiences and put it to prose.
Telling your stories is as valuable to yourself as it is to others.

A frightful bias. I remember a distinct hesitance to log a certain bug, because I knew it was created by a developer who’d get angry.
It was a bug that might not really be a bug, but more of a miscommunication, a misunderstanding.
This developer didn’t like having bugs logged on his features, for who knows what kind of reasons.

I should’ve figured out why he didn’t like it. I should’ve talked to him and tried to explain my stance, my concern, my aim to support us all in building a quality product.
But sometimes, the results don’t seem to outweigh the trouble and you stick your head in the sand. You don’t log that bug, you don’t have that talk.

What’s your story?


Once, a long time ago, our team created a new feature for our product. The feature had to be built for many reasons: not having it would cost us money, would cost our customers money in the long term, and would mean our suppliers would walk away. The problem was that the new functionality would give the appearance of our customers losing money in the short term. This meant that if we succeeded, nobody would be happy.

But, we stuck our heads in the figurative sand and trudged on. We assumed that the right messages would go to the right customers and, while they may not be happy, they would at least understand why.

So after we finished and sent the code out into the wide world, our product’s users stopped just short of bringing pitchforks to our doors (but not short of calling in the lawyers). It seems nobody had told marketing and our directors that we were making a difficult-to-swallow pill, so they did nothing to communicate its advantages, and we left them to deal with the fallout.

From a pure test standpoint, we stopped pulling our punches. In every test report since that time, and in every demonstration, we mentioned the risks as we saw them. We didn’t stop our reporting at “It works according to the specifications”; instead we talked about how we thought customers would react to the product. This changed the test discussions from “What is the requirements coverage?”, “What kind of code coverage do we have?” and “Are there any bugs?” to “How do we feel?”, “Are there things which aren’t required that might make this better?” and “Are there any potential issues?”, among other things.


Many years ago we had a new test manager who introduced bug counts as a measurement. For testers, raising more bugs was good, but for developers, having more raised against them was bad. This led to conflict and arguments. I realised that it was the label ‘bug’ that was the problem, so I stopped raising bugs. Instead I just described the behaviour to the developer and asked whether it was desired or not. Devs were happy there were no bugs raised, quality improved, and I wasn’t bothered how I was rated.

That takes all of the bias out of raising issues as you are only describing what you see. I spoke about this at Leeds Tester Gathering earlier this month. I’ll add a link to the video when it’s up on YouTube.


Hey Adrian, warning people not to use bug counts as a metric is a fight we all keep having. Do share that link to your talk later.
My one suggestion, when I find myself here, is to ask why my developer team aren’t doing some of their own testing, i.e. owning more of the requirements, the feature integrations, and the things they can test themselves, like automated integration testing. For some kinds of software, early bug detection is the biggest money saver, so it helps to reframe defect metrics around customer impact and around early detection as a way to reduce risk and defect costs. Usually you will find your testers are finding more issues than the devs can, because the product is hard to test and only the testers have the time to set the system up, for example. Having a metric on how long it takes to run a specific test from ground zero is a useful discussion point, to drive up testability and ultimately shift bugs back into a place where developers can almost self-medicate. This kind of setting-up question might get dev and test talking to each other more.

The downside to automating, or just making testing easy and fast, is that devs will find issues and not raise them at all, but rather fix them right away before committing code into the main branch.


Will do, Conrad. I’m very fortunate that I’m now in a team where everyone really does own quality and the devs are happy to test. I like to think that’s in part due to my approach and being an advocate for testing.

I’m curious why you see fixing things right away, or automation, as a downside? I’m not sure I understand your point on this. Personally, if we can go through a sprint, or a number of them, without wasting time raising bugs, but knowing we’ve done things to improve the product or make it more stable, then I only see this as a win. Please help me understand your point, thanks.


In my (now previous) position we had remote integration testing teams, who often found bugs (out of band with development) simply because they were a bigger team and were testing more of the system interactions.

So with much of the automated testing happening outside of the team, we got a very different class of defects being raised. Teams got pressure to spend time fixing these often minor or unrealistic bugs even when a deadline was looming, because the external testers scripted tests with the intent of breaking things, not proving that things work: they got credit for bugs found, not for features we shipped successfully.

But my key point is that when a team owns the whole enchilada (like where I work now), the defect density metrics become less of a stick that management can beat you with.

I’ve only recently purchased a pack of TestSphere cards, and created a blog post about a random card I picked. The first card I picked was this one.

My story is not an example of the Ostrich Effect, but it easily could have been. It was just a couple of hours before a planned release and I was doing some last-minute exploratory testing. Disaster struck: I found a bug. It was one that would only affect a small number of customers (if any), but if it did occur the consequences could have been severe. I recommended to management that we push back the release while we fixed this bug, and fortunately they listened. We released two days later, once the bug was fixed and the application retested.

Things could have gone very differently. For a number of reasons, the tester might have decided not to report the issue (fear of being blamed for not finding the bug sooner, uncertainty about whether the bug was severe enough).

Or, the team might have decided to ignore the bug and go ahead with the release (pressure to release without fail, or not caring any more and just wanting the application out of the door so they didn’t have to worry about it anymore).

The full blog post can be found here:


I think I had only used my pack once, despite having had it for about a year.

Today is release-button day, so I have chosen 3 of the decks: Patterns, Heuristics and Quality Aspects. I got the developers to draw only 3 cards and told them to choose any of the 3 ideas on any one card, to stimulate thoughts and discussion. I got a good test case out of this exercise, which only took 4 minutes. (The test does pass; I had just forgotten to capture it.)

I am now keen to do this once in every retrospective session. Thank you for the great idea source.
