Confirmation bias experience

So I did a thing at work today. I was asked to identify why an account couldn't be updated. I had an idea and was so convinced it was the cause that I only tested scenarios that would back that idea up. A simple attempt to actually update the account would have shown me that my idea was wrong in this case, but I somehow chose not to do that. Confirmation bias at its finest, I think. I'm feeling terrible now because I've essentially wasted a day chasing the wrong idea.

Has anyone else experienced something similar? Maybe then I won't feel so bad :slight_smile:

6 Likes

Thanks for your honesty and openness, @testerbere.

Absolutely! And it happens all the time.

I've lost count of the number of times I've gone down a recreation rabbit hole, keeping my focus narrow in an attempt to prove the thing. On reflection, in each of these typical instances I could have highlighted something sooner:

  1. Usually trying the simplest thing would have recreated it; instead I'd go for convoluted ways to make sure I had it exactly "recreatable"
  2. It's an environment issue, yet I'd always forget to check that first and end up wasting time going deep
  3. I'd go so deep and narrow when I'd likely have benefited from going short and wide first

We testing folks can be hard on ourselves. I'd place a heavy bet that you're not alone, Ere. I hope you can make peace with it and that the bad feeling will soon pass.

2 Likes

Thank you very much, Simon. I can really relate to the instances you highlighted. Here's hoping for fewer occurrences of these in the future!

1 Like

Yup, I've had that too. In my case, it happened not as a newbie tester (back then I was keen on exploring what was actually going on), but as a mid-level tester (when the knowledge I'd gained had gone to my head: "yeah, I know this system through and through"). After experiencing confirmation bias, I did some introspection and realized what was amiss: I had let presumption take its toll. So after that, I decided to go back to a truly exploratory approach, always remembering "it happened before, don't let it happen again!" Cheers @testerbere, we all did it at some point!

2 Likes

Thanks for sharing, @agw.

That reminds me of a time I fell into the regression testing complacency trap. I wonder if there's a bias name for it. :thinking:

The sales team rushed into our office to point out that ad revenue had suddenly taken a dip. It turned out my fellow testing colleague and I had signed off a release without realising all the AdSense ads had disappeared. Both of us were so used to running regression that we no longer used the regression test checklist. We literally didn't see what wasn't in front of us: the missing AdSense. It was a big learning day for us both!

We resolved the issue quickly, and the post-mortem led to a push for automated checks of the essential features and a renewed energy for not taking things for granted or becoming complacent with regression tests. On reflection, it was very likely the start of our move towards a whole-team approach to quality instead of relying on QA folks to sign off a release.
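
To give a flavour of what "automated checks of the essential features" can look like, here's a minimal sketch of an ad-presence smoke check. It's an illustration only, assuming the usual Google AdSense embed (the adsbygoogle script and `<ins class="adsbygoogle">` slots) and a placeholder URL; it isn't the exact check we built.

```python
# Minimal smoke check: fail loudly if the AdSense markup vanishes again.
# HOMEPAGE is a placeholder; a real check would target your own pages.
import requests

HOMEPAGE = "https://www.example.com/"  # hypothetical page under test


def test_adsense_markup_is_present():
    html = requests.get(HOMEPAGE, timeout=10).text
    # The standard AdSense embed loads adsbygoogle.js and renders
    # <ins class="adsbygoogle"> slots, so "adsbygoogle" disappearing
    # from the page source is exactly the regression we missed.
    assert "adsbygoogle" in html, "AdSense markup is missing from the homepage"
```

Running something like that with pytest on every build would have caught the missing ads long before the sales team did.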

1 Like
  1. It's an environment issue, yet I'd always forget to check that first and end up wasting time going deep

This! I can't count the times I was banging my head against the wall, asking myself why it wasn't working, when I only needed to check some environment or config value to realize the cause. Sometimes it was something as simple as my VPN being down, so I couldn't access the internal API! :smiley:

Sometimes a simple Post-it note, plastered on the side of your monitor, helps! On it you'd write something like this (or script the same checks, as sketched after the list):

- VPN?
- env?
- timezone?
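
If the Post-it ever feels too manual, those same three questions can be answered by a tiny script run before a test session. A rough sketch, with a hypothetical VPN-only hostname, environment variable, and expected timezone standing in for whatever your setup actually uses:

```python
# Pre-session sanity checks mirroring the Post-it note: VPN? env? timezone?
# All names and values here are placeholders for illustration.
import os
import socket
import time

VPN_ONLY_HOST = "internal-api.example.local"  # hypothetical host only resolvable over VPN
EXPECTED_TZ = "UTC"                           # hypothetical timezone the test data assumes


def vpn_up() -> bool:
    """Crude VPN check: can we resolve a host that only exists on the internal network?"""
    try:
        socket.gethostbyname(VPN_ONLY_HOST)
        return True
    except socket.gaierror:
        return False


def env_ok() -> bool:
    """Are we pointed at the environment we think we are?"""
    return os.environ.get("TEST_ENV", "") in {"qa", "staging"}


def timezone_ok() -> bool:
    """Is the local timezone what the test data assumes?"""
    return EXPECTED_TZ in time.tzname


if __name__ == "__main__":
    checks = [("VPN?", vpn_up()), ("env?", env_ok()), ("timezone?", timezone_ok())]
    for label, ok in checks:
        print(f"{label:10} {'OK' if ok else 'CHECK THIS FIRST'}")
```
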
1 Like

I read about the Experimenter Effect (or Observer Effect) in Psychology sometime back.

This refers to the influence that experimenters (read 'testers') who conduct an experiment (read 'tests') have on the performance of participants (read 'testing') and the interpretation of the results. It is a form of bias that affects the validity of experiments as the experimenters (read 'testers'), either deliberately or otherwise, influence the test results. It is one of the reasons why the results do not get replicated by future evaluations (read 'bugathons' or 'new test round'), and good experimenters (read 'testers') look for various ways to negate it.

1 Like