So I did a thing at work today. I was asked to identify why an account couldn't be updated. I had an idea and was so convinced it was the cause that I only tested scenarios that would back up that idea. A simple attempt to actually update the account in question would have shown me that my idea was wrong, but I somehow chose not to do that. Confirmation bias at its finest, I think. I'm feeling terrible now because I've essentially wasted a day chasing the wrong idea.
Has anyone else experienced something similar? Maybe then I won't feel so bad.
Thanks for your honesty and openness, @testerbere.
Absolutely! And it happens all the time.
I've lost count of the number of times I've gone down a recreation rabbit hole where I've kept my focus narrow in an attempt to prove the thing. On reflection, I would have highlighted something sooner in any of these typical instances:

- Usually trying the simplest thing would have recreated it, yet instead I'd go for convoluted ways to make sure I had it exactly 'recreatable'
- It's an environment issue, yet I'd always forget to check that first, so I'd end up wasting time going deep
- I'd go so deep and narrow when I would likely have benefited from going short and wide first
We testing folks can be hard on ourselves. I'd place a heavy bet you're not alone, Ere. I hope you can make peace with it and that the bad feeling will soon pass.
Yup, I've had that too. In my case it happened not as a newbie tester (back then I was keen on exploring what was actually going on), but as a mid-level tester, when the knowledge I'd gained had gone to my head: 'Yeah, I know this system through and through.' After experiencing confirmation bias, I did some introspection and realized what was amiss (I had let presumption take its toll). So after that, I decided to revert to a truly exploratory approach, always remembering: 'It happened before, don't let it happen again!' Cheers @testerbere, we've all done it at some point!
That reminds me of a time I fell into the regression testing complacency trap. I wonder if there's a name for that bias.
The sales team rushed into our office to point out that ad revenue had suddenly taken a dip. It turned out my fellow testing colleague and I had signed off a release without realising that all the AdSense ads had disappeared. Both of us were so used to running regression that we no longer used the regression test checklist. We literally didn't see what wasn't in front of us: the missing AdSense. It was a big learning day for us both!
We resolved the issue quickly, and the post-mortem actually led to a push for automated checks on the essential features and a renewed energy around not taking things for granted or becoming complacent with regression tests. On reflection, it was very likely the start of moving towards a whole-team approach to quality, instead of relying on QA folks to sign off a release.
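For anyone curious what such an essential-feature check can look like, here's a minimal sketch: fetch a page and fail the run if the expected ad markup is missing. The URL and the 'adsbygoogle' marker below are illustrative assumptions, not what we actually wired up:

```python
# Tiny smoke check: does the rendered page still contain the ad markup?
# The URL and the marker string are illustrative assumptions only.
import sys
import urllib.request

PAGE_URL = "https://example.com/"  # hypothetical page to check
AD_MARKER = "adsbygoogle"          # string typically present in AdSense markup


def ads_present(url: str) -> bool:
    """Fetch the page and report whether the ad marker appears in the HTML."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    return AD_MARKER in html


if __name__ == "__main__":
    if ads_present(PAGE_URL):
        print("Ad markup found - looks fine.")
        sys.exit(0)
    print("Ad markup missing - investigate before release!")
    sys.exit(1)
```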
> It's an environment issue, yet I'd always forget to check that first, so I'd end up wasting time going deep
This! I can't count the times I was banging my head against the wall, asking myself why it didn't work, when I only needed to check some environment or config value to realize the cause. Sometimes it was something as simple as my VPN being down, so I couldn't access the internal API!
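A tiny sketch of that 'check the environment first' habit, the kind of pre-flight script I wish I'd run before going deep. The environment variable names and the health endpoint are made-up placeholders, not anything from a real setup:

```python
# Pre-flight sanity check before deep debugging.
# Env var names and the health endpoint are hypothetical placeholders.
import os
import sys
import urllib.error
import urllib.request

REQUIRED_ENV_VARS = ["API_BASE_URL", "API_TOKEN"]  # hypothetical names
HEALTH_ENDPOINT = "/health"                        # hypothetical path


def preflight() -> bool:
    """Return True only if the basic environment looks sane."""
    ok = True

    # 1. Are the config values we depend on actually set?
    for name in REQUIRED_ENV_VARS:
        if not os.environ.get(name):
            print(f"Missing environment variable: {name}")
            ok = False

    # 2. Can we even reach the internal API (VPN up, DNS resolving, etc.)?
    base_url = os.environ.get("API_BASE_URL")
    if base_url:
        try:
            with urllib.request.urlopen(base_url + HEALTH_ENDPOINT, timeout=5) as resp:
                if resp.status != 200:
                    print(f"API health check returned HTTP {resp.status}")
                    ok = False
        except (urllib.error.URLError, OSError) as exc:
            print(f"Cannot reach the API at all: {exc}")
            ok = False

    return ok


if __name__ == "__main__":
    sys.exit(0 if preflight() else 1)
```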
Sometimes a simple Post-it note, stuck to the side of your monitor, helps! On it you could write something like: 'Have you checked the environment and config first?'
I read about the Experimenter Effect (or Observer Effect) in Psychology sometime back.
This refers to the influence that experimenters (read 'testers') who conduct an experiment (read 'tests') have on the performance of participants (read 'testing') and the interpretation of the results. It is a form of bias that affects the validity of experiments, as the experimenters (read 'testers'), either deliberately or otherwise, influence the test results. It is one of the reasons why the results do not get replicated by future evaluations (read 'bugathons' or a 'new test round'), and good experimenters (read 'testers') look for various ways to negate it.