Software testing and bug stories in the news

Don’t we love stumbling on news stories that relate to bugs and software testing?

I stumbled upon one today and felt like there should be a thread to add it to.

Come across a story? Drop a link to it here.

My contribution today is about a lottery ticket winner who wasn’t actually a winner and is now suing for the winning amount. :woman_facepalming:t4:


Oh yes, I do love stories like that, mostly because there are lessons to be learned from them. Here is one where a faulty over-the-air software update was sent to microwave ovens.

When this news came out, our company was looking into remote software updates, and I used this as an example when we were discussing testing effort.


We bought a new kitchen recently, and this is the reason why we didn’t get any “smart”/internet-connected devices!


The Horizon scandal, the Post Office etc. Though the more that comes out, the more I think it was a bad piece of software that then got compounded by a management cover-up. I started to watch the ITV docudrama but had to stop, as it was making me too stressed.


A constant seems to be the data breaches:

Related to the latest bug I noticed in the news, is the use of AI in recruiting:


If you read the court papers for that one, it’s almost comical: the guy doesn’t seem to be suing just for the prize amount; he’s suing three defendant companies each for that amount, for a total of 3x the actual jackpot. I feel like he should have had someone check his math, because while I do think the lottery involved should bear some liability for posting “wrong” numbers on its website for several days, I can’t see how asking for three times the amount, plus interest, plus damages, is going to help his case be taken seriously.


That’s just the American way. You sue for a telephone number and hope that raises your compensation.


I enjoy tales of defects. They serve me as part guilty schadenfreude and part object lesson.

Some of my favorites:

The Y2K “bug” - I was working in IT during this period. I like this one because it’s called a “bug”, but it isn’t a “software bug” in any traditional sense. The code was working as designed; the design and implementation just didn’t anticipate the longevity and pervasiveness of that design. It also serves as a lesson in “small change, big effect”.

Therac-25 - Therac-25 - Wikipedia. Changing from hardware interlocks to software gates was not a thoroughly tested change, resulting in the exact scenario that was guarded against actually occurring, and actually killing people. A change is not always an improvement, and a change can be very expensive to implement properly. QA is a resource that should help identify these risks. It’s also why I won’t work on medical or aviation software. I just can’t.

Knight Capital - Knight Capital Group - Wikipedia. This one hit close to home, because at the time I was working in QA at a hedge fund that did algorithm-based trading. This isn’t a software defect so much as a deployment defect, but it could have been prevented with more shift-right testing. I watched this play out like a train wreck. Not long before, I had actually committed an error of my own that affected our own production and caused a halt in trading of one algorithm for a day (fortunately, safeguards at our company and at brokerages kicked in and kept the damage to a bare minimum). Again, one small change, very big effect: an effect so large that it effectively erased an entire company in a day. I point to this one when members of other disciplines scoff at the utility of me gaming out “hostile user” scenarios or “edge cases”.


Always ask yourself this:

“Would I have found it?”


This one came in the other day: ChatGPT seems to have had a meltdown!
Has anyone experienced weird comments or even language merges recently?


Chat GPT has seen some things…

Well, I am collecting such stories and writing about them occasionally.

The last known stories:

And another one fresh in! And you guessed it: because it is a leap day today …

petrol stations out of action

Going through the news, I found that Invenco was the software company.
Interested in getting an idea of how they test, I checked their site and found an open senior test engineer position: Senior Test Engineer (iNFX) - Invenco by GVR
This was interesting to me:

  • ‘Automate all aspects of testing of the product by completing the development of test software to test new functionality and modifications and enhancing test systems through automation.’
  • ‘Able to demonstrate an example of testing maturity with regards to measurability and repeatability’
  • ‘tested through validation and verification methods’

I was thinking that if you just repeat the same test, even if you automate it, you won’t find problems like the 29th of February.
That assumes you haven’t even thought about what could go wrong to start with, as your role was to validate and verify, not to look for trouble.

On top of the above leap year issue a few others have been collected:


As long as there has been no test designed for that particular use case. :smiley:

The literal verification-and-validation testing model implies you stick strictly to the explicit requirements. If there was no requirement for it, there shouldn’t be any test.


True. Then wouldn’t that be a failure of the model and not of the test? It seems to me that any testing of date/time-driven code would include consideration of cases for common issues like regional formats, leap years, time zones, etc. (by consideration I mean “this is a case; is it valid in this context?”). But if the model is strictly governed by the requirements, then the requirements are defective… and isn’t that a valid defect to file?
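To make those date edge cases concrete, here is a minimal sketch in Python of the kind of case list I mean. The `add_days` helper is hypothetical, just a stand-in for whatever date-handling code is under test; the cases cover the leap-year rules that famously trip systems up (including the century and 400-year exceptions):

```python
from datetime import date, timedelta

def add_days(d: date, n: int) -> date:
    """Hypothetical code under test: advance a date by n days."""
    return d + timedelta(days=n)

# Edge cases a date/time test list should at least consider:
cases = [
    (date(2024, 2, 28), 1, date(2024, 2, 29)),    # leap year: Feb has 29 days
    (date(2023, 2, 28), 1, date(2023, 3, 1)),     # common year: Feb has 28 days
    (date(2024, 2, 29), 365, date(2025, 2, 28)),  # crossing out of a leap year
    (date(1900, 2, 28), 1, date(1900, 3, 1)),     # century rule: 1900 is NOT a leap year
    (date(2000, 2, 28), 1, date(2000, 2, 29)),    # 400-year rule: 2000 IS a leap year
]

for start, days, expected in cases:
    assert add_days(start, days) == expected, (start, days)
```

A naive home-grown implementation (e.g. one that assumes every year divisible by 4 is a leap year) would fail the 1900 case, which is exactly the sort of defect requirement-driven tests never go looking for.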

Ethically I think as a tester I’m responsible for any dumb decisions or bad work that I’m doing.
If someone else enforces a model on me under which I can’t do my best testing work, I don’t have to agree with them or listen to them, if I respect myself.
I’d like to see that from many more testers.

On validation and verification, the general use is like this: Verification and validation - Wikipedia
Verification and validation (also abbreviated as V&V) are independent procedures that are used together for checking that a product, service, or system meets requirements and specifications and that it fulfills its intended purpose.[1] These are critical components of a quality management system such as ISO 9000.
I’m not sure that testing the requirements document(s) is in their scope?!


BTDT :smiley:

At my last job, everything was driven by story cards. That was the documentation of the work, and both developers and product owners would habitually create title-only cards or descriptions of work that were a sentence or two. It took a long time, but I relentlessly pressed the idea that “if it isn’t in the document, it isn’t getting tested.” Over time it got better. Now, I didn’t enforce it to the exactitude that you note, but I did stop seeing cards that read “make this like that over there”.


@cupcake_tester experienced one just the other (leap) day.