Favourite Cause of a False Positive/Negative in an Automated Check?

I saw this on Slack and thought it would be a fun one to transfer here and get some more chatter going :slight_smile:

What is your favourite cause of a false positive or false negative for an automated check?

Some replies on Slack so far:

Well, it’s always fun when you see passing builds and then realize one or more tests were never executed because you’re using RSpec and the files weren’t named something like *_spec.rb. When you do rename the files, the tests actually fail :man_facepalming:

Automatic retries of failed tests.

What about you? What are your favourite causes of false positives or false negatives?

From LinkedIn so far I’ve got:

Missing function calls are always fun.

CSS. Sure. The checkbox is in the DOM but it’s not even in the viewport. It’s somewhere buried off-screen.

My favourite cause of a false negative in an automated test is timing.


Favorite cause for a false-positive or false-negative: feature or UI was refactored - automated check is useless - delete.


I agree with both timing and feature refactoring for false negatives in your Automated UI Checks!

With timing: it’s usually the app and/or the environment slowing down. I try to have the team agree on a maximum wait time for any element or assertion. Anything over that should fail the test and trigger a performance investigation.
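A minimal, framework-agnostic sketch of that idea — a single polling helper with a hard, team-agreed deadline, so slow waits fail loudly instead of being papered over with ever-growing sleeps (all names and values here are illustrative):

```python
import time

MAX_WAIT_SECONDS = 5.0  # hypothetical team-agreed budget

def wait_for(condition, timeout=MAX_WAIT_SECONDS, interval=0.1):
    """Poll `condition` until it returns truthy or the deadline passes.

    Raises TimeoutError so the check fails for investigation rather
    than silently tolerating a slow app or environment.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout}s")

# Usage sketch: `element_visible` stands in for any UI probe or assertion.
state = {"ready_at": time.monotonic() + 0.3}
element_visible = lambda: time.monotonic() >= state["ready_at"]
wait_for(element_visible, timeout=2.0)  # returns once the app catches up
```

Routing every wait through one helper like this also means the budget lives in exactly one place when the team wants to tighten or relax it.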

With feature refactoring: this can be hard if the refactoring is done by another team but your UI automation relies on it - like a user-flow navigation to a particular page you want to test. You can try to avoid this with back-end calls to set up your tests, but sometimes you can’t help it.
I guess the best way to combat this is to over-communicate between teams.

I’m currently working on a blog post about mutation testing, including common mistakes to look out for when testing your automated tests.

I can give 2 real world examples.

I once had a test where the close button on a window was pressed, and the test then checked that the window closed. The issue: the window also closed when the application crashed, and there was no check in place to verify that the application was still running after the window closed.
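A toy reconstruction of that first example (the `App` class is a made-up stand-in for the application under test) — the weak check passes whether the window closed cleanly or the whole app crashed, and only the extra liveness assertion tells them apart:

```python
class App:
    """Hypothetical stand-in for the application under test."""
    def __init__(self, crashes_on_close=False):
        self.running = True
        self.window_open = True
        self._crashes = crashes_on_close

    def close_window(self):
        self.window_open = False
        if self._crashes:
            self.running = False  # the crash also takes the window down

def check_close_button(app):
    app.close_window()
    assert not app.window_open          # the original, too-weak check
    assert app.running, "app crashed!"  # the missing liveness check

check_close_button(App(crashes_on_close=False))  # passes

try:
    check_close_button(App(crashes_on_close=True))
except AssertionError as exc:
    print(exc)  # prints "app crashed!" — the crash no longer looks like a pass
```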

Another: during a test, a link in a list was clicked. The locator used a contains search to find the correct link. The issue: the string appeared within several links in the list, so the wrong link was clicked. The rest of the test was able to proceed after the click, but it was testing the wrong link.
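A toy reconstruction of that second example (the link texts are invented): a contains-style locator happily returns the first partial match in DOM order, so the click “succeeds” on the wrong element, while an exact match fails fast on ambiguity:

```python
# DOM order matters: a contains() search returns the first hit.
links = ["Export report", "Export report (PDF)", "Export report (CSV)"]

def find_by_contains(text):
    """Mimics a contains() locator: first link whose text contains `text`."""
    return next(l for l in links if text in l)

wanted = "Export report (PDF)"
clicked = find_by_contains("Export report")  # the flaky test's locator
assert clicked != wanted  # wrong link, yet the click itself "succeeds"

def find_exact(text):
    """Safer: demand exactly one unambiguous match."""
    matches = [l for l in links if l == text]
    assert len(matches) == 1, f"expected one match, got {matches}"
    return matches[0]

assert find_exact(wanted) == wanted
```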

With Selenium, a lot of false negatives are caused by timeouts.

This is why some inexperienced testers might claim that their tests are “flaky”.

But they actually just don’t know about the Page Load Timeout that needs to be configured, not just the Element Load Timeout.

Some testing environments are painfully slow, and not configuring the Page Load Timeout will cause your test to fail even if you add a hardcoded Sleep step.

The WebDriver basically cuts off the connection if the page loading time is higher than the Page Load Timeout.
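A minimal configuration sketch using the Selenium Python bindings — `set_page_load_timeout` and `WebDriverWait` are real Selenium APIs, but the values and the URL are illustrative, and this needs a live browser/driver to actually run:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()

# Page Load Timeout: WebDriver aborts the navigation with a
# TimeoutException if the page takes longer than this to load.
driver.set_page_load_timeout(60)

try:
    driver.get("https://staging.example.test/login")  # placeholder URL
    # Element-level timeout: an explicit wait for one element,
    # independent of how long the page itself took to load.
    WebDriverWait(driver, 10).until(
        EC.visibility_of_element_located((By.ID, "username"))
    )
finally:
    driver.quit()
```

The point is that the two timeouts are separate settings: a generous element wait does nothing for you if the navigation itself gets cut off first.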

This is one of the reasons why we added a “Page Load Timeout” option in the Settings for each test suite on Endtest.