Struggling with Flaky Selenium Tests

I’m so done with flaky Selenium tests. Every time I fix a script, something else breaks.
I feel like I’m babysitting my automation suite instead of testing the product.

Does anyone else feel like these frameworks are more work than help lately? I am really looking for solutions.


My suggestion would be to try Playwright. In my experience it is simply the strongest option in many ways, one of which is its relative lack of flakiness*.

However, that suggestion may not be very helpful for this particular question, so if you'd like to share more detail about how the flakiness shows up, I'd be happy to help.

*In my experience I have noticed considerably less flakiness compared with Cypress and Selenium.
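A big share of the flakiness gap between these tools comes down to waiting: Playwright auto-waits for elements to be actionable, while a Selenium script that asserts too early (or leans on fixed sleeps) fails intermittently. As a hedged, framework-free sketch of that polling pattern (the `wait_until` helper and its parameters are my own illustration, not Selenium's or Playwright's actual API - in real Selenium you would use `WebDriverWait` with an expected condition):

```python
import time

def wait_until(condition, timeout=10.0, poll=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    Mirrors the idea behind Selenium's WebDriverWait and Playwright's
    auto-waiting: instead of a fixed sleep, keep re-checking until the
    state you need is actually there, or give up at a deadline.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"condition not met within {timeout}s")
        time.sleep(poll)

# Usage sketch: instead of time.sleep(5) followed by an assertion,
# poll for the condition you actually care about. Here a fake condition
# becomes "ready" on its third poll.
counter = {"calls": 0}

def element_is_ready():
    counter["calls"] += 1
    return counter["calls"] >= 3

assert wait_until(element_is_ready, timeout=5.0, poll=0.01) is True
```

Replacing every fixed `sleep` with a condition-based wait like this is usually the single biggest flakiness fix, whichever framework you stay on.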

Hi Keerthi,

I have experience with Selenium and Cypress, and I believe flaky tests will always be there. It's important to check what's causing them to be flaky: is it our code, or a problem with the tool? What I have observed is that it's usually a problem with the way we have written the code or used the tool's capabilities.

Now, I am not sure what your AUT (application under test) or automation structure is. However, with Selenium I would recommend structuring the automation scripts using POM (the Page Object Model design pattern). It really helps with reusability, maintenance, monitoring, and fixing scripts. Because this design pattern really organises the automation suite and helps with debugging and fixing, I believe the flakiness would be greatly reduced, if not 100% gone!

That's the structure - it's from my private test project repo; I have not made it public yet as some changes are required. Basically, in src/main/java add all the pages and elements, and in src/test/java inherit those pages and write the actual tests.

I did it a few years ago: Steps to Automate Mobile Application (Android) (using POM)
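To make the POM idea concrete, here is a minimal sketch - in Python rather than the Java project described above, and with invented page and locator names; a stub driver stands in for a real WebDriver so just the shape of the pattern is visible:

```python
# Minimal Page Object Model sketch. FakeDriver is a stand-in for a real
# Selenium WebDriver; LoginPage shows the pattern: locators and page
# actions live in the page class, and tests only call page methods.

class FakeDriver:
    """Stub with a WebDriver-like surface, backed by a plain dict."""
    def __init__(self):
        self.fields = {}

    def type_into(self, locator, text):
        self.fields[locator] = text

    def click(self, locator):
        self.fields[locator] = "clicked"

class LoginPage:
    # Locators live in one place: when the UI changes, you fix one
    # line here instead of hunting through every test.
    USERNAME = "id=username"
    PASSWORD = "id=password"
    SUBMIT = "css=button[type=submit]"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.type_into(self.USERNAME, user)
        self.driver.type_into(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)
        return self  # in a real suite, return the next page object

# The test now reads as intent, not as a pile of selectors:
driver = FakeDriver()
LoginPage(driver).login("keerthi", "s3cret")
assert driver.fields["id=username"] == "keerthi"
assert driver.fields["css=button[type=submit]"] == "clicked"
```

In the Java layout mentioned above, the `LoginPage` equivalent would sit under src/main/java and the assertions under src/test/java.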

My top priority would still be to fix the failing tests, then I would jump to the flaky ones for sure! :slight_smile:

Hope it helps!

Howdy!

You sound frustrated. One way to look at flaky tests is that they are not broken. They are very good tests, in that they are telling you that something is going on that you’re not aware of. They are a starting point for investigation. It isn’t something to fix, it’s something to investigate.

What you’re being told is that there’s some error in your model of the product or the automatic check code or the environment (or something else).

It's a good opportunity to look at that code and ask things like what it's supposed to be doing and why it exists in the first place, given the cost it generates. It might be time to see whether your code is checking at too high a level; to review your changes and see which check code is likely to be affected; to see how these checks align with your test strategy - or whether they exist just because they exist.

Then consider whether it's achieving what it's supposed to achieve for your strategy. Does it check the right things, in the right place, in the right way? And then consider whether it needs to be changed, updated, or removed, and whether other places might suffer similar issues.

I think a lot of automation tools are expected to just work forever in a changing product, environment, market, and user base - in everything about a product, project, and context that shifts underneath us. But they don't. They need to serve our purposes, even when our purposes or situation change. That's why GUI tests are so brittle. So take it as an exciting opportunity to investigate a problem, and to have your tools properly support the test effort and earn the cost they generate, rather than just being there because why not.