False Positives in Visual Regression Testing

The biggest pain of visual regression testing is false positives. Whenever your content changes, or even when an image gets rendered differently, you get a reported change. How do you solve this in your projects?

I have an idea of substituting content with Lorem Ipsum text: https://diffy.website/blog/less-false-positives-content-fixtures. What do you think about that?

Have you looked into alternative solutions, like https://testautomationu.applitools.com/automated-visual-testing-a-fast-path-to-test-automation-success/ ?

It seems to me that the code you would need to make your scripts more robust, thereby preventing some false positives (= the test found a bug where there isn't one), would also make your tests less valuable by creating more false negatives (= the test didn't find a bug when there is one). An example: if the system has a bug where the text isn't displayed, the test might not see it, because it replaces the text/content during the test (with Lorem Ipsum), and that replacement text is displayed.

Applitools has some good stuff around this, but it's too expensive for smaller companies to consider. For us, we do exactly as described and substitute in text so that when the capture is taken we know it's going to be static. Done carefully, you can make sure it has all the same styles, etc., so all of that is still checked.
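For anyone wondering what "done carefully" might look like, here's a minimal sketch of the substitution idea in plain JavaScript (the helper names are mine, not from any library): swap every word for a lorem-ipsum word of the same length, so line wrapping and layout stay close to the original while the text itself becomes deterministic for the screenshot.

```javascript
// Hypothetical helper you'd run on the page before taking the capture.
// Each word is replaced by a lorem-ipsum word of the SAME length, keeping
// whitespace and punctuation in place, so the rendered layout barely moves.
const LOREM = 'lorem ipsum dolor sit amet consectetur adipiscing elit sed do eiusmod tempor incididunt ut labore et dolore magna aliqua'.split(' ');

function loremWord(length) {
  // Prefer a lorem word with the exact length; otherwise build one by
  // concatenating words and trimming to size.
  const exact = LOREM.find((w) => w.length === length);
  if (exact) return exact;
  let out = '';
  let i = 0;
  while (out.length < length) out += LOREM[i++ % LOREM.length];
  return out.slice(0, length);
}

function toLorem(text) {
  // Replace every alphanumeric run; leave spaces and punctuation untouched.
  return text.replace(/[A-Za-z0-9]+/g, (word) => loremWord(word.length));
}

console.log(toLorem('Order #123 shipped today!')); // → lorem #sit eiusmod lorem!
```

In a real suite you'd walk the DOM's text nodes and apply something like `toLorem` to each before the screenshot tool fires; because the replacement is deterministic and length-preserving, the same input page always produces the same pixels, while font, color, and spacing regressions still show up in the diff.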

There is the risk, as Sarah described, that the original text wasn't displayed and the inserted text is, but in our case that's far less likely than something going wrong with the styling of the page, so we'd rather capture our substituted text.