Visual Testing: Which Tool Are You Using and Why?

Following on from our recent Masterclass, I realised just how many visual testing tools there are out there!

I was wondering, which tool are you using for your visual testing and why did you choose that tool? Is it a decision you had input in or was the decision made for you?

Hi @heather_reid, I missed the Masterclass but I'm looking forward to watching it online once it's available on the Dojo.

I work extensively on mobile apps. My lead and I went to conferences and were exposed to a lot of visual testing tools and frameworks for web.

Which tool are you using for your visual testing ?

We then started developing our own framework for Android and iOS on top of Shot for Android and SnapshotTesting for iOS.

Why did you choose that tool?

We chose them because they were the closest we could get to what we wanted. There are three parts to it, though.

  1. The tool handled the boilerplate snapshotting.
  2. We added our own mocking layer to make sure the snapshots were always rendered with the same data.
  3. We added a layer in the app to support launching a specific screen in a specific state.

We tried to keep #2 and #3 as close as possible between Android and iOS to make them easier to maintain.
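As a rough, language-agnostic sketch of how those three layers fit together (Python here for brevity, not the actual Kotlin/Swift code; every name below is hypothetical):

```python
import hashlib

# Layer 2: the mocking layer -- fixed data so every run renders identically.
MOCK_USER = {"name": "Ada", "balance": "£12.50"}

# Layer 3: fire up a specific screen in a specific state.
def render_profile_screen(user):
    # Stand-in for a real renderer: a deterministic representation
    # of what the screen would show for this data.
    return f"ProfileScreen[name={user['name']}, balance={user['balance']}]"

# Layer 1: the snapshotting boilerplate the tool normally provides --
# compare the current render against a recorded reference.
def snapshot_matches(rendered, reference_digest):
    return hashlib.sha256(rendered.encode()).hexdigest() == reference_digest

# Recorded on the first ("record") run, checked on every later run.
reference = hashlib.sha256(
    render_profile_screen(MOCK_USER).encode()
).hexdigest()

print(snapshot_matches(render_profile_screen(MOCK_USER), reference))  # True
```

The point of the mocked data is that any snapshot mismatch can only come from a UI change, never from the data changing underneath the test.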

Is it a decision you had input in or was the decision made for you?

Yes. Given we are a small team, all decisions are collaborative. Since it also required modifying some app logic, the developers were part of the conversation too (which was great!).


Excellent! I love seeing such collaborative efforts on these decisions :grin:


A similar conversation to this popped up on Slack recently with @wylie looking for some options to explore.

Alan Giles suggested Percy - a tool he’s currently exploring.

@topperfalkon and @gemma.hill1987 also discussed Wraith.

Percy is cool as it works seamlessly with BrowserStack (they now own Percy), but for me Applitools is leaps and bounds ahead of the rest. If you haven't checked it out yet, I'd certainly try the free trial! It also lets you very quickly test across multiple devices and browsers. Absolutely amazing!


For me, the open source project BackstopJS is the place to start with visual tests. It only supports Chrome and doesn't scale super well (though there are ways around that, and since it's open source, in theory anybody can improve it). All the paid tools I've tried (and I've tried many) fail to live up to the hype. That doesn't mean there aren't reasons to buy them, just that they haven't been right for me. As a technical person, I appreciate that I can use the Chrome DevTools Protocol to mock client-side API calls.
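To give a feel for getting started, here's a minimal `backstop.json` sketch (all values illustrative; the `onBeforeScript` is the hook where you'd wire in Puppeteer/DevTools Protocol request mocking):

```json
{
  "id": "homepage",
  "viewports": [{ "label": "desktop", "width": 1280, "height": 800 }],
  "scenarios": [
    {
      "label": "Homepage hero",
      "url": "https://example.com",
      "onBeforeScript": "puppet/onBefore.js",
      "misMatchThreshold": 0.1
    }
  ],
  "paths": {
    "bitmaps_reference": "backstop_data/bitmaps_reference",
    "bitmaps_test": "backstop_data/bitmaps_test"
  },
  "engine": "puppeteer",
  "report": ["browser"]
}
```

`backstop reference` records the baselines and `backstop test` compares against them; `misMatchThreshold` is the percentage of differing pixels you're willing to tolerate.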

Also worth mentioning is Chromatic, which leverages Storybook, so it can run visual tests on components without deploying a full site. That means it can run earlier in CI pipelines than other tools (locally, on every commit, in pull requests, etc.). And if your devs already use Storybook, you get visual tests “for free” (after paying for it… but it's priced pretty reasonably compared to other tools).


A discussion about this popped up on Slack today with people offering:


Has anyone tried these?

My choice is Applitools Eyes.

There were several reasons:

  • free online courses
  • an AI which only reports differences noticeable by human beings
  • no cost for a small number of tests

For a personal side project I wanted to get more insight into how visual testing works with test-driven development. Spoiler: it is possible.


Be careful with this. The AI only mentions differences it thinks are important. It can (and does) miss differences that are noticeable by humans. It is up to you whether the tradeoff between false positives and false negatives is worth it.


The default match level for noticing differences in the UI is “Strict”. In this mode the tool uses AI: only differences that people would notice are reported.

There are three other match levels. “Exact” can be used for a pixel-perfect comparison. Depending on the situation, the tester must choose the appropriate match level.

If the AI notices a difference, the test result is flagged as orange. This basically means that the tester must decide whether the test has passed or failed.
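To illustrate the conceptual difference between the match levels (a toy sketch in Python; this is not how Applitools' proprietary comparison actually works):

```python
# Two tiny greyscale "screenshots" as pixel grids (values 0-255).
baseline = [[200, 200], [200, 200]]
checkpoint = [[200, 201], [200, 200]]  # one pixel off by 1

def exact_match(a, b):
    # "Exact": any single-pixel difference fails the comparison.
    return a == b

def fuzzy_match(a, b, tolerance=2):
    # Crude stand-in for a perceptual comparison: ignore tiny
    # per-pixel differences a human would never notice.
    return all(
        abs(pa - pb) <= tolerance
        for row_a, row_b in zip(a, b)
        for pa, pb in zip(row_a, row_b)
    )

print(exact_match(baseline, checkpoint))  # False
print(fuzzy_match(baseline, checkpoint))  # True
```

This is why a pixel-perfect mode tends to produce false positives (anti-aliasing, font rendering), while a perceptual mode trades those away at the risk of missing small but real differences, as mentioned above.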
