Hey guys,
I wanted to contribute something to the testing community while learning about AI, so I've been working on a tool in my spare time that aims to improve and extend the capabilities of visual testing. I recently got an MVP working and wanted to get some feedback from you and validate some assumptions.
The tool I've been working on is Oculow. It provides visual validation for UI tests. It has some simple features for image comparison (pixel-based comparison), nothing new in that area. The part I'm mostly looking to expand, and get feedback on, is the algorithm I'm training to detect patterns that may indicate visual errors. Here is a capture of a detection so you get a better idea of what I mean:
It's still at a very early stage, but I want to get feedback from those who use it. If you think this might be useful in areas other than visual testing, or if any other thoughts come up, I'd also like to hear from you.
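For anyone curious what I mean by pixel-based comparison, here's a rough sketch of the idea. This is just an illustration with made-up function names, not Oculow's actual code or API; it assumes screenshots are already loaded as NumPy arrays of equal size:

```python
import numpy as np

def pixel_diff_ratio(baseline: np.ndarray, candidate: np.ndarray,
                     tolerance: int = 0) -> float:
    """Fraction of pixels whose channel values differ by more than `tolerance`."""
    if baseline.shape != candidate.shape:
        raise ValueError("screenshots must have identical dimensions")
    # Cast to a signed type so the subtraction can't wrap around on uint8.
    delta = np.abs(baseline.astype(np.int16) - candidate.astype(np.int16))
    # A pixel counts as changed if any of its channels differs beyond tolerance.
    changed = (delta > tolerance).any(axis=-1)
    return float(changed.mean())

# Hypothetical 4x4 RGB "screenshots": identical except one red pixel.
base = np.zeros((4, 4, 3), dtype=np.uint8)
cand = base.copy()
cand[0, 0] = [255, 0, 0]
print(pixel_diff_ratio(base, cand))  # 0.0625 -> 1 of 16 pixels changed
```

A comparison like this is easy to implement but brittle (anti-aliasing and rendering differences trigger false positives), which is exactly why I'm experimenting with a trained model on top of it.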
For those who might be interested in trying out the tool and seeing whether it provides value to your UI tests, feel free to contact me or request access through the site.
https://www.oculow.com/early-access.html
Thanks!