To anyone who has experience with automated browser testing:
What’s missing from the current tools?
What could be done better, e.g. action recorders?
I built a no-code browser automation tool for myself and a few of my customers and I’m trying to validate if there’s an opportunity to take it further.
It’s at firelab.io if anyone is interested in taking a look; it’s really just an MVP right now.
Comparison with previous runs is built into the reporting section of the tool.
So you can compare previous runs, from run 1 to run 2 up through run 26, against a baseline to see what pass/fail percentage you have, etc.
Comparing test-run results with enough granularity to help you find the date (and possibly time) when an anomaly started also gets my vote. This all depends on whether you are positioning it as a QA tool or as an “automation” tool, though; the requirements and the actual scope get much easier in the latter use case.
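To make the idea concrete, here is a minimal sketch of that kind of anomaly detection: given pass/fail counts per run, find the first run whose failure rate deviates from a baseline by more than some threshold. All names (`RunResult`, `first_anomalous_run`, the threshold value) are hypothetical, not anything from firelab.io.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RunResult:
    run_id: int
    passed: int
    failed: int

    @property
    def fail_rate(self) -> float:
        total = self.passed + self.failed
        return self.failed / total if total else 0.0

def first_anomalous_run(runs: list[RunResult],
                        baseline_rate: float,
                        threshold: float = 0.10) -> Optional[RunResult]:
    """Return the first run whose failure rate exceeds the baseline
    by more than `threshold`, or None if every run looks normal."""
    for run in runs:
        if run.fail_rate - baseline_rate > threshold:
            return run
    return None

runs = [
    RunResult(1, 98, 2),    # 2% failures
    RunResult(2, 97, 3),    # 3% failures
    RunResult(3, 80, 20),   # 20% failures: anomaly starts here
    RunResult(4, 78, 22),
]
anomaly = first_anomalous_run(runs, baseline_rate=0.02)
```

If each run also carried a timestamp, the same scan would give you the date and time the anomaly started, which is the granularity being asked for above.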