In my context, some areas we are struggling with a little at the moment are:

- Missing certain use cases on device/browser combinations.
- Brittle end-to-end tests due to content changes on our website.
- Some test suites taking longer than we'd like.
- Alerts firing too frequently due to slow performance.
In my research, I found some potential ways that AI-assisted tools could help us with these things:

- I didn't find a specific tool for this, but AI-assisted tools in general could help us identify these types of scenarios based on usage in production.
- Self-healing tests could reduce the brittleness of the tests we have by monitoring our site for changes to the UI and adapting our tests accordingly.
- I read about a tool called Launchable which analyses your test runs, predicts which tests are most likely to fail, and runs those tests first. It can also provide insight into which tests are flaky, as well as which ones never fail.
- I came across a tool called BigPanda, which aims to provide software teams with intelligent alerts. They acknowledge that we now have access to a large amount of data, and not all of it is relevant to every situation, so they gather the data relevant to incident management to speed up analysis and resolution of the problem. Further down the line, it could analyse the frequency of the performance alerts we generate and recommend a better way of surfacing them to us.
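To make the self-healing idea above more concrete, here is a minimal sketch of the core mechanism: instead of one hard-coded selector, an element is looked up through an ordered list of locators, and the first one that still matches is used. Real self-healing tools do this by learning from the live DOM; everything here (the fake page and the locator strings) is invented purely for illustration.

```python
# Sketch of a "self-healing" element lookup: try a primary locator, then
# fall back to more stable alternatives if the UI has changed.

def find_element(page, locators):
    """Return (locator, element) for the first locator present on the page."""
    for locator in locators:
        if locator in page:
            return locator, page[locator]
    raise LookupError(f"No locator matched: {locators}")

# Fake "page" after a UI change: the old id is gone, but a stable
# data attribute survives, so the lookup heals itself.
page = {"[data-test=checkout]": "<button>Checkout</button>"}

locator, element = find_element(
    page,
    ["#checkout-btn", "[data-test=checkout]"],  # primary id first, fallback second
)
print(locator)  # the fallback locator that matched
```

In practice the tool would also record which fallback succeeded, so the test's primary locator can be updated rather than silently relying on the fallback forever.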
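The Launchable-style ordering described above can also be sketched in a few lines: rank tests by their historical failure rate so the likeliest failures run first and feedback arrives sooner. The test names and history numbers below are made up for illustration, not taken from any real suite.

```python
# Rough sketch of predictive test ordering: run the tests with the highest
# historical failure rate first.

def prioritise(history):
    """Order test names by failure rate (failures / runs), highest first."""
    return sorted(
        history,
        key=lambda t: history[t]["failures"] / history[t]["runs"],
        reverse=True,
    )

history = {
    "test_login": {"runs": 100, "failures": 1},
    "test_checkout": {"runs": 100, "failures": 12},
    "test_search": {"runs": 100, "failures": 5},
}

print(prioritise(history))  # most failure-prone test first
```

A real tool would weight this with recency, code-change proximity, and flakiness detection, but the ordering principle is the same.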