🤖 Day 6: Explore and share insights on AI testing tools

One place where I can see AI (or even non-AI automation, really) helping me is smarter test selection for automation runs. In my current project, we run the automation suite for each PR. A full regression run takes about 30-35 minutes, which is not too bad, but it is wasteful overall - many of these tests bring no new information once you consider the context of the change being made. So a tool that could look at a PR and decide which tests should be run (and whether any should run at all!) would help.
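To make the idea concrete, here is a minimal sketch of change-based test selection - my own naive heuristic for illustration, not what any of the tools below actually do. It maps changed source files to test files by name and falls back to the full suite whenever it is unsure:

```python
from pathlib import Path

# My own toy heuristic, for illustration only - not any vendor's algorithm.
# Map each changed source file to test files that share its name, and fall
# back to running everything when a change can't be mapped safely.


def select_tests(changed_files, all_tests):
    """Return the subset of all_tests that plausibly covers changed_files."""
    selected = set()
    for changed in changed_files:
        path = Path(changed)
        if path.suffix != ".py":
            # Non-code change (config, docs, ...): be safe and run everything.
            return list(all_tests)
        matches = [t for t in all_tests if Path(t).stem == f"test_{path.stem}"]
        if not matches:
            # No obvious test for this file: again, run everything.
            return list(all_tests)
        selected.update(matches)
    return sorted(selected)


print(select_tests(["src/cart.py"], ["tests/test_cart.py", "tests/test_user.py"]))
# -> ['tests/test_cart.py']
```

A real tool would of course use coverage data or a trained model instead of file names, but the interface is the same: changed files in, a subset of tests out.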

I didn’t run an exhaustive Internet search - just a single Google query. What I found:

  • Launchable has “Predictive Test Selection” - see their marketing page and documentation.

    You need to register your project with Launchable first. In your CI pipeline you then use their CLI tool to request a list of tests to run: you submit the list of all tests, the id of the build, and the test runner you are using. In response you get back a file with the selected tests, which you can pass to your test runner.

    (I checked the documentation for pytest, because that’s what I use. The command they give in the documentation is likely to fail if your tests are parametrized, especially if the parameters contain spaces. A common mistake.)

    They have integrations for many test runners across the most popular languages - Java, Cypress, Jest, dotnet, pytest, RSpec, even something for Perl.

    To use Predictive Test Selection you need at least the “Moon” plan, the second of four. They do not disclose its price on their website. The “Earth” plan, the first one, is $250 per month per “test suite” (probably roughly equivalent to a project, though I assume a single project might have multiple “test suites” in some cases), so “Moon” is presumably more expensive than that. They offer a 4-week trial, which I did not take up.

  • Appsurify offers “TestBrain” and positions itself as an “AI Risk Based Testing Tool”. So while smarter test selection is one module for Launchable, it is the core of Appsurify’s offering.

    The documentation is shorter and lacks many details. The documentation I can access from the main page links out to docs on “Gitbook”, which requires me to create an account and then tells me I’m not authorized. Anyway, they claim that I will need to add a script that pushes results to them, and that the model will be fully trained once they have results from 50 runs (which should take 2-3 weeks - that sounds roughly right for my current team, but I’ve been on teams where 50 runs would be reached in a day or two). At that point I should change my integration to run the selected tests instead of all of them.

    They claim to support multiple code repositories, CI/CD systems, and test frameworks. The list of test frameworks roughly matches Launchable’s; I think Appsurify has more items.

    They have two pricing plans, but no price is disclosed for either. The “Professional” plan (presumably the more expensive one) can run on-premise, which I appreciate.

    Unfortunately, a link to documentation that is not publicly accessible and a “Blog” link that leads to an error page do not give me confidence that this company still exists. If it does, I guess one large client could basically support the entire business. But if I were a small client, I would worry whether they will still be around in a few years and whether I should commit to them.

  • That’s it. Google did not give me more tools.

    I found a blog post from Facebook discussing “predictive test selection”. Obviously, their tool is not publicly available. I only skimmed the post, but it seems to give a high-level overview of the system without going into details.

    I also found a reference to “Evo”, supposedly a smart test selection tool developed internally at Microsoft. I was not able to find any further mentions of it, so even if it exists, it does not seem to be available to anyone outside Microsoft.
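A side note on the pytest parametrization issue I mentioned under Launchable: parametrized test IDs can contain spaces, so any tool that builds a pytest command line by space-joining selected test IDs will split those IDs apart. A quick sketch of the problem and a fix (my own, using shlex.quote - the test IDs are made up):

```python
import shlex

# Parametrized pytest tests produce IDs like these; note the spaces inside the brackets.
selected = [
    "tests/test_cart.py::test_add[one item]",
    "tests/test_cart.py::test_add[two items]",
]

# Naive joining splits each ID at the space - pytest would see 4 broken arguments.
naive = "pytest " + " ".join(selected)

# Quoting each ID keeps it as a single shell argument.
safe = "pytest " + " ".join(shlex.quote(t) for t in selected)
print(safe)
# pytest 'tests/test_cart.py::test_add[one item]' 'tests/test_cart.py::test_add[two items]'
```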
