Cross-service E2E testing - Really appreciate your honest feedback!

Hi Ministry of Testing community,

I’m a test automation engineer who’s been working with microservices for the past few years.

One thing has consistently frustrated me across different teams: E2E tests that span multiple microservices always feel like they’re one API change or new feature away from falling apart. The cost of maintaining them keeps creeping up until someone quietly decides they’re not worth keeping current.

So I decided to build something new for test automation engineers, to make E2E testing across multiple microservices easier to maintain: CratonAI, a tool that automatically generates a runnable E2E test framework based on how your services and business flows are defined.

Before I go any further, I have three questions I’d genuinely love your perspective on:

  1. What’s the part of E2E test maintenance that eats up the most time — and is it something your team has ever tried to fix?

  2. If a tool generated your test framework automatically, what would need to be true for you to actually trust it enough to run it in CI/CD?

  3. Has your team ever scrapped an entire E2E test suite and started over? What led to that decision?

Critical feedback is just as welcome as positive — probably more useful at this stage.

Thanks for reading.
Tony

  1. Data is generally the most complex and important thing in E2E testing, I find. Whatever solution you choose for handling data will likely have the biggest impact on test flakiness and/or performance.

  2. I find this a difficult one to answer. I’m unsure how an AI building a test framework fixes the problem you presented. You say that the reason you are building CratonAI is because of maintenance, but the solution to this problem is an entirely new test framework? Wouldn’t the problem of changes requiring maintenance still exist?

  3. Yes. Our old framework was overengineered and due to a reduction in QA resource the decision was made to streamline. Best decision I’ve made honestly.

Hi @canofcam,

Thanks so much for your feedback!

On point 1: data management is something we’ve thought a lot about. In CratonAI, the test data logic is defined by the team themselves as part of the input, so the data setup is tied directly to the business flow definition rather than hardcoded into the framework. Right now, the ideal process is for the user to describe the data structure and for the tool to generate fake data from it.
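To make that concrete, here’s a rough sketch of the kind of thing I mean. Everything in it (the schema shape, the field names, the helper) is hypothetical rather than CratonAI’s actual input format; it just illustrates declaring a data structure and having fake data generated from it:

```python
# Hypothetical sketch only -- not CratonAI's real input format.
# The team declares the data a business flow needs; fake values
# are generated to match, instead of being hardcoded in tests.
from faker import Faker

fake = Faker()

# Declared by the team as part of the flow definition: each field
# the "place order" flow needs, and what kind of value it is.
order_flow_schema = {
    "customer_email": "email",
    "customer_name": "name",
    "order_id": "uuid4",
}

def generate_test_data(schema: dict) -> dict:
    """Fill each declared field with a matching fake value."""
    return {field: getattr(fake, kind)() for field, kind in schema.items()}

print(generate_test_data(order_flow_schema))
# e.g. {'customer_email': '...', 'customer_name': '...', 'order_id': '...'}
```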

On point 2: you’ve put your finger on exactly the right tension, and I want to be honest about it. You’re right that changes will always require some maintenance. What I’m trying to shift is where that maintenance happens: instead of engineers manually updating test code when a service changes, the idea is that updating the service definition in CratonAI regenerates the framework automatically. The maintenance burden moves from “fix broken code” to “update a configuration.” That’s my current thinking on how to resolve it.
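Here’s a rough sketch of what I mean by that shift, again with hypothetical names (this isn’t CratonAI’s real API, just the shape of the idea). The service definition below is the only thing the team edits by hand; the test functions are regenerated from it rather than patched individually:

```python
# Hypothetical sketch, not CratonAI's actual API: the team maintains
# a declarative service definition; test code is regenerated from it.
import requests

# Edited when the service changes (new endpoint, renamed path, etc.).
service_definition = {
    "base_url": "http://localhost:8080",
    "endpoints": {
        "create_order": {"method": "POST", "path": "/orders", "expect": 201},
        "list_orders": {"method": "GET", "path": "/orders", "expect": 200},
    },
}

def make_endpoint_test(base_url: str, spec: dict):
    """Generate one test function from one endpoint definition."""
    def test():
        resp = requests.request(spec["method"], base_url + spec["path"])
        assert resp.status_code == spec["expect"], (
            f'{spec["method"]} {spec["path"]} returned '
            f'{resp.status_code}, expected {spec["expect"]}'
        )
    return test

# Regenerated, never hand-edited: one runnable test per endpoint.
generated_tests = {
    name: make_endpoint_test(service_definition["base_url"], spec)
    for name, spec in service_definition["endpoints"].items()
}
```

When an endpoint changes, you update the definition and regenerate, rather than hunting down every test that touched it.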

On point 3: “overengineered” is something we’ve heard a lot. Just curious: what made you decide to streamline rather than refactor?