Microservice Automation: How do you handle test execution across different versions of microservices?

How to Version Test Artifacts (Test Cases and Framework Libraries) for System Integration Test (SIT) Automation

When automating tests for a single microservice, we can version the test artifacts based on the deployed service version. In case of a rollback, older test artifact versions can be used to validate the previous service state. While this approach works well for isolated microservices, it presents challenges in SIT automation.

SIT automation reflects real-world customer scenarios, where multiple microservices operate concurrently, each potentially running a different version. This creates a versioning challenge—test cases available in one version of the test artifact may be missing or incompatible with another.

How can we effectively manage test artifact versioning in such dynamic, multi-service environments?


Without detailed context, only general suggestions can be proposed.
To ensure effective test execution across different microservice versions, avoid redundant test artifact versioning: prioritize backward-compatible changes, use feature flags, and invest in other system testability improvements. Tests should not be tightly coupled to business logic changes; that way, if a service is rolled back, you can revert the corresponding test changes just as easily.
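As a sketch of the feature-flag idea above: a test declares the flags it needs, and the runner skips it when the target environment does not expose them. The `FEATURE_FLAGS` environment variable and the helper names below are assumptions for illustration, not any specific tool's API.

```python
import os

def enabled_flags(env=None):
    """Read enabled feature flags from the environment.

    Assumed format (for this sketch only): FEATURE_FLAGS="new-checkout,fast-search".
    """
    env = os.environ if env is None else env
    return set(filter(None, env.get("FEATURE_FLAGS", "").split(",")))

def should_run(required_flags, env=None):
    """A test runs only when every flag it depends on is enabled."""
    return set(required_flags) <= enabled_flags(env)
```

With a runner like pytest this check would typically live in a `skipif` marker; the point is that the gate is environment state, not a hard-coded test-artifact version.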

The microservices approach is designed for small, incremental deployments, which minimizes churn in regression testing. Controlling the state of the test environment is crucial: if multiple services undergo frequent changes, consider dedicated environments per developer or team. Alternatively, if time to market allows, implement a testing queue per environment for end-to-end scenarios.

Additionally, integrate service version detection in the CI/CD pipeline to ensure compatibility.
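One way to sketch that version check, assuming each service exposes its deployed version (e.g. via a version endpoint the pipeline can query) and the suite keeps a minimum-supported-version map alongside the test code. Service names and version numbers here are illustrative:

```python
def parse_version(v):
    """Compare dotted version strings numerically, e.g. "1.10.0" > "1.9.3"."""
    return tuple(int(p) for p in v.split("."))

# Assumption for this sketch: the suite maintains the minimum service
# versions it was written against.
MIN_SUPPORTED = {
    "orders": "2.1.0",
    "payments": "1.4.0",
}

def compatible(deployed):
    """Return the services whose deployed version is older than the suite supports."""
    return [
        svc for svc, v in deployed.items()
        if parse_version(v) < parse_version(MIN_SUPPORTED.get(svc, "0.0.0"))
    ]
```

The pipeline can fail fast (or fall back to an older test-artifact tag) when this check reports an incompatible service.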

Microservices Patterns by Chris Richardson has chapters covering this
Also Martin Fowler has a bunch of articles related to microservices testing

Thanks @shamrai for the response. Will try the suggested approaches.
The main issue we are trying to solve is how to version the test code for SIT cases that involve all the microservices.
We require a smarter test execution strategy that can:

  1. Dynamically associate tests with the corresponding microservice release.
  2. Execute only relevant tests based on the current version of each microservice in production.
  3. Avoid running tests for features that are not yet deployed or have been rolled back.
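A minimal sketch of the three requirements above: each SIT case declares the minimum service versions it depends on, and the runner selects only the cases satisfied by what is actually deployed, so tests for undeployed or rolled-back features are skipped automatically. All test names, service names, and versions below are illustrative.

```python
def parse_version(v):
    return tuple(int(p) for p in v.split("."))

# Assumption: the suite maintains a map from each SIT case to the minimum
# service versions that shipped the feature it exercises.
TEST_REQUIREMENTS = {
    "test_refund_flow": {"payments": "1.4.0"},
    "test_bulk_orders": {"orders": "2.1.0", "payments": "1.2.0"},
}

def select_tests(deployed):
    """Pick only tests whose required service versions are actually deployed."""
    selected = []
    for test, reqs in TEST_REQUIREMENTS.items():
        if all(
            parse_version(deployed.get(svc, "0.0.0")) >= parse_version(min_v)
            for svc, min_v in reqs.items()
        ):
            selected.append(test)
    return selected
```

If a service is rolled back, its reported version drops below the requirement and the dependent tests simply fall out of scope, with no test-artifact re-versioning needed.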

You can use a dynamic approach and run a selective test scope. Most test runners support tags (or a similar mechanism). Once the test scope is divided by service, feature, or any other property, you can choose the scope dynamically at run time.
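For illustration, tag-based selection roughly boils down to an intersection check; runner-specific syntax such as pytest's `-m` expressions or JUnit 5's `@Tag` builds on the same idea. The test names and tags here are made up:

```python
# Each test carries a set of tags; a run includes a test when its tags
# intersect the requested scope.
TESTS = {
    "test_create_order": {"orders", "smoke"},
    "test_refund": {"payments"},
    "test_checkout_e2e": {"orders", "payments", "e2e"},
}

def scope(requested):
    """Return the tests whose tags overlap the requested scope, sorted by name."""
    return sorted(t for t, tags in TESTS.items() if tags & set(requested))
```

The scope argument can itself be computed per run, e.g. from the list of services that changed in the current deployment.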

In that case you could even tag tests with a service version, like “service-A-v.1.2.3”, but that does not scale well. Since the code lives in git, it is easier to keep the master branch running tests against the latest released versions, and use a dedicated branch for any custom tests against a specific service version.