In the context of a microservices architecture, how do you organize your tests and test environments (staging) to effectively simulate external system dependencies and get good coverage? Also, how do you keep your testing setup consistent and reliable as your software moves from development to production, across the different stages of the lifecycle, especially when parts of the system (interfaces or schemas) are changing quickly?
I can suggest the following:
Prioritize creating a suite of contract tests. These ensure that changes to service interfaces or schemas don't break existing consumers. Tools like Pact help here: they validate the interactions between service consumers and providers, ensuring compatibility across different stages. You can also write relatively simple scripts of your own to check contracts or schemas.
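As an illustration, here's a minimal sketch of a consumer-side contract test, assuming Pact's JVM JUnit 5 DSL; the service names, endpoint, and fields are hypothetical:

```java
import au.com.dius.pact.consumer.MockServer;
import au.com.dius.pact.consumer.dsl.PactDslJsonBody;
import au.com.dius.pact.consumer.dsl.PactDslWithProvider;
import au.com.dius.pact.consumer.junit5.PactConsumerTestExt;
import au.com.dius.pact.consumer.junit5.PactTestFor;
import au.com.dius.pact.core.model.RequestResponsePact;
import au.com.dius.pact.core.model.annotations.Pact;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import static org.junit.jupiter.api.Assertions.assertEquals;

// Consumer-side contract test: "order-service" declares what it expects
// from "user-service". Pact records this as a contract file the provider's
// CI can later verify against the real implementation.
@ExtendWith(PactConsumerTestExt.class)
@PactTestFor(providerName = "user-service")
class UserServiceContractTest {

    @Pact(consumer = "order-service")
    RequestResponsePact userById(PactDslWithProvider builder) {
        return builder
            .given("user 42 exists")                  // provider state
            .uponReceiving("a request for user 42")
                .path("/users/42")
                .method("GET")
            .willRespondWith()
                .status(200)
                .body(new PactDslJsonBody()
                    .integerType("id", 42)
                    .stringType("name", "Alice"))     // match types, not exact values
            .toPact();
    }

    @Test
    void fetchesUser(MockServer mockServer) throws Exception {
        // The mock server serves exactly what the contract above describes.
        HttpResponse<String> response = HttpClient.newHttpClient().send(
            HttpRequest.newBuilder(URI.create(mockServer.getUrl() + "/users/42")).build(),
            HttpResponse.BodyHandlers.ofString());
        assertEquals(200, response.statusCode());
    }
}
```

The generated pact file can then be verified on the provider side (for example, shared through a Pact Broker), which is what catches an interface or schema change before it reaches a shared environment.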
For simulating external dependencies, you can use tools like WireMock, which lets you mimic external APIs and services, so you can test your services in isolation while still exercising them much as you would in a real-world environment.
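For example, a minimal WireMock sketch that stands in for an external API during a test; the endpoint and payload are made up:

```java
import com.github.tomakehurst.wiremock.WireMockServer;

import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;
import static com.github.tomakehurst.wiremock.core.WireMockConfiguration.options;

public class PaymentApiStub {
    public static void main(String[] args) {
        // Start a local server that impersonates the external payment API.
        WireMockServer server = new WireMockServer(options().dynamicPort());
        server.start();

        // Stub a successful response; point the service under test at
        // server.baseUrl() instead of the real dependency.
        server.stubFor(get(urlEqualTo("/payments/123"))
            .willReturn(aResponse()
                .withStatus(200)
                .withHeader("Content-Type", "application/json")
                .withBody("{\"id\":\"123\",\"status\":\"SETTLED\"}")));

        System.out.println("Stub running at " + server.baseUrl());
    }
}
```

A nice side effect is that you can also stub failures (e.g., a 503 status or a fixed delay) to test error handling that is hard to trigger against the real dependency.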
Use Docker to encapsulate service environments, so they run with the same configuration regardless of where they're deployed. You can also automate the deployment of these containers with Kubernetes (if you have the resources, time, and experience for it), which helps keep environments synchronized across the dev, test, and prod stages.
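On the testing side of the same idea, libraries like Testcontainers let a test spin up the exact Docker image you deploy, so laptop and CI runs see an identical dependency. A sketch, assuming a Postgres dependency and JUnit 5 (image tag and test name are placeholders):

```java
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.utility.DockerImageName;

import java.sql.Connection;
import java.sql.DriverManager;

import static org.junit.jupiter.api.Assertions.assertTrue;

class RepositoryIT {

    @Test
    void connectsToTheSameImageUsedInDeployment() throws Exception {
        // Pin the exact image tag used in deployment so every environment
        // tests against identical bits.
        try (PostgreSQLContainer<?> postgres =
                 new PostgreSQLContainer<>(DockerImageName.parse("postgres:16.3"))) {
            postgres.start();
            try (Connection conn = DriverManager.getConnection(
                    postgres.getJdbcUrl(), postgres.getUsername(), postgres.getPassword())) {
                assertTrue(conn.isValid(2));
            }
        }
    }
}
```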
Do you have any other suggestions, ideas, tools, experience, etc.?
At the individual-microservice level, for unit tests, mock the external dependencies where possible, or seed an instance of the external dependency with data (for reads) that covers specific use cases. These steps might help.
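For instance, a minimal sketch of the mocking approach with Mockito, where the client interface and classes are hypothetical stand-ins for your real dependency:

```java
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

class OrderServiceTest {

    // Hypothetical interface wrapping the external user-service API.
    interface UserClient {
        String userName(long id);
    }

    // Hypothetical class under test.
    static class OrderService {
        private final UserClient users;
        OrderService(UserClient users) { this.users = users; }
        String greetingFor(long userId) { return "Hello, " + users.userName(userId); }
    }

    @Test
    void usesMockedDependency() {
        // Replace the network call with a canned answer for this use case.
        UserClient users = mock(UserClient.class);
        when(users.userName(42L)).thenReturn("Alice");

        assertEquals("Hello, Alice", new OrderService(users).greetingFor(42L));
    }
}
```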
At the system and regression test level, freeze (lock down) the complete environment (staging or an intermediate pre-push validation environment) to known versions when testing releases, so that versions are fixed and can't accidentally be changed by someone's mistake or by CI/CD version bumps. Each component is locked either to the new version being tested for release, or to the version currently running in production if only a subset of microservices is being pushed rather than all services being updated. Freeze whenever it's time to test a release; otherwise, leave the environment unfrozen for regular testing of versions that get bumped from time to time.
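One way to enforce such a freeze mechanically is a small CI guard that compares what's deployed against a locked manifest. Everything here (the file format and the lookup of deployed versions) is a hypothetical sketch:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;
import java.util.Properties;

// Hypothetical release-freeze guard: fails the pipeline if any service in
// the frozen environment drifts from the versions pinned in freeze.lock
// (simple "service=version" lines, e.g. "user-service=2.4.1").
public class FreezeGuard {

    public static void main(String[] args) throws IOException {
        Properties locked = new Properties();
        locked.load(Files.newBufferedReader(Path.of("freeze.lock")));

        // In a real pipeline this would query the cluster or image registry;
        // here it's a stand-in map of currently deployed versions.
        Map<String, String> deployed = Map.of(
            "user-service", "2.4.1",
            "order-service", "1.9.0");

        boolean drift = false;
        for (String service : locked.stringPropertyNames()) {
            String want = locked.getProperty(service);
            String have = deployed.get(service);
            if (!want.equals(have)) {
                System.err.printf("DRIFT: %s is %s, frozen at %s%n", service, have, want);
                drift = true;
            }
        }
        if (drift) System.exit(1);   // block the release test run
        System.out.println("Environment matches the freeze manifest.");
    }
}
```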
While this is an important and tricky area to cover, I think an equally complicated, and harder to tackle, problem than functional testing is scale and performance testing of microservices (as a complete set) in the cloud.
Truly testing at cloud scale is costly and resource-intensive, and trying to model and extrapolate/interpolate results from a scaled-down setup takes work and experimentation, and may or may not reflect reality. It only gets harder the more microservices you have in the cloud.
It would be interesting to hear what others have to say with respect to that.