Creating a test strategy for asynchronous microservices applications

Over the last few months I have been working on a personal study project. The initial objective was to study the differences in developing and implementing quality strategies for microservices that communicate synchronously versus asynchronously. I talked about that in this post: Study project about QA strategy for microservices projects

I have learned a lot in this process, and I want to share some of my experiences through some articles with everyone in the community.

Here is the link for the first article, named “Creating a test strategy for asynchronous microservices applications”: Creating a test strategy for asynchronous microservices applications | by Fernando Teixeira | assert(QA) | Jan, 2021 | Medium

Any feedback is really welcome!! :slight_smile:


This is quite good, I only wish I had the discipline to be as thorough in my own personal projects!


Interesting reading material.

We currently use a correlation-id header (also for performance testing) that flows through all downstream calls, so we can run our API tests end to end and check the logs for the results automatically.
Something like this: The Value of Correlation IDs | Rapid7 Blog
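To make the idea concrete, here is a minimal sketch of correlation-ID propagation. The header name `X-Correlation-ID` and the helper functions are assumptions for illustration, not part of any specific framework: an incoming request keeps its ID if it has one, gets a fresh one otherwise, and the same ID is forwarded on every downstream call so logs across services can be joined on it.

```python
import uuid

# Header name is an assumption; teams often use X-Correlation-ID or X-Request-ID.
CORRELATION_HEADER = "X-Correlation-ID"


def ensure_correlation_id(headers):
    """Return a copy of the headers with a correlation ID, generating one if missing."""
    headers = dict(headers)
    if CORRELATION_HEADER not in headers:
        headers[CORRELATION_HEADER] = str(uuid.uuid4())
    return headers


def downstream_headers(incoming_headers):
    """Build headers for a downstream call, propagating the correlation ID unchanged."""
    correlated = ensure_correlation_id(incoming_headers)
    return {CORRELATION_HEADER: correlated[CORRELATION_HEADER]}


# Example: a request arriving with an ID keeps it across the chain of calls,
# so an automated check can grep the logs of every service for that one ID.
incoming = {CORRELATION_HEADER: "abc-123"}
print(downstream_headers(incoming)[CORRELATION_HEADER])  # abc-123
```

In an API test, the test itself generates the correlation ID, sends it on the first request, and then asserts against the (aggregated) logs filtered by that ID, which is what makes the end-to-end check automatable.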


I remember these correlation IDs from my last role. For a while I was on a production support team, and I'd spend a lot of my time looking at Application Insights logs in Azure; these correlation IDs were very useful for figuring out how data traversed different systems, and in investigations generally.
