How to best approach microservice unit testing

I have recently joined a company with the aim of bringing all their testing up to where it needs to be. We have identified unit testing as the first area that needs improving. I have been tasked with putting together a time frame for when the tests will be improved (e.g. 50% in 3 months, 80% in 6 months, etc.).

My quandary is this: do I focus on one team and have them deal with each microservice in turn, or do we defer the historical unit testing until we start looking at the regression tests, and just focus on getting unit tests sorted from this point forward?

Any advice would be gratefully received.


Please consider carefully whether you want to go the “KPI route” with regard to unit test coverage, because that is for sure a rabbit hole.

Personally, I’d start with:

  • Risk: which code is super important, crucial, or complicated? Start there.
  • Use refactoring as a starting point to then add unit tests.
  • What do the devs want? Make it a shared decision.

That should get you going.


Getting the team involved to help sounds like a great idea.

Also it may be worth time boxing the activities to see what value they give you. This may also help to show that you are going down the right path to the key stakeholders.


I would also look into prioritizing the microservices that need testing. I recently wrote about how to find the components that may require more attention: Forensic Testing: Uncovering Quality Issues Using Your | MoT
There is also an analysis like that for microservices, if you are interested.


To add to @maaike.brinkhof’s excellent point that coverage is a horrible metric to use as a KPI: unless the company is willing to stop feature work and focus on adding the tests that don’t exist, I’m not a big fan of adding unit test coverage to neglected code bases just to have unit tests. Adding unit tests to untested code in a vacuum is just asking for trouble.

I much prefer starting at the top of the test pyramid: add some high-level e2e tests to verify the functionality and form the start of the safety net, and require that all new work include unit tests. That way coverage is added to the parts of the code that are actually being touched and iterated on, and teams maintain forward momentum instead of focusing on what may be essentially static code.

And I’d definitely do this across the board with all teams - the focus should be on building a culture of quality, not on whether there are unit tests, what the code coverage is, etc., and especially not only on certain code or projects.


If you have to deal with an existing code base you kinda want to grow the unit testing coverage organically. Best way to do it imho is to embed it into the way of working. So, from now on, whenever we build a new feature, we also write unit tests for it. Whenever we fix a bug, we write a unit test that specifically covers that bug. When we refactor something, we add unit tests for that part.
I wouldn’t bother with code that’s outside the scope I’ve mentioned above. It’s already there and apparently it’s sort of working… (we assume). As long as we don’t make changes, we should be good. And if we do make changes, the approach above means we start writing unit tests for that code anyway.
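The “write a test for every bug you fix” step can be sketched in a few lines. Python is used here purely for illustration, and `apply_discount` is a made-up function standing in for wherever the bug actually lived:

```python
def apply_discount(price: float, percent: float) -> float:
    # Bug fix: discounts over 100% used to produce negative prices;
    # we now clamp the result at zero.
    discounted = price * (1 - percent / 100)
    return max(discounted, 0.0)


def test_discount_over_100_percent_is_clamped_to_zero():
    # Regression test pinning down the exact bug we just fixed,
    # so it can't silently come back in a later refactor.
    assert apply_discount(50.0, 150) == 0.0


def test_regular_discount_still_works():
    # Guard the behaviour around the fix as well.
    assert apply_discount(100.0, 25) == 75.0
```

The nice side effect of this approach is that coverage grows exactly where the code has proven itself risky, rather than where a percentage target says it should.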

Forget about percentages as they are meaningless. It’s very easy to go from 0 to 100% coverage by doing some snapshot testing. But does that really give you more confidence in the product that you’re about to release?
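To make that concrete, here is a hypothetical Python sketch of a snapshot-style test: it executes the whole function (so the coverage number looks great) but only asserts that today’s output matches a previously recorded value, bugs included:

```python
def build_invoice(items):
    # Imagine this fans out through many layers of the service.
    total = sum(qty * price for qty, price in items)
    return {"lines": len(items), "total": total}


# Recorded once from a past run and never questioned since.
SNAPSHOT = {"lines": 2, "total": 35}


def test_invoice_matches_snapshot():
    # 100% of build_invoice is "covered" by this one test, but if
    # the snapshot was recorded while the total was computed
    # wrongly, it would happily keep passing on broken behaviour.
    assert build_invoice([(2, 10), (3, 5)]) == SNAPSHOT
```

High coverage from tests like this tells you the code ran, not that it is correct.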


Thanks one and all for the very helpful advice.
I think I’m going to draw a line in the sand where we are now and ensure unit tests are written from this point forward. As and when we refactor, we can use that time to add “historical” unit tests to help build coverage and confidence.