How do end-to-end and integration tests fit into the CI/CD pipeline?

I saw a Slack thread that had 111 replies :open_mouth: so, of course, that needs its own Club thread, especially when I’ve seen others ask the same or similar questions :grin:

The original poster asked:

We have a new product we’re building that uses the microservices architecture. I don’t really have much experience with microservices, only really with monolithic. I’m having a hard time wrapping my head around how E2E and integration tests fit into the CI/CD pipeline when it comes to microservices. Right now, from what I’ve read, it doesn’t seem like they work all that well. It feels like most tests need to be at lower levels, and then there’s contract testing (which I’m also trying to learn more about). I still feel there’s value in having some E2E UI tests, but I just don’t know where they fit in. What do you guys do? How or when do you run your UI tests when a service can be deployed at any time? Or if you can point me to some good resources, that would be great!

@topperfalkon and @carloskidman provided some excellent insights in the original thread which I hope to summarise soon but thought I’d open this question up to The Club while I’m working on that :wink:

So, how do end-to-end and integration tests fit into the CI/CD pipeline for you?


Hi Heather,

I am a Product Manager for our Automation team and would like to share some personal experience with this subject.
Our team has automated regression tests for each layer of our application (API, UI, etc.).
Most of our UI tests tend to be our E2E/integration tests, used for full user experience and workflow validation.
For each test we not only categorize the test type (UI, API, etc.), but we also tag each test with its own severity level. This severity is determined by the business impact if that particular test were to fail. This allows us not only to prioritize when the tests run, but also to determine, in the overall scheme of the release, which failures need to be investigated immediately vs. within an hour vs. the next day (lowest severity). It is important to note that although we constantly analyze our tests and decommission irrelevant or brittle ones, UI test failures still need someone to manually review the reasons for failure more often than the other types.
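The tagging-and-triage idea above can be sketched in a few lines. This is a minimal illustration, not the poster's actual framework: the severity names, SLA strings, and test functions are all hypothetical, standing in for whatever test runner and tagging mechanism a team actually uses.

```python
# Minimal sketch of severity tagging (all names and SLAs are illustrative):
# each test carries a severity level so the pipeline can order runs and
# triage failures by business impact.

SEVERITY_SLA = {
    "critical": "investigate immediately",
    "high": "investigate within an hour",
    "low": "investigate next day",
}

REGISTRY = []  # (severity, test function) pairs

def severity(level):
    """Decorator that records a test function under its severity tag."""
    def wrap(fn):
        REGISTRY.append((level, fn))
        return fn
    return wrap

@severity("critical")
def test_checkout():      # hypothetical high-impact E2E test
    assert True

@severity("low")
def test_footer_links():  # hypothetical low-impact UI test
    assert True

# Run highest business impact first; a failure inherits the matching SLA.
ORDER = ["critical", "high", "low"]
for level, test in sorted(REGISTRY, key=lambda pair: ORDER.index(pair[0])):
    test()
    print(f"{test.__name__}: severity={level} -> {SEVERITY_SLA[level]}")
```

In a real suite the same effect is usually achieved with your runner's own tagging feature (e.g. custom markers or labels) rather than a hand-rolled registry.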
As for when these types of tests run: our E2E UI tests are kicked off after a successful deployment message for our services, but only once the API tests and smoke tests have passed.
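That ordering can be sketched as a simple gated sequence. This is a hypothetical illustration of the described flow, not a real CI configuration; the stage names and functions are assumptions.

```python
# Hypothetical sketch of the post-deploy ordering described above: E2E UI
# tests run only after the API and smoke suites have both passed, so the
# slowest suite never runs against a broken environment.

def run_api_tests():    return True  # fast, service-level checks
def run_smoke_tests():  return True  # is the deployed environment healthy?
def run_e2e_ui_tests(): return True  # slow, full user-journey validation

def post_deploy_pipeline():
    for stage in (run_api_tests, run_smoke_tests, run_e2e_ui_tests):
        if not stage():
            return f"stopped at {stage.__name__}"  # later stages are skipped
    return "all stages passed"

print(post_deploy_pipeline())
```

In practice the same gating is expressed as ordered stages or job dependencies in whatever CI system you use, triggered by the deployment's success event.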

Hopefully this info helps you out.
