Difference Between Unit and Integrated Tests


(Alastair) #1

As you can probably tell by the topics I’ve created, I’m attempting to find out as much as possible about testing in a Continuous Delivery environment.

Today I’ve had a discussion with developers around their unit tests, especially around APIs. Our developers mock/stub the API in unit tests and force an error (for example, a Bad Request) to occur.
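Roughly what the developers' unit tests look like (a minimal pytest sketch - the client class, endpoint and payload here are illustrative rather than our actual code):

```python
from unittest.mock import Mock

import pytest


# Hypothetical client under test: it wraps an HTTP session and turns a
# 400 response into a domain-level error.
class OrderClient:
    def __init__(self, session):
        self.session = session

    def create_order(self, payload):
        response = self.session.post("/orders", json=payload)
        if response.status_code == 400:
            raise ValueError(f"Bad Request: {response.text}")
        return response.json()


def test_create_order_raises_on_bad_request():
    # Stub the HTTP layer so no real API is hit, and force a 400.
    session = Mock()
    session.post.return_value = Mock(status_code=400, text="missing field 'items'")

    with pytest.raises(ValueError):
        OrderClient(session).create_order({})
```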

We have tests which also generate a Bad Request by sending a POST with an invalid body - but we do this against a deployed service.
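And roughly what our test against the deployed service looks like (again just a sketch, with a placeholder base URL):

```python
import requests

# Placeholder for the deployed test environment's base URL.
BASE_URL = "https://orders.test.example.com"


def test_post_with_invalid_body_returns_bad_request():
    # Deliberately invalid body sent to the real, deployed endpoint.
    response = requests.post(f"{BASE_URL}/orders", json={}, timeout=10)
    assert response.status_code == 400
```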

Our test checks how 2-3 areas of code interact with each other - but when our unit tests already cover each of these in isolation, is it still worth having an additional test around them?

I’m not sure if this is the best example - but if our developers are covering all code paths in unit tests, where does the value of integration tests (as part of a build process) lie? Should we be using them sparingly?


(Jesper) #2

A common pitfall is to have a range of test phases that do the same tests… because requirements. I usually insist that each phase have a different focus or a different environment (with external integrations etc).

What I’m hearing is that you have 2-3 modules you can test both individually and as a whole, in both cases at the API and the service layer. So you could model this as four areas - module / whole on one axis, service / API on the other - and then place test activities in each field.

A similar model:


(Gabe Newcomb) #3

Integration tests look for bugs around interfaces and mistaken expectations about which function/piece is supposed to do what – they absolutely are useful even if your team has super awesometastic unit tests in place. I wouldn’t worry about duplicating coverage unless you’re duplicating both the area of coverage AND the nature of the coverage. Of course, when prioritizing, you may very well want to get some coverage in place first for the areas of the code that have no coverage at all, and then worry about coverage at the different levels (unit / integration / end-to-end).
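As a toy example of the kind of mistaken expectation I mean (hypothetical code): both modules have passing unit tests with mocks, but they disagree about the interface, and only a test that runs them together catches it.

```python
# Module A works in pounds; its unit tests mock the gateway and pass.
def charge_customer(gateway, amount_pounds):
    return gateway.charge(amount_pounds)


# Module B expects whole pence; its own unit tests also pass.
class PaymentGateway:
    def charge(self, amount_pence):
        if not isinstance(amount_pence, int):
            raise TypeError("amount must be whole pence")
        return {"charged_pence": amount_pence}


# Running the two together exposes the mistaken expectation that the
# mocked unit tests on either side could never catch.
def test_charge_customer_against_real_gateway():
    result = charge_customer(PaymentGateway(), 19.99)  # fails: 19.99 is not whole pence
    assert result["charged_pence"] == 1999
```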


(Jesper) #4

How about renaming them to fit your context?
http://www.codingthearchitecture.com/2015/06/12/unit_and_integration_are_ambiguous_names_for_tests.html


(Darrell) #5

The way I look at this is that one test should, ideally, find one problem. If you have a test which can fail for multiple reasons then you have to spend time figuring out which reason (or reasons) made the test fail.

A unit test should be set up so that there is one reason it can fail and there is one assertion to catch that failure. When you see a failure you know exactly what failed.
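For example (a sketch - the function under test is made up):

```python
# One reason to fail, one assertion to catch it.
def add_vat(net_price, rate=0.20):
    return round(net_price * (1 + rate), 2)


def test_add_vat_applies_default_rate():
    assert add_vat(100.00) == 120.00
```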

There might be three reasons an integration test fails. If you are testing the integration of code A and code B, the test could fail because there is a bug in code A, because there is a bug in code B, or because there is a bug in the integration itself. If we set up the Continuous Integration (CI)/Continuous Delivery (CD) pipeline so that when the unit tests fail we don’t even bother running the integration tests, then when an integration test between code A and code B fails, we know that all the unit tests passed. Therefore the integration test failed because code A does not integrate with code B.

Again, there were three reasons the integration test might have failed, but because we run the unit tests first, there is really only one reason the integration test failed.
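One way to get that ordering in practice (a sketch using pytest markers; your CI tool will have its own way of saying "only run the integration tests if the unit tests pass"):

```python
import pytest

# A typical pipeline runs the two commands below in order; the second is
# only reached when the first (the unit tests) succeeds:
#
#   pytest -m "not integration"   # fast, isolated unit tests
#   pytest -m integration         # cross-module tests, run only after unit tests pass
#
# (Register the "integration" marker in pytest.ini to avoid warnings.)


@pytest.mark.integration
def test_code_a_integrates_with_code_b():
    ...
```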

Finally, if an integration test finds ONLY the same defects as a group of unit tests, then it is a redundant test and does not need to be run. You should Google “Mike Cohn test pyramid”. You’ll see examples of the layers of testing (unit, contract, integration, etc.). Tests on the pyramid should be pushed down to simpler tests if possible. If 5 unit tests can catch the same thing as 1 integration test then write the 5 unit tests and remove the integration test. If 5 unit tests can catch 5 of 6 defects but the integration test will find the 6th defect as well, write the 5 unit tests and keep the integration test.

Darrell

P.S. Testing too much is better than not enough testing, IF you are trying to reduce defects at all costs. However, too much testing might be costly and the business might decide they’d rather miss a few defects but ship for less money.


(Vishal Dutt) #6

A unit is the smallest testable part of the software. Unit testing is mostly performed by the software developers themselves or their peers; in rare cases it may also be performed by independent software testers. In unit testing the advice is to focus on testing behavior rather than just writing test cases, whereas integration testing can be performed by developers or testers, as per organization policy. According to top software testing companies, the differences between the two are as follows:

  1. Unit testing verifies that a single module of code is working correctly, whereas integration testing verifies multiple modules working together.

  2. Unit testing is not divided into different categories, whereas integration testing is classified into the following categories:
    a. Top-down Integration.
    b. Bottom-up Integration.
    c. Big Bang.
    d. Sandwich etc.

  3. Unit testing verifies a single component of the software, whereas integration testing checks the behavior of the software as a whole.

  4. A module specification is required for unit testing, and an interface specification is required for integration testing.

  5. Unit testing is a form of white-box testing, whereas integration testing falls under both white-box and black-box testing.

Besides all these differences, unit testing and integration testing are both essential techniques for developing error-free software.

Hope this information is helpful for you.


(Joe) #7

We recently completed an API project. I worked with the tech lead to establish good unit tests written by developers, and the test engineers would create some automation. We called the tests written by test engineers behavioral tests.
At first, we had some duplication between the unit tests and the behavioral tests. Since the behavioral tests were meant to exercise the API as a user would, some unit tests were discarded where they duplicated that kind of test. As the API grew, the difference between the unit tests and behavioral tests was easier to determine. Both sets of tests used mocks to isolate the tests from environmental errors.
When we deploy to a testing environment where we can execute integration tests, we execute a single smoke test. The objective of the smoke test is to evaluate connectivity (can the API connect to a database or other servers?), security (are the IDs and roles established in the environment so that the API can operate?), and configuration (are the configurations correct for the environment?). We rarely executed any other tests because we had confidence that API behavior was sufficiently exercised with the unit tests and behavioral tests.
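A rough sketch of what that single smoke test covers (the endpoints and environment variable names here are hypothetical; ours are project-specific):

```python
import os

import requests

# Environment-specific configuration and credentials (hypothetical names).
BASE_URL = os.environ["API_BASE_URL"]
API_TOKEN = os.environ["API_TOKEN"]


def test_smoke_api_is_deployed_and_wired_correctly():
    # Connectivity + configuration: a health endpoint that pings the database
    # and downstream services should report healthy in a correctly set up environment.
    health = requests.get(f"{BASE_URL}/health", timeout=10)
    assert health.status_code == 200

    # Security: an authenticated call should succeed with this environment's credentials.
    me = requests.get(
        f"{BASE_URL}/me",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    assert me.status_code == 200
```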