What is a test case?

Perhaps a slightly provocative title, but here we go!

We fairly recently started automated API testing, but we aren’t very happy with the way it is going and are rethinking our approach from the ground up. We think a big weakness is our approach to test cases: whether we really know what they are, and whether our test cases fit at all with a move to agile methodologies.

We currently have test cases based on each individual remote API method, with happy flows and negative scenarios for each. We are struggling to keep these concise while still capturing the prerequisites and verification steps effectively. We also have problems because we are using the APIs themselves to handle a lot of the set-up: it is a perpetual struggle to keep tests isolated, and we wonder what the point is if we are hitting every API anyway to set up tests further down the chain.

What are your general approaches to creating a test case?
Is “these are the test cases for API X, these are the test cases for API Y” the right approach?
Is a more scenario-based approach better: a journey through the application, calling the APIs in turn?
How formally do you define a test case? Do you go for a [prerequisite, steps, results] approach, or something else?

More generally, can anyone suggest any good reading on the subject?

I’m doing automated API tests in my present job, for which we’re using Cucumber and Gherkin, with the intention that we have a test suite that can run on Jenkins for each API. A typical happy-path scenario would look something like this:
Scenario: call out to the (name of API) results in success
  Given a valid call for the (name of API)
  When the request is made
  Then the service response is “OK”
  And the (stuff returned) is as expected for (response from API)

So to go back to your question, we’re following the prerequisite > steps > results approach. We do sometimes use But steps to alter what’s going into the call to get specific outcomes, usually triggering a specific error. We’re focusing on testing each individual API, but where there are interdependencies we try to factor these into the tests, e.g. if API X is working but API Y isn’t, what result should we get?
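
A negative scenario with a But step might look something like this, where the missing field and the error text are just placeholders:

Scenario: call to the (name of API) with a mandatory field missing returns an error
  Given a valid call for the (name of API)
  But the request is missing a mandatory field
  When the request is made
  Then the service response is “Bad Request”
  And the error message identifies the missing field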

We generally organise scenarios in feature files by response code family: usually a success one for the 200 family, a client-error one for the 400 family, and a server-error one for the 500 family, and we do this on a per-API basis.
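
As a rough sketch of that layout for a single, hypothetical customer API:

customer-api-success.feature (200 family)
customer-api-client-errors.feature (400 family)
customer-api-server-errors.feature (500 family)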

If there is a definite flow path through your APIs, I’d consider how that path can be reflected in the organisation of your tests. Even if you are testing each API individually as far as your batches of test cases are concerned, could you add value to the test process, or just make it run smoother, by arranging the order the tests run in to match the flow of an end user going through them?
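
Cucumber, for example, runs the scenarios in a feature file in the order they are written by default, so a journey-shaped feature (names illustrative, steps omitted) could be laid out like this:

Feature: customer journey through the APIs
  Scenario: create a customer through the customer API
  Scenario: add a payment method through the payments API
  Scenario: place an order through the orders API
  Scenario: check the order appears in the customer’s history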

Hello @marko!

APIs are wonderful little critters. When designed and written well, they are small, isolated, and very testable. However, they may be deceptive in their simplicity.

We started an API project designed both to provide a single, enterprise-wide place for a vendor service and to isolate some of the vendor specifics from the enterprise. In addition to that functionality, we added database support, some data validations, and logging. None of those added features is (or should be) visible to the user of the API, but they still require some exercise.

Early in the project, we established some testing boundaries with the development team. The unit tests (the tests the developers wrote) covered the bulk of the functionality beneath the API request; that is, the database support, validations, and logging were verified with unit tests (lots of mocking helped here). The testing team added automation to explore API behaviors (those a user might experience). Together, the unit tests and the behavioral tests covered the APIs very well and were executed at every check-in. Best of all, these tests became our regression suite.

At deployment, we had smoke tests to exercise the workflow much like @professorwoozle suggests in his last paragraph.
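
A smoke test in that style might be as simple as this sketch (the steps are illustrative rather than taken from our actual suite):

Scenario: vendor service round trip after deployment
  Given the deployed API is reachable
  When a known test record is submitted through the API
  Then the service response is “OK”
  And the same record can be read back through the API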

Joe

Thanks, this is helpful and gives us something to think about.

The issue we really have is setting up test data: often the easiest way to add data to test an API is by calling another API, so in our more complex tests we find we are hitting many of the APIs anyway, and we wonder about the value of some of the isolated tests. We are considering chaining the tests, so that once we’ve added a customer through the tests of that API, there is no need to do it again every time we need a customer for another API. I like the idea of giving a little more thought to the grouping of tests, perhaps having some dependencies between the tests, but limiting the scope of that.
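
For instance, a shared Given step could hide the choice between creating a customer fresh and reusing one created earlier in the run (the names here are hypothetical):

Scenario: add an order for an existing customer
  Given a customer already exists in the system
  When a valid order request is made for that customer
  Then the service response is “OK”

The step definition behind “a customer already exists in the system” could call the customer API once and cache the ID, keeping the dependency in one place rather than scattered through the tests.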

Interesting that you mention unit testing as well. I don’t think we currently lean enough on unit tests for a lot of scenarios, and we don’t really have a clean way of tracking what is covered by unit tests and can therefore be left out of the behavioural tests.

I think there is definitely scope for a happy-path smoke test, which would give a degree of quick confidence, and running through that might help clarify a few of our ideas.

I am going to add my two cents’ worth here, if that is OK. I am an assistant test manager on multiple projects, some going agile and others remaining waterfall. There is no single recipe for success, but I have found the following helps in test case design:

-the test purpose is clear
-no fluff in the test steps; we don’t want to create a test that could be executed by a computer, rather we like the human interaction
-tests should be automated only where it follows best practice, e.g. the test is highly repetitive and stable

What to automate is really the question, in my humble opinion. Test design works great when you are in an agile world: it adds traceability to user stories and gives you a way to test them manually if automation doesn’t work out.

In my experience, different people think about test cases in many different ways. Sometimes the test cases are high level (giving testers the flexibility to explore the module more), and sometimes they are more granular (spelling out exactly what needs to be done, down to the exact details).

I believe that no matter what the situation is, a test case needs to describe some flow in the system with clarity. Just reading it should give someone a basic understanding of which flows are involved and what is being tested. For this you can use BDD, detailed test cases with steps and actual/expected results, or exploratory testing charters listing the high-level flows that need to be tested.
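
For example, the same behaviour could be captured at either level (all names illustrative):

High level: “Explore creating customers through the customer API with valid and invalid details.”

Granular:
Scenario: create a customer with valid details
  Given a customer payload with a valid name and email address
  When the create-customer request is made
  Then the service response is “OK”
  And the response contains a non-empty customer ID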

In your specific example of API testing, I do not see why we cannot have some test cases testing each individual API specifically and some test cases testing the interaction between different APIs as one end-to-end flow. So it could be a combination of both. You could also mix in some unit and UI-level tests alongside these.
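
As a sketch of the end-to-end style (the APIs named are hypothetical):

Scenario: a new customer can place an order end to end
  Given a customer is created through the customer API
  And a product is available through the catalogue API
  When an order is placed through the order API
  Then the service response is “OK”
  And the order appears in the customer’s order history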

-Raj