Perhaps a slightly provocative title, but here we go!
We have fairly recently started automated API testing, but we aren’t very happy with the way it is going and are rethinking the approach from the ground up. We think a big weakness is our approach to test cases: whether we really understand what they should be, and whether they fit at all with our move to agile methodologies.
We currently have test cases based on each individual remote API method, with happy flows and negative scenarios for each. We are struggling to keep these concise and to capture the prerequisites and verification steps effectively. We also have problems because we use the APIs themselves to handle a lot of the setup; it is a perpetual struggle to keep tests isolated, and we wonder what the point is if we are hitting every API just to set up tests further down the chain. A rough sketch of what one of these tests looks like is below.
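To make the problem concrete, here is a minimal sketch of the shape our per-API tests take today (pytest + requests; the `/customers` and `/orders` endpoints and the base URL are purely illustrative placeholders, not our real API):

```python
import requests

BASE_URL = "https://test-env.example.com/api"  # placeholder environment

def test_create_order_happy_path():
    # Prerequisite handled by calling *another* API: create a customer first.
    # This is the isolation problem - if POST /customers breaks,
    # every test further down the chain fails with it.
    customer = requests.post(
        f"{BASE_URL}/customers", json={"name": "Test Customer"}
    ).json()

    # The API actually under test.
    response = requests.post(
        f"{BASE_URL}/orders",
        json={"customerId": customer["id"], "items": ["sku-1"]},
    )

    assert response.status_code == 201
    assert response.json()["customerId"] == customer["id"]
```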
What are your general approaches to creating a test case?
Is going for the “These are the test cases for API X, these are the test cases for API Y” the right approach?
Is more of a scenario-based approach better, i.e. a journey through the application calling the APIs in turn? Something like the sketch below is what we have in mind.
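A rough sketch of the scenario/journey style we are weighing up, with the same caveat that the endpoints are only illustrative:

```python
import requests

BASE_URL = "https://test-env.example.com/api"  # placeholder environment

def test_customer_places_and_cancels_order():
    # Step 1: register a customer.
    customer = requests.post(
        f"{BASE_URL}/customers", json={"name": "Journey Customer"}
    ).json()

    # Step 2: place an order for that customer.
    order = requests.post(
        f"{BASE_URL}/orders",
        json={"customerId": customer["id"], "items": ["sku-1"]},
    ).json()

    # Step 3: cancel the order and verify the final state.
    cancel = requests.delete(f"{BASE_URL}/orders/{order['id']}")
    assert cancel.status_code == 200

    final = requests.get(f"{BASE_URL}/orders/{order['id']}").json()
    assert final["state"] == "cancelled"
```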
How formally do you define a test case? Do you go for a [prerequisite, steps, results] approach, or something else? (For reference, our current attempt at that structure is sketched below.)
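This is roughly how we record a test case at the moment; the fields and IDs are just an illustration of the [prerequisite, steps, results] shape, not a format we are recommending:

```python
test_case = {
    "id": "TC-ORDERS-001",
    "prerequisites": ["A customer exists in the test environment"],
    "steps": ["POST /orders with a valid customerId and one item"],
    "expected_results": [
        "Response is 201 Created",
        "Order is retrievable via GET /orders/{id}",
    ],
}
```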
More generally, can anyone suggest any good reading on the subject?