API Automation: what's the right approach with a lot of dependent endpoints?

Hello everyone,

I automate a REST API with Postman/Newman, and that has been a lot of fun for the last few months. But I'm getting close to the point where I have to rethink my strategy.

Right now, I have a lot of endpoints that need input to run, and their responses give me the input to get the next endpoint working. I also hadn't expected that we would develop so many new ones during the last weeks, so now I'm at the point where I have a LOT of endpoints chained together. They all run on our CI/build, but when one breaks, the whole chain breaks. I know which one caused the problem, but it's not satisfying. And I have to make a lot of compromises to keep that (by now) very long chain going.

As I see it, there are two ways, each with their pros and cons:
Option 1: Keep the weird monster chain alive
+ I test EVERY endpoint we have
+ I don't have to put in more work, as that is where I am right now

- The compromises I have to make are starting to bug me. I have to do weird workarounds and slowly accept redundancy in the collections to get every endpoint into my execution chain.
- I don't feel that's the right/professional approach anymore (with the number of endpoints we have now; when I started, we had way fewer)

Option 2: Build a collection of use cases where I chain a maximum of 3-5 endpoints into a small test scenario
+ I feel that's more professional
+ I don't have to take uncommon paths; I work through the common use cases a user would also take. A lot of bad compromises are gone.

- The rebuilding would take A LOT OF TIME
- ENORMOUS redundancy of endpoints: if one changes, I have to edit a whole lot of requests, plus every other problem that comes with a lot of redundancy (in Postman there is no such thing as inheritance for endpoints like in programming languages, AFAIK)
- I wouldn't test EVERY endpoint in our ecosystem
- A whole lot of the time I spent over the last months would be wasted (not counting all the API testing experience I gained in that time)
- A lot of input data would be "hardcoded" instead of coming from the large chain. Sure, that way I can make my assertions much more precise, but I feel something could slip past me (bugs)

On top of that, I think with this approach I could miss more bugs or endpoints that fail or stop working than with the 100% solution (the weird superchain).

What do you think? Maybe there is a much better third option I just don't see right now. I would be very thankful for any tips.

I have to add: we don't have an isolated test environment yet, which is very bad. We are working on that and something will come in the next months, but right now my superchain "spams" our dev environment. In my view, that would also happen with the use-cases approach.

Best regards


Hello Patrick!

I was intrigued by your challenges: multiple endpoints with dependencies, test data, and no isolated test environment, to name a few.

If I were to start with a new suite of tests, I would approach their design in a more isolated fashion. For example, you mention a scenario that runs against endpoint A. The response from endpoint A is an input for endpoint B. This dependency, as you have seen, tends to add brittleness to the test suite. That is, the execution of the suite will break if an endpoint replies with an error or unexpected data.
In my opinion, having a scenario or scenarios for a single endpoint reduces the brittleness. You could repeat this pattern for as many endpoints as needed, creating a suite of small, quick tests. While the results demonstrate only that your endpoints operate in isolation, they provide a baseline that could be used as a smoke test. It is simply the first suite used to exercise new and changed code, and it should provide good feedback to developers even before deploying (that's a hint to engage your developers in your testing).
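As a rough sketch, an isolated test for a single endpoint could live in that request's Tests tab in Postman. The endpoint, variables, and field names here (`{{baseUrl}}`, `/users/{{userId}}`, `id`, `email`) are only placeholders, not your actual API:

```javascript
// Tests tab of a single request, e.g. GET {{baseUrl}}/users/{{userId}}
// Endpoint and field names are examples -- adjust to your own API.

pm.test("Status code is 200", function () {
    pm.response.to.have.status(200);
});

pm.test("Response arrives reasonably fast", function () {
    pm.expect(pm.response.responseTime).to.be.below(1000);
});

pm.test("Body has the expected shape", function () {
    const body = pm.response.json();
    pm.expect(body).to.have.property("id");
    pm.expect(body).to.have.property("email");
});
```

A folder full of small requests like this runs in any order, so one failing endpoint no longer takes the rest of the suite down with it.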

You mention building a collection of use cases that chain a small number of endpoints. I'm guessing this demonstrates or evaluates how the endpoints might be used by customers. These are certainly more valuable tests. When deciding which chain of endpoints to use in a scenario, I suggest you assess each endpoint, chain, or scenario by risk: if it failed, what would the impact on the customer be? Start building test cases around the largest risks.
I must confess some concern around dependencies between APIs (also known as endpoints; I use the terms interchangeably). While my exposure to API testing has been a single, large project, a primary design tenet was dependence on no more than one or two pieces of data in any one endpoint. It may be too late in your project for this suggestion, but influencing the API design to reduce dependence could improve endpoint testability.

Test data is your best friend when evaluating endpoint behavior. As I'm sure you have seen, you can vary the payloads to explore boundaries, data validations, and errors in depth using Postman. I encourage you to think of test data as an independent entity, especially for the smoke tests suggested above. With the dependency between endpoints, there may be an opportunity to explore variations in the data. This helps to explore deeper behaviors in a succeeding endpoint. What I mean is, once you have a response from endpoint A, you can submit it to endpoint B without change (happy path) or change one or two values (exploring more behavior). How you change the values can be driven by risk, as described above.
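One concrete way to keep test data separate is Newman's iteration data: the collection stays the same, and a data file drives the variations. The file name, fields, and expected values below are invented just to show the shape of it:

```javascript
// users-data.json -- one row per iteration (file name and fields are examples):
// [
//   { "email": "valid@example.com", "expectedStatus": 200 },
//   { "email": "not-an-email",      "expectedStatus": 400 },
//   { "email": "",                  "expectedStatus": 400 }
// ]

// In the request's Tests tab, read the current row's values:
const expectedStatus = pm.iterationData.get("expectedStatus");

pm.test("Status matches the data row", function () {
    pm.expect(pm.response.code).to.eql(expectedStatus);
});

// Run the collection once per data row from the command line:
//   newman run my-collection.json -d users-data.json -e dev-environment.json
```

The request body itself would reference `{{email}}`, so adding a new boundary case is just adding a row to the data file rather than editing the collection.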

I have found Postman to be very valuable. The ability to explore endpoints and share scenarios really helped to build a collaborative environment for my project. I invite you to consider other tools that may provide more flexibility in test design or more independence during test execution. One tool is MS Test; there are many others.

Lastly, I hope you can have a test environment established soon. As you are probably aware, a single environment can change often, or can become unavailable. This impacts your project by slowing down both development and testing.

Happy Testing!
Joe

Hello Joe,

Thanks a lot for the answer!

Yes, it's way too late to try to reduce the dependence between endpoints. That's the big problem. But I think I will stick to use cases. I don't think I can test more than 30% of the endpoints without chaining at least 3-5 of them together in some way. That's not a great solution, I guess :frowning:

I also think that having specific test data for certain endpoints is a good idea.

Your time and learning have not been wasted either; you've learnt from these shortcomings and are working towards fixing them. It's not wasted time if you're learning. In the future, you'll know exactly what to do (smaller, more manageable tests rather than one big monster test).

And also, IMO the monster test you have is still alright; after all, if one part fails, it's showing that something is broken.
Have you looked into using something other than Postman for API testing? Maybe it's too late, or maybe you don't have the time. I have had lots of success using Mocha and Node.js with an API request package to write quick, modular REST API tests.
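To give you an idea of what that can look like, here is a minimal sketch using Mocha with axios as the request package; the base URL and routes are made up, so treat it as a shape rather than something to copy:

```javascript
// test/users.test.js -- run with: npx mocha
// Assumes mocha and axios are installed; base URL and routes are placeholders.
const assert = require("assert");
const axios = require("axios");

const baseUrl = process.env.API_BASE_URL || "https://dev.example.com/api";

describe("GET /users/:id", function () {
    this.timeout(5000); // real network calls can be slow

    it("returns 200 and a user object", async function () {
        const res = await axios.get(`${baseUrl}/users/1`);
        assert.strictEqual(res.status, 200);
        assert.ok(res.data.id, "response should contain an id");
    });

    it("returns 404 for an unknown user", async function () {
        const res = await axios.get(`${baseUrl}/users/999999`, {
            validateStatus: () => true, // don't throw on non-2xx responses
        });
        assert.strictEqual(res.status, 404);
    });
});
```

Each `describe` block stays small and independent, so a failing endpoint only fails its own tests instead of a whole chain.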

Hey Patrick,

I know it’s been a while since you posted this question so maybe you have already made a lot of progress on it :slight_smile: Let us know how it is going if you have!

I have a couple of thoughts on this. The first thing to note is that Postman does allow for parameterization and scripting in some pretty powerful ways. You said that:

> in Postman, there is no such thing as inheritance

However, there are ways to share data between requests in Postman, for example collection and environment variables. Using those might help a lot with maintenance, as you can set things up so that you only need to change something in one place to have it update across all the requests. I'm not sure if you have looked into that much?
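As a small, hedged example of what I mean (the variable and endpoint names are invented): a request can store part of its response in a collection variable, and every later request just references that variable instead of hardcoding the value:

```javascript
// Tests tab of request A, e.g. POST {{baseUrl}}/orders -- names are examples
const body = pm.response.json();

// Store a value from the response once; later requests can read it.
pm.collectionVariables.set("orderId", body.id);

// Request B then uses the variable directly in its URL or body:
//   GET {{baseUrl}}/orders/{{orderId}}

// Likewise, keeping the host in an environment variable means one change
// to the environment updates every request that uses {{baseUrl}}.
```

So even without inheritance, a value only has to be maintained in one place.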

Also, a general rule of thumb I use with any automation is that the most expensive part is the maintenance, so if you want effective feedback you need to consider how to make the tests easy to maintain. Falsely failing tests are expensive in many ways. I would consider refactoring with a maintenance-first attitude. How can you make the tests easy to update when they do fail? What things can you check that are very unlikely to change (i.e. that are properties of the system)? When thinking about automated regression tests, we really do need to think about a different problem set than manual testing.

Many things that make sense to run once (like a multi-linked chain) don't make sense to run over and over in a changing system. My thought is that you are usually better off using small, more targeted tests when automating. Hopefully that helps you out!