What are the pros and cons of writing an API test automation framework at code level as opposed to using a tool?

As I look into writing automation frameworks at the API layer (relatively new to this area), a question that keeps cropping up in my mind is: why should I write an automation framework at code level in C# (with RestSharp) or Java (RestAssured), as opposed to using a tool like Postman or SoapUI and utilising what they have to create a framework?

What would be the pros and cons of each?


Whilst you can write C# with RestSharp, if you haven’t already I’d check out Postman. There are some great examples of how to use it at https://github.com/DannyDainton/All-Things-Postman#example-guides, and I’ve also written a very short blog post on Postman: http://vivrichards.co.uk/testing/postman-api-testing-introduction. Postman also has a nice interface which enables you to write checks for 200 responses, response times, the response body, etc. This is great if you have other people who may want to test the services but aren’t so clued up on writing C#, etc.
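For reference, the equivalent checks written at code level might look something like this rough Python sketch with requests (the endpoint and response field are made up for illustration):

```python
import requests


def test_get_accounts():
    # The same kinds of checks Postman's test tab gives you:
    # status code, response time, and fields in the response body.
    response = requests.get("https://api.example.com/accounts", timeout=5)

    assert response.status_code == 200
    assert response.elapsed.total_seconds() < 1.0
    assert "accounts" in response.json()
```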

You can also export the Postman tests to run on TeamCity or other continuous integration software using a free runner called newman, or you can export the tests into C# RestSharp code. I’ve tended to go down the Postman route due to the ease of creating the tests; I feel Postman is really great for exploratory testing (toggling things on/off, etc.) and I don’t then have the overhead of building and maintaining another C# framework.

I guess you have to try things and see what works best for you, really; it’s not one size fits all.

This makes a lot of sense, thank you Viv, some interesting things for me to take away.

Out of interest, what benefits would come from exporting the Postman tests to C#?


To be honest I’ve not done a lot of exporting from Postman. I think the benefits may be that it supports exporting to a range of different languages, and the code it generates doesn’t have a lot of bloat. What you may find, though, is that the tests you export start to contain a lot of duplicate code, so you’d possibly need to extract those parts into a reusable helper. As soon as you start to have lots of helpers, you may find the code you export from Postman needs to be tweaked to work with your custom helpers… As mentioned, I’ve stuck to just using Postman and exporting the tests up to now, so I can’t really offer any more advice on RestSharp.


I don’t know, but I have a bunch of questions that might help you know:

  • What programming languages do you know, and what languages do your team know?
  • What’s the UX of the tools you’re using?
  • How fast and accurately do your tools achieve your goals?
  • How much do your tools hide from you? Those abstractions are lossy; how much does that matter?
  • What granularity do your tools give you to perform actions and checks? How much does that matter?
  • You are starting a new development project when you start a new coded check suite - who is doing it, what do they know, can they understand what you’re doing, who’s maintaining and cleaning it, what standards are you using, how will your code reviews be done… all the usual questions for a new dev project are applicable here. A more bloaty tool enforces process and format and lowers the barrier to entry, but at the cost of granularity and control, and it brings vendor lock-in, more loss in the abstraction and more hidden complexity.
  • What feedback do your tools give you, and in what format?
  • Do you want to trigger the automatic check suite as part of your deployment, and where would your tool fit? You’re going to have to wire in the running of the checks and feed their output back in.
  • Can you then easily run the checks separately?
  • Can you still perform partial runs to speed up redeployment when your check suite inevitably breaks or gives false positives?
  • Do you want to run one test well, or many checks poorly? Both are valid for different contexts.
  • What does your tech stack look like?
  • You might want to test the API under certain conditions - modules deactivated, servers down, bandwidth throttling, failsafe mechanisms, a high volume of requests, large requests… can your tool achieve these things in your context? Does it need to?
  • Does your tool need to / can it handle your performance testing, security testing, cross-browser, cross-platform, different user types/permissions/sales packages, documentation, usability, error-handling, alerting (e.g. email)… what are you going to write checks to do, and can your tool do that in your context?
  • Does your team/company have a preference?
  • Does your infosec policy permit third-party tools? Do you need sign-off, and can you get it? Where are the checks stored?
  • Are you happy to rely on the uptime, updates and volatility of changes involved in third-party tools, to whatever level they occur? What if they start charging you, or charging you more? Is that okay?

Hopefully that’s a useful start; there are loads of questions to ask. I find that the pragmatic concerns of tooling are the bigger stoppers.


If you can test everything you need to with API interactions, then the tools are fine.

One big reason you might pay the cost of writing your own test framework in a programming language would be if you want custom features: environment toggles, data setup and teardown in the DB, configuring services to point at mocks, cross-checking API results with persistence layers, and interacting with non-HTTP interfaces (e.g. we use RabbitMQ queues as a messaging layer, so being able to read from and publish to queues is nice to have).
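For example, cross-checking an API result against the persistence layer might look something like this (a minimal sketch in Python; the endpoint, table and connection details are illustrative assumptions):

```python
import psycopg2
import requests

API_URL = "http://localhost:8000"                # illustrative service under test
DB_DSN = "dbname=app user=test host=localhost"   # illustrative connection string


def test_created_account_is_persisted():
    # Drive the system through its public API...
    response = requests.post(f"{API_URL}/accounts", json={"name": "test-user"})
    assert response.status_code == 201
    account_id = response.json()["id"]

    # ...then check the result directly in the database, which is awkward
    # to do from a purely HTTP-focused tool.
    with psycopg2.connect(DB_DSN) as conn:
        with conn.cursor() as cur:
            cur.execute("SELECT name FROM accounts WHERE id = %s", (account_id,))
            row = cur.fetchone()

    assert row is not None and row[0] == "test-user"
```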

That’s interesting that you mention reading from and publishing to queues, as it’s likely to be something we need to do at the place where I work currently. At a high level, how would you go about doing that as part of an automation framework?

Where I’m at now, we just have super-thin wrappers around the raw Python libraries, i.e. client libraries that are tiny wrappers around requests, with methods like create_account(payload), where payload is the Python dict representation of the request payload.
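One of those wrappers is only a few lines; a sketch (the base URL and endpoint are illustrative):

```python
import requests

BASE_URL = "https://api.example.com"  # illustrative; in practice this comes from config


def create_account(payload):
    """Tiny wrapper around requests: payload is a plain Python dict sent as JSON."""
    return requests.post(f"{BASE_URL}/accounts", json=payload)
```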

Similarly, we have convenience methods that wrap pika (the RabbitMQ library for Python) to publish to queues and read from queues, as well as bind temporary queues to multiplexing exchanges so that tests can read the queues without affecting the applications under test, which are also reading the queues for downstream processing.

And then we just tie all that together in a test method using the unittest library. A sample test for us would likely create the ephemeral queue in the setup; the test itself might make a request, read something from the queue and run our assertions; and then a teardown unbinds the queue.
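Putting that together, a simplified sketch of one of those tests (the exchange name, routing key, endpoint and assertions are illustrative, and in reality the pika plumbing lives in the convenience wrappers mentioned above):

```python
import json
import time
import unittest

import pika
import requests

API_URL = "http://localhost:8000"  # illustrative service under test
EXCHANGE = "account-events"        # illustrative exchange the app publishes to


class CreateAccountTest(unittest.TestCase):
    def setUp(self):
        # Bind an ephemeral, exclusive queue to the exchange so the test gets its
        # own copy of each message without stealing them from downstream consumers.
        self.connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
        self.channel = self.connection.channel()
        self.queue = self.channel.queue_declare(queue="", exclusive=True).method.queue
        # "#" matches every routing key on a topic exchange (illustrative).
        self.channel.queue_bind(exchange=EXCHANGE, queue=self.queue, routing_key="#")

    def tearDown(self):
        # Unbind the temporary queue; it is dropped when the connection closes.
        self.channel.queue_unbind(exchange=EXCHANGE, queue=self.queue, routing_key="#")
        self.connection.close()

    def _next_message(self, timeout=5.0):
        # Poll the test queue until a message arrives or the timeout expires.
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            method, _properties, body = self.channel.basic_get(queue=self.queue, auto_ack=True)
            if method is not None:
                return json.loads(body)
            time.sleep(0.1)
        self.fail(f"no message arrived on the test queue within {timeout}s")

    def test_creating_an_account_publishes_an_event(self):
        # In practice this call goes through the thin create_account(payload) wrapper.
        response = requests.post(f"{API_URL}/accounts", json={"name": "test-user"})
        self.assertEqual(response.status_code, 201)

        event = self._next_message()
        self.assertEqual(event["name"], "test-user")


if __name__ == "__main__":
    unittest.main()
```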

So I’ve done both of these (in my case, automating some API testing in Python, and also doing some in Postman). Perhaps interestingly, both were on the same project.

The reason I did this was that initially I had to do some inline data manipulation and handle some odd login workflows to make things work. Python gave me a lot more control over details like this, so I could actually proceed with the work I needed to do.

Once the API matured a bit, I was able to automate what I needed with Postman, so I switched over to using that: it was easier to handle things like passing data around between the various test sets, and it made it easier to share the tests with the rest of the team, etc.

Which approach to use depends, I guess, on your goals, how the API works, etc.


Hi ernie, could you please elaborate on the need for cross-checking API results with persistence layers using RabbitMQ queues? I’m new to this field and would like some more information about it.