As I already said, I'm new to this API testing field.
In my past adventures in testing, mainly in Waterfall or mini-waterfall with pretensions of being Agile/Kanban, we used to write test cases in HP ALM and then create test runs to execute the tests. That way we had the metrics that some people love (me included; how else can we show management that we are really necessary? Some managers are really difficult to convince without numbers).
But my true struggle is: how to write the tests? I've tried Gherkin, looked at TestRail…
I haven't figured out the best way so far… and I'm hoping someone could share their expertise.
For your true struggle, a lot of the posts here will direct you to helpful things.
I can't really help you with metrics though; since I don't know what your management might want, I'm more likely to steer you down the wrong path than the right one.
how to write the tests? I've tried Gherkin, looked at TestRail
If the problem is how to communicate/document what you will test, I would not worry about the specific tool, as long as it provides:
Easy access for everyone involved - which includes both credentials and understanding of the tool;
Rapid updating features:
Versioning attached to the application;
Barely any required steps/fields - so you can adjust it to your context;
Living documentation => it can indicate what needs to be updated for any given version of the application*;
If the problem is how to test APIs, Michael Bolton wrote a bit about it. In essence, it's no different from testing through a GUI - it's still risk-based analysis and exploration.
Speaking of tools, I am writing a series on Postman. In the first post, I go through a full CRUD flow on the entities of the Trello API: you can check it out here.
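To make the CRUD idea concrete outside of Postman, here is a minimal sketch in Python. The endpoints and the in-process server are hypothetical stand-ins (not the Trello API or anything from the series), so the flow runs without credentials or network access:

```python
import json
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical in-memory "boards" resource standing in for a real API.
BOARDS = {}
NEXT_ID = [1]  # mutable counter for generated ids

class FakeApi(BaseHTTPRequestHandler):
    def _send(self, status, body=None):
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        if body is not None:
            self.wfile.write(json.dumps(body).encode())

    def do_POST(self):    # Create
        data = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        board_id = str(NEXT_ID[0]); NEXT_ID[0] += 1
        BOARDS[board_id] = data
        self._send(201, {"id": board_id, **data})

    def do_GET(self):     # Read
        board_id = self.path.rsplit("/", 1)[-1]
        if board_id in BOARDS:
            self._send(200, {"id": board_id, **BOARDS[board_id]})
        else:
            self._send(404, {"error": "not found"})

    def do_PUT(self):     # Update
        board_id = self.path.rsplit("/", 1)[-1]
        BOARDS[board_id] = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        self._send(200, {"id": board_id, **BOARDS[board_id]})

    def do_DELETE(self):  # Delete
        BOARDS.pop(self.path.rsplit("/", 1)[-1], None)
        self._send(204)

    def log_message(self, *args):  # keep the console quiet
        pass

def call(method, url, payload=None):
    """Issue one HTTP request and return (status, parsed JSON body or None)."""
    data = json.dumps(payload).encode() if payload is not None else None
    req = urllib.request.Request(url, data=data, method=method)
    with urllib.request.urlopen(req) as resp:
        body = resp.read()
        return resp.status, json.loads(body) if body else None

server = HTTPServer(("127.0.0.1", 0), FakeApi)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_port}/boards"

status, board = call("POST", base, {"name": "Test board"})                    # Create
assert status == 201
status, fetched = call("GET", f"{base}/{board['id']}")                        # Read
assert fetched["name"] == "Test board"
status, updated = call("PUT", f"{base}/{board['id']}", {"name": "Renamed"})   # Update
assert updated["name"] == "Renamed"
call("DELETE", f"{base}/{board['id']}")                                       # Delete
try:
    call("GET", f"{base}/{board['id']}")  # reading a deleted resource should fail
    deleted = False
except urllib.error.HTTPError as err:
    deleted = err.code == 404
print("CRUD flow completed, resource deleted:", deleted)
server.shutdown()
```

The point is the shape of the flow - create, read back, update, delete, then confirm the delete - which is the same whether you drive it from Postman, a script, or a test framework.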
Metrics would be too context-specific; it all depends on what questions you are asking. Performance testing would have different metrics than scenario testing.
* Not necessarily automation (although highly recommended) - in Gherkin, you can have feature and sub-feature files and associate these with the User Stories that will change them.
On our API project, we approached testing at a technical and a behavioral level. Note that this depends on good rapport and collaboration within the project team. The technical tests were written and executed by the developers; these tests demonstrated that the implementation met the business intent of the story cards. The behavioral tests were written by test engineers, directly in Gherkin. They could be executed by developers (a big win!) or by the test engineers. These tests demonstrated the API behavior from a system perspective.
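The split described here could look roughly like the following hypothetical Python sketch (the `AccountService` class and both tests are invented for illustration; the actual project used Gherkin for the behavioral layer):

```python
import unittest

# Hypothetical service standing in for the API under test.
class AccountService:
    def __init__(self):
        self.balances = {}

    def create_account(self, owner):
        self.balances[owner] = 0

    def deposit(self, owner, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        self.balances[owner] += amount
        return self.balances[owner]

class TechnicalTests(unittest.TestCase):
    """Developer-written: verifies a single call does the correct thing."""
    def test_deposit_rejects_non_positive_amount(self):
        svc = AccountService()
        svc.create_account("ana")
        with self.assertRaises(ValueError):
            svc.deposit("ana", 0)

class BehavioralTests(unittest.TestCase):
    """Tester-written, Given/When/Then: verifies a whole scenario."""
    def test_customer_sees_balance_after_deposits(self):
        # Given a customer with a new account
        svc = AccountService()
        svc.create_account("ana")
        # When she makes two deposits
        svc.deposit("ana", 40)
        balance = svc.deposit("ana", 2)
        # Then the balance reflects both
        self.assertEqual(balance, 42)

loader = unittest.TestLoader()
suite = unittest.TestSuite()
suite.addTests(loader.loadTestsFromTestCase(TechnicalTests))
suite.addTests(loader.loadTestsFromTestCase(BehavioralTests))
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The technical test pins down one endpoint's contract; the behavioral test walks through a user-visible scenario. Because both live in the same runnable suite, either role can execute them.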
The technical tests verified the API did the correct thing, and the behavioral tests verified the API interacted correctly with other systems. The suite of tests grew over the course of the project and was used as our regression suite. We were able to execute the suite at every check-in and before every deployment.
In my opinion, our project team saw benefit from building the APIs in very small increments. The story cards were purposefully small in scope and very focused. In that manner, they were also very testable. Building on small successes, we were able to create a complex set of APIs needed to fulfill our business purpose.
We did not maintain a formal set of metrics. We put our trust in the test suite, and it told us about errors introduced by changes in the code. With just 21 defects over 20 iterations, our team was able to focus on delivering the APIs.