I just logged off of work for the day, and I am having #QAThoughts. In particular, #PostmanThoughts, #SDETThoughts, and #AutomationThoughts.
Here’s the scenario: I had what I would consider a reasonably routine update across the handful of API test suites that I export from Postman and run via Newman and GitHub Actions. I make the updates in my local Postman app, export the JSON, push the update via GitHub, verify, rinse and repeat until squeaky clean. The challenge was an assertion update for all of the requests. Not just a few requests. All of them.
But Jen, you think to yourself - I’m sure that was like, maybe 20 copy/pastes, right? No. It was a lot more. And it was compounded because I was making improvements to error handling, so as I copy/pasted, I thought more on it, and had better ideas, and then went back and copy/pasted again. I did this a few times instead of leaving the “good enough” code alone, like I should have.
Did Postman have a Find and Replace feature? Yes.
Did it work for multiple lines of test assertions? No.
Was it useless to me? Yes.
Did I manually open up every request in every collection, feeling my relationship with this program that had brought so much to my career direction dry up with every Cmd+C and Cmd+V? Yes.
Is our relationship status changed? It’s complicated.
Did I only just now consider finding/replacing in the exported JSON collection files and then importing them back in? Sure, but that’s gross, and you and I both know that. That’s dumb and no one should have to do that.
Without a workspace-wide find and replace, I’m not sure how I can keep making updates across multiple requests going forward short of manually opening each one. This has made me realize that there are likely other issues that will come up as I make more test suites and need to maintain them within this ecosystem.
What strategies have helped you with creating and maintaining your Postman API test suites, in particular if they are larger sequences of requests (ie. end to end tests)?
Some of the challenges I have:
- End-to-end test scenarios that require 15+ requests to set up each test scenario, so a collection can be on the larger side
- Non-prod environments with flaky performance, so I’ve abandoned speed testing per request because it’s so inconsistent and there’s no prioritization to improve it anyway
- Difficulty finding time to create new automation or maintain existing test suites on top of my current manual testing workload - in other words, feedback that helps with maintenance efficiency is appreciated here too!