I have heard from my clients that integrating with FedEx services and testing their applications against the FedEx Sandbox causes testing issues: test data is not available, services are slow to respond, and the sandbox is intermittently unavailable. This means that testing typical scenarios sometimes takes days instead of hours.
What other services or APIs cause you problems during testing, and why?
I once had something similar with the PayPal sandbox.
The availability and data were fine.
But we accessed it via browser automation, and they changed the UI forms every 1-2 months.
It was not a big hassle, but on a regular basis our automation code needed to be adapted to match the new DOM locators.
I never researched it, but I wonder if there was a way to get informed about new deployments.
It always caught us by surprise.
I don't work with backend, but the testers using the Salesforce sandboxes complain all the time about fragility. A thing that scares me is that we don't actually have the time to set up any “canary” lamps.
We have a canary in our client test environment, but to be fair it's not that reliable either, and that's testing an internal API surface. At least it's performant; with most sandboxes, poor performance seems to be the issue, because we keep loads of simulation data in the sandbox, which perhaps impacts any new record sync disproportionately. I luckily don't deal with it, but I can see the tech-debt pain in my colleagues' eyes.
In my current role, I tend to use the third approach mentioned in the linked piece, building mocks or stubs if the vendor sandbox/integration environment is problematic.
We’ll periodically do manual verifications against the sandbox/integration environment, but for most of the development life cycle, we’ll be running against internal mocks and stubs. Most of the time we’re not changing our integration, so once we’ve got something working, the risk around the integration is low.
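As a rough illustration of how lightweight these mocks can be: with a tool like WireMock (which comes up later in the thread), a stubbed vendor endpoint is often just a JSON stub mapping dropped into WireMock's `mappings/` directory. The endpoint path and payload below are invented for illustration, not any real vendor API:

```json
{
  "request": {
    "method": "GET",
    "urlPath": "/v1/rates"
  },
  "response": {
    "status": 200,
    "headers": { "Content-Type": "application/json" },
    "jsonBody": { "service": "GROUND", "rateCents": 1299 }
  }
}
```

Once something like this is in place, the app under test just points at the WireMock base URL instead of the vendor sandbox.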
The only place where this becomes really challenging is the performance case, but for those cases, it’s important to get numbers from the vendors, SLAs, etc, and engineer around that, rather than hoping their non-prod environment is going to be analogous to prod in terms of performance.
How much would you say you spent on maintaining the sandbox browser automation for test data setup? 3 days a year? 10 days a year?
What third-party tools do you use for creating the mocks? Do you use ready-made mocks, or do you build them yourself using a third-party tool?
As a rule of thumb, 1 day per month.
I think the effort was fine in general.
But it was not foreseeable when it would happen; it always caught us off guard.
Whatever is appropriate for the problem at hand. Often, Wiremock is sufficient, but other times we’ll write a little custom server in Python/Java/Go/etc, whatever makes sense.
Things that guide this decision include what kind of persistence is needed, whether we can fake it, etc.
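To make the "little custom server" option concrete, here is a minimal sketch in Python using only the standard library. The `/rates` endpoint and its payload are invented for illustration; they stand in for whatever vendor call you are faking:

```python
# Minimal hand-rolled stub server for a hypothetical vendor "rates" endpoint.
# Endpoint path and response shape are invented for illustration.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen


class StubVendorHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/rates":
            body = json.dumps({"service": "GROUND", "rate_cents": 1299}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        # Silence per-request logging so test output stays clean.
        pass


# Bind to port 0 so the OS picks a free port, then serve in a daemon thread.
server = HTTPServer(("127.0.0.1", 0), StubVendorHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The system under test would point its vendor base URL at this stub.
with urlopen(f"http://127.0.0.1:{port}/rates") as resp:
    data = json.loads(resp.read())
print(data["rate_cents"])  # prints 1299

server.shutdown()
```

The appeal of this style is that there is no external dependency at all; the trade-off, as noted above, is that anything stateful (persistence, pagination, auth flows) quickly pushes you toward a more elaborate custom server.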
How much time would you typically spend on creating those mocks internally via Wiremock or a custom server? 1 day to create a mock and 1 day a year to maintain it? 10 days to create a mock and 3 days a year to maintain it?
There’s not a simple answer here - it depends on what we’re mocking. If it’s simple enough for us to do via Wiremock, it probably costs us less than the 16 person-hours/year you’re suggesting. If it’s bigger, and requires a custom server/code, then it costs us more. If we’re not changing our integration/needing to add new features to the mock/etc, the cost is 0.
In general, we’ve figured that the benefits of a reliable test environment/not having to deal with flaky vendor integration environments (and one we can run locally via containerization) outweigh the upkeep costs.
I’d recommend trying to be agile/iterative, rather than trying to determine all these answers up front. The calculus for the cost-benefit here is too nebulous, so spike on one of your smaller dependent services, try out using a mock for it, see how expensive it is for your team vs. what the benefits are, etc.