Writing Automation Tests that rely on third party sites

My test team has been tasked with writing some automation tests for a tool that interacts with a number of third-party pieces of software. We have no control over this software, and it is likely to change at any point in the future. My preferred option is to create a test bed or ‘fake site’ for our application to test all configurations against, as the ‘fake site’ will not change unless we change it.
As time is tight, the Product Owners want us to write tests against the top 10 third-party applications that use our tool directly. To me this seems more like a monitoring tool than a testing tool. It could also be difficult to maintain: if a third party changes its software, we will have to modify our tests as well.

It is also worth noting that we have no contact with these third parties that our tools run against.

I see the value of the ‘monitoring tool’, but I feel we need to test our own tool first before we move on to monitoring, so we can distinguish where a problem lies: in the tool itself, or in the 3rd party site changing.


I assume you don’t control when you upgrade/make version changes for the 3rd party tool? So you can’t be sure the response is always going to be the same? If my assumption is correct then this is the root cause of your issue…

The first question I have regarding your “monitoring tool” is: what are you testing? Are you ensuring that the response from the 3rd party tool isn’t changing? That could be valuable as a short-term measure.

Is it maybe also valuable to write tests for your own functionality and include some stubbed responses, based on the results from your monitoring tool? That test could highlight where a change in response causes your own product to fail. For example, if the response changes from {“a, b, c”} to {“a”, “b”, “c”}, does this cause any functionality in your own product to fail?
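To make that concrete, here is a minimal pytest sketch of the stubbed-response idea. `parse_items` is a hypothetical stand-in for whatever code in your product consumes the third-party response, and the payloads are invented; the pattern is simply to capture the shape you depend on today and let a test document what happens when that shape changes.

```python
# Minimal sketch (pytest) of testing our own parsing logic against stubbed
# responses. `parse_items` is a hypothetical stand-in for the product code
# that consumes the third-party output; the payloads are made-up examples.

def parse_items(payload):
    # Hypothetical product code: expects a list of separate string items.
    return [item.strip() for item in payload]


def test_handles_expected_response_shape():
    # The shape we captured from the third party today: separate items.
    stubbed_response = ["a", "b", "c"]
    assert parse_items(stubbed_response) == ["a", "b", "c"]


def test_flags_collapsed_response_shape():
    # A shape the third party might change to: one comma-joined string.
    stubbed_response = ["a, b, c"]
    # Documents that the current parser does NOT split the joined string,
    # so this kind of change in the live response would break downstream code.
    assert parse_items(stubbed_response) != ["a", "b", "c"]
```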

Neither of these options gives you any real stability if my first assumption is correct, but they might help you identify issues earlier.

I’m seeking clarification on the business case here; let me rephrase, since I’m trying to frame the “value”.

We have an SDK/web service; it consumes the outputs of 3rd party tools, tools we don’t control.

Is this what you are doing? This sounds like a typical “service availability” dashboard type problem, where you want to build system tests that verify your biggest 10 customers won’t drop you like a hot potato. Or are you talking about the top 10 “tools” consumed? The latter feels easier to validate objectively using some metrics and static analysis. I’m guessing, however, that you mean the top 10 customers; can you clarify, Sam?

Thanks for your replies. I agree, I do feel this is more a ‘service availability’ dashboard problem than a testing problem. To give more context: we do not have control over the third-party websites that our tool runs against, and hence we get no notice when something changes on one of those sites and our tool stops working against it.

The monitoring system the PO has described would make sure the tool that runs against the 3rd party websites outputs the expected data.

As @alihill mentioned, my preferred approach is to test our tool against a stub, so we can cover all the different types of outcome rather than testing against a moving target (the 3rd party websites).
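For what it’s worth, the ‘fake site’ doesn’t have to be a big build. A minimal sketch using only the Python standard library, with hypothetical routes and payloads, could look like this; the tool’s configuration would then point at http://localhost:8080 instead of the real third-party site.

```python
# Minimal sketch of a local "fake site" stub using only the standard library.
# Routes and payloads are hypothetical; we control them, so they only change
# when we change them.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Canned responses for the outcomes we want to cover: happy path,
# an empty result, and a server error.
CANNED_RESPONSES = {
    "/items": (200, {"items": ["a", "b", "c"]}),
    "/items/empty": (200, {"items": []}),
    "/items/error": (500, {"error": "upstream failure"}),
}


class FakeSiteHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        status, body = CANNED_RESPONSES.get(self.path, (404, {"error": "not found"}))
        payload = json.dumps(body).encode("utf-8")
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)


if __name__ == "__main__":
    # Run the fake site locally, then point the tool under test at it.
    HTTPServer(("localhost", 8080), FakeSiteHandler).serve_forever()
```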

I feel testing against the third-party websites will incur a high maintenance cost, as whenever something changes we will need to fix our tests.

@conrad.braam The top 10 is the top 10 3rd party websites that our tool runs against.


Thanks Sam, I now understand better.

I’m still not a fan of having a static mock website as the main project goal, since it does not deliver business value. It’s a smoke test only, and as such it delivers developer pipeline value, but testing your product against a golden “static” site delivers no business value. I would leave it to the developers to write that kind of static test and run it as part of the nightly build. To reflect production, the fake site would want to morph anyway with environmental changes on the web that are not necessarily salient, things like site certificates expiring and so on.

A “dashboarding” type project that defocuses enough to remove bias would be my preferred route, but it would put you into a “dev-ops” kind of “always on” mode. A live dashboard project will also force you, as the tester, to discover all of the ways your app interacts with customer sites, rather than relying on one site that covers all cases. The very real downside is that a dashboard framework’s test probe code will have to be very fault tolerant.
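To make the “fault tolerant” point concrete, here is a minimal sketch of what one such probe might look like, using only the Python standard library; the site list, the “expected marker” check, and the record shape are all hypothetical.

```python
# Minimal sketch of a fault-tolerant probe a live dashboard might run against
# each third-party site. The key property: the probe never raises, it only
# records a status.

import time
import urllib.error
import urllib.request


def probe(url, expected_marker, timeout=10):
    """Return a status record for one site; never raise."""
    started = time.time()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            body = response.read().decode("utf-8", errors="replace")
            ok = response.status == 200 and expected_marker in body
            detail = f"HTTP {response.status}"
    except (urllib.error.URLError, OSError) as exc:
        ok, detail = False, f"request failed: {exc}"
    return {
        "url": url,
        "ok": ok,
        "detail": detail,
        "elapsed_s": round(time.time() - started, 2),
        "checked_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }


if __name__ == "__main__":
    # Hypothetical site list; a real dashboard would read this from config.
    sites = [("https://example.com/", "Example Domain")]
    for url, marker in sites:
        print(probe(url, marker))
```

Each record would then feed whatever dashboard view the PO wants; the design choice that matters is that a broken site produces a red entry, never a crashed probe.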

All of which feels like what the industry loves to call “left-shifting”, a term I love to hate. But please do let us know how you progress; I’m getting carpal tunnel from doing all this buzzword quoting.