My test team has been tasked with writing automation tests for a tool that interacts with a number of third-party pieces of software. We have no control over this software, and it is likely to change at any point in the future. My preferred option is to create a test bed or "fake site" for our application to run all configuration tests against, as the "fake site" will not change unless we change it.
As time is tight, the Product Owners want us to write tests against the top 10 third-party applications that use our tool directly. To me this seems more like a monitoring tool than a testing tool. It could also be difficult to maintain: if the third party changes their software, we will also have to modify our tests.
It is also worth noting that we have no contact with these third parties that our tools run against.
I see the value of the "monitoring tool", but I feel we need to test the tool itself first, before we go on to the monitoring tool, so we can distinguish where the problem lies: in the tool, or in the 3rd party site changing.
I assume you don't control when the 3rd party tool is upgraded or changes version? So you can't be sure the response is always going to be the same? If my assumption is correct, then this is the root cause of your issue…
The first question I have regarding your "monitoring tool" is: what are you testing? Are you ensuring that the response from the 3rd party tool isn't changing? That could be valuable as a short-term measure.
Would it maybe also be valuable to write tests for your own functionality that include some stubbed responses, based on the results from your monitoring tool? Those tests could highlight where a change in response causes your own product to fail. For example, if the response changes from {"a, b, c"} to {"a", "b", "c"}, does that break any functionality in your own product?
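To make the idea concrete, here is a minimal sketch of that kind of stubbed-response test. All names are hypothetical (`count_items` stands in for whatever your product does with the response); the point is that the stub captures the old and new formats so the test pins down exactly which change breaks your product:

```python
# Hypothetical stand-in for your own product logic under test:
# it counts the items in a third-party response set.
def count_items(response):
    return len(response)

def test_against_stubbed_responses():
    # Stub captured from the monitoring tool: one comma-separated string.
    old_style = {"a, b, c"}
    # Stub of the changed format: three separate strings.
    new_style = {"a", "b", "c"}

    assert count_items(old_style) == 1
    # The format change the product must now handle correctly:
    assert count_items(new_style) == 3

test_against_stubbed_responses()
print("stub tests passed")
```

Because the stubs never change unless you change them, a failure here points at your product, not at the third party.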
Neither of these options gives you any sort of stability if my first assumption is correct, but they might help you identify issues earlier.
I'm seeking clarification on the business case here. Let me rephrase, since I'm trying to frame the "value":
We have an SDK/web service; it consumes the outputs of 3rd party tools that we don't control.
Is this what you are doing? This sounds like a typical "service availability" dashboard type problem, where you want to build system tests that verify your biggest 10 customers won't drop you like a hot potato. Or are you talking about the top 10 "tools" consumed? The latter feels easier to validate objectively using some metrics and static analysis. I'm guessing you mean the top 10 customers, though. Can you clarify, Sam?
Thanks for your replies. I agree, this does feel more like a "service availability" dashboard problem than a testing problem. To give more context: we do not have control over the third-party websites that our tool runs against, so we get no notice when something changes on a third-party website and our tool stops working against it.
The monitoring system the PO has described would make sure the tool that runs against the 3rd party websites outputs the expected data.
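For what it's worth, a check like the PO describes could look something like the sketch below. Everything here is hypothetical (`run_tool`, `EXPECTED`, the site name); the shape is: run the tool against the live site and compare its output to a stored expected snapshot, reporting a difference rather than failing hard:

```python
# Stored snapshot of what the tool's output should look like per site
# (hypothetical data for illustration).
EXPECTED = {"site-a": {"status": "ok", "fields": ["id", "name", "price"]}}

def run_tool(site):
    # Placeholder for invoking the real tool against the live site.
    return {"status": "ok", "fields": ["id", "name", "price"]}

def monitor(site):
    actual = run_tool(site)
    if actual == EXPECTED[site]:
        return f"{site}: OK"
    return f"{site}: CHANGED - expected {EXPECTED[site]}, got {actual}"

print(monitor("site-a"))  # site-a: OK
```

Note this only tells you *that* the site's output changed, which is exactly why it feels like monitoring rather than testing.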
As @alihill mentioned, my preferred way is to test our tool against a stub, so we can cover all the different types of outcome rather than testing against a moving target (the 3rd party websites).
I feel testing against the third-party websites will incur a high maintenance cost, as whenever something changes we will need to fix our tests.
@conrad.braam The top 10 is the top 10 3rd party websites that our tool runs against.
Still not a fan of having a static mock website as the main project goal, since it does not deliver business value. It's a smoke test only, and as such delivers developer-pipeline value, but testing your product against a golden "static" site delivers no business value. I would leave it up to the developers to write this kind of static test and run it as part of the nightly build. To reflect production it would need to morph anyway with environmental changes on the web that are not necessarily salient, things like site certificates expiring, et al.

A "dashboarding" type project that defocuses enough to remove bias would be my preferred route, but it would put you into a "dev-ops" kind of "always on" mode. A live dashboard project will also force you, as the tester, to discover all of the ways your app interacts with customer sites, rather than just one site that covers all cases. The very real downside is that a "dashboard" framework's test probes will have to be very fault tolerant.
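By "very fault tolerant" I mean a probe that can never crash the dashboard, whatever the site does. A minimal sketch of that idea (all names hypothetical, `fetch` standing in for whatever actually talks to the site):

```python
def probe(site, fetch):
    """Run one check against a site; never raise, always return a status dict."""
    try:
        result = fetch(site)
    except Exception as exc:  # deliberately broad: probes must survive anything
        return {"site": site, "status": "UNREACHABLE", "detail": str(exc)}
    if not result:
        return {"site": site, "status": "EMPTY_RESPONSE", "detail": None}
    return {"site": site, "status": "UP", "detail": None}

# Illustration: one healthy site, one that times out.
def good_fetch(site):
    return {"data": 1}

def bad_fetch(site):
    raise TimeoutError("connection timed out")

print(probe("site-a", good_fetch)["status"])  # UP
print(probe("site-b", bad_fetch)["status"])   # UNREACHABLE
```

Every failure mode becomes a status on the dashboard instead of an exception, which is what keeps an "always on" view alive.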
Which feels like what the industry loves to call "shift-left" testing, a term I love to hate. But please do let us know how you progress. I'm getting carpal tunnel from doing all this buzzword quoting.