How to handle client testing as a service provider (GraphQL)

We have a service-oriented architecture which is available to clients under an "umbrella" GraphQL API. The underlying services are exposed only through this API and are deployed independently by the dev teams, of which we have about 10.
Compared to REST, for example, GraphQL is a very flexible API, which makes it hard to know exactly how clients are using it. My impression is that it's impossible to get 100% test coverage on it.
We have both internal and external consumers of this umbrella API, and the challenge now is to make sure all of them keep working. We have tried a few different approaches but haven't really found the perfect way to do this, so I would like to know what you think is a good general approach to the problem that also scales.
I won't go into detail about why each of the approaches listed below hasn't been very successful, but two of the major disadvantages have been:

  • They do not seem to scale very well.
  • Communication between teams is hard, so expectations fall out of sync.

Approaches tried:

  • PACT (does scale, but its GraphQL support is limited; see the sketch after this list)
  • Using dashboards for client teams to show their test results
  • Including client tests into service teams pipelines
  • Creating client tests that can be run on-demand
  • Alerting (in good time) when something in the test environment fails.
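
To make the PACT point concrete: a GraphQL call is just an HTTP POST to /graphql, so it can still be described as an ordinary Pact interaction even without first-class GraphQL support. A minimal pact-python sketch; the consumer/provider names, port, and query are all made up for illustration:

```python
import atexit
import requests
from pact import Consumer, Provider

# Hypothetical consumer/provider names; the Pact mock service runs locally.
pact = Consumer("web-client").has_pact_with(Provider("umbrella-api"), port=1234)
pact.start_service()
atexit.register(pact.stop_service)

def test_user_name_query():
    query = "{ user(id: 1) { name } }"
    expected = {"data": {"user": {"name": "Alice"}}}

    (pact
     .given("user 1 exists")
     .upon_receiving("a query for user 1's name")
     .with_request(method="post", path="/graphql",
                   headers={"Content-Type": "application/json"},
                   body={"query": query})
     .will_respond_with(200, body=expected))

    with pact:
        # The consumer code under test would make this request.
        resp = requests.post("http://localhost:1234/graphql",
                             json={"query": query})
        assert resp.json() == expected
```

The limitation shows up right here: the contract matches the query as an exact string, so any formatting change in the consumer's query breaks the match, which is a big part of why Pact's GraphQL support feels limited.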

I'm starting to think the underlying problem is a lack of communication. So:

  • How do you communicate these things efficiently between teams? Any tools?
  • What information about client / service status do you communicate between teams? Alerts, test results, questions, concerns, etc.?

Looking forward to some insightful comments! :slight_smile:

How do you communicate these things efficiently between teams? Any tools?

I built a pytest BDD tool which I have used for combined testing and inter-team communication on REST APIs. We would create tests like this one and use them to generate corresponding markdown docs like this. It wouldn't take much to adapt it to GraphQL.
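
Not the tool linked above, but the general shape of a story-style GraphQL test in plain pytest, with a hypothetical endpoint and query:

```python
import requests

GRAPHQL_URL = "https://test.example.com/graphql"  # hypothetical test environment

def test_client_can_fetch_user_name():
    """Story: as a client, I can look up a user's name by id."""
    resp = requests.post(GRAPHQL_URL, json={
        "query": "query($id: ID!) { user(id: $id) { name } }",
        "variables": {"id": "1"},
    })
    body = resp.json()
    assert resp.status_code == 200
    assert "errors" not in body  # GraphQL servers return 200 even on errors
    assert body["data"]["user"]["name"]
```

The docstring is the kind of story text that would end up in the generated markdown docs.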

We would combine stories with a schema, which we would also generate a markdown doc from. The GraphQL equivalent of the schemas we used would be the GraphQL schema itself, which any GraphQL server can expose through introspection.
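
A sketch of pulling that schema out of a running server via introspection, using graphql-core and a hypothetical endpoint; the printed SDL can be committed, diffed, or rendered to markdown:

```python
import requests
from graphql import get_introspection_query, build_client_schema, print_schema

resp = requests.post("https://test.example.com/graphql",  # hypothetical endpoint
                     json={"query": get_introspection_query()})
schema = build_client_schema(resp.json()["data"])

# Human-readable SDL: the GraphQL equivalent of the schema doc.
print(print_schema(schema))
```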

Most conversations we had about existing or future behavior would center on the story examples and schema docs (i.e. non-vegetable BDD: BDD without Cucumber).

What information about client / service status do you communicate between teams? Alerts, test results, questions, concerns, etc.?

Mainly just the generated docs from the stories and the schema. If functionality was left broken, I suppose we would have communicated that as well.

Uptime / alerting is something that should also be shared, but I see that as ops' domain, not the testers'.

In your situation, one other thing I would probably consider is creating a sandbox / mock / dockerized version of the API for use in consumer test suites, and then, before releasing a new version, running all consumers' tests against your release candidate.
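
A minimal sketch of such a mock with graphql-core, assuming a made-up fragment of the umbrella schema; stub resolvers return canned data, so consumer suites can run without the real services behind the API:

```python
from graphql import build_schema, graphql_sync

# Hypothetical fragment of the umbrella schema.
schema = build_schema("""
  type Query {
    user(id: ID!): User
  }
  type User {
    id: ID!
    name: String!
  }
""")

# Stub resolvers: the default resolver calls these with (info, **args).
root = {"user": lambda info, id: {"id": id, "name": "Test User"}}

result = graphql_sync(schema, "{ user(id: 1) { name } }", root_value=root)
assert result.errors is None
assert result.data == {"user": {"name": "Test User"}}
```

Wrapping the same schema in any GraphQL HTTP server (or a docker image) would give consumers a stable sandbox to point their suites at.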

I have an open source library where I've been meaning to try this; it's marked as "used by" 1,300 users on GitHub. I wanted to set something up where I could take the 100 most popular dependents, run all of their tests with zero changes, swap out the library for my release candidate, and run their tests again.
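
The mechanics could be as simple as this hypothetical script: install the current release, run each consumer's suite, swap in the release candidate, and run it again (the package name, versions, and repo paths are all made up):

```python
import subprocess
import sys

def run_suite(repo: str, package_spec: str) -> int:
    """Install one version of the library, then run the consumer's tests."""
    subprocess.run([sys.executable, "-m", "pip", "install", package_spec],
                   check=True)
    return subprocess.run([sys.executable, "-m", "pytest"], cwd=repo).returncode

for repo in ["consumers/project-a", "consumers/project-b"]:  # cloned dependents
    baseline = run_suite(repo, "mylib==1.4.0")       # current release
    candidate = run_suite(repo, "mylib==1.5.0rc1")   # release candidate
    if baseline == 0 and candidate != 0:
        print(f"{repo}: release candidate broke a previously green suite")
```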
