We have a number of teams working on a variety of applications: some are internal web apps, others are microservices. One team looks after the customer-facing front end and is essentially a consumer of data from the other teams.
The challenge is that each team has a number of testers who test the applications in their area, including validating data against the relevant source (database or API). The front-end team, however, has no database or APIs of its own, so validating the data shown in the UI or on a report means checking it against a number of source applications and APIs developed by other teams.
The issue is that the front-end testers would need to know most of the backend applications and API endpoints to do data validation for anything newly developed, e.g. a new screen or report.
Options we've considered:

- We could train the front-end testers on the underlying applications, but that would be time-consuming, and it's a lot to learn and retain.
- We could ask the testers in the other teams to do the validation by looking up the values in 'their' application and sharing them with the front-end team, but they have their own deliverables and this would impact those.
- We could create an integration-test role and make it that person's responsibility to find the data to validate against, but is that a good approach?
Once a feature is written and working, we can create automated regression tests to keep validating that the data is as expected. What I'm trying to work through is how best to do the acceptance testing when a feature is first added.
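For what it's worth, one shape the automated validation step could take is a simple record-diffing check: pull the same entity from the front-end view and from the owning team's source API, and report any field-level differences. This is only a minimal sketch — the field names, values, and the idea that both sides can be reduced to flat dictionaries are all assumptions, not anything from a specific system here:

```python
def find_mismatches(ui_record: dict, source_record: dict) -> list[str]:
    """Compare the record a screen or report displays against the
    source-of-truth record, and describe every field that differs.
    Fields present on only one side are reported as '<missing>'."""
    mismatches = []
    for field in sorted(set(ui_record) | set(source_record)):
        ui_value = ui_record.get(field, "<missing>")
        source_value = source_record.get(field, "<missing>")
        if ui_value != source_value:
            mismatches.append(
                f"{field}: UI shows {ui_value!r}, source has {source_value!r}"
            )
    return mismatches


# Hypothetical records — in practice these would be fetched from the
# front-end API and the owning team's service respectively.
ui_record = {"account_id": "A123", "balance": "100.00", "status": "active"}
source_record = {"account_id": "A123", "balance": "105.00", "status": "active"}

for diff in find_mismatches(ui_record, source_record):
    print(diff)
```

The point of a check like this is that the front-end testers only need to know *which* source endpoint owns a given field (a mapping the backend teams could supply once), not how each backend application works internally.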
I'd be interested to know if anyone else has had the same problem and how you overcame it.

Hope all of that makes sense; I've rewritten it a few times!