We have a number of teams working on a variety of applications; some are internal web apps and others are microservices. One team looks after the customer-facing front end and is essentially a consumer of data from the other teams.
The challenge we have is that each team has a number of testers who test the applications in their area, including validating data against the relevant source (DB or API). But the front end team do not have their own DB or APIs to test, so validating the data shown in the UI or on a report means validating against a number of source applications or APIs that other teams develop.
The issue we have is that the front end testers would need to know most of the backend applications and API endpoints to do the data validation for anything newly developed, i.e. a new screen or report.
Options: we could train the testers on the underlying applications, but it's going to be time consuming for them to do this, and it's a lot to learn and know.
We could ask the testers in the other teams to do the validation by looking up the values from "their" application and sharing them with the front end team, but they have their own deliverables and it would impact those.
We could create an integration test role and make it that person's responsibility to find the data to validate against, but is that a good way to do this?
Once we have something written and working, we can create automated regression tests to continue validating that the data is as expected, but it's how best to do the acceptance testing once a feature is added that I'm trying to work through.
I'd be interested to know if anyone else has had the same problem and how you overcame it.
Hope all of that makes sense; I've rewritten it a few times!
I'm not 100% sure I understand your problem, but let's start with a couple of questions to get some more clarity.
When you say
"validating data against the relevant source (DB or API)"
Do you want to make sure that you get the correct data returned, or do you want to make sure that the front end can just show whatever it gets returned from the back end?
"the front end team do not have their own DB or APIs to test"
You're mainly talking about testing, but how do developers get around this? Do they also have no DB or API to use when developing the front end?
I'm having a bit of a hard time seeing how the front end and back end interact in your situation. Is the front end purely for data entry and display, and the back end for storing and validating? Or is there already data logic somewhere in the front end?
You could develop a Gold Copy for each new feature.
A gold copy is a standard data set that can be used as a clean, known starting point for test data.
Let's say team A (the back-end team) creates a new back-end feature and team B (the front-end team) builds the front-end features. Team A would then provide a "gold copy" for team B to work with.
So a standard data set is available; there will still be edge cases, but those have to be created manually…
For me, personally, this is a good way to work on new features. We have manual testers, and for each new feature we developed a script that creates the data that comes with the new features/APIs for them.
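To make that concrete, here's a minimal sketch of the kind of data-creation script I mean (Python; the endpoint and product fields are made up, so adjust to your own API):

```python
import requests

# Hypothetical test environment endpoint -- adjust to your own API.
BASE_URL = "https://test-env.example.com/api"

# The "gold copy": a small, stable data set that rarely changes,
# kept in version control alongside the feature.
GOLD_COPY_PRODUCTS = [
    {"code": "PRD-001", "name": "Standard Account", "rate": 1.25},
    {"code": "PRD-002", "name": "Premium Account", "rate": 2.50},
]

def seed_gold_copy():
    """Create the gold copy records via the API, so the data passes
    the same validation that real traffic would."""
    for product in GOLD_COPY_PRODUCTS:
        resp = requests.post(f"{BASE_URL}/products", json=product, timeout=10)
        resp.raise_for_status()
        print(f"Seeded {product['code']}")

if __name__ == "__main__":
    seed_gold_copy()
```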
I struggled to write the post in a way that made sense, so I am not surprised it was hard to fathom out!
Your first question is a really good one - we are looking at the data being shown from the backend. The team writing the API in the first place will have mapped the data correctly, and that's part of their remit, so the front end team need to ensure they show the correct data in the UI or the downloaded statements. This means they have to know all the endpoints in order to do so, or we rely on others to help validate.
Devs use mocked data so there is no integration until it reaches the test environment.
Thanks for this - it's not something I have come across, but it sounds like a good idea. We do have some constraints though. Some of our systems are legacy, and the cost of creating new environments is prohibitive for us, so we use a set of interlinked test environments that contain an obfuscated copy of prod data. So things like product codes are valid but names are scrambled - and it means we have a full-size data store. We then add data ourselves if adding new products. I think if we were to use your idea, we would need all the related applications to also have the same Gold Copy data - that could be a logistical problem, but it's something I'll investigate.
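As a rough illustration of what I mean by scrambled-but-valid (a simplified Python sketch, not our actual tooling):

```python
import hashlib

def scramble_name(name: str) -> str:
    """Deterministically scramble a name while keeping its length,
    so the obfuscated data still looks realistic and stays stable
    between refreshes."""
    digest = hashlib.sha256(name.encode("utf-8")).hexdigest()
    letters = "abcdefghijklmnopqrstuvwxyz"
    return "".join(letters[int(c, 16) % 26] for c in digest[: len(name)]).title()

# Product codes stay valid and joinable; only the name is scrambled.
record = {"product_code": "PRD-001", "customer_name": "Jane Smith"}
record["customer_name"] = scramble_name(record["customer_name"])
print(record)  # code preserved, name replaced with scrambled text
```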
Steve
On the team make-up/learning problem: this is not an unusual situation or an unusual kind of question really; every org has unique challenges, and there are plenty of people out there happy to ignore our context. You have identified risks and want to mitigate, or at least manage, those risks. I think you are on a useful track with wanting to create an "integration" or ops team. The worst thing you could do would be to make no structural changes to the way teams are composed and interact.
I'm going to suggest mixing things up. The best thing you can do is convince people to adopt a shared sprint cycle and get every interacting team onto that same cycle; even teams that don't use an agile board can still run in sprints. This makes it easy to run experiments and swap people between teams until you gain confidence that either knowledge is being transferred or your process has been fixed. If an experimental sprint fails, you have lost two weeks of work; if it succeeds (and most will), you might have saved a lot of time. My team runs kanban because we do the ops work, but we still "use" a sprint board, merely to focus and because we are the team that does the sprint demo. We don't let devs demo the stuff they deliver to the company.
It doesn't have to be a new environment. It could also be a script that injects data into your current environment.
I never said it was easy, but yeah, gold copies take some time to set up. Once they're there you can reuse them over and over; the point of a gold copy is to be reusable and to rarely change.
We've built it bit by bit.
Some scripts inject data directly into the DB, and some go through the API.
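As a rough sketch of the two routes (Python, with hypothetical table and endpoint names):

```python
import sqlite3
import requests

# Route 1: inject directly into the database. Fast, but bypasses
# any validation the application would normally apply.
def inject_via_db(db_path: str, code: str, name: str) -> None:
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "INSERT INTO products (code, name) VALUES (?, ?)",
            (code, name),
        )

# Route 2: inject through the API. Slower, but the data is
# guaranteed to have passed the same validation as real traffic.
def inject_via_api(base_url: str, code: str, name: str) -> None:
    resp = requests.post(
        f"{base_url}/products", json={"code": code, "name": name}, timeout=10
    )
    resp.raise_for_status()
```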
I'd push the data issues down the stack as far as possible, and make sure that each microservice has appropriate tests for returning data in the expected forms. That is, for all the edge cases the front end team is currently doing data validation for, the microservice teams should have tests that verify the data is returned as expected. You mention some issues with having these backend teams learn this, but it seems like it's a critical aspect of their service, and it seems like a big miss if they're not testing these cases already?
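As a sketch of the kind of test I mean, owned by the microservice team (Python/pytest; the service URL, product ID, and fields are hypothetical):

```python
import requests

BASE_URL = "https://products-service.example.com"  # hypothetical service

def test_product_with_no_rate_returns_null_not_zero():
    """Edge case owned by the service team: a product with no rate
    should come back as null, not 0, so the UI can render it as N/A."""
    resp = requests.get(f"{BASE_URL}/products/PRD-LEGACY-001", timeout=10)
    assert resp.status_code == 200
    body = resp.json()
    assert body["code"] == "PRD-LEGACY-001"
    assert body["rate"] is None
```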
Just like I wouldn't expect someone integrating with a 3rd party service (e.g. a payment processor or something) to have end-to-end test cases covering all the exceptions/failures/etc., effectively testing the 3rd party, I wouldn't expect the front end team to be testing/exercising the microservices.
Ideally, the front end team could then test against mocks (WireMock or similar) of the dependencies, and wouldn't need to verify that data in the database is reported properly through the microservice; they'd just run the occasional smoke test against live instances of everything to make sure no one has broken any contracts.
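For example, with a standalone WireMock instance you can register a stub over its admin API so the front end tests always get a known payload (a minimal sketch; the endpoint and payload are made up):

```python
import requests

# Assumes a standalone WireMock running locally on its default port.
WIREMOCK_ADMIN = "http://localhost:8080/__admin/mappings"

# Stub the products endpoint so no live microservice is involved.
stub = {
    "request": {"method": "GET", "urlPath": "/products/PRD-001"},
    "response": {
        "status": 200,
        "headers": {"Content-Type": "application/json"},
        "jsonBody": {"code": "PRD-001", "name": "Standard Account", "rate": 1.25},
    },
}

resp = requests.post(WIREMOCK_ADMIN, json=stub, timeout=10)
resp.raise_for_status()
```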
On the other hand, if your real issue is that you're using the UI to validate that the data in the DB is correct, then it sounds like you should be improving the testing around how the data is created and/or writing checks of the data itself.