How to handle test environments?

Hello, how are you?
I’m having a little trouble handling automated tests.
The SUT is an API that serves data from a Solr index.
I built a large suite of automated tests in SoapUI, managed in Git.
The tests query the databases (the data sources that feed Solr) in each environment to pull data for comparisons.
The problem is… the tests are the same in each environment, so they are repeated in develop, integration and pre-production.
The BIG problem is… in the develop environment there is no control over the data in the databases, so there is a fair probability of picking up data that produces “false” failures.
Is there a good practice or any advice for this scenario?
Maybe small tests for develop, deep tests in integration and a smoke test in pre-production?
I don’t know, and since the tests must pass to promote the solution to production, it’s a big headache when they fail because of data errors.
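To illustrate the pattern, each test roughly does something like this (a simplified Java sketch only; the real tests are in SoapUI, and the table, record ID, URLs and credentials here are all made up):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CompareApiAgainstDb {
    public static void main(String[] args) throws Exception {
        // The environment decides which source database we read from.
        String env = System.getProperty("test.env", "develop"); // develop | integration | preprod
        String jdbcUrl = switch (env) {
            case "integration" -> "jdbc:postgresql://int-db:5432/source";
            case "preprod"     -> "jdbc:postgresql://pre-db:5432/source";
            default            -> "jdbc:postgresql://dev-db:5432/source";
        };

        // Fetch the "expected" value from the environment's own database.
        String expectedTitle;
        try (Connection c = DriverManager.getConnection(jdbcUrl, "user", "pass");
             Statement s = c.createStatement();
             ResultSet rs = s.executeQuery("SELECT title FROM products WHERE id = 42")) {
            rs.next();
            expectedTitle = rs.getString("title"); // unstable in develop!
        }

        // Call the API under test for the same record.
        HttpResponse<String> resp = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create("http://" + env + "-api/products/42")).build(),
                HttpResponse.BodyHandlers.ofString());

        // If someone changed row 42 in the develop database, this fails "falsely".
        if (!resp.body().contains(expectedTitle)) {
            throw new AssertionError("API did not return expected title: " + expectedTitle);
        }
    }
}
```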

Regards


Hello @brunog!

It seems you have a mix of information objectives that may be making the testing more challenging than it needs to be.

With respect to the tests, not all of them have to execute in all environments. Once the behaviors have been successfully demonstrated in the Develop environment, there is no good reason to execute those tests again in another environment.

The Integration environment is where I recommend evaluating things that are important at that level: security, configuration, and connectivity. Smoke tests are adequate for this; usually these attributes can be evaluated by executing tests against one or two APIs. I recommend security be set up similarly to production, with environment differences kept in configuration files. IDs and passwords are a different matter and should be addressed as guided by your Security teams.
With just smoke tests in Integration and Production, there is perhaps a better chance of the tests providing reliable information.
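As an illustration only, a smoke test at that level could be as small as the sketch below (the property names, endpoints and status codes are assumptions about your setup, not facts about it). Note that credentials stay out of the test itself, in line with the point about IDs and passwords above:

```java
import java.io.FileInputStream;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Properties;

public class IntegrationSmokeTest {
    public static void main(String[] args) throws Exception {
        // Environment differences live in configuration files, not in the tests.
        Properties cfg = new Properties();
        try (FileInputStream in = new FileInputStream("config/integration.properties")) {
            cfg.load(in);
        }

        HttpClient client = HttpClient.newHttpClient();

        // Connectivity + configuration: can we reach the API at all?
        HttpResponse<String> health = client.send(
                HttpRequest.newBuilder(URI.create(cfg.getProperty("api.baseUrl") + "/health")).build(),
                HttpResponse.BodyHandlers.ofString());
        if (health.statusCode() != 200) {
            throw new AssertionError("Health check failed: " + health.statusCode());
        }

        // Security: an unauthenticated call to a protected resource should be rejected.
        HttpResponse<String> secured = client.send(
                HttpRequest.newBuilder(URI.create(cfg.getProperty("api.baseUrl") + "/products/1")).build(),
                HttpResponse.BodyHandlers.ofString());
        if (secured.statusCode() != 401 && secured.statusCode() != 403) {
            throw new AssertionError("Expected auth to be enforced, got " + secured.statusCode());
        }

        System.out.println("Smoke test passed for " + cfg.getProperty("env.name"));
    }
}
```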

Joe


This sounds like the problem where a test has no internal oracle, relying instead on fed-in data and a table of expected responses. That pretty much guarantees you are never going to find new defects: if you keep expecting the same inputs to give the same outputs, you have no way of fuzz testing (where you can stress the system by pushing through huge amounts of data that would be inconvenient to store), and you will never catch bugs introduced by environment changes that may or may not be accidental. I’m thinking you want to look at making sure the suite can fetch its data from any source.
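To make that concrete, here is a rough sketch of what “fetch its data from any source” could look like, with the oracle derived from the source at run time instead of a fixed expected-values table (the table, columns and endpoint are hypothetical, and `ORDER BY random()` assumes a PostgreSQL-style source):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class OracleFromSourceTest {
    public static void main(String[] args) throws Exception {
        // Both come from configuration, so the same suite runs against any environment.
        String jdbcUrl = System.getProperty("source.jdbcUrl");
        String apiBase = System.getProperty("api.baseUrl");

        long id;
        String title;
        try (Connection c = DriverManager.getConnection(jdbcUrl);
             Statement s = c.createStatement();
             // Pick a record at random so repeated runs do not always exercise
             // the same input (a very light form of fuzzing).
             ResultSet rs = s.executeQuery(
                     "SELECT id, title FROM products ORDER BY random() LIMIT 1")) {
            rs.next();
            id = rs.getLong("id");
            title = rs.getString("title");
        }

        HttpResponse<String> resp = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(apiBase + "/products/" + id)).build(),
                HttpResponse.BodyHandlers.ofString());

        // The oracle is derived from the source at the moment of the test,
        // so data drift in develop no longer produces "false" failures.
        if (resp.statusCode() != 200 || !resp.body().contains(title)) {
            throw new AssertionError("Mismatch for id " + id + ": expected title '" + title + "'");
        }
    }
}
```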

100% with Joe: do NOT run the same tests with the same data in all deployments. It’s like pressing a button 100 times and expecting a different outcome on the 99th press; all that does is waste electricity. You want to work harder at earning confidence in your test reports, especially from the developers who know how those reports can lie.