Good automation practices for executing ANY test case against ANY customer environment

I’d like to find out if there are any suggested good practices for UI automation testing that make it possible to execute any test case against any customer database/environment.

Here’s the background…

I maintain a UI test automation solution for web and mobile products that is used by a variety of different customers, all of which have completely different data and configuration (essentially they all create and process ‘jobs’, each with their own unique system configuration).

I understand that good practice is for our tests to be environment-agnostic, so for example they could run against dev, test and production environments (e.g. by creating page objects that don’t tie them to any one environment).
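
For illustration, here is a minimal Java/Selenium sketch of such an environment-agnostic page object. It is only a sketch: the class name, locators and URL path are hypothetical; the point is simply that the environment’s base URL is injected rather than hard-coded.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Hypothetical login page object: it encodes how to drive the page,
// but takes the environment's base URL from configuration, so the same
// class works against dev, test or production.
class LoginPage {
    private final WebDriver driver;
    private final String baseUrl; // injected per environment

    LoginPage(WebDriver driver, String baseUrl) {
        this.driver = driver;
        this.baseUrl = baseUrl;
    }

    void login(String username, String password) {
        driver.get(baseUrl + "/login");
        driver.findElement(By.id("username")).sendKeys(username);
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("login-button")).click();
    }
}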

But what would be a good practice in terms of executing any given test case against any given customer environment?

I know that, for a simple login script for example, keeping the test data (URL, username, password, etc.) separate from the test case - as we do currently in ‘execution profiles’ - means we could run that test against any customer environment simply by having a different execution profile for each customer.
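
As a rough sketch of that separation in plain Java (Katalon’s execution profiles serve the same purpose; the file layout and keys below are made up for illustration):

import java.io.FileReader;
import java.io.IOException;
import java.util.Properties;

// Hypothetical loader for one properties file per customer, e.g.
// profiles/customerA.properties containing:
//   baseUrl=https://customer-a.example.com
//   username=test.user
//   password=secret
class ExecutionProfile {
    static Properties load(String customer) throws IOException {
        Properties props = new Properties();
        try (FileReader reader = new FileReader("profiles/" + customer + ".properties")) {
            props.load(reader);
        }
        return props;
    }
}

The same login test then runs against any customer by switching profiles, e.g. by feeding profile.getProperty("baseUrl") and the credentials into a page object like the LoginPage sketch above.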

But inevitably, for more in-depth test cases, such as creating a new job, much of the test is going to be specific to that customer in terms of their data (for example, Customer A’s list of places will be different from Customer B’s).

Even if the data-specifics could be maintained in the execution profile, the likelihood is that the workflow will vary between customers because of their individual system configuration, which could mean that we’d need a separate ‘create job’ test case for each individual customer, thereby defeating the objective?

I would really appreciate it if anyone could share their experiences and knowledge of this or similar scenarios, or provide links to any potentially useful resources.

For what it’s worth, we use Katalon, but I’m really hoping to hear about the principles of how your solution has been implemented, more so than the tools/solutions used (unless of course they’re fundamental to the solution).

Thank you :slight_smile:

PS: Apologies if this has been covered already - I did a quick search of these forums and Google generally, but with no luck.


But what would be a good practice in terms of executing any given test case against any given customer environment?

Limitation: I don’t know your product and test cases, so I can only guess and give a generic answer.

I have done a lot of automation, but never had a situation quite like yours. Maybe I can still help you.

I doubt the “any” in your question. “Appropriate” is, IMO, what you are looking for.
I suggest you ask your developers where things may look different but are technically the same. That way you have fewer cases. Do not test everything “blindly”, but test appropriately, based on what you learn.

How good is your shared understanding of what should be automated at all?
Maybe the devs also have coded test cases which cover different scenarios.

which could mean that we’d need a separate ‘create job’ test case for each individual customer, thereby defeating the objective?

At the extreme, I fear that might be the case.
But maybe that is also appropriate.
Discuss it with your team and/or product manager. No tester (automation engineer, etc.) should make this decision on their own. This is about effort, money and risk.

One concrete idea: work with abstraction.
Create an abstract class which represents the overall idea of that test case. It contains a concrete “run” method which calls several abstract methods (the template method pattern).
These abstract methods are like test steps, and you implement them differently for every customer.

// Java code; the idea should look similar in other languages
abstract class GeneralScenarioX {
    // The fixed overall flow, shared by every customer
    void run() {
        doX();
        doY();
        doZ();
    }
    // Test steps that each customer implements differently
    abstract void doX();
    abstract void doY();
    abstract void doZ();
}

class ScenarioXForCustomerA extends GeneralScenarioX {
    @Override
    void doX() { /* Customer A's variant of step X */ }
    @Override
    void doY() { /* Customer A's variant of step Y */ }
    @Override
    void doZ() { /* Customer A's variant of step Z */ }
}
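
To show how this could be wired up, a purely illustrative sketch of selecting the right implementation at runtime: the registry, the “customer” system property and any further subclasses are assumptions, with the key plausibly coming from an execution profile.

import java.util.Map;

class ScenarioXRunner {
    public static void main(String[] args) {
        // Illustrative registry from a customer key to that customer's
        // implementation of the shared scenario.
        Map<String, GeneralScenarioX> scenarios = Map.of(
                "customerA", new ScenarioXForCustomerA()
                // further customers would register their own subclasses here
        );

        // Pick the implementation from configuration, e.g. -Dcustomer=customerA
        String customer = System.getProperty("customer", "customerA");
        scenarios.get(customer).run(); // same test idea, customer-specific steps
    }
}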

I seldom see easy solutions in automation for abstractions like that. Often you have to implement the scenarios explicitly.
For me it finally comes down to: is the effort of development and maintenance worth it? Do we get enough confidence out of this? Or can we spend the time and money better?

Finally, I would say your to-be-coded test cases are different for every customer. They are equal only at an abstract level, in their idea; the concrete details differ.
With different customer configurations, you test different code in your product.


Sebastian, thank you for your reply, I really appreciate it.

A number of the questions you’ve posed in your response are similar to what I’m pondering myself right now: what is appropriate to automate? Do we get value and confidence from this, or are we better off directing time and effort elsewhere?

I like to follow the 80/20 rule in test automation, so I try to get as much as possible out of the time I have for automation (which is not as much as I’d ideally like), and that’s why it’s so important that anything we automate offers value in return.

I think I understand the principle of your code and what you’re suggesting (even if I don’t fully understand the code itself - we use a ‘codeless’ automation solution). If I’ve understood correctly, you’re essentially saying that we have a ‘base’ test containing the ‘create job’ aspects common to all customers, and then additional test steps pertinent to each particular customer?

As you’ve alluded to, I agree that the question of what value this offers needs to be established - my initial feeling is that there is a lot of extra work for what could be a relatively small amount of gain (although I do agree that by testing different customer configurations, we are indeed testing additional code that may not be tested otherwise).

And yes, I agree that ‘appropriate’ is more what we’re looking for than ‘any’ - the idea of ‘any test against any customer’ is a notion that has been put to us, rather than one we’ve devised ourselves as a test team. Our current automation approach is to test ‘shallow and wide’ - i.e. test a little bit of everything (or as much as possible) before then going deeper into specific (e.g. known to be problematic) areas.

Your response has given me plenty of food for thought - thank you :slight_smile:


:raised_hands:

Yes.

I’m sad to hear that, and I suggest you push back on it.
Maybe your management is a bit helpless here and “just” wanted to apply a simple solution. But there is none.
Basically it’s a lack of confidence and trust. Talk to those people about your ideas and I guess you will build trust.
That is, if I’m right in my basic guess about the source.

I’m happy to read that :slightly_smiling_face:
You are welcome.


Hi Sebastian,

Your guess of the source is correct :slight_smile: But this ‘suggestion’ was put to us in a positive way and is seen as an aspiration rather than a must-have, and indeed my questions here are part of my work to evaluate the possibility of achieving this aspiration; any and all concerns I have will be aired and discussed!

Thanks once again,
Kevin


I’m glad to hear that. That sounds like a constructive environment.
CU
