We currently use an automation framework built with MSTest and Selenium C#, following the Page Object Model (POM), for the e-commerce domain in our organisation.
Our organisation is now moving toward an A/B testing strategy for a few pages.
Therefore, I am unsure what testing strategy to employ in the existing automation framework so that the code and pages will function properly across all design variants.
I am completely new to A/B testing, so any assistance would be greatly appreciated.
We were in a model where anyone could propose an A/B (or even A/B/C/D) test, and we'd often run four or five at the same time. Sometimes you throw them all away; other times you may keep one.
You need to consider what triggers the variation: sometimes it's by user, sometimes by IP, but once assigned, that user is often stuck with that variation until the experiment has concluded.
One of the devs built a browser add-on so we could quickly reset and control the variation for testing; you would need that level of control.
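If your experimentation platform stores the assignment client-side, you may not even need an add-on: a test can force the variant directly. This is only a sketch, and it assumes the platform reads its assignment from a cookie; the cookie name ("ab_basket_variant") and values here are hypothetical placeholders, so check what your tool actually uses.

```csharp
using OpenQA.Selenium;

public static class VariantControl
{
    // Force a known variant by overwriting the (assumed) assignment cookie.
    // Cookies can only be set for the current domain, so navigate to the
    // site before calling this.
    public static void ForceVariant(IWebDriver driver, string cookieName, string variant)
    {
        driver.Manage().Cookies.DeleteCookieNamed(cookieName);
        driver.Manage().Cookies.AddCookie(new Cookie(cookieName, variant));
        driver.Navigate().Refresh(); // re-render the page under the forced variant
    }
}

// Usage in a test:
// driver.Navigate().GoToUrl("https://example.com/basket");
// VariantControl.ForceVariant(driver, "ab_basket_variant", "B");
```

If the assignment lives server-side (by user account or IP), you'd instead need a test hook from the dev team, such as a query-string override, which is exactly the kind of control the add-on gave us.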
Now, with so many variations being cast aside, some perhaps even a day later if you have the data to justify it, you need to consider whether it is worth automating the variation at all.
A/B tests normally start with a hypothesis, and that's where I start testing: by questioning the hypothesis. It could be something as simple as "different colour tones are potentially more inviting than others", so let's try an experiment to see if more users click a picture depending on its colour.
Because I have already invested time in the hypothesis stage, I generally like to run through the variations to get a personal feel for them. Naturally, this also gives me test coverage; it could be as little as 10 minutes of effort for small variations.
Our developers did most of the automating, so yes, they could add basic checks into each variation, but in many cases they opted against it and only added the automation once a new version was agreed on as the primary.
I'd recommend an extra round of team discussion on the regression risk of each variation; you then need to balance that risk against the potential value of the experiment and the coverage you have of that risk.
Our approach worked for us, but maybe your needs are different.
Yes, in our situation, A/B testing will be conducted across users; for example, one user may receive Design A for the basket page, while a different user may receive Design B.
But for the automation framework, the important concern is how to handle the POMs for both designs, and how our framework can decide which code to check against when it receives Design A or Design B for the basket page.
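If you do end up automating both designs, one common way to structure it is a shared page interface with one POM per variant, plus a factory that detects which design actually rendered and returns the matching POM. This is only a sketch: the interface, class names, and every locator below are hypothetical, and it assumes the dev team stamps a detectable marker (here a `data-experiment-variant` attribute) onto the page.

```csharp
using System.Linq;
using OpenQA.Selenium;

// One contract for the basket page, regardless of which design rendered.
public interface IBasketPage
{
    void AddItemToBasket(string sku);
    string ReadTotal();
}

public class BasketPageDesignA : IBasketPage
{
    private readonly IWebDriver _driver;
    public BasketPageDesignA(IWebDriver driver) => _driver = driver;

    public void AddItemToBasket(string sku) =>
        _driver.FindElement(By.CssSelector($"[data-sku='{sku}'] .add-to-basket")).Click();

    public string ReadTotal() =>
        _driver.FindElement(By.Id("basket-total")).Text;
}

public class BasketPageDesignB : IBasketPage
{
    private readonly IWebDriver _driver;
    public BasketPageDesignB(IWebDriver driver) => _driver = driver;

    // Same contract, different locators for Design B's layout.
    public void AddItemToBasket(string sku) =>
        _driver.FindElement(By.CssSelector($".basket-b [data-sku='{sku}'] button")).Click();

    public string ReadTotal() =>
        _driver.FindElement(By.CssSelector(".basket-b .total")).Text;
}

public static class BasketPageFactory
{
    public static IBasketPage Create(IWebDriver driver)
    {
        // Detect the variant from a marker only one design renders.
        bool isDesignB = driver
            .FindElements(By.CssSelector("[data-experiment-variant='B']"))
            .Any();

        return isDesignB
            ? new BasketPageDesignB(driver)
            : (IBasketPage)new BasketPageDesignA(driver);
    }
}

// Tests then talk only to IBasketPage, so assertions stay variant-agnostic:
// IBasketPage basket = BasketPageFactory.Create(driver);
// basket.AddItemToBasket("SKU-123");
```

The trade-off is exactly the one raised elsewhere in this thread: every variant POM is throwaway code, so this structure at least keeps the throwaway parts isolated behind one interface, making them cheap to delete when the experiment concludes.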
Which is probably the case, since, as @andrewkelly2555 points out, A/B testing produces throwaway code, and we know that this throwaway code will not only pollute our automation scripts over time but also add to the test triage load.
It will become cheaper to test manually; it does not scale over many releases.
As others have pointed out, it would be ideal if you could avoid automating the additional flows of A/B[/C/D] testing. But if you do have to do it for whatever reason, this post of mine might give you some tips and ideas on how, as I've done it before myself: A/B testing and Selenium automation | autumnator.