As a tester what has been your experience of contributing to A/B testing?

I’d appreciate some views from testers with real experience of A/B testing; in particular, how you contributed as a tester, and maybe a few pointers on the sources you used to build knowledge and skills in this area.

Here is some of my recent experience in this area.

The team I was working with had a healthy culture of exploring, discovering and experimenting to help drive the product forward, and A/B testing was a big part of this.

As a tester I was able to shift left by getting involved in the hypothesis discussions with the team; there we tested a lot of ideas to design the A/B experiments.

Our OKRs played a part in the hypothesis discussions: why do we think this experiment could have a positive impact on our business, and do we think it could influence our OKRs, for example?

There was still some hands-on testing to do, as often the B was new to the product: a different design, a potentially more efficient user route, or perhaps a comparison of two features with the same goal, to find the customer-preferred option.

Testing had some challenges, as a variation is often locked to a user profile so that once users get a variation they get it consistently. The team were great when I flagged this as a testability discussion: one of the developers built an add-on so I could quickly select any variation from a list, regardless of the device, user, or other identifier that would normally force a consistent variation.
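To make the sticky-assignment problem concrete, here is a minimal Python sketch of how such an assignment with a tester override could work. The function name, hashing scheme, and override mechanism are my own illustration for discussion, not the team's actual add-on:

```python
import hashlib

def assign_variation(user_id: str, variations=("A", "B"), override=None):
    """Return the variation for a user.

    Normally the choice is derived from a hash of the user id, so the
    same user always sees the same variation. An explicit override
    (e.g. set by a tester via a debug menu) wins over the sticky hash.
    """
    if override in variations:
        return override
    # Stable hash of the user id -> consistent bucket per user
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % len(variations)
    return variations[bucket]

# Without an override, assignment is sticky per user:
assert assign_variation("user-42") == assign_variation("user-42")
# A tester can force either variant regardless of identity:
assert assign_variation("user-42", override="B") == "B"
```

The testability win is that the override bypasses the identity-based bucketing entirely, which is exactly what you need to cover both variants on one device and account.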

In shifting right we had the control and analytics tools; these actually did some level of analysis automatically, but we almost always had some discussion to interpret the results.

Many times we ruled out variation B. That was still a successful experiment, as we learned something new even if it was not immediately a positive change to the product.

I had a side thought about whether A/B testing was actually ‘testing’. With my view that testing is all about asking awesome questions that guide the team and product forward, I’d have to say it is: here we are directly asking the end users questions with our variations, and that’s testing 101 in my view.

So my exposure, whilst limited, has been very positive from a tester and testing perspective, and it’s something I am keen to do more of.

In particular, that culture of experimentation really fits with my approach to testing.

If you have experience of this, I’d really like your thoughts; they may help me contribute more with my tester hat on the next chance I get. Thanks!


That’s a big question. Here we have done various kinds of A/B testing in both the UI and API tiers. Not as much as I’d like, but there are multiple examples.

I mostly just evangelise; it’s not necessary for testers to be involved in everything going on, it’s only important that it happens. I’m more concerned about the teams or areas that are not A/B tested. Those areas need my help more; perhaps they need help getting into A/B testing.

Your question reads more as one of how to stay relevant as testing practices change. I think that resonates with a lot of us as testing becomes automated and more ownership of quality tips towards developers because of that.

There isn’t a simple answer. How you stay relevant depends on you, your skills and interests, as well as the needs of your organisation. It’s always a good idea to keep building those relationships, though. I’d ask the team to help me understand what they did, what worked, and what didn’t. Then I’d bring that to other teams when suitable projects pop up!


Hi @andrewkelly2555, to share my experience of A/B testing:
In one of my OTT projects, my team and I used to do A/B testing, but it was more to capture analytics data in production than to test in a closed testing environment. In the testing environment we could only check whether the A/B flow worked as expected or not. I also faced a similar issue with variations being “locked into a user profile”; to overcome this we either needed a hardcoded build from the developers, or used a proxy tool to change the config file to force variant A or B.
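The proxy-rewrite approach described above could look roughly like this. This is a sketch in Python of the kind of rewrite rule a proxy tool applies to a fetched config payload; the config shape, experiment name, and function are all hypothetical:

```python
import json

# Illustrative remote config payload, as the app might fetch it on startup
remote_config = json.dumps({"experiment_checkout": {"variant": "A", "rollout": 0.5}})

def force_variant(config_json: str, experiment: str, variant: str) -> str:
    """Rewrite the variant for one experiment in a config payload,
    mimicking what a proxy rewrite/map-local rule would do in transit."""
    config = json.loads(config_json)
    config[experiment]["variant"] = variant
    return json.dumps(config)

# The tester forces variant B without a special build or a new account
patched = force_variant(remote_config, "experiment_checkout", "B")
assert json.loads(patched)["experiment_checkout"]["variant"] == "B"
```

The advantage over a hardcoded build is that the same production binary is under test; only the configuration it receives is altered.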


I think a useful contribution a tester can make is to take a step back, ask “Are you sure?” and “Why do you think that?”, and check assumptions: the common tester attitude, but applied to the process rather than the software. This is largely based on this video: Martin Goodson - Most Winning A/B Test Results are Illusory - YouTube
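One concrete way to ask “Are you sure?” of a winning result is to check whether the observed lift is bigger than sampling noise. Here is a minimal sketch of a standard two-proportion z-test in pure Python (the conversion numbers are made up for illustration):

```python
import math

def two_proportion_z(conversions_a, n_a, conversions_b, n_b):
    """Two-proportion z-test: how surprising is the observed difference
    in conversion rate if A and B actually convert at the same rate?"""
    p_a = conversions_a / n_a
    p_b = conversions_b / n_b
    # Pooled rate under the null hypothesis of "no real difference"
    p_pool = (conversions_a + conversions_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# B "wins" (13% vs 10%) but with only 200 users per arm the z-score
# falls well short of the usual ~1.96 threshold for 95% confidence,
# so the lift is still consistent with random noise.
z = two_proportion_z(20, 200, 26, 200)
assert abs(z) < 1.96
```

That is exactly the kind of result Goodson warns about: it looks like a win on the dashboard, but the experiment simply has not collected enough data to say so.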

Regarding testability and A/B testing, one needs to consider how the A/B test implementation affects regression testing of the old workflow (say “A”), as well as coverage of the new workflow (say “B”), particularly with respect to test automation. Manual testing can be easier to work around.

On the test automation side, there’s also the matter of minimising the amount of change to the test code needed to support both the old A and new B workflows, and there are tricks for that in terms of element location strategy. This post of mine may be useful in that regard: A/B testing and Selenium automation | autumnator
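One common shape for such an element location strategy is to put a variant-aware indirection between tests and locators, so tests refer to a logical element name and only one table changes when a new variant appears. A minimal sketch (the element names and selectors here are hypothetical, not taken from the linked post):

```python
# Logical element names mapped to per-variant locators. Test code only
# refers to the logical name, so supporting a new variant means adding
# one entry here instead of editing every test that touches the page.
LOCATORS = {
    "checkout_button": {
        "A": "#checkout",          # old workflow's CSS selector
        "B": "button.buy-now",     # new workflow's CSS selector
    },
}

def locator(name: str, variant: str) -> str:
    """Resolve a logical element name to the selector for a variant."""
    return LOCATORS[name][variant]

# A test parameterised by variant uses the same call either way:
assert locator("checkout_button", "A") == "#checkout"
assert locator("checkout_button", "B") == "button.buy-now"
```

In a Selenium suite the resolved selector would feed straight into the driver’s element lookup; the point is that the variant branching lives in the locator table, not scattered through the test steps.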
