What's the best test strategy that fits the use of feature toggles?

Hello everyone, we are introducing feature toggles into our release strategy, and, as you probably know, it is not the easiest thing to deal with when you have to adapt the automated test suite to execute only the tests for which a feature toggle is ON and skip those for which it is OFF, and vice versa. We have been trying to figure out the best approach and how to adapt our test strategy accordingly, so I would like to hear your thoughts on the subject.
Thanks a lot in advance.


Hi Kevin. Welcome to the most awesome software quality community in the Milky Way (not a verifiable fact). But it's still the right place to be; glad you are here, with a good question that has perhaps been asked before in another fashion.

Hmmm, what if we invert your question?
What if we word it as "I have an A/B test"? Or perhaps we phrase it as "the product has multiple license tiers", or even "the product has multiple security groups; which tests do I run?" If you currently limit your tests to run only in specific scenarios, then you probably want to keep doing the same thing: write a custom test for each feature combination you care about. I prefer a more meta approach with table-driven tests, as it tends to scale better. But that's just my opinion; why not read what the community thought when this question was asked 3 years ago: 30 Days of DevOps Day 25: Feature Flags
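To make the table-driven idea concrete, here is a minimal sketch. The flag names, the `priceWith` stand-in for the system under test, and the expected values are all hypothetical; the point is that each row pairs a flag combination with the behaviour expected under it:

```javascript
// Hypothetical table-driven test: each row pairs a set of enabled
// feature flags with the behaviour we expect under that combination.
const cases = [
  { flags: [],                  expectedPrice: 100 },
  { flags: ["discount"],        expectedPrice: 90 },
  { flags: ["discount", "vip"], expectedPrice: 80 },
];

// Stand-in for the system under test: price depends on enabled flags.
function priceWith(flags) {
  let price = 100;
  if (flags.includes("discount")) price -= 10;
  if (flags.includes("vip")) price -= 10;
  return price;
}

// Run every row through the same test body.
const results = cases.map(({ flags, expectedPrice }) => ({
  flags,
  pass: priceWith(flags) === expectedPrice,
}));
```

Adding a new toggle combination then means adding one row to the table rather than writing a new custom test.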


Hi @kelvin1, welcome.

I’ve done all of these before:

  1. Write a test for the feature toggle off and a test for it on. Delete as appropriate when the toggle is switched on or off for good.
  2. Bet on which scenario is more likely to persist and only write an automated test for that one.
  3. Manually test both scenarios. Automate once the feature toggle is turned on or off for good.

When I did 1, building test automation scenarios was cheap and we had no manual testers on hand. When I did 3, building test automation scenarios was expensive and we had two manual testers on hand.

I’d say that this is entirely an investment decision. It depends upon the cost of writing these tests and the relative cost of manual testing. Ideally everything should be cheap to automate and all scenarios should be covered (including ones that will soon be deleted), but in reality the cost/benefit trade-offs don’t always work that way.

The cost/benefit trade-off for manual testing is much more favourable for scenarios that will not stick around. If there are 3 releases between introducing a feature toggle and turning it on or off for good, that potentially means only 3 manual tests of that scenario are required in total.
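As an illustration, approach 1 could be sketched like this. The `checkoutTotal` function and the prices are hypothetical stand-ins for a real feature behind a toggle; the two tests mirror the two toggle states, and one of them gets deleted once the toggle is removed for good:

```javascript
// Hypothetical stand-in for a feature behind a toggle:
// new pricing applies only when the toggle is enabled.
function checkoutTotal(featureEnabled) {
  return featureEnabled ? 90 : 100;
}

// One test per toggle state. When the toggle is switched on or off
// for good, delete whichever test covers the dead scenario.
function testToggleOff() {
  return checkoutTotal(false) === 100;
}

function testToggleOn() {
  return checkoutTotal(true) === 90;
}
```

The duplication is deliberate: keeping the two states as separate tests makes the eventual deletion a one-line change.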


I’ve generally had the backend tell me the feature toggle states and just put a line like this at the start of my test (for global features):

function testThing(t) {
    if (!feature.enabled("asdf")) t.skip("not applicable")
    // ... rest of the test
}

Then, if there’s a chance of the feature being enabled before the next release, I’ll spin up a few staging environments to test different combinations of features being enabled. If they’re just going to stay off, I typically punt testing that specific feature until later.

For account-specific features, I usually have the setup phase create one for each of the necessary scenarios:

function testThing(t) {
    setup({ requireFeatures: ["asdf"] })
    // ... rest of the test
}

Hope that helps :slight_smile:


Hello everyone, thanks a lot for your valuable contributions on the topic; very much appreciated. We finally realized that feature toggles are not what we need, as we are not yet mature enough to handle the different combinations of toggles for different customers in our releases. Hence, for now, we will stick to release toggles, which are much easier to manage, while we prepare for the future situation of managing different combinations of toggles. In the meantime, I will collect all the feedback, study and analyze it, and see how the solution can be applied once we are mature enough for it.


Thank you for sharing more of the journey with the community. Feature flags are a big step; I had never thought about them as a purely maturity-defined strategy, but you are seeing what I see: they require a more stable base to build from in some very specific ways. All the best @kelvin1, keep the questions coming.