Test groups for "progressive testing delivery"

Hello,

We are currently considering gradually rolling out our features (for testing) based on test groups. The idea would be to have a certain number of groups that gain access to a feature according to its progress, similar to the Xbox Insider program, but using server flags.

In our initial drafts, we identified five groups:

  • Developer: members of the feature team
  • Tester: group with internal QA
  • Internal: group with all company members
  • External-tester: group with external testers
  • Beta: all our beta users

These groups would be nested: the further down the list, the more polished the feature is.
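The nesting could be expressed as a simple ordered check: each feature carries a "stage" flag, and any group at or inside that stage sees it. A minimal sketch, assuming the five groups above; the `Group` enum and `has_access` function are hypothetical names, not from any real flag service:

```python
from enum import IntEnum

class Group(IntEnum):
    # Ordered from innermost (least polished features visible)
    # to outermost (most polished only).
    DEVELOPER = 0
    TESTER = 1
    INTERNAL = 2
    EXTERNAL_TESTER = 3
    BETA = 4

def has_access(user_group: Group, feature_stage: Group) -> bool:
    """A feature released to a given stage is also visible to every
    earlier (more inner) group, so the groups nest automatically."""
    return user_group <= feature_stage

# A feature flagged for INTERNAL is visible to developers and QA,
# but not yet to beta users:
assert has_access(Group.DEVELOPER, Group.INTERNAL)
assert has_access(Group.TESTER, Group.INTERNAL)
assert not has_access(Group.BETA, Group.INTERNAL)
```

Modeling the groups as an ordered enum means promoting a feature is just bumping one server-side value, rather than maintaining five separate allow-lists.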

There is a lot of documentation on "progressive delivery," but little on "progressive testing delivery" (if I can call it that).

Do you use this kind of rollout to test your features?
What levels/granularity have you adopted?

Thanks for your feedback.


I'm not used to this and a bit confused: does group 1 have no testers at all?
If so, I see that as another problem.
If not, I'm fine.

I work in a different setup: I have full access to the code and can build and deploy the product on my own PC. If I wanted to, I could make changes to the product (and already have, at least in the translation files).
I sometimes even test on their feature branches.
And I would like it to always be like this. The closer I am to the developers, the better I can provide value through my work, with less waste and delay.

If your organization is the way you describe, it is not your fault. I suggest advocating for changing it.

We do have similar groups, but they are less hard-wired by server flags.
We have different servers, some internal, some at the customer site, where we deploy builds of different branches.
As our application is very heavily integrated with others and mostly processes data, we cannot test/simulate everything internally, so we also have a test system at the customer's site. There we can test the communication with the other systems, and we get much richer data.
As a final step before production, we have a kind of User Acceptance Test system where the power users check whether they are happy with what we have delivered.

We draw a hard line between the testers/power users on the customer side and ourselves.

But internally we are more lax. In general we differentiate two groups; that is the main assignment, but once in a while everyone does tasks and tests from the other group. Internally, everyone can access everything, and it's mostly a matter of task assignment who does what and where. Access to the customer test environment is more restricted, as it involves more administration and also costs.
By and large, some people do more of the testing internally (including me), while others concentrate more on the tests in the customer environment.

Does this help you?