Butch, big thanks for the thorough answer. Seems you guys have a solid process.
Big ups for feature flags. I've been geeking out on dynamic environments recently (the on-demand env-per-feature approach), but I think the next step will be to explore feature flags a bit more.
The reason I asked is that I had hectic experiences with the release train at two separate companies. Both were disorganized places, though: one was super young and took pride in heroic efforts; the other was a classic shop straight out of Dilbert. Neither planned 5 days to iron out regressions after code cutoff - the startup allowed a couple of hours, the other place a day or so, and the release was always postponed… Both places would have been a nicer experience if that time had been allocated for the release process imo.
At the startup we eventually decoupled feature releases and spun up a dynamic environment to test each one separately. That worked well: there were about six teams, and each could deploy whenever it was ready. (Feature flags were used to coordinate with marketing when a feature needed a proper launch, as you described - roughly the pattern sketched below.) It did solve all the coordination problems, and things stopped being hectic.
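For anyone curious, the deploy/launch split looked roughly like this (a minimal TypeScript sketch; the flag name, the in-memory store, and the checkout example are all made up for illustration - in practice the flags were backed by a config service, not a Map):

    type FlagContext = { userId: string };

    class FlagStore {
      // Hypothetical in-memory store; a real setup reads from a config
      // service so flips take effect without a redeploy.
      private flags = new Map<string, boolean>([["new-checkout-flow", false]]);

      isEnabled(name: string, _ctx: FlagContext): boolean {
        return this.flags.get(name) ?? false;
      }

      // Marketing (or an admin UI) flips this on launch day.
      setEnabled(name: string, on: boolean): void {
        this.flags.set(name, on);
      }
    }

    const flags = new FlagStore();

    function checkout(userId: string): string {
      // The new flow ships to production dark; the old path stays live
      // until the flag flips.
      return flags.isEnabled("new-checkout-flow", { userId })
        ? `new checkout flow for ${userId}`
        : `legacy checkout for ${userId}`;
    }

    console.log(checkout("u1")); // legacy checkout
    flags.setEnabled("new-checkout-flow", true); // "launch day", no redeploy
    console.log(checkout("u1")); // new checkout flow

The nice part is that deploying and launching become two separate decisions, so teams can ship whenever they're ready and marketing picks the moment the feature actually goes live.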
Testing, though, was never fully solved imo. We had unit tests, but the regression suite wasn't up to par with the release strategy. With the release train we could do the necessary manual testing in one pass; once we were doing several deploys a day across the various feature branches, the effort needed to keep the same level of manual testing multiplied too. I don't think we tested as thoroughly in that approach, given how basic our regression suite was…
It feels to me that you guys actually have more in place for continuous integration/deployment than we did. Extensive use of feature flags plus confidence in the automation suite are two very strong signs imo.
Thanks again!