Testing in the Land of Continuous Deployment

I was interested in some backlash on this Twitter thread:

Which included this article:

And it got me thinking about Continuous Deployment, how it affects human testers and testing, and what counts as good-enough quality. I’m taking CD here to mean that any change that doesn’t trigger a failure in the automatic check suite gets automatically deployed to production.

There are some words about CD here: http://www.satisfice.com/blog/archives/856

We want to test a product very quickly. How do we do that? It’s tempting to say “Let’s make tools do it!” This puts enormous pressure on skilled software testers and those who craft tools for testers to use. Meanwhile, people who aren’t skilled software testers have visions of the industrialization of testing similar to those early cabinet factories. Yes, there have always been these pressures, to some degree. Now the drumbeat for “continuous deployment” has opened another front in that war.
We believe that skilled cognitive work is not factory work. That’s why it’s more important than ever to understand what testing is and how tools can support it.

I’ve never worked in a CD environment, so I might be missing the contextual information that makes CD work for a company. The warning against prioritising speed over knowledge about quality seems a valuable one, as does the warning against the industrialisation of cognitive tasks. I’d be interested to hear opinions on CD’s impact on testing, and from any skilled software tester on how they cope with testing in a CD environment. I’m happy to hear about opinions and experiences in Continuous Delivery too (any change is considered deployable to production, but deployment isn’t automatic; someone has to trigger it).

What I really want to know is: can CD work as a process while the product is still properly tested? Or does it sacrifice real knowledge about quality for speed, “improving” quality by just pushing a lot of small bugs to customers?


Following. We’re currently talking about CD, but I’ve not worked in a company that used CD before (same context as above). I’d be keen to hear success stories from a test perspective, and things to watch out for, from those who’ve been involved.


I suggest you read Katrina Clokie’s great book “A Practical Guide to Testing in DevOps”, available on Leanpub. You will find lots of interesting stories about how a tester can still be very valuable when deployment doesn’t wait for non-automatic feedback from testers.


Except that his role changes a lot.
He’s now the quality assistant, or even the quality assurance guy.
He has to persuade people into caring about him and involving him.
He has to find the ways software can fail, and does fail, so that he can find a fast strategy to diminish the impact.
He has to work a lot more with monitoring, logging, and alerts.
He has to care about different people (devs/ops) and their work more than before.
He has to prove value somehow, otherwise his role will change drastically to something like software engineer in test, quality administrator, or release manager.
I see testing in DevOps as more about quality assurance and less about testing.


Thank you for referencing my article “Has Continuous Deployment become a new Worst Practice?” My concern is that testing is not seen (or valued) as a skill in and of itself. Test automation is seen as a replacement for human testers, instead of a complement to them. I believe James Bach wrote about this.

I attended the 2017 Agile & Beyond conference and a company boasted about how they do 200 releases to production a week. This got me thinking “If it’s so easy and painless to deploy to Production, why spend a lot of time testing the code prior to deployment? Just throw it out to Prod after passing some rudimentary tests. If we hear about something going wrong, we will just make a fix and deploy again.” But in reality, how many users will take the time to report a bug? How easy will it be to report it? How will the user know when it’s been fixed?

I see a dangerous trend in software development where test automation is everything and testing (test techniques) is nothing. And this comes at a time when software is becoming more complex (autonomous vehicles) and interconnected (IoT).


Excellent answer John!
If there was already a trend of about 80% of software projects failing, then without testing (or with only “automatic testing”) this is going to be epic!
I think it’s very sad that, at a time when “agile projects” could make a difference, the speed that projects are trying to pursue will ruin every (human) testing effort.

Many thanks, Juan.

In many agile projects, managers often want to continually increase velocity (the number of story points delivered in a sprint). This creates poor quality and game playing (story point inflation). These managers always seem to forget one of the pillars of agile: work at a sustainable pace.


I’ve worked on projects practicing both forms of CD (Continuous Delivery and Continuous Deployment), and I don’t think the issue is with the practice itself but with organisations or teams implementing it without fully understanding it and how it applies in their context.

It’s worth noting that the 200 releases per week are likely to be very small commits or changes rather than whole user stories, e.g. a couple of lines adding a new dummy button as a demand experiment for a possible new feature. The smaller the change and the smaller the batch size, the less likely there is to be a problem with the deployment. I recommend reading The Principles of Product Development Flow by Donald G. Reinertsen to learn about this concept and more of the theory behind CD.

Other types of testing can fit into the CD process, and there are two ways I’ve done this:

  1. Add a new stage to the CD pipeline where releases deploy to a Test environment and the change doesn’t proceed down the pipeline until a manual trigger. The problem with this technique is that the new stage becomes a bottleneck and increases the time to production.
  2. Deploy to production and use feature toggles to isolate new features or risky changes from customers. Feature toggles give internal staff or testers access to the new change so they can do exploratory testing before the toggle is flipped for all users. This is my preferred method.
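The second option can be sketched in a few lines. This is a minimal illustration, not a real feature-flag library: the toggle store, the staff list, and the three states ("off", "internal", "on") are all assumptions made for the example.

```python
# Hypothetical feature-toggle check: a feature in the "internal" state is
# visible only to staff/testers, so it can be explored in production
# before the toggle is flipped on for everyone.

INTERNAL_STAFF = {"alice@example.com", "bob@example.com"}  # assumed staff list

# Toggle states: "off" (nobody), "internal" (staff/testers only), "on" (everyone).
feature_toggles = {"new-checkout": "internal"}

def is_enabled(feature: str, user_email: str) -> bool:
    """Return True if this user should see the feature."""
    state = feature_toggles.get(feature, "off")
    if state == "on":
        return True
    if state == "internal":
        return user_email in INTERNAL_STAFF
    return False

print(is_enabled("new-checkout", "alice@example.com"))    # True: staff sees it
print(is_enabled("new-checkout", "visitor@example.com"))  # False: customers don't
```

When testing in production is finished, flipping the stored state to "on" releases the feature to all users without a new deployment.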

Thanks for the reply. Just on the point of feature toggles: what do you see as the practical advantage (if any) of a continuously deployed product with its features turned off, over a product deployed with its features only once they’re tested?


From a release manager perspective, it is vastly less risky and painful to have teams releasing feature-toggled items regularly than going off on a branch for months and then BAM! Megamerge time!


One big advantage is that any operations or configuration issues have already been solved if I’m testing in production. One problem with Test environments is that they are production-like but never 100% the same as production. I’ve seen cases where I tested in the QA environment and let the change go through to production, only for new issues to appear relating to configuration problems.

@annaheyonbaik raises a good point about avoiding merge hell with long-living branches (We have a time limit on branches if we use them)


I too would prefer the second option - being able to turn it off and on in prod.

I wonder how many companies have data that is easily segregated for testing? I worked at a company where you could use a reserved set of Account Numbers in production for testing, and the data from these accounts was “ignored” in the real production reports, so it didn’t distort the stats.
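The reserved-account idea above amounts to a filter in the reporting layer. A minimal sketch, assuming a reserved numeric range and a simple list of order records (both invented for illustration):

```python
# Hypothetical reporting filter: orders from reserved test accounts are
# dropped before any production stats are computed, so in-production
# testing doesn't distort the numbers.

TEST_ACCOUNT_RANGE = range(9000, 9100)  # assumed range reserved for testers

orders = [
    {"account": 1234, "amount": 50.0},
    {"account": 9001, "amount": 10.0},  # a tester's order; must not skew stats
    {"account": 5678, "amount": 25.0},
]

def real_orders(rows):
    """Keep only rows that do not belong to reserved test accounts."""
    return [r for r in rows if r["account"] not in TEST_ACCOUNT_RANGE]

total = sum(r["amount"] for r in real_orders(orders))
print(total)  # 75.0 — the test account's 10.0 order is ignored
```

The same segregation has to be applied everywhere data is aggregated (reports, billing, analytics), which is why having it designed in from the start is so valuable.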


That is absolutely a horrible situation. I would quickly have a talk with anyone trying to encourage this practice. I hate to think about having to put up with a manager type pushing that garbage.

This video from Spotify talks about Release Trains and the ability to toggle off features if they’re not ready for production.

Bach’s premise “we want to test a product very quickly” is not the aim of CD. Also “the industrialization of testing similar to those early cabinet factories” is not what CD is trying to achieve.

I have helped two organisations move to CD; feel free to ask me any questions.


Continuous deployment is the tester’s friend. Hands-on, manual exploratory testing is a crucial part of our CD pipeline, and is a great way of maximising the value testers bring (alongside automated tests for regression checks).

The key thing is a definition of done for a unit of work that includes it being testable and releasable to production. This gives the test engineers hands-on control of how something progresses through the pipeline. In this situation, testing is simply one of the stages of the CD pipeline. It works well for everybody because the feedback loop is kept fast and tight, and releases are kept small, with much lower risk than a ‘big’ release.

Here’s how we run things:

A user story is created, and acceptance criteria created as part of this (either with the whole team or a minimum of the three amigos). These AC are the jumping off point for testing the story.

The development is done on a new feature branch created for that story. The developer will create unit tests, and potentially integration tests and GUI-level automated tests (often in conjunction with a test engineer). After code review, the story goes to a test engineer, who can deploy it onto a test environment, check the AC are met, and do further exploratory testing based on the level of risk. If any issues are found, the story is passed back to the developer. Otherwise it goes to the product owner for final signoff.

Once signed off by the PO, the tester merges the branch, which means that code is now ready to be released. The new master branch is deployed to a staging environment, where a suite of automated regression tests is run and the test engineer does some quick sanity checks around the new changes. If the test engineer is happy, they push the release to production. We do sometimes use feature switching too, but the code will have been tested with the feature enabled.
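The flow described above is a chain of gates, some automated and some human. As a rough sketch (the stage names, the dict-based change record, and the approval mechanism are all assumptions, not any team's real tooling):

```python
# Illustrative model of a gated CD flow: a change advances stage by stage
# and stops at the first gate that fails; exploratory testing and PO
# signoff are human gates alongside the automated ones.

def run_pipeline(change, approvals):
    """Walk a change through the stages; report where it stops, if anywhere.

    `approvals` maps the human gates (tester, product owner) to booleans.
    """
    stages = [
        ("automated checks", change["checks_pass"]),
        ("code review", change["review_ok"]),
        ("exploratory testing", approvals["tester"]),
        ("product owner signoff", approvals["product_owner"]),
        ("staging regression suite", change["regression_pass"]),
    ]
    for name, passed in stages:
        if not passed:
            return f"blocked at: {name}"
    return "released to production"

change = {"checks_pass": True, "review_ok": True, "regression_pass": True}
print(run_pipeline(change, {"tester": True, "product_owner": False}))
# → blocked at: product owner signoff
```

The point of the model is that the tester's verdict sits in the main path to production rather than off to one side, which is what keeps manual exploratory testing a first-class stage of the pipeline.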