I was interested in some backlash on this Twitter thread:
Which included this article:
And it got me thinking about Continuous Deployment (CD) and how it affects human testers, testing, and "good enough" quality. I'm taking CD here to mean that any change which doesn't trigger a failure in the automated check suite gets deployed to production automatically.
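To make that definition concrete, here's a minimal sketch of the kind of gate I mean. It's purely illustrative; the function and names are hypothetical and don't belong to any particular CI tool:

```python
# Hypothetical sketch of the Continuous Deployment gate described above:
# a change that passes every automated check is deployed to production
# with no human decision point in between.

def continuous_deployment(change, check_suite, deploy_to_production):
    """Run the automated check suite; deploy automatically if nothing fails."""
    if all(check(change) for check in check_suite):
        deploy_to_production(change)  # no tester or release manager in the loop
        return "deployed"
    return "rejected by the check suite"
```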
There are some words about CD here: http://www.satisfice.com/blog/archives/856
We want to test a product very quickly. How do we do that? It’s tempting to say “Let’s make tools do it!” This puts enormous pressure on skilled software testers and those who craft tools for testers to use. Meanwhile, people who aren’t skilled software testers have visions of the industrialization of testing similar to those early cabinet factories. Yes, there have always been these pressures, to some degree. Now the drumbeat for “continuous deployment” has opened another front in that war.
We believe that skilled cognitive work is not factory work. That’s why it’s more important than ever to understand what testing is and how tools can support it.
I’ve never worked in a CD environment, so I might be missing the contextual information that makes CD work for a company. The warning against prioritising speed over knowledge about quality seems a valuable one, as does the warning against the industrialisation of cognitive tasks. I’d be interested to hear opinions on CD’s impact on testing, and from any skilled software tester on how they cope with testing in a CD environment. I’m happy to hear opinions and experiences of Continuous Delivery too (where any change is considered deployable to production, but deployment isn’t automatic; someone has to do it).
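For contrast, here's the same hypothetical gate reworked for Continuous Delivery as I've described it: the check suite still decides what is deployable, but a person makes the final call. Again, the names are made up for illustration:

```python
def continuous_delivery(change, check_suite, mark_deployable):
    """Same automated gate, but a passing change only becomes deployable;
    a person still has to trigger the actual deployment."""
    if all(check(change) for check in check_suite):
        mark_deployable(change)  # waits for someone to push the button
        return "ready to deploy"
    return "rejected by the check suite"
```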
What I really want to know is: can CD work as a process while the product still gets properly tested? Or does it sacrifice real knowledge about quality for speed, or "improve" quality simply by pushing a lot of small bugs out to customers?