Ask Me Anything: CI/CD and Delivery Pipelines

Our final Ask Me Anything of 2018 with the very talented @bangser.a on the topic of CI/CD and Delivery Pipelines took place tonight. As usual, we had a lot of great questions :slight_smile:

If we didn't get to your question, why not ask it here? Maybe you're catching up on the Dojo and have thought of some more questions, or follow-ups to those answered by Abby tonight. All questions welcome!

1 Like

Unfortunately my own internet connection was not the strongest tonight so I wasn't able to be as much of a link ninja :disappointed:

A Club post relevant to tonight's discussions

Abby also referenced Gatling; you can find more about that here

0 bug tolerance

The state of DevOps

And the unanswered questions:

  1. Which comes first in CI/CD, tooling or techniques?
  2. Any tips on building automation/DevOps communities of practice in software houses with more traditional approaches to software delivery? (waterfall, on premise, etc.)
  3. Does CD also mean continuous release…? For me, I've experienced CD meaning pumping new code out to prod but turned off/behind split.io etc.
  4. What are the key indicators that you look at to check if a company's infrastructure is ready or not for CI/CD?
  5. What's your CI/CD pipeline and testing workflow, and how does the context of the project/product shape it?
  6. What will be the very first step for the team to start building a proper CD process? What is the tester's role in this process?
  7. Can you see/suggest any benefits to investing in CI/CD on projects which start and stop? We tend to deliver a change/feature and then don't have a project until the customer wants another change/feature.
  8. How do you cope with such a huge amount of ATs (acceptance tests) that it takes, say, an hour or more to run them on Jenkins? (Excluding deployment time)
  9. Could you talk about the role of feature flags in the branching process along with CI/CD?
  10. How important are notifications from a pipeline? How much should it notify and to whom? Also, can a pipeline be integrated with any ChatOps workflows?
  11. What are some monitoring tools you would recommend?
  12. Any hints on how third-party services can best be mocked/replaced in a CI pipeline?
  13. As you were saying your pipeline used to take up to 24 hours to run, where do you stand on only having tests in there that have been written to cover defects? Or is that too far the other way?
  14. How do you handle maintenance of mocks when you may use different technologies for different types of tests, e.g. unit, API, consumer-driven contract, UI, and a mock of the service?
  15. Going forward… do you feel testing itself needs to adapt dramatically with regard to testing before release vs testing in prod/practicing recovery?

:wave: Hi all! I am around right now for a bit and will try to get to some of the unanswered questions, but feel free to shoot some more over :smiley:

1 Like

We talked a bit about this, but definitely techniques and culture. There are a few things which can make tools limiting (e.g. them not being able to be run from a command line during the CI process), but there are WAY more things which can be culture limiting. Getting the team used to sharing not-yet-finished code is first and foremost. It can be really scary to show imperfect work, and if you have a cut-throat culture around mistakes this can be a very big hurdle. But it is a necessary one to get started! Even if you have the right culture, the techniques around keeping ALL changes in source control (yes, I mean DB migrations; yes, I mean config changes; yes, I mean the actual pipeline configuration) and being able to cordon off the unfinished bits using techniques like feature toggling and/or branch by abstraction can be hard and take practice. You don't want to be trying this stuff out for the first time when you are already going straight to production.
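A minimal sketch of what cordoning off unfinished work behind a feature toggle might look like. The function names, the `NEW_CHECKOUT_ENABLED` variable, and the toggle mechanism are all hypothetical; a real toggle service (split.io, LaunchDarkly, etc.) would replace the environment lookup:

```python
import os

def checkout(cart, payment_service):
    """Route to the unfinished checkout flow only when the toggle is on."""
    # Hypothetical toggle read from the environment for illustration only.
    if os.environ.get("NEW_CHECKOUT_ENABLED", "false").lower() == "true":
        return new_checkout_flow(cart, payment_service)   # unfinished, hidden in prod
    return legacy_checkout_flow(cart, payment_service)    # current behaviour

def legacy_checkout_flow(cart, payment_service):
    return payment_service.charge(sum(item.price for item in cart))

def new_checkout_flow(cart, payment_service):
    # Work in progress: still safe to merge to master because the toggle keeps it off.
    raise NotImplementedError("still behind the toggle")
```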

1 Like

Start solving problems for people. People view some of these cultural and technique changes as scary and sometimes even as posing a risk to their jobs. Combating this with arguments about how much it will help the company isn't going to work. BUT if you can help someone get their work done in time to get home to their family, or get them recognition, you have a shot. So basically try and stay away from fancy terms and "best" practices and keep it as close to home for everyone as you can. That will let these communities grow organically.

1 Like

Absolutely not, CD (delivery or deployment) does not speak to release. It is just as you describe, but for many people release and deployment are one and the same!

1 Like

I am going to take "infrastructure" liberally and include development practices. I would want to see that ALL changes that impact an environment are in source control and that these changes are explicitly versioned/labelled. I would also want to see either trunk-based development or at least clear continuous integration. From there you are just looking at how you gain confidence in your changes: how much do you trust your automation? How well can you evaluate the success of a deployment/release/live running of your system? How quickly can you fix a problem once you identify it? All of this needs to be evaluated against your appetite for risk.
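One small sketch of what "explicitly versioned/labelled" can look like in practice, assuming the pipeline injects a build label (for example a git SHA) into the environment at deploy time. The variable names and the idea of a version report are illustrative, not a prescribed setup:

```python
import json
import os

def version_info():
    """Return the version metadata a deployment check or dashboard could read."""
    return {
        "commit": os.environ.get("GIT_SHA", "unknown"),            # hypothetical variable names
        "pipeline_run": os.environ.get("BUILD_NUMBER", "unknown"),
        "deployed_config": os.environ.get("CONFIG_VERSION", "unknown"),
    }

if __name__ == "__main__":
    # Printed (or exposed on an endpoint) so you can confirm which change is actually live.
    print(json.dumps(version_info()))
```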

1 Like

Your project/product architecture absolutely impacts your pipeline architecture (and it can go the other way too!). Your ability to trust changes to one component independently of all other components is a key indicator of how well you can follow CD. So, if your application is all in a single repository and a single service, you can probably get a simple pipeline up and running (and trusted) very quickly. However, if you have tens or hundreds of services that have frequently changing interfaces and are tightly coupled, your pipeline will need to deal with that complexity as well, and it may be more challenging to gain confidence without setting up some more structure for how these services interact.

1 Like

I believe that the very first step is identifying what is absolutely necessary to build confidence to release. This can be done by looking at your current process and making that visible, but if you can get people to step away from the current process and look at the ideal, you may end up in a better spot. So… do you require a security sign-off? Really? For ALL changes? OK, maybe only for certain changes. Great, now you can make visible that changes to a certain repository or type of code will be stopped by a manual gate but other changes will pass right through.

So try and write out the names of the stages you have and what risk you think each of them is mitigating (or, if you don't have a defined process yet, identify what risks you want to mitigate and then name the necessary stages). Put this up in a visible place and maybe write on different-coloured slips of paper some examples of changes that could be made. See if people feel those changes could be trusted after passing through the pipeline. This can identify missing steps in your pipeline and engage people in different parts of the business.
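A rough sketch of writing stages down next to the risk each one mitigates, so the pipeline shape can be discussed before any tooling is chosen. The stage names and risks below are purely illustrative, not a recommended set:

```python
# Each entry pairs a pipeline stage with the risk the team believes it mitigates.
PIPELINE_STAGES = [
    ("compile + unit tests", "the change breaks existing behaviour at the code level"),
    ("API tests against a deployed build", "components no longer talk to each other correctly"),
    ("security review (manual gate, auth/payment code only)", "a sensitive change ships without sign-off"),
    ("smoke tests in production", "the deployment itself failed or misconfigured the environment"),
]

for stage, risk in PIPELINE_STAGES:
    print(f"{stage:55} mitigates: {risk}")
```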

1 Like

I am not sure about the stop/start, but basically CI/CD (just like any sort of automated regression testing) is most valuable when something is under change. So, even if you often drop projects for months at a time, if you are highly likely to need to pick a project back up and make changes to it, it may be worthwhile. This becomes even more worthwhile if you find that picking the project up again will mean needing to revisit the context and not being confident in what the behaviour of the system should be.

1 Like

Always back to value. If these tests fail, would you choose not to deploy? Then they have to be part of the pipeline. Could some of them fail and you would still deploy, but then quickly follow on with a fix? Maybe you can split those out. But if they all need to run and they take a long time (honestly, an hour is much better than many!), look at ways to kick them off in parallel, either within the suite itself or alongside other stages. Sure, usually you may run API testing before UI testing, but if the UI suite takes a long time you may choose to run them in parallel so as to get the feedback faster.
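A minimal sketch of kicking the API and UI suites off in parallel instead of sequentially; the suite names, test paths and pytest commands are placeholders for whatever your runner actually uses:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Hypothetical suites: each maps a name to the command that runs it.
SUITES = {
    "api": ["pytest", "tests/api"],
    "ui": ["pytest", "tests/ui"],
}

def run_suite(name, command):
    result = subprocess.run(command)
    return name, result.returncode

# Run both suites at the same time and collect their exit codes.
with ThreadPoolExecutor(max_workers=len(SUITES)) as pool:
    results = list(pool.map(lambda item: run_suite(*item), SUITES.items()))

# Fail this pipeline stage if any suite failed.
if any(code != 0 for _, code in results):
    raise SystemExit(1)
```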

1 Like

Both of these techniques allow code in the "master" branch (or any other shared branch you choose to deem the one that gets deployed to production) to be deployed even if some changes aren't ready to be released. I usually advocate for feature flags (even though they can be trickier to get used to) because they allow more flexibility in refactoring as well as more usefulness in "roll backs". So with CI, if you are running either of these you need to add to your pipeline a way to validate the not-yet-released features. This may mean exercising your application in a different toggle state than production, or running your tests against branches that aren't master. But I think this is the biggest reason that these "WIP" techniques can impact how you do CI/CD.
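A sketch of exercising the application in both toggle states within the pipeline, so the not-yet-released path still gets validated. It uses pytest parametrisation; the `NEW_CHECKOUT_ENABLED` toggle name matches the hypothetical example earlier and is not from any real codebase:

```python
import pytest

@pytest.mark.parametrize("new_checkout_enabled", [False, True])
def test_checkout_in_both_toggle_states(monkeypatch, new_checkout_enabled):
    # Force the toggle into the state under test before driving the application.
    monkeypatch.setenv("NEW_CHECKOUT_ENABLED", str(new_checkout_enabled).lower())
    # ...drive the checkout flow here and assert on the behaviour expected for
    # this toggle state (the released path and the WIP path differ).
```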

1 Like

If a pipeline is your only way to get a change to production (which, as I spoke about, is an ideal way to work) then any failure means you are UNABLE to update production. This means that if a critical bug comes in, you would need to fix your pipeline and THEN you may be able to fix the critical bug. Therefore, it is preferred that teams "down tools" any time a pipeline is failing. To make this feasible you would need to be alerted immediately of a failing state through visible team dashboards as well as individual alerts, either on your desktop, by email, or through a chat mechanism. However, if the team is not going to stop and fix things, or if the team is not even empowered to fix a broken stage, these alerts can quickly become fatiguing and should not be set up. You may want to read up on SRE alerting practices (for example in the Google SRE book or many articles around the web) to understand more about alert fatigue and how to create effective alerts.
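A rough sketch of a ChatOps-style notification when a stage fails, assuming the pipeline injects an incoming-webhook URL. The environment variable name and message format are made up; most chat tools accept a simple JSON payload on an incoming webhook:

```python
import os
import requests

def notify_failure(stage_name: str, build_url: str) -> None:
    # Hypothetical variable injected by the pipeline, not a real convention.
    webhook = os.environ["CHAT_WEBHOOK_URL"]
    requests.post(
        webhook,
        json={"text": f":red_circle: Pipeline stage '{stage_name}' failed - {build_url}"},
        timeout=10,
    )
```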

1 Like

I think this depends a lot on what you are trying to monitor. Are you in an on-prem/"pets" kind of environment? That calls for certain tools, versus if you are in a more container-based or even serverless environment. The next set of questions is around what feedback you want. Are you looking for time-series data like CPU usage over time? Or are you looking for more event-driven data, like a release occurred or a log was written? Most people need both, but the tooling is very different. Finally, you need to look at how you may use the outputs. Are you looking for something to track your knowns? Or are you looking for something to explore your unknowns? Each of these points to very different tools, so I am not sure it is fair to name specific ones here without more context.

1 Like

Unfortunately this can be a tough problem to solve. What you need to evaluate is what you need from running that service. Is there a set number of responses that you can mock out? Is there a contract you can trust? This is a great example of where your application architecture can make a big impact on how complex your pipeline architecture needs to be. If you are able to siphon off your connection to external services into a very small and isolated bit of code, you can stub it much more easily.
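A sketch of siphoning the third-party call into one small adapter so CI can swap in a stub. The payment provider, its endpoint, and the canned stub behaviour are invented for illustration:

```python
import requests

class PaymentGateway:
    """Thin wrapper: the only place in the codebase that talks to the provider."""

    def __init__(self, base_url: str):
        self.base_url = base_url

    def charge(self, amount_pence: int) -> bool:
        # Hypothetical provider endpoint; real integrations vary.
        response = requests.post(
            f"{self.base_url}/charges", json={"amount": amount_pence}, timeout=10
        )
        return response.status_code == 201

class StubPaymentGateway:
    """Drop-in replacement used in the CI pipeline instead of the real provider."""

    def charge(self, amount_pence: int) -> bool:
        return amount_pence > 0   # canned behaviour the tests can rely on
```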

1 Like

It may be controversial, but I actually don't think all defects should require a new test in the regression suite. Some defects can be fixed by removing functionality, or by changing things in such a way that they need to be monitored for, or that a different risk mitigation is put in place. So back to your question… I think that the only things in the pipeline should be things that are absolutely required to trust a deployment. In the experience I spoke about, we did not need 24 hours of tests to trust a deployment, but no one felt qualified or empowered enough to remove any of them without risking the finger being pointed at them :frowning:

So I stand very firmly on the ground of only putting deployment-necessary tests in the pipeline, which may or may not be things that have proven defective in the past.

1 Like

The biggest risk here is that the mock drifts from reality. To mitigate this risk, I look to run the mock against both the thing it is imitating AND the thing that is using it. You can look at a tool like Pact to better understand how this can be executed.
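A sketch of keeping a mock honest by running the same expectations against both the mock and the real service. Tools like Pact formalise this properly; the check below is a hand-rolled stand-in with made-up URLs and a made-up endpoint:

```python
import pytest
import requests

CONTRACT_TARGETS = [
    "http://localhost:8081",   # the mock used by consumer tests (hypothetical)
    "http://localhost:8080",   # the real provider, started for this pipeline stage
]

@pytest.mark.parametrize("base_url", CONTRACT_TARGETS)
def test_user_lookup_contract(base_url):
    # Both the mock and the real service must honour the same response shape.
    response = requests.get(f"{base_url}/users/42", timeout=5)
    assert response.status_code == 200
    body = response.json()
    assert {"id", "name"} <= body.keys()
```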

1 Like

Whew! Final one! So yes, I do think testing will evolve (potentially dramatically), but that is because of an expansion in our arsenal and not because of a lack of value. The idea of being able to double or triple the range of places where we can mitigate problems is empowering. We can be thoughtful about whether it is worth imitating production load when we have the ability to just trial the new service under that load with low risk! I would highly recommend checking out what the likes of SREs and Chaos Engineering specialists are working on, and come say hi in the testinginproduction channel in the Slack. There are lots of interesting things happening! A great intro would also be to check out the talk from Test Bash Manchester by Marcel and Benny.

2 Likes

Hey!
Hope you're doing great, Abigail & Heather :slight_smile:
Am I still OK to ask two more questions? I missed my chance during the AMA session. I hope you won't mind :wink:

  1. Could you list a few pros and cons of splitting the responsibilities apart: building the package separately from deploying it, especially when these two things are done in two separate products like CircleCI (build) and Octopus (deploy)? I'm just used to a single CI/CD tool for the whole thing (e.g. Jenkins, GitLab, Bamboo). I'd like to know your preferences, and it's okay if they're subjective :slight_smile:
  2. A strictly tooling-related question: Jenkins. In the 21st century, Jenkins looks pretty old school; however, I find it very "formable". Which modern tools would you recommend trying out if I were looking for a replacement? Let's assume the flexibility and freedom of building jobs like in a Jenkins declarative pipeline is the highest priority for me. I've heard GoCD is excellent, but I haven't had a chance to try it out.

Many thanks, and I'd appreciate it if you'd have a chance to look at my questions.
Jarek