Help with trying out DevOps?

As it’s not testing specific, this seemed the best place to ask.

At my workplace, we’re looking to experiment with DevOps. To do so, a manager put out a call asking who would be interested in building a working DevOps pipeline that utilises the cloud and automation.

The restriction is it can’t be reliant on any of our existing workplace infrastructure, tools or software. We are going into this with a clean sheet, and not letting our legacy hold us back.

The objective is to demonstrate:

  1. A cloud based client and target machine i.e. simulate a dev environment + a deployment target.
  2. Environments are transitory (they can be blown away and easily recreated).
  3. The dev machine will host an IDE, test framework, analysis and security scanning software.
  4. The application we develop is not the focus, however, it must be able to demonstrate a red/green build/test/deploy.
  5. On check-in of our code, we need to trigger a pipeline that can:
    a. Scan for vulnerabilities (internal and external),
    b. perform static analysis,
    c. build,
    d. test,
    e. deploy,
    f. configure,
    g. generate a dashboard.
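A toy sketch of those pipeline stages as an ordered runner may help make the red/green idea concrete. Everything here (stage names, stub functions) is illustrative, not tied to any real CI tool; real systems like Jenkins or AWS CodePipeline express the same ordering declaratively:

```python
# Toy pipeline runner: each stage is a callable returning True (green) or
# False (red). Stage names mirror the steps listed above.

def run_pipeline(stages):
    """Run stages in order; stop at the first red stage."""
    for name, stage in stages:
        ok = stage()
        print(f"{name}: {'green' if ok else 'red'}")
        if not ok:
            return False  # red build: halt the pipeline
    return True  # all green

# Illustrative stage implementations (stubs standing in for real tools).
stages = [
    ("scan_vulnerabilities", lambda: True),
    ("static_analysis",      lambda: True),
    ("build",                lambda: True),
    ("test",                 lambda: True),
    ("deploy",               lambda: True),
    ("configure",            lambda: True),
    ("generate_dashboard",   lambda: True),
]

if __name__ == "__main__":
    run_pipeline(stages)
```

The useful property to demonstrate is that a single red stage stops everything downstream, which is what makes a check-in trigger trustworthy.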

DevOps is a complete unknown to me, and likely the others who are interested in trying it out.
What resources, insights, suggestions (anything really), would people recommend?

I have grabbed a link to Dan Ashby’s model showing that testing can happen everywhere in DevOps, so if they try to say it can’t be done at any point, I will show them otherwise :sunglasses:


Wow you have quite a journey ahead of you!

One pointer I have learned is that your selection of cloud platform has a huge impact, specifically on what is readily available off the shelf and, more importantly, the quality of the documentation. I have been part of CD with AWS and with GCP. The former is more expensive, but it has by far more tools and better documentation; there are a lot of guides on how to set up a specific solution on that platform. For GCP you will have a harder time finding the same information. I have not tried this with Azure.

A harder lesson is that some decisions are really important (as in, figuring out later that you need to change one is super expensive). Cloud platform is one; service orchestration is another. Things like monitoring and log capturing are easier to change, provided you use “standard solutions”.

Do not reinvent the wheel. Solutions already exist for most of the major common problems in this domain.

To reiterate the cloud part: in one place we spent a lot of time and effort creating the structure on the platform, i.e. setting up billing, access, secret management, and naming standards to scale with the organisation. That allowed people and teams to be autonomous and find their own solutions. At another place we were more ad hoc: I need a server in the cloud, so I get that to work; I need a VPC, so I set it up; I need this server to be able to speak to that server… uh-oh, now we need to rework the VPC, which breaks some weird solution in another team, and so on. Untangling that mess has taken some time. So for your trial and error I suggest having one setup, and once you have figured things out, spend the time to create a “correct” setup. Don’t mix the two.

Also, it is way easier to keep everything in one solution, so taking the effort to migrate already existing infrastructure to the cloud is better in the long run. A simple example is the handling of domain names.

Finally, treat infrastructure as code and commit early to tools like Terraform. Most guides show you how to set up a solution in the cloud console or with commands, but investing in Terraform or similar will help you scale and keep a consistent quality in the work.
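The core idea behind tools like Terraform is declarative desired state: you describe what should exist, and the tool computes the difference from what actually exists. A toy reconciliation sketch (this is not Terraform’s real engine; resource names and shapes are made up for illustration):

```python
# Toy illustration of infrastructure as code: compare desired state
# (the "code") against actual state and compute the changes to apply.
# Terraform's plan/apply cycle is built on this same principle.

def plan(desired: dict, actual: dict) -> dict:
    """Return which resources to create, update, and destroy."""
    return {
        "create":  [k for k in desired if k not in actual],
        "update":  [k for k in desired if k in actual and desired[k] != actual[k]],
        "destroy": [k for k in actual if k not in desired],
    }

# Hypothetical resources: a web server we want resized, a database we
# want created, and a leftover cache we want removed.
desired = {"web_server": {"size": "t3.small"}, "database": {"size": "db.t3.micro"}}
actual  = {"web_server": {"size": "t3.micro"}, "old_cache": {"size": "t3.nano"}}

changes = plan(desired, actual)
# → create ["database"], update ["web_server"], destroy ["old_cache"]
```

Because the desired state lives in version control, the same plan can rebuild a blown-away environment, which is exactly the transitory-environment requirement in the original brief.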

Good Luck!


Thanks for your response Ola.

I believe we’re likely to use AWS, as that is what the company is moving towards for other things, and we have been getting people trained on it. So it makes sense that we end up using it.

I’ve not heard of Terraform, so I’ll have a look into that and mention it to the devs who are involved, as they might know more about it.

Remember that tooling is only half the story in DevOps, getting the interaction and communication between the development and the operations staff flowing smoothly is critical to really making it work.


I’ve just spent the past year in an Ops-heavy “DevOps” team, so I’ve got lots of thoughts and ideas to share on this very topic, and I’m in the middle of writing a blog post about it. I’m literally about to go on holiday though, so I should have it written next week!
Terraform is a good shout; it’s for automating the orchestration of AWS resources. And you’ve also got Ansible, which fits neatly into the same tech stack (using YAML) and configures your servers after you’ve provisioned them in AWS.
Then once you’ve got Ansible, you can use a neat testing tool called Molecule as a sort of unit test for your Ansible code.

But there are other tools and products available, such as Puppet and Chef. It depends what languages people like writing in.

I’m trying to avoid a long reply (that will be the blog post), but definitely try to advocate for building in zero-downtime deployments early in the design, as that can be quite difficult to apply retrospectively.
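One common zero-downtime pattern is blue/green deployment: deploy the new version alongside the old one, health-check it, and only then flip traffic over. A minimal sketch of that switch logic (the `Router` class and environment names are hypothetical stand-ins for a real load balancer):

```python
# Toy blue/green switch: traffic only moves to the new environment after
# it passes a health check, so users never see a broken or absent service.

class Router:
    def __init__(self, live_env: str):
        self.live_env = live_env  # environment currently receiving traffic

    def switch(self, new_env: str, healthy) -> bool:
        """Flip traffic to new_env only if its health check passes."""
        if not healthy(new_env):
            return False          # keep serving the old environment
        self.live_env = new_env   # old environment stays up for rollback
        return True

router = Router("blue")
switched = router.switch("green", healthy=lambda env: True)
```

The design point worth building in early is that the old environment remains untouched until the new one is proven healthy, which makes rollback a matter of flipping the same switch back.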
And check out Chaos Engineering with regards to testing (particularly in a TDD sense).

Oh and I highly recommend Katrina’s book on the subject:


Thanks for the response Matt.

Seeing the names of so many tools I’ve never heard of is a bit intimidating, but it’s probable that the devs involved haven’t heard of them either.

I’ll definitely check out Katrina’s book, and I look forward to reading your blog post after you come back from holiday.

Seconding Katrina’s book, it is an excellent resource! I’d also point to Charity Majors’ blog if you’ve not read it: https://charity.wtf/ (it covers a lot of observability and human stuff, but I think both of those things are super important, as DevOps is a mindset change as well - suddenly you can spin things up and tear them down super quickly, and push to live basically whenever your pipeline dictates. It’s a process, but a fun one :slight_smile: )


Hi. We have been transitioning to DevOps teams for some time now, as we previously had a cloud product but released bi-weekly. I’ll try to share a few learnings, but feel free to ask about anything more technically oriented.

  • Despite the name DevOps, it is IMO about the whole-team approach. What works best at our company is this way of working: testing is outlined early in the process; the story is developed by devs while automated tests are developed by QA in parallel; then it is merged to master (let’s say 2-3 microservices and a frontend) and the automated tests pass; QA or a dev does exploratory testing, and the code continues via CD to the QA environment (automated tests only) and on to production.

  • Tests need to be pretty fast. You don’t want to wait 60 minutes after deployment to an environment for the tests to finish; it will feel like a blocker for continuous deployment. You simply want the result fast.

  • Monitoring is important, so think about it from the very beginning.

  • Know your cloud and be mindful when saving money; it can lead to hours wasted investigating magical issues. E.g. saving money with EC2 spot instances causes the environment to be recreated, so services are unavailable for some time. This was an issue for us, resulting in automated test failures == the need to investigate those failures.

  • As you have already seen in other replies, everything needs to be defined as code. That’s essential.


Following up my short post with a longer one in blog form, as promised:

A bit of a brain dump to be honest, but it’s fresh at the moment.
