What is your company's DR (Disaster Recovery) strategy if GitHub goes down?

GitHub itself is hosted on AWS. If it shuts down, you could lose the reference to the remote repository. So you're still left with your own clone, right? Not too big a loss per se in the individual's eyes, since most people know the status of their own repos and won't panic too much. But then what? What is the strategy if this happens?

2 Likes

Not being on GitHub, but having your own Git repo hosting. Don't ask about our disaster recovery if Teams or Outlook are down. Both are in the cloud now.

I think the repositories are the least of the problem. You can easily set up a shared repo to which everyone syncs. And even without that, people can sync with each other individually. I'm not saying it's good, but it's doable.
I guess losing all the tools, automation, etc. is the bigger problem. Is that what you're implying?
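As a rough sketch of the peer-to-peer case (the host name and path are made up, and it assumes SSH access to each other's machines):

```sh
# add a colleague's clone as a remote and fetch from it directly
git remote add alice ssh://alice-laptop/home/alice/product
git fetch alice
git merge alice/main        # or: git pull alice main

# the colleague can fetch from your clone the same way, so work keeps flowing
```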

This may be a bit less about DR proper, but having ways to communicate external issues out to engineering is helpful.

  • We have a #is-it-down channel in Slack that reports when services that we use are having issues of any type
  • At one org I was a part of, we had a Discord channel where we could communicate with each other if Slack was out of commission (which happened once or twice)
  • I think knowing the extent of the external services used is a big step forward if that information is documented and maintained.

IMHO, having your own Git repo hosting is even worse in case of a real disaster, as the cloud is a couple of orders of magnitude "safer" in that regard. If you lose your on-premise hardware, that's it, no recovery.

As for OP's question… I haven't yet been on any team that actively talked about disaster recovery. It's possible, however, that DevOps have some DR scenario process tucked away in some obscure folder, who knows :smiley:

The point of Git is that you can recover your central repository from most local dev repos. Then you just need another shared medium to push the repo to.
Git in itself is a backup.
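A minimal sketch of that recovery, assuming some reachable machine can act as the new shared medium (host and path are invented):

```sh
# on whatever machine becomes the new shared medium
ssh new-git-host 'git init --bare /srv/git/product.git'

# from the most up-to-date local clone: push all branches and tags
git push ssh://new-git-host/srv/git/product.git --all
git push ssh://new-git-host/srv/git/product.git --tags

# everyone else just repoints their existing clone
git remote set-url origin ssh://new-git-host/srv/git/product.git
```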

I was referring exclusively to the repositories. Anything workflow-related like GitLab (the UI for the repo, PR forms, issue tracker, boards, CI/CD, etc.) is gone for sure if you don't have a backup.
But even the CI/CD side can be recovered comparatively easily, as more and more of the configuration lives in the Git repo (often differing from branch to branch) and is interpreted at run time, rather than being stored hard-coded on the CI server. Once we have new hardware, we could recover our lost CI server and its agents comparatively easily.
I just asked: once we have new VMs, we can recover our CI server within hours and build again.

1 Like

You have a point there. I'm thinking out loud… if the repo has some of the "infrastructure as code" YAMLs and other configs, you can have all the pipelines, k8s and whatnot set up in the repo itself, so setting it all back up could be a rather trivial task.
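A tiny sketch of what that could look like, assuming the manifests live in a `deploy/` folder of the repo (names and paths are hypothetical):

```sh
# clone from wherever the repo survived, then re-apply the declarative config
git clone ssh://new-git-host/srv/git/product.git
kubectl apply -f product/deploy/
```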

1 Like

Damn, this is such a good question, and my answer is that we totally didn't think about it! :slight_smile: Thanks for the trigger!

1 Like

Nice that I could bring my point across. :slight_smile:

More or less, that is how it works for us with Jenkins and Jenkinsfiles.
These are Groovy files that describe the whole process of building, automated checking and deploying the project, and they exist on every branch of our product. That makes branch-specific changes to the build process easy.

(In detail: most of the build and unit-check configuration lives in Maven files, which are called by that Groovy file. The Groovy file gathers all the results, displays them and provides the built files as artifacts.
We have a Java EE application and Maven is our build tool; yours might be different.)

And we distinguish two types of jobs/Jenkinsfiles: one for building and unit checking (that is where Maven comes in) and one for running API and UI checks against a test server on which the build is deployed.
We don't run the API and UI checks for every build.

We use Declarative Pipeline:
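Roughly like the following, although this is only a simplified sketch and not our real Jenkinsfile (stage names and commands are placeholders):

```groovy
// Simplified sketch of a declarative Jenkinsfile for the build-and-unit-check job
pipeline {
    agent any

    stages {
        stage('Build and unit checks') {
            steps {
                // Maven does the actual compiling and unit checking
                sh 'mvn -B clean verify'
            }
        }
    }

    post {
        always {
            // gather the results and provide the built files as artifacts
            junit '**/target/surefire-reports/*.xml'
            archiveArtifacts artifacts: '**/target/*.war', fingerprint: true
        }
    }
}
```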

In addition we use Multibranch Pipeline jobs. There you mostly just add the product repo and configure the location of the Jenkinsfile; the job then discovers on its own which branches it can build. All details of the pipelines are stored in the Jenkinsfile.
That way the Jenkins server is more of a thin shell with very little configuration. And we even have backups for that.
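If you ever want even that last bit of job configuration in code as well, a hypothetical Job DSL sketch of such a Multibranch Pipeline job would look roughly like this (we set ours up through the UI; the repo URL is invented):

```groovy
// Hypothetical Job DSL sketch of a Multibranch Pipeline job
multibranchPipelineJob('product') {
    branchSources {
        git {
            remote('ssh://git-host/srv/git/product.git')
        }
    }
    factory {
        workflowBranchProjectFactory {
            // where the pipeline definition lives in each branch
            scriptPath('Jenkinsfile')
        }
    }
}
```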