This is the second of three posts about building an advanced deployment pipeline.
Choose your tools wisely:
Whatever tools you have in place now, or whatever tools you choose to switch to, the same rules apply:
- Grok your tools
- Use your tools the way they were intended to be used first, before you start experimenting
- Avoid building from scratch. Most problems are already solved and someone has probably open sourced some code to solve exactly the problem you’re stuck on (or at least 90% of it).
- Always choose open source over proprietary
- Always check that the open source software you want to use has a healthy and active community around it
- Be wary of mothballed open source projects, unless you’re ready to start supporting them yourself
- If you choose to switch, make sure the new tool actually solves the problem you had with the old one
Source Control:
Ensuring control over changes in your VCS is pretty much the crux of Continuous Delivery. The most important axiom of CD is to keep your master branch releasable at all times. The Wikipedia page on Continuous Delivery actually describes this best. Out of that singular focus everything else will flow.
The converse is also true. If you are frequently breaking master, or your master branch is in an unknown state, then you have no hope of doing continuous delivery until you fix that problem.
What this means in practice is that you want as much of your verification as possible to happen on the feature branch before merging to master. This covers any kind of building, compiling or packaging, deploying the application, automated testing, and even manual testing if necessary. Code review should also be required for any change before merging to master is allowed.
Lastly, you need to ensure that developers are actually integrating their new changes with the latest on master (i.e. the “integration” part of continuous integration).
If you’re using git and your branching looks like this, the point is to get it looking more like this (the way Linus intended).
If your situation is this bad, then my preferred way of fixing it with git is to use a fork/pull-request model and enforce fast-forward-only merging to master. This means everyone must rebase their branches every time something is merged to master, and if something gets messed up, it’s isolated to the fork. Furthermore, rebasing ensures that everyone is actually doing the “integration” part of Continuous Integration, reducing the entropy at the HEAD of master and getting you that much closer to your goal of Continuous Delivery. The beauty is that this system will take care of itself and you can get back to fixing anything else that needs attention.
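As a sketch of the day-to-day loop, assuming a GitHub-style fork with the original repository as the `upstream` remote and a branch named `feature` (both names are illustrative):

```bash
# Bring your local master up to date with upstream; --ff-only
# refuses to create a merge commit, so master stays clean
git fetch upstream
git checkout master
git merge --ff-only upstream/master

# Replay your feature branch on top of the latest master --
# this is the "integration" step happening on your side
git checkout feature
git rebase master

# After review, the merge to master is a pure fast-forward
git checkout master
git merge --ff-only feature
```

How you enforce fast-forward-only merges on the server side depends on your hosting: typically a pre-receive hook or a repository setting, whatever your platform offers.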
However, be prepared: this is a pretty drastic measure to take, and there will definitely be some developers who kick and scream that you’re killing their mojo. Don’t let them win! Enforce it and set the bar so that everyone lives by the same rules.
- Git: Even though Linus has reportedly soured on it himself, I rank git as the gold standard of source control. It’s mature and there are lots of people familiar with it. Branching and merging are simple and fast (but unfortunately easy to make a mess of, as explained above). The ability to rewrite history with rebase is incredibly powerful. But git is a complex tool, and if you’re not comfortable on the command line you might struggle.
- Mercurial: Another fine distributed version control system, which at first appearance is similar to git in many ways, but with some notable differences in how it handles branching and merging. It is arguably easier to learn than git, but you will still need to grok it to use it properly (as with any tool). Branching and merging work seamlessly, and Mercurial will work just fine with this workflow.
- Subversion: If you have nightmares about branching in subversion, then you should probably switch to git or mercurial. If you’re looking at Continuous Delivery, centralised VCSs are an old and outdated idea, even though they are still heavily in use. However, if you’re happy with it and have worked out all the kinks in a clean way that allows isolating changes so they can be tested before merging, then it should be workable.
A note on code reviews:
The importance of an integrated solution for code review and source control cannot be overstated. Popular services such as GitHub and Bitbucket have contributed significantly to making this a part of Best Practice. If your code review tool is a separate piece of infrastructure tacked onto a VCS, such as ReviewBoard or Crucible, leaving it possible for changes to circumvent the requirements for review and testing, then that is a good reason to switch to something with tighter controls.
Orchestration:
By orchestration I mean creating cloud resources (Virtual Machines, containers, etc), configuring cloud services (DNS, Networking, Load Balancing, etc), and preferably also service discovery or communication between those resources when they are created or destroyed.
Orchestration in cloud computing is a confusing topic because there isn’t really a consistent definition of what “orchestration” actually means. The situation becomes even more complicated when you start looking at the plethora of tools out there that might at first seem like alternatives to each other, but in reality form a Venn diagram of overlapping functionality that you won’t fully appreciate until you try them out. With that in mind, here is a list of tools that I do see as alternatives to one another.
- Cloudformation: this is a declarative, full-featured framework for managing cloud resources on AWS. It’s a powerful and flexible tool, but it can get complicated pretty quickly as your usage of AWS expands (which is even more reason to use it). Some caution is required, however: even if you think you are a whiz at pushing buttons in the AWS console, CloudFormation knows more about how to deploy AWS services than you do, and you will be surprised if you make the wrong assumption. If you’re using EC2 then you should probably be using CloudFormation, sooner rather than later, because it is not possible to add existing resources (such as an EC2 instance) into a CloudFormation-managed stack. (A minimal template sketch follows below.)
- Heat: is OpenStack’s equivalent of Cloudformation. For the most part, Heat is compatible with the Cloudformation syntax, so if you’re familiar with one, it’s pretty straightforward to learn the other. One of the benefits of Heat is that it has a bit more flexibility in managing resources, with operations such as abandon-stack that allow you to delete a stack without removing the created resources, and adopt-stack to bring them back under management once you have fixed whatever you needed to. This is a pretty nice feature to help you through the learning curve of Heat templates because it shortens the feedback loop, but you’re doing something wrong if that’s a function you need to use in production.
- Terraform: is another declarative orchestration framework, with the major benefit that it’s platform agnostic. So if you are in a hybrid cloud scenario, for example running a private OpenStack cloud alongside AWS, then Terraform is probably something to look into. If you’re looking to avoid vendor lock-in then Terraform might be the way to go.
- Docker-compose: is Docker’s declarative resource orchestration framework and is definitely worth a look if you’re interested in running containers. Compose has a layered approach to handling different environments, whereby you start with a base configuration shared by all environments and then override certain values to turn it into dev/ci/stage/prod (as sketched below). That’s a really nice way to help maintain consistency between environments.
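As a sketch of that layering (the file names and service details are illustrative, not prescriptive), a base file carries what every environment shares, and a small per-environment file overrides the rest:

```yaml
# docker-compose.yml -- base configuration shared by every environment
version: '2'
services:
  web:
    build: .
    ports:
      - "8000:8000"
```

```yaml
# docker-compose.prod.yml -- production-only overrides layered on top
version: '2'
services:
  web:
    restart: always
    environment:
      - DEBUG=false
```

Compose merges the files in the order they are given, so production is started with something like `docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d`.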
I chose all of the above because I’m a big fan of declarative-style system definitions (as opposed to imperative). I think that is the right way to think about and build systems. But there are also other methodologies and lots of options for you to choose from (for example, PaaS is a wholly different type of orchestration, which technically fits into my definition above).
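To make the declarative style concrete, here is the minimal CloudFormation sketch promised above (the AMI id is a placeholder; use one from your region). You declare the end state and let the tool work out how to get there:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal declarative stack with a single EC2 instance
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-12345678   # placeholder AMI id
      InstanceType: t2.micro
Outputs:
  InstanceId:
    Value: !Ref WebServer   # CloudFormation resolves this to the instance id
```

There is no “create instance” step anywhere; adding, changing or removing resources is done by editing the template and updating the stack.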
Configuration Management:
This is something often mixed in with “orchestration”. In some ways that is understandable, because a declarative orchestration template, such as what is used in all of the tools listed above, is essentially just a YAML or JSON file. Once that file is stored in version control it becomes a part of your Definitive Media Library (to use an ITIL term). This makes it an essential part of the configuration definition needed to reproduce your environment.
Although we can view Configuration Management and Orchestration as a single topic, doing so makes the landscape of tools more confusing to understand. We’re better off dividing the two areas so that we can discuss them separately, which means we need an appropriate definition of Configuration Management to complement our understanding of Orchestration.
So when I talk about Configuration Management, I’m really referring to internal configuration of a compute resource, such as a virtual machine or container. This means software such as programming languages and frameworks, components such as apache and nginx, databases if they are running on the compute instances (as opposed to some *aaS like DynamoDB or Trove, in which case it falls more under “orchestration”), all the way down to files, folders and permissions.
- Docker: by now Docker is many things, but the technology at its core is really a method of shipping containers from one host OS to another. What’s inside the container remains the same regardless of the host Operating System, meaning that it solves the Configuration Management problem of how your application runs in different environments (see the Dockerfile sketch after this list). Containers are not a new idea at all, but it was only with Docker that their real potential was finally unlocked, by building industry consensus that this is now without question the future of virtualisation. If you have not started containerizing your application yet and you’re still fumbling around with Virtual Machines, then you had better put it on your road map, or be prepared to let the competition run you over.
- Ansible: is probably the simplest Configuration Management tool out there for managing Virtual Machines. It’s extremely lightweight (the tar.gz is less than 1MB!), and works simply over ssh (or winrm/powershell in the case of Windows) in push mode, or alternatively in pull mode if you want to do things like auto-scaling. It’s written in python and is “batteries included”, meaning that it has a load of inbuilt modules that work out of the box. Ansible is idempotent (or trivial to make idempotent), but it is not convergent. Ansible Tower is their paid-for version, which includes a centralised control and monitoring server that can be a real benefit in Production. The downsides of Ansible are that it can be slow compared to the alternatives, and you can occasionally run into cryptic errors that are difficult to decipher.
- Chef/Puppet: are the opposites of Ansible in many ways, considering they do the same job. Since they initially diverged eight or so years ago, they have basically spent the years since copying each other’s features, to the point where today they are stretching to differentiate themselves once more. They’re both complicated, big and bloated, but powerful (which is not necessarily a good thing, as that power tends to be abused rather than limited to what it was intended for). It is possible to run their server-agent architecture without the server, but you need to replace it with something else, which, in all the cases I’ve seen, ends up being Ansible. Both the chef-server and puppet-master are almost 500MB, which gets you nowhere without the rest of the software you need. If you’re already using puppet or chef then that’s fine, but if you’re new to virtualisation then choose Ansible.
- Saltstack: Where Chef and Puppet played the game of competitively diverging and converging over time, in many ways the same goes for Ansible and Saltstack. Salt began with a server-agent architecture similar to Chef and Puppet, but salt-ssh is obviously mimicking the agentless architecture of Ansible. In server-agent mode, Saltstack is faster than Ansible, and in all modes it is simpler to use and learn than Chef and Puppet, without suffering from the bloat.
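Here is the Dockerfile sketch mentioned above (the base image and application layout are assumptions; adjust for your stack). The point is that the runtime and its dependencies are declared inside the image, so the application behaves the same on every host:

```dockerfile
# The image carries its own runtime, so the host OS no longer matters.
# python:2.7 is an assumed base image; pick whatever fits your stack.
FROM python:2.7
WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install -r requirements.txt

# Add the application itself and define how it starts
COPY . .
CMD ["python", "app.py"]
```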
To summarise the points above, the primary decision to make regarding Configuration Management is whether you’re at a stage where you’re ready to make the leap to containerize your applications or not. So how on earth do you make that decision?
- First of all, you need to want to be on the bleeding edge of technology, and to feel you can handle the risk.
- Secondly, your organisation needs to be mature enough from a teamwork and culture perspective, as well as having some senior technical people who know enough about things like microservices, SOA and REST architectures.
- Lastly, you should not be spending the majority of your time firefighting or struggling with technical debt.
If you meet those 3 criteria then you’re in a good place to begin containerizing your applications. If you only fit 1 or 2 then you’re on the margins, and maybe you want to wait another 6-12 months so that you can fix some of your issues and let the other tools in the space mature. If you don’t fit any of those criteria then forget it, you have way bigger problems to solve.
If you are not ready to containerize but you still want to virtualize with VMs, then you will still need a tool like Ansible or Saltstack. I recommend against Chef or Puppet, unless you already have people who know these tools and know how to build systems with them without making a bigger mess. Even in that case, keep in mind that we can already see the end of the road for the Virtual Machine. And yes, I really mean that. The only reason we ended up with Virtual Machines as the dominant virtualisation technology is Windows, and now even Microsoft has seen the light. That means you really want to consider carefully how much time and effort you are going to spend on something that is, at best, a stop-gap.
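For the VM route, here is a minimal Ansible playbook sketch (the group name, package and modules chosen are illustrative) showing the style: you declare the desired state and the modules make it so, idempotently:

```yaml
# site.yml -- ensure nginx is installed and running on all web hosts
- hosts: webservers
  become: yes                # escalate privileges for package/service changes
  tasks:
    - name: Ensure nginx is installed
      apt:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      service:
        name: nginx
        state: started
        enabled: yes
```

Run it in push mode with `ansible-playbook -i inventory site.yml`; running it a second time changes nothing, which is the idempotence mentioned above.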
We’ll finish off this long series in Part 3.