This project is for anyone who is trying to do, or interested in learning about, Continuous Deployment with containers.
If you happened to have read my previous series on why building deployment pipelines is still a challenge, you may have been left thinking… where’s the code???
If you felt that way then hopefully I can make you feel a little better with this latest update.
I want to introduce a project I am working on that aims to deliver something close to that target. I decided to do this mostly for my own fun and learning, but also to spread whatever knowledge I can to others on the same journey.
Typically what we find when looking for information on these topics is short blog posts that age quickly. I will try hard to differentiate this content by actively working on fully public code, and writing up my findings as I go.
I’m doing this entirely in my spare time, so progress is a bit sporadic. There’s a steep learning curve across a broad range of topics, and I’ll be making plenty of mistakes, which I intend to explain in detail to be as candid as possible.
This is the first formal update on my progress to date, and by “formal” I mean written up on the Elastx blog. My day-to-day or “informal” progress will appear on my website www.philbert.io. In my defence, I’ll mention that this is my first experience with Django, so you will notice a difference in the quality of the two sites :)
That website is currently a 5-container cluster deployed via docker-compose on a single CentOS 7 Docker host. It’s a simple architecture inspired by Real Python’s post on dockerizing Django.
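To give a feel for what a stack like that looks like, here is a compose-file sketch of a 5-container Django setup. The service names and images below are illustrative, not my actual configuration:

```yaml
# Hypothetical sketch of a 5-container Django stack; service names and
# images are illustrative, not the actual philbert.io configuration.
version: '2'

services:
  web:
    build: ./django-webapp       # the Django application container
    command: gunicorn mysite.wsgi:application --bind 0.0.0.0:8000
    depends_on:
      - postgres
      - redis

  nginx:
    build: ./django-nginx        # serves static content, proxies to web
    ports:
      - "80:80"
    depends_on:
      - web

  postgres:
    image: postgres:9.5

  redis:
    image: redis:3.2

  data:
    image: postgres:9.5          # data-volume container pattern
    command: "true"
```

The point is that the whole stack is described in one file and brought up with a single `docker-compose up -d` on the host.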
I spent a couple of months experimenting, trying to get compose to deploy reliably to a 3-node Docker Swarm cluster running Docker 1.11. It was a frustrating experience; if you want more details about why, you can read about it on my website.
Now that Docker 1.12 has arrived, introducing Distributed Application Bundles, it’s clear that they’ve done some major rethinking and re-architecting of how deployments to Swarm work.
After some experimentation with drone-ci, I dropped it when Jenkins 2 was announced with native support for deployment pipelines. I now exclusively use pipelines or multi-branch pipelines, which are viewable on jenkins.philbert.io.
All my code is available on my GitHub. Here is a brief overview of the repos:
- This is my main django application that I’m deploying through the pipeline
- This is my nginx container that serves my static content
- Here is my compose repo that orchestrates those containers
- I used Terraform to deploy my swarm cluster, though that work is now abandoned
- This is my version of Jenkins 2 that I’m using to drive all my deployments
My current workflow looks something like this:
The basic steps are as follows:
- Local development and testing of my apps and docker builds. I never push containers from my local machine
- After local verification and test success, I push my code to github
- A push webhook triggers a new Jenkins build. A push to the django-webapp or django-nginx repos triggers a test and build of a new container image.
- If the tests and build succeed, the new images are pushed to quay.io, ready to be pulled in the next deployment
- A new deployment is triggered by a push to the docker-compose-django repo. Images are pulled from quay.io and Docker Hub and deployed to production via docker-compose.
The reason I have one pipeline for building images and another for deploying containers is that the nginx container and the django container are currently tightly coupled by the way I use the collectstatic management command. The static content needs to be decoupled from the image before I can deploy the containers independently of each other.
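One common way to break that coupling (a sketch, not my current setup) is to run collectstatic into a shared volume when the app container starts, so nginx serves the static files from the volume instead of having them baked into its image at build time:

```yaml
# Sketch of decoupling static content from the nginx image via a shared
# volume; paths and service names are illustrative.
version: '2'

volumes:
  static-files:

services:
  web:
    build: ./django-webapp
    # collectstatic writes into the shared volume, then the app starts
    command: sh -c "python manage.py collectstatic --noinput &&
                    gunicorn mysite.wsgi:application --bind 0.0.0.0:8000"
    volumes:
      - static-files:/usr/src/app/static

  nginx:
    build: ./django-nginx
    # nginx only reads the collected files; its image no longer needs
    # to be rebuilt when the static content changes
    volumes:
      - static-files:/usr/share/nginx/html/static:ro
    ports:
      - "80:80"
```

With something like this in place, either container could be redeployed without rebuilding the other.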
What that means for the moment is that I’m only doing Continuous Delivery, not Continuous Deployment, because I need to coordinate (i.e. batch) the versions of the django-nginx and django-webapp containers.
This is what the pipeline for my django-webapp container looks like
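In Jenkinsfile form, the build pipeline boils down to something like the following. This is a simplified sketch; the stage names, image tags and registry paths are illustrative rather than copied from my actual Jenkinsfile:

```groovy
// Simplified sketch of the test/build/push pipeline for django-webapp.
// Stage names, image tags and registry paths are illustrative.
node {
    stage 'Checkout'
    checkout scm

    stage 'Test'
    // run the Django test suite inside a throwaway container
    sh 'docker build -t django-webapp:test .'
    sh 'docker run --rm django-webapp:test python manage.py test'

    stage 'Build'
    sh "docker build -t quay.io/philbert/django-webapp:${env.BUILD_NUMBER} ."

    stage 'Push'
    // only reached if test and build succeeded
    sh "docker push quay.io/philbert/django-webapp:${env.BUILD_NUMBER}"
}
```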
This is the deployment pipeline that will deploy a new version of the app
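Sketched as a Jenkinsfile, the deployment side is even simpler (again, stage names here are illustrative): pull the new images referenced by the docker-compose-django repo and recreate the containers.

```groovy
// Sketch of the deployment pipeline driven by the docker-compose-django
// repo: pull the tagged images, then recreate the containers.
node {
    stage 'Checkout'
    checkout scm

    stage 'Pull images'
    sh 'docker-compose pull'

    stage 'Deploy'
    // up -d recreates only the containers whose images changed
    sh 'docker-compose up -d'
}
```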
From a workflow perspective this works well enough that I wanted to document my progress so far. From an infrastructure perspective there is a lot of work to do: I’m not yet using containers in what I would regard as the spirit in which they should be used, and the setup still has too many parallels to how I would build this system with VMs.
So where to from here?
After seeing how things changed in Docker 1.12, I’ve decided to abandon Swarm and switch to Kubernetes. The cluster I plan to run will mostly be based on Kelsey Hightower’s Kubernetes The Hard Way, deployed via Terraform. I will probably use CoreOS for my hosts, but that should be an unimportant detail in the overall solution.
Next I have to port my setup away from docker-compose so that it can run in Kubernetes pods. After that I’ll work on my website and pipelines to decouple the containers so they can be deployed independently, which will finally let me focus solely on Continuous Deployment.
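Porting away from compose will roughly mean turning each compose service into a Kubernetes Deployment plus a Service. As a rough sketch of where the django-webapp container could end up (names, labels and the image tag here are illustrative):

```yaml
# Rough sketch of the django-webapp service as a Kubernetes Deployment;
# names, labels and image tag are illustrative, not a working manifest
# from my setup.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: django-webapp
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: django-webapp
    spec:
      containers:
        - name: django-webapp
          image: quay.io/philbert/django-webapp:latest
          ports:
            - containerPort: 8000
```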
That’s a ton of work still to go, and I’m just getting the basics right…