
Simple continuous deployment with docker compose, docker machine and Gitlab CI

Lars de Ridder

For local development of microservice-based systems running on Docker, we’ve found that docker compose is probably the best way to go, and its yaml file format is very usable for configuration as well. For some projects there is simply no need to scale out to multiple containers per service, as you’ll be just fine running all your containers on a single host. For these projects you want the move to production to be as smooth (and as simple) as possible.

So after spending time learning about Mesos, Kubernetes, Amazon ECS and other orchestration platforms, and picking up a ton of new concepts along the way, I concluded that they’re all awesome, but not really suitable for a simple move from local development with docker compose. They all have their own configuration formats (largely for good reasons), and all of them approach orchestration quite differently from docker compose, to support more complex deployment environments.

Now you can of course just set up Rancher and use that. Rancher uses the docker compose file format and is quite powerful. However, my secondary goal was to self-host as little as possible, because I’m lazy and don’t like system administration. So, let’s just use docker compose instead!

Prerequisites

To follow this article, make sure you’ve installed:

  • Docker
  • Docker Compose (docker-compose)
  • Docker Machine (docker-machine)

And that you have a project set up to use docker-compose on Gitlab.
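
For reference, a minimal docker-compose.yml for such a project could look like the sketch below (the web service name and the ports are hypothetical placeholders):

version: '2'

services:
  web:
    build: .
    ports:
      - "80:8000"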

Enter docker machine

Docker Machine is a tool that lets you install Docker Engine on virtual hosts, and manage the hosts with docker-machine commands. You can use Machine to create Docker hosts on your local Mac or Windows box, on your company network, in your data center, or on cloud providers like AWS or Digital Ocean.

So let’s get started setting up a machine. I went for Amazon EC2:

  • Set up the AWS CLI and run aws configure
  • Provision your new machine:
docker-machine create -d amazonec2 \
  --amazonec2-region eu-west-1 \
  --amazonec2-instance-type "t2.micro" \
  docker-compose-test

This last command takes a little while, but that’s really it. You now have a machine running docker! Let’s immediately do some deploying:

eval $(docker-machine env docker-compose-test)
docker ps  # test if it works
cd your-project
docker-compose up -d

You can find the remote IP address using docker-machine ip. To stop everything, run docker-compose down.
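
For example:

docker-machine ip docker-compose-test   # prints the instance's public IP
docker-compose down                     # stops and removes the containers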

Docker machine on other clients

If you work in a team and would like to give someone else access to the machine, they can register the already running instance by creating a new machine with the “generic” driver:

  • Add the new client’s public SSH key to the instance (e.g. append it to ~/.ssh/authorized_keys for the ubuntu user).
  • Run:
docker-machine create \
  --driver generic \
  --generic-ip-address=<INSTANCE_IP> \
  --generic-ssh-key <PATH_TO_PRIVATE_KEY> \
  --generic-ssh-user ubuntu \
  <MACHINE_NAME>

Note that this restarts the Docker daemon on the instance.
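
Afterwards, the new client can point their shell at the machine in the usual way:

eval $(docker-machine env <MACHINE_NAME>)
docker ps  # should list the containers running on the instance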

Continuous deployment

To get continuous deployment working, your CI jobs need to connect to the remote docker daemon over TCP, secured with TLS. We generate client certificates so that CI can authenticate:

./generate-client-certs.sh ~/.docker/machine/certs/ca.pem ~/.docker/machine/certs/ca-key.pem

This generates a client key and a certificate signed by the certificate authority that docker machine set up for you (ca.pem). Together, these allow you to connect to the remote docker daemon.
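
The contents of generate-client-certs.sh are not shown here; a minimal sketch using plain openssl, writing client-key.pem and client-cert.pem to the current directory, could look like this:

#!/bin/sh
# generate-client-certs.sh (sketch): sign a new client certificate with
# the CA that docker-machine created.
set -e

CA_CERT="$1"   # e.g. ~/.docker/machine/certs/ca.pem
CA_KEY="$2"    # e.g. ~/.docker/machine/certs/ca-key.pem

# Create a fresh client key and a certificate signing request.
openssl genrsa -out client-key.pem 4096
openssl req -new -key client-key.pem -subj "/CN=client" -out client.csr

# Docker requires the clientAuth extended key usage on client certificates.
echo "extendedKeyUsage = clientAuth" > extfile.cnf

# Sign the request with docker machine's CA, valid for one year.
openssl x509 -req -days 365 -sha256 -in client.csr \
  -CA "$CA_CERT" -CAkey "$CA_KEY" -CAcreateserial \
  -out client-cert.pem -extfile extfile.cnf

rm client.csr extfile.cnf

You can then verify the certificates against the remote daemon before handing them to CI:

docker --tlsverify \
  --tlscacert=$HOME/.docker/machine/certs/ca.pem \
  --tlscert=client-cert.pem --tlskey=client-key.pem \
  -H tcp://<INSTANCE_IP>:2376 version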

Integration with Gitlab CI

To set things up in Gitlab CI:

  • Make sure shared runners are enabled.
  • Add ca.pem, client-cert.pem and client-key.pem as secret variables to Gitlab (as CA, CLIENT_CERT and CLIENT_KEY respectively).
  • Create a .gitlab-ci.yml file:
stages:
  - deploy

deploy:
  image: docker
  stage: deploy
  before_script:
    # the docker image doesn't ship docker-compose, so install it via pip
    - apk add --update py-pip
    - pip install docker-compose
  script:
    # write the TLS material from the secret variables, so the docker
    # client can authenticate against the remote daemon
    - mkdir $DOCKER_CERT_PATH
    - echo "$CA" > $DOCKER_CERT_PATH/ca.pem
    - echo "$CLIENT_CERT" > $DOCKER_CERT_PATH/cert.pem
    - echo "$CLIENT_KEY" > $DOCKER_CERT_PATH/key.pem
    # build, stop the old containers and start the new ones on the remote host
    - docker-compose build
    - docker-compose down
    - docker-compose up -d --force-recreate
    # don't leave key material lying around on the runner
    - rm -rf $DOCKER_CERT_PATH
  only:
    - master
  variables:
    DOCKER_TLS_VERIFY: "1"
    DOCKER_HOST: "tcp://[YOUR_INSTANCE_IP]:2376"
    DOCKER_CERT_PATH: "certs"
  services:
    - docker:dind
  tags:
    - docker
  • Push to your repo’s master branch.
  • Observe continuous deployment.

You will probably want to split up the build and deploy jobs, and push the images to a registry after the build. Gitlab’s own container registry works well for this; a sketch of such a pipeline follows below.
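
This two-stage pipeline is a hypothetical sketch, not the exact setup from this article: it assumes your docker-compose.yml references the pushed image (for example image: ${CI_REGISTRY_IMAGE}), and it uses Gitlab’s predefined CI_REGISTRY, CI_REGISTRY_IMAGE and CI_JOB_TOKEN variables (CI_BUILD_TOKEN on older Gitlab versions):

stages:
  - build
  - deploy

build:
  image: docker
  stage: build
  variables:
    # talk to the docker:dind service started for this job
    DOCKER_HOST: "tcp://docker:2375"
  script:
    # log in to Gitlab's registry with the per-job token, then build and push
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:latest
  services:
    - docker:dind
  tags:
    - docker

deploy:
  image: docker
  stage: deploy
  before_script:
    - apk add --update py-pip
    - pip install docker-compose
  script:
    - mkdir $DOCKER_CERT_PATH
    - echo "$CA" > $DOCKER_CERT_PATH/ca.pem
    - echo "$CLIENT_CERT" > $DOCKER_CERT_PATH/cert.pem
    - echo "$CLIENT_KEY" > $DOCKER_CERT_PATH/key.pem
    # log in so the pull below can pass registry credentials to the daemon
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    # pull the image that the build job pushed, instead of rebuilding here
    - docker-compose pull
    - docker-compose up -d --force-recreate
    - rm -rf $DOCKER_CERT_PATH
  only:
    - master
  variables:
    DOCKER_TLS_VERIFY: "1"
    DOCKER_HOST: "tcp://[YOUR_INSTANCE_IP]:2376"
    DOCKER_CERT_PATH: "certs"
  tags:
    - docker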

Conclusion

So there you have it. The above workflow works quite well for relatively simple applications. When things get more complicated, however, you should have a look at Kubernetes, as it is quite nice. And if you’re not scared of self-hosting your orchestration, Rancher and Docker Swarm are pretty cool as well.