At Meshcloud we use a continuous delivery process to deliver cloud infrastructure and software updates. Since we operate multiple cloud platforms on a variety of hardware configurations managed by our partner providers, we need a continuous integration platform that enables us to test updates in a large number of configurations before we roll them out. The continuous integration server Concourse CI is a perfect match for these requirements thanks to its immutable and flexible pipeline model. This model, which can execute build jobs in arbitrary Docker containers, sets Concourse apart from the more rigid pipelines offered by other continuous integration servers such as Jenkins or TeamCity.
It’s not a coincidence that Concourse was developed by Pivotal Software to meet the demands of the Cloud Foundry PaaS project. In fact, Meshcloud also operates Cloud Foundry PaaS as one of the services on our open cloud federation. The easiest way to deploy Concourse is to use the Docker images provided by the Concourse team. So without further ado, let’s get right to it.
Virtual Network Setup
This tutorial assumes you have an OpenStack project with at least one floating (i.e. public) IP. You should have created a private network called concourse-net in your OpenStack project, configured with DHCP and DNS. This network needs to be attached to a router that has public internet access. You will also need these security groups set up in OpenStack:
- ssh (TCP port 22)
- docker (TCP ports 2375, 2376)
The provisioning of this network structure is beyond the scope of this tutorial, but you can read an excellent introduction on the OpenStack Superuser blog.
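For reference, the network layout and security groups described above could be provisioned roughly like this with the openstack CLI. This is a sketch only: it assumes sourced credentials and the python-openstackclient tool, and the subnet CIDR and external network name (public00) are assumptions you should replace with your own values.

```shell
# Sketch: provision the private network, router and security groups
# described above. The CIDR and external network name are assumptions.
openstack network create concourse-net
openstack subnet create --network concourse-net \
  --subnet-range 192.168.0.0/24 --dhcp concourse-subnet
openstack router create concourse-router
openstack router set --external-gateway public00 concourse-router
openstack router add subnet concourse-router concourse-subnet

# Security groups matching the list above
openstack security group create ssh
openstack security group rule create --protocol tcp --dst-port 22 ssh
openstack security group create docker
openstack security group rule create --protocol tcp --dst-port 2375:2376 docker
```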
Provisioning a Docker Host
The next thing we will need is a docker host that will execute the containers that make up Concourse. We will provision this host from our OpenStack cloud using docker-machine from the command line. Source your OpenStack credentials environment file (typically called openrc.sh) to load your OpenStack credentials:
$ source openrc.sh
After that, we will create a VM in OpenStack using docker-machine.
docker-machine create --driver openstack \
  --openstack-ssh-user ubuntu \
  --openstack-net-name 'concourse-net' \
  --openstack-image-name 'ubuntu-16.04' \
  --openstack-flavor-name gp.large \
  --openstack-floating-ip-pool public00 \
  --openstack-sec-groups default,ssh,docker \
  concourse
Tune these parameters to match your OpenStack environment, e.g. if you don’t want to use Ubuntu or your floating IP pool has a different name (usually the floating IP pool has the same name as the public network in OpenStack that you use to connect to the internet). Provisioning the machine may take a few minutes. Once it completes, you should be able to run docker-machine ls and see your freshly provisioned docker host running:
$ docker-machine ls
NAME        ACTIVE   DRIVER    STATE     URL                  SWARM   DOCKER    ERRORS
concourse   -        generic   Running   tcp://x.x.x.x:2376           v1.12.1
So now that we have a freshly minted docker host, let’s deploy Concourse.
A minimal Concourse deployment consists of at least three components: a PostgreSQL database, the Concourse web UI and one or more Concourse workers that run the actual builds. We will deploy these three components together on the same host using a docker-compose.yml file. Before we can do that, we need to generate RSA keys that the Concourse components use to authenticate with each other.
Create a “concourse-deploy” directory and run the following command to generate those keys:
mkdir -p keys/web keys/worker
ssh-keygen -t rsa -f ./keys/web/tsa_host_key -N ''
ssh-keygen -t rsa -f ./keys/web/session_signing_key -N ''
ssh-keygen -t rsa -f ./keys/worker/worker_key -N ''
cp ./keys/worker/worker_key.pub ./keys/web/authorized_worker_keys
cp ./keys/web/tsa_host_key.pub ./keys/worker
The next step is to create the actual docker-compose file. While the Concourse docs provide a minimal example, that configuration is not production-ready because it misses a few critical things. We have extended it to fix the following issues.
In the minimal example, the Concourse UI is not protected by HTTPS. This matters especially when using HTTP basic authentication, because credentials would otherwise travel the network in plaintext. To fix this, we deploy an nginx reverse proxy in front of the Concourse UI for SSL termination. This proxy uses a free SSL certificate from Let’s Encrypt. We use the Docker Let’s Encrypt Companion to provide the nginx and Let’s Encrypt scaffolding for us.
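As a sketch of what the companion setup does, here is the rough equivalent in plain docker run commands (our compose file in the gist wires up the same containers; image names are those published by the nginx-proxy and companion projects, and volume details are simplified):

```shell
# Sketch, not the full deployment: nginx-proxy terminates SSL, the
# companion container obtains and renews Let's Encrypt certificates.
docker run -d --name nginx-proxy --restart always \
  -p 80:80 -p 443:443 \
  -v /etc/nginx/certs \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  jwilder/nginx-proxy
docker run -d --name letsencrypt-companion --restart always \
  --volumes-from nginx-proxy \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  jrcs/letsencrypt-nginx-proxy-companion
# The proxied container (the Concourse web UI) is then given the
# VIRTUAL_HOST, LETSENCRYPT_HOST and LETSENCRYPT_EMAIL environment
# variables so the proxy picks it up and requests a certificate for it.
```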
Limit Docker Log Size
By default, Docker collects all log output of a container and never rolls these logs over. If you’re not careful, this will quickly fill up your disk with logs. We therefore limit log output per container using:
log_driver: json-file
log_opt:
  # limit log file size to prevent indefinite growth
  max-size: "10m"
Restart containers with host
Since Concourse runs on a virtual machine in the cloud, we have to expect that this VM can be terminated and restarted at any time. Docker therefore needs to restart the containers automatically after the host reboots, so we set restart: always on each container.
You can find our full docker-compose.yml file in this gist. Save this file to the root of your “concourse-deploy” directory. Now target the concourse docker-host created earlier:
eval $(docker-machine env concourse)
We need to fill in a few parameters via environment variables. Make a note of the generated passwords; you will need them later.
export POSTGRES_PASS=XXX                    # insert a random database password
export CONCOURSE_DOMAIN=example.com         # the domain where you'll host Concourse
export LETSENCRYPT_MAIL=email@example.com   # email to verify your Let's Encrypt account
export CONCOURSE_PASS=XXX                   # insert a random password for the Concourse main team
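Rather than inventing the XXX passwords by hand, you can generate random ones, for example with openssl (assuming openssl is available on your workstation):

```shell
# Generate random secrets instead of the XXX placeholders above.
POSTGRES_PASS=$(openssl rand -hex 16)   # 32 hex characters
CONCOURSE_PASS=$(openssl rand -hex 16)
export POSTGRES_PASS CONCOURSE_PASS
# Note these down somewhere safe; you'll need CONCOURSE_PASS to log in.
echo "concourse main team password: $CONCOURSE_PASS"
```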
Now we can finally deploy Concourse:
docker-compose up -d
That’s it! You should now have a working Concourse installation available at your configured domain.
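Once DNS points at the host, you can smoke-test the installation with the fly CLI (downloadable from the Concourse UI). This assumes fly is installed locally; the target name ci is arbitrary, and the username depends on the basic-auth user configured in the compose file (main is an assumption here):

```shell
# Smoke test: log in and list workers. Flags assume a fly version that
# supports non-interactive login; the username "main" is an assumption.
fly -t ci login -c "https://$CONCOURSE_DOMAIN" -u main -p "$CONCOURSE_PASS"
fly -t ci workers   # should list the single worker container
```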