Here at Brightbox we use Jenkins to manage most of our continuous integration and deployment jobs. It lets us do all the usual automatic builds, merges and deployments whenever we push changes to our git repositories.
Lately we’ve been using Docker to construct some of our test environments, which lets us easily run our test suites under various configurations. We’ve just recently started using it to automate the testing of our famous Ubuntu Ruby packages to make sure they install correctly on a vanilla installation of every version of Ubuntu.
So I thought I’d write a bit about how we’re doing it.
We already have a Jenkins manager server set up, running Ubuntu. It’s a standard install from the Jenkins Ubuntu package repository and doesn’t have Docker installed, as the build jobs all run on the build nodes.
So your Jenkins nodes need to be running Docker and what simpler way to get Docker up and running than with CoreOS?
And what simpler way to get Jenkins up and running on CoreOS than with our userdata service?
We have an SSH key pair that the Jenkins manager server uses to authenticate with its build nodes, which we have registered as a Jenkins credential (using the username jenkins).
We then build new CoreOS servers (using our official CoreOS images) with some userdata to set up the jenkins user with our public SSH key on first boot:
#cloud-config
users:
  - name: jenkins
    ssh-authorized-keys:
      - ssh-rsa ...... jenkins-node
    groups:
      - docker
We build these servers in a server group that has a firewall policy applied to it that automatically opens SSH access to the Jenkins manager node.
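With our CLI, spinning up such a node looks something like this (a sketch: the image and group identifiers are placeholders, and the flag names are worth double-checking against brightbox servers create --help):

$ brightbox servers create --name jenkins-node-1 --server-groups grp-xxxxx --user-data-file userdata.yml img-xxxxx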
Then, in the Jenkins interface we just add the new server as a node. Jenkins needs Java on its build nodes and CoreOS doesn’t have it installed by default, so we use Jenkins’ Prefix Start Slave Command setting to install Java on the first connection. Note the trailing semicolon, that’s important: Jenkins appends the actual slave launch command after this prefix, and the semicolon keeps the install step separate from it:
( test -d /home/jenkins/java || (mkdir /home/jenkins/java && curl -s -L https://javadl.oracle.com/webapps/download/AutoDL?BundleId=83376 | tar -C /home/jenkins/java --strip=1 -zx) && ssh-keyscan -H github.com > /home/jenkins/.ssh/known_hosts ) ;
And two other node settings to tell Jenkins where everything is:
RemoteRootDirectory: /home/jenkins
JavaPath: /home/jenkins/java/bin/java
We also label these nodes as docker, so we can make sure jobs that need Docker get run somewhere that Docker is actually available.
Then these nodes are ready to receive jobs!
Some of our projects have Dockerfiles in the git repository so we can just use them in the build step like this:
$ docker build -f Dockerfile.trusty.ruby22 -t appname:trusty-ruby22-build${BUILD_NUMBER} .
$ docker run --rm appname:trusty-ruby22-build${BUILD_NUMBER}
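For reference, such a Dockerfile might look roughly like this; a hypothetical sketch rather than a verbatim copy of ours, pulling Ruby from our ruby-ng PPA:

FROM ubuntu:trusty
# Add the Brightbox Ruby PPA and install Ruby 2.2
RUN apt-get update && apt-get install -y software-properties-common
RUN apt-add-repository -y ppa:brightbox/ruby-ng && apt-get update
RUN apt-get install -y ruby2.2 ruby2.2-dev build-essential
# Copy the app in and install its gems
WORKDIR /app
COPY . /app
RUN gem install bundler && bundle install
# Running the image runs the test suite
CMD ["bundle", "exec", "rake"]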
We use the Jenkins “Matrix Project Plugin” to handle building multiple combinations of configurations (say, Ruby 2.2 and 2.1 on Ubuntu Precise and Trusty), so we use those additional variables too.
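With matrix axes named, say, ubuntu_version and ruby_version (the names are whatever you configure), the build step above generalises to:

$ docker build -f Dockerfile.${ubuntu_version}.${ruby_version} -t appname:${ubuntu_version}-${ruby_version}-build${BUILD_NUMBER} .
$ docker run --rm appname:${ubuntu_version}-${ruby_version}-build${BUILD_NUMBER}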
In a couple of cases (such as testing package installs) we have a more ad-hoc approach, where we just pipe a shell script to run in a standard upstream image:
$ docker run -i ubuntu:trusty sh < install-tests.sh
Here we’re just using Docker as an elaborate chroot, but that’s fine.
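Such a script can be as simple as adding the PPA, installing the package on the vanilla image and smoke-testing it; a sketch of the idea rather than our exact tests:

#!/bin/sh
# Abort the job if any step fails
set -e
apt-get update -qq
apt-get install -y software-properties-common
apt-add-repository -y ppa:brightbox/ruby-ng
apt-get update -qq
# Install the package under test on the pristine image
apt-get install -y ruby2.2
# Smoke test: the interpreter should run and report its version
ruby2.2 --version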
And if we need access to the Jenkins SSH agent within the container for some reason:
$ docker build --rm -t appname:build${BUILD_NUMBER} .
$ docker run --rm -v $SSH_AUTH_SOCK:/ssh-agent -e SSH_AUTH_SOCK=/ssh-agent appname:build${BUILD_NUMBER}
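That’s handy if, say, the test suite needs to pull gems or code from a private git repository over SSH.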
Installing Java directly onto the CoreOS server is a bit hacky really. An alternative approach would be to run Jenkins in a container on the node and give it access to the docker daemon on the host.
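The usual trick there is to mount the host’s Docker socket into the slave container, so docker commands inside it talk to the host daemon. Roughly, with jenkins-slave-image standing in for an image that bundles Java and an SSH server:

$ docker run -d --name jenkins-node -v /var/run/docker.sock:/var/run/docker.sock -p 2222:22 jenkins-slave-image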
You could of course just do all this with an Ubuntu cloud server instead and use the standard Ubuntu Java packages (which cloud-init could even install on boot). You’d just have to take the extra steps of installing Docker, configuring automatic security updates and so on.
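On trusty, that userdata would be only slightly longer, something like:

#cloud-config
packages:
  - openjdk-7-jre-headless
  - docker.io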
We could also look at having the nodes register themselves with the Jenkins manager, or even have Jenkins directly create the build nodes using our API. More to come on this later.