Reinventing CI wheels in Docker

Last Thursday I joined the first Docker Amsterdam meetup of 2015. It was not a regular "docker hurray" meeting like those of 2014: the general topic of the talks was security, where the build process of Docker images leaves quite some concerns. During these talks and afterwards I could not get rid of a déjà vu feeling, and now I know where it comes from.

The initial and still dominant use of Docker is in Continuous Delivery and DevOps practices. As none of the meetup participants were 8 or younger, they came to DevOps either from the Dev or from the Ops side. And for those who, like me, came from the Dev side, "infrastructure as code" can resolve the majority of DevOps challenges.

An example from the Jenkins Dockerfile:

RUN curl -L http://mirrors.jenkins-ci.org/war/1.596/jenkins.war -o /usr/share/jenkins/jenkins.war

During the build of the Docker image it simply downloads the WAR file. It uses plain HTTP, which exposes it to man-in-the-middle attacks, and there is no validation that the download succeeded. A similar example is my own Dockerfile for the TeamCity server:

# Download and install TeamCity to /opt
RUN wget -qO- http://download.jetbrains.com/teamcity/TeamCity-9.0.tar.gz | tar xz -C /opt

The problem can be even bigger when the downloaded archive also remains in a layer of the image, inflating its size.
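A hardened version of such a download step would pin a checksum and fail the build on mismatch. Below is a minimal sketch of the verification logic alone, using a local stand-in file instead of a real network download; in a real Dockerfile the expected checksum would be hard-coded rather than computed on the spot:

```shell
#!/bin/sh
set -eu  # abort the build on any failed command or unset variable

# Stand-in for the downloaded artifact; in a Dockerfile this would be the
# output of something like `curl -fsSL https://... -o jenkins.war`.
printf 'stand-in war contents' > /tmp/jenkins.war

# In a real build the expected checksum is pinned in the Dockerfile; here
# it is computed from the stand-in file just to demonstrate the check.
EXPECTED=$(sha256sum /tmp/jenkins.war | cut -d' ' -f1)

# Verification: sha256sum -c exits non-zero on mismatch, which (with set -e)
# fails the build. Prints "/tmp/jenkins.war: OK" on success.
echo "${EXPECTED}  /tmp/jenkins.war" | sha256sum -c -
```

The same pattern works for the TeamCity archive: verify the tarball first, then extract and delete it in the same RUN step so no layer keeps a copy.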

Yes, of course, the authors of these images could do a better job by using HTTPS and by validating signatures of the downloaded files. However, this approach is like improving Ant scripts instead of using a mature build system such as Maven or Gradle.

Haven't these systems already solved the same problems? Instead of copying JAR files with dependencies from a network drive, they allow us to simply declare our dependencies. They figure out where to find the JAR files, make them available to the build, and verify the checksums of the downloaded dependencies.
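At the file level, what such a resolver does can be sketched like this: the repository serves a checksum file next to every artifact, and the tool compares the two after download. A minimal sketch with local stand-in files (Maven Central actually publishes `.sha1` sidecars; sha256 is used here purely for illustration):

```shell
#!/bin/sh
set -eu

# Stand-ins for an artifact and its checksum sidecar, as a repository
# such as Maven Central would serve them.
printf 'jar bytes' > /tmp/commons-lang3-3.3.2.jar
sha256sum /tmp/commons-lang3-3.3.2.jar | cut -d' ' -f1 \
  > /tmp/commons-lang3-3.3.2.jar.sha256

# The resolver's verification step: recompute the checksum of the
# downloaded artifact and compare it with the published one.
ACTUAL=$(sha256sum /tmp/commons-lang3-3.3.2.jar | cut -d' ' -f1)
test "$ACTUAL" = "$(cat /tmp/commons-lang3-3.3.2.jar.sha256)"
echo "dependency checksum verified"
```

The point is that the user only declares the dependency; the fetching and the verification are the build tool's job, not something every build script reimplements by hand.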

Instead of naively expecting authors of Dockerfiles to be more careful, why don't we change the docker build to do these necessary validations? Instead of using wget or curl, the ADD command could download the files and verify their checksums. Or maybe even better: start the Dockerfile with a declaration of dependencies on artefacts to be found in Maven or Gradle repositories, so that the build can choose the protocol to make these dependencies available for the build.
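Such a checksum-aware ADD might hypothetically look like the fragment below. This is illustrative syntax only, not something docker build accepts, and the checksum value is a placeholder:

```dockerfile
# Hypothetical: docker build downloads the file, verifies it against the
# declared checksum, and fails the build on mismatch.
ADD --checksum=sha256:<expected-checksum> \
    https://mirrors.jenkins-ci.org/war/1.596/jenkins.war \
    /usr/share/jenkins/jenkins.war
```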

DevOps is not new; it comes from Dev and Ops. And therefore it can certainly benefit from not forgetting how things are done in both Dev and Ops.
