Integrating Docker in CI/CD Processes
Part 1 – Building multi-module Java applications
Several years ago, at Ensolvers we decided to use containers (concretely, Docker) for all our infrastructure tasks. In particular, several of our projects currently run on Amazon Elastic Container Service (ECS), the service that manages Docker clusters and orchestrates them with other infrastructure elements like Load Balancers, Security Groups, etc. When AWS started supporting Docker containers, EC2 machines were required to run them; thanks to Fargate, containers can now be run “as is”, with resources provisioned by AWS itself and no need to manually provide an EC2 host.
Because adopting Docker across the whole company implied a big shift in our infrastructure, we decided to design our images and processes in a generic way so we could reuse them across several projects. In this first Tech Note, we discuss how we use a custom Docker image and scripting to reuse the same process for building any Java module in an application.
Problem: Reusing images for deploying and running production apps
Since Docker images can be quickly customized and built for every module that has to run in a production environment, the common approach is to create a dedicated image per module. In general, this implies having a custom Dockerfile for each one and a script that
- Builds the module and generates the binary – in this case, a Java web app
- Runs the Docker build process using the Dockerfile spec, which embeds the binary into a new version of the image
- Pushes the image into a remote repo
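The three steps above can be sketched as a small script. Module name, registry URL and paths below are hypothetical; with DRY_RUN=1 the commands are printed instead of executed, so the flow can be inspected without Maven or a Docker daemon.

```shell
# Print each command under DRY_RUN=1, execute it otherwise
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$*"; else "$@"; fi; }

build_and_push_image() {
  local module=$1 image_repo=$2 version=$3
  # 1. Build the module and generate the binary (a Java web app)
  run mvn -pl "$module" -am clean package
  # 2. Build the image from the module's own Dockerfile, embedding the binary
  run docker build -t "$image_repo:$version" -f "$module/Dockerfile" "$module"
  # 3. Push the image to the remote repo
  run docker push "$image_repo:$version"
}
```

Note that this has to be replicated (script plus Dockerfile) for every module, which is exactly the duplication discussed next.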
While apparently simple, this approach requires a Dockerfile for every module we want to build, plus separate scripts to build the images. Thus, we decided to go with a more generic approach:
- Build the module by running a generic build script with the parameters it requires
- Copy the binary to a binaries repository
- Run the binary using a generic, optimized Docker image
This approach allowed us to reuse a lot of scripting, reduced the complexity of managing a big set of images, and also saved building and uploading time. In this Tech Note we describe Step 1; Steps 2 and 3 will be covered in a further article.
The buildspec.yaml file
For automating the building processes using Docker, we relied on AWS CodeBuild. CodeBuild basically runs a set of commands, written in a YAML file, in a Docker container. The most common building scenario involves cloning a repository, running a build command and copying the resulting artifact to be deployed. CodeBuild provides a set of ready-to-use images with building and bundling tools, but we decided to use a custom one with the specific tools we need – including the JDK, Maven, Node.js, NPM, Python, etc. CodeBuild also allows a set of environment variables to be passed to the scripts, which is crucial for using the same building process for two or more modules.
In CodeBuild, the commands that form the building process are described in a file called buildspec.yaml. Instead of defining all the steps there, we decided to create a custom build.sh script that takes a list of parameters and triggers the build for a specific module. This way, configuring several building processes implies triggering that script with the correct combination of parameters – literally, one line of code. Our script handles the logic for building a Java app and storing the binaries – plus some other aspects that are outside the scope of this Tech Note. The script is versioned in Git with the rest of the code, so if there is any important change in the build logic, the script can be updated in the same commit.
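As an illustration, a buildspec.yaml following this approach can be reduced to a sketch like the following, where MODULE_NAME, S3_BINARIES_BUCKET and VERSION are hypothetical environment variables configured per CodeBuild project:

```yaml
version: 0.2

phases:
  build:
    commands:
      # The variable names here are illustrative; each CodeBuild project
      # defines its own values, so the same script builds any module
      - ./build.sh "$MODULE_NAME" "$S3_BINARIES_BUCKET" "$VERSION"
```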
If the build requires a deployment-specific flow (for example, some frontend modules need specific commands for bundling React / Angular apps), it can be tackled in a so-called “Auxiliary Build”. This build can be as complex as required and can be implemented as a full-fledged script called before running the “main” build.
The example below shows how the building script can be called through a pipeline and also lists the contents of the script itself. In this case, we are using AWS S3 for storing the binaries.
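A minimal sketch of what such a build.sh could look like follows, assuming Maven and the AWS CLI; the parameter order, bucket layout, artifact names and the aux-build.sh hook are all hypothetical. As above, DRY_RUN=1 prints the commands instead of executing them.

```shell
# Print each command under DRY_RUN=1, execute it otherwise
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$*"; else "$@"; fi; }

build_module() {
  local module=$1 bucket=$2 version=$3

  # Optional "Auxiliary Build" hook for module-specific steps
  # (e.g. bundling a React / Angular frontend) before the main build
  if [ -x "$module/aux-build.sh" ]; then
    run "$module/aux-build.sh"
  fi

  # Build only the requested module plus the local modules it depends on
  run mvn -pl "$module" -am clean package

  # Store the binary in the binaries repository (S3 in this case)
  run aws s3 cp "$module/target/$module.jar" \
      "s3://$bucket/$module/$version/$module.jar"
}

# From the pipeline, triggering a build is literally one line, e.g.:
#   build_module backend-api my-binaries-bucket "$CODEBUILD_RESOLVED_SOURCE_VERSION"
```

Keeping the S3 key parameterized by module and version makes every build artifact addressable later by the deployment process.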
Allowing repos other than GitHub
One of the most common issues when using CodeBuild is that it only supports GitHub for automatic repo cloning. However, just importing an SSH key, registering the host, and running git clone properly (as listed below) does the trick.
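The extra steps can be sketched as follows; the key path, host and repo URL are hypothetical, and DRY_RUN=1 prints the commands instead of executing them so the flow can be inspected without network access.

```shell
# Print each command under DRY_RUN=1, execute it otherwise
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$*"; else "$@"; fi; }

clone_private_repo() {
  local key_file=$1 host=$2 repo=$3
  run mkdir -p ~/.ssh
  run cp "$key_file" ~/.ssh/id_rsa       # import the SSH key
  run chmod 600 ~/.ssh/id_rsa            # ssh refuses world-readable keys
  # Register the host key so the clone is non-interactive
  run sh -c "ssh-keyscan -H $host >> ~/.ssh/known_hosts"
  run git clone "git@$host:$repo.git"    # clone over SSH
}

# Example: clone_private_repo ./deploy_key bitbucket.org myorg/myrepo
```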
In this first part, we saw how a generic script for building several Maven modules can be written and called from CodeBuild or any other CI/CD pipeline that allows parameterization. In the second part of this Tech Note series, we will describe how similar principles can be used to build a generic Docker image that can be reused across several projects for provisioning production environments.