Levvel Blog - CloudBees + Docker


At Levvel, we are always looking to improve how we build, test and deploy. It’s why we love DevOps. At its core, DevOps is a balancing act between cost and effort. Today, we’ll share an approach that is easy and cost-effective.

A Jenkins Groovy Docker Container Pipeline using CloudBees Jenkins


Today’s post is about setting up your own DevOps artifact pipeline for Docker containers running on CloudBees Jenkins Enterprise (CJE). For those who are new to CloudBees Pipeline-as-code, it lets you define your own custom build lifecycle and deployment actions in code. With just a few lines of Groovy, you can be building and pushing your own tested Docker containers to your container registry.

Keeping Things Under Your Control

As a developer, I want a Docker container pipeline for iterating on my Django + nginx + Slack + Sphinx docs stack that deploys using Docker Compose. This repository holds two containers (Django and nginx) and I want to have a build tool that can: build both containers, run some container tests, and then push them to a container registry using source code that I can manage. Eventually, I’ll want to validate that the Docker Compose deployment passes integration tests before pushing to my registry, but that is out of scope for today’s post.

While there are many tools available for building containers, the advantage of CJE is that we have control of where the build environment is running and can scale it with CloudBees Jenkins Platform (CJP). This is a win for organizations looking to keep their builds on-premises or within their own infrastructure. The advantages of using the free Jenkins versus CloudBees Jenkins Enterprise are discussed below.

Getting Started

This section provides a detailed tutorial on setting up a Docker Container Pipeline with CJE. If you’re just interested in the benefits of CJE and how it differs from the community version, skip ahead.

If you are new to Jenkins Pipeline or Groovy, these are some helpful primers:

Building the Docker Container Pipeline with CloudBees Jenkins Enterprise

We will be using the CloudBees Jenkins Enterprise Docker Repository for this post and following the CloudBees Docker Workflow.

  1. Start the CloudBees Jenkins Enterprise Docker container.

  2. Open the Jenkins Credentials page.

    Log in to the CJE instance by browsing to http://localhost:8080/ and register for a free trial. Once logged in, click on Credentials.

    Docker Creds

  3. Setup your Docker Hub (or private registry) credentials.

    Here you can add your Docker credentials to CJE. I am currently housing my containers in Docker Hub (Django image and nginx image), so I added my login credentials and named the entry **jayjohnson-DockerHub**. This will become the logical name associated with my user credentials for build jobs and items to use in the future.

    Set Credentials

  4. Click OK, then click the **Manage Jenkins** option to install Jenkins plugins. Once it is open, click on **Manage Plugins**.

    Manage Jenkins

  5. Update the **CloudBees Docker Pipeline** plugin.

    After the plugin finishes installing, restart Jenkins.

    Add Cloudbees Docker Pipeline

  6. Install the Pipeline Utility Steps plugin from the Available tab and restart Jenkins again.

    Add Pipeline Utility

  7. Create a New Pipeline Item.

    Once Jenkins restarts, click on New Item, enter a name, and select the **Pipeline** type.

    Click the OK button once you are done.

    New Pipeline Item

  8. Paste the Jenkinsfile Contents into the Pipeline Groovy Section.

    Per CloudBees best practices, the Jenkinsfile is stored in the repository. In the future, we can include this Jenkinsfile automatically with the **Pipeline script from SCM** option. For now, just copy the highlighted lines in this link into the **Pipeline Definition** text box.

    Add Groovy Pipeline Script Item

  9. Start the Build.

    Now that our new Groovy Pipeline is ready to build and push the containers, we can kick off the Pipeline. Click **Build Now** to initiate the Pipeline job. This starts the Django and nginx container builds that auto-push to the Docker Hub registry. (Note: The build will fail since I am not distributing my Docker Hub credentials for this demonstration.)

    It may take a few minutes to download and install the containers from scratch.

    Docker Pipeline Build Results

  10. Verify the images and tags were pushed to the registry.

    Open the container registry and verify the **testing** tag was pushed. For this demo, I pushed these containers to Docker Hub.

    Docker Hub Test Images Ready

How the Groovy Pipeline Works

The repository uses this Jenkinsfile in the root of the repository. For now, it uses a single node to run the Groovy code, which means it will only run on one node/slave. Below is a breakdown of what the Groovy code is doing. Each major section is wrapped in a **stage** declaration, which makes it easier to debug when looking at the Stage View screen. In this sample code, each of the green boxes seen in the image from **Step 9** above corresponds to a defined Pipeline **stage**.
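In outline, the staged flow described below looks roughly like the sketch here. This is not the repository’s actual Jenkinsfile; the repo URL, image names, and credentials ID are illustrative assumptions:

```groovy
// Sketch of the single-node, staged Pipeline; identifiers are assumptions.
node {
    stage 'Define the build repo'
    git url: 'https://github.com/jayjohnson/docker-django-nginx-slack-sphinx.git'

    stage 'Build the Django container'
    def django
    docker.withRegistry('https://index.docker.io/v1/', 'jayjohnson-DockerHub') {
        // The second argument lets the Dockerfile live in the django/ subdirectory
        django = docker.build('jayjohnson/django-slack-sphinx:testing', 'django')
    }

    stage 'Test the Django container'
    django.withRun('-p 80:80') { c ->
        // Retry until the internal Django server answers HTTP requests
        waitUntil {
            sh(script: 'curl -s -o /dev/null http://localhost/', returnStatus: true) == 0
        }
        // One of the container tests: count "Welcome" occurrences on the home page
        sh "docker exec ${c.id} curl -s http://localhost/ | grep -o Welcome | wc -l"
    }

    stage 'Push the Django container'
    docker.withRegistry('https://index.docker.io/v1/', 'jayjohnson-DockerHub') {
        django.push('testing')
    }
}
```

The same build-test-push pattern repeats for the nginx container.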

  1. Set up auth credentials

    This section allows you to configure the registry, login, and default build tag.

  2. Define the build repo

    This section defines the GitHub repo to target during the build.

  3. Target the Docker Registry for the Django container

    This section uses the CloudBees Docker Pipeline plugin to target a container registry (Docker Hub) and the ID for my Jenkins Credentials for the registry (jayjohnson-DockerHub). Learn more about Injecting Secrets with Jenkins Builds.

  4. Build the Django Docker container

    Assign the container maintainer and the container name, read in the testing Docker env file, and kick off the build. A nice feature of the CloudBees Docker Pipeline is that it allows a **Dockerfile** to live in a repository subdirectory, which is why the **django** directory is passed as an argument to the docker.build() method. View the django subdirectory, which holds the Django on CentOS 7 Dockerfile.

  5. Start the Django container

    The code below starts the Django server using the testing Docker env file, which testing-docker-compose.yml targets for integration testing with the testing tag. Using the environment file keeps our Groovy code a little cleaner than the original version, and we can run this Django server in **DEV** mode for curl testing later in our container validation steps. Being able to pass environment variables when running containers via the withRun method is great for validating that the container works as expected with Docker Compose.

    Previously, we talked about how much easier Docker development is when using Docker Compose to drive the container’s configuration. In this repository, I am gluing a python Django server together with an nginx server through the use of a shared volume exposed from the host. This is handy for serving static assets (css, javascript, images, etc.) with a proven load balancer like nginx. It also allows for defining environment-specific resources, such as where to post exceptions into Slack, without changing any code or the container. The values are also defined in the docker-compose.yml and testing-docker-compose.yml files.

  6. Wait for the Django container to start

    Once the container starts running, we need to wait for it to initialize the Django server process. This can take a few seconds, which is why the waitUntil method is very helpful. This code block will continue retrying itself using an exponential backoff retry timer until it returns true. By using this block, we can ensure that the container is running and that the internal Django server is ready to respond to HTTP requests.

  7. Begin Django container tests

    For simplicity, I limited this example to three container tests. By default, each test expects a return value of 0, but each test can override this expected value.

  8. Test the Django container shows the home page

    From outside the container, run a docker exec to issue a curl command. This command will return the contents of the home page, then count the number of “Welcome” occurrences and trim any newline characters out before writing the cropped output to a temporary file.

  9. Test the Django container is configured to listen on Port 80

    This is a redundant test of the curl command above, but I wanted to show how to utilize docker inspect from a Groovy script. This is helpful when you need to verify that Docker Compose deployed the composition correctly before promoting the container to production. Like the test above, this counts the occurrences of port 80 being open externally and trims the results to a temporary file.

  10. Test Django does not have an ESTABLISHED connection on Port 80

    This is another demonstration of how to log into the container and verify the internal Django process is running correctly. Since nginx is not being deployed at this time and there are no incoming connections with this test, Django should be in a LISTEN state without any ESTABLISHED connections on port 80. In the future, an integration test could verify the deployed composition successfully established connectivity from nginx to the Django server during normal operation. As above, this counts occurrences of ESTABLISHED connections and writes the trimmed output to a temporary file.

  11. Exit and log an error for any unsupported tests

    After a bit of debugging, I added this code so the Jenkins Pipeline stops running tests immediately when it hits an unsupported test. The code will auto-exit with an error message if you increase MAX_TESTS to something more than 3.

  12. Check that the test results match the expected results

    Each test will run this block and fail testing if the test results do not match the expected results. Make sure to remove the temporary file from the host afterwards to prevent test results from overlapping by accident.

  13. Stop testing for an exception

    This code allows the Jenkins slave/executor to stop running tests if there is an exception (which is helpful when debugging complex Pipeline tasks).

  14. Push the Django container to Docker Hub

    If testing passes, this code will push the container image to the registry (Docker Hub) under the **testing** tag.

  15. Test and push the nginx container to Docker Hub

    This section builds the nginx container from the nginx subdirectory and then pushes the built image to the registry (Docker Hub) under the **testing** tag. It is a lightweight example of a simple docker container pipeline.
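The count-and-trim pattern that the tests above share can be reproduced in plain shell. The HTML string here is a hypothetical stand-in for the real page contents, which the Pipeline fetches with docker exec and curl:

```shell
# Hypothetical stand-in for the page served by the Django container
html='<html><body>Welcome to the demo. Welcome back!</body></html>'

# Count the "Welcome" occurrences, trim spaces and newlines, and write
# the cropped output to a temporary file, as the container tests do.
echo "$html" | grep -o "Welcome" | wc -l | tr -d ' \n' > /tmp/django_test_result

cat /tmp/django_test_result   # prints 2

# Clean up the temporary file so later runs do not overlap by accident
rm /tmp/django_test_result
```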

Enhancing the Pipeline

I want to parallelize this build to run faster. To do this, I will perform the Django build step and the nginx build step using the native parallel support to put the build tasks onto two slaves or multiple executors. This speeds up the builds, but it also increases the build’s complexity for the upcoming Docker Compose integration tests I want to run. The build currently uses the slave host’s docker engine, which, if parallelized, could end up running on two separate hosts in an environment with more than one Jenkins slave/executor.
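A sketch of that parallelization using the native parallel step is below. The repo URL, image names, and credentials ID are illustrative assumptions, not the repository’s actual code:

```groovy
// Hedged sketch: each branch gets its own node block so the builds can
// land on separate slaves/executors; identifiers are assumptions.
def buildAndPush = { String image, String subdir ->
    node {
        // Each branch checks out its own workspace, since it may run on
        // a different host with its own docker engine.
        git url: 'https://github.com/jayjohnson/docker-django-nginx-slack-sphinx.git'
        docker.withRegistry('https://index.docker.io/v1/', 'jayjohnson-DockerHub') {
            docker.build(image, subdir).push('testing')
        }
    }
}

stage 'Parallel Container Builds'
parallel(
    django: { buildAndPush('jayjohnson/django-slack-sphinx:testing', 'django') },
    nginx:  { buildAndPush('jayjohnson/nginx-slack-sphinx:testing', 'nginx') },
    failFast: true
)
```

Note the complexity this introduces: with more than one slave, the two images may now live in two different local docker engines, which matters for the Docker Compose integration tests.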

Taking It to the Next Levvel

This fun little demo demonstrates the power of Jenkins Pipeline. However, while it works in the simple scenario of one developer working off one branch, it breaks down when used across a team running multiple concurrent builds in a git flow model with multiple Pull Requests. There are numerous potential bottlenecks around building and running containers on one Jenkins host. When I want to scale this to multiple slaves, I have to first look at where my containers are housed before trying to deploy with Docker Compose to run my integration tests. Making Docker container builds work seamlessly across large organizations is a much larger task, and some organizations will need help preventing the bottlenecks that cause teams to lose time waiting on builds and deployments.

A solution that scales

Levvel has partnered with CloudBees to help our clients adopt DevOps practices that save them time and money. CloudBees is a great option to consider when looking to prevent Jenkins development and deployment bottlenecks.

Why should I pay for something that is free?

A fair question, so let’s look at the DevOps landscape and how we got here.

Jenkins has massive adoption spanning multiple verticals and solves numerous DevOps use cases. Before Pipelines, Jenkins was already a successful continuous integration tool and a friend to many, many developers. With Pipelines, Jenkins is positioned to tackle continuous deployments at scale. When set up correctly, Pipelines are a powerful tool that will bring even more users to Jenkins. Being able to handle artifact deployments and post-build actions empowers organizations with the tools to get features in front of customers faster.

Inevitably, it is this simple and powerful combination that can lead to issues. There are inherent complexities in running scaled-out, multiple-Jenkins-master environments. To prevent these kinds of issues and overhead, CloudBees launched the CloudBees Jenkins Platform (CJP), which offers two variants: Enterprise Edition and Private SaaS Edition. By just starting with CJP, your organization gets:

All this, plus access to the best-in-the-Jenkins-business gurus for supporting your organization’s continuous integration and continuous deployment capabilities with proven tools such as the Jenkins Pipeline.

Where CloudBees provides the enterprise with a comprehensive DevOps management platform, Levvel helps organizations:

  • Migrate to, set up, and install CloudBees Jenkins Platform (Enterprise Edition or Private SaaS);
  • Deploy and scale large multiple Jenkins master environments that are managed with CloudBees Jenkins Operation Center;
  • Customize their Kibana Analytics dashboard;
  • Lock down their Jenkins environments.

What Comes Next

I hope to hear your thoughts about this DevOps artifact pipeline for docker containers running on CloudBees Jenkins Enterprise. I have been looking for a simple way to control where my Docker builds were running in-house. The code within this blog post is already building my containers, and I hope it helps you get started with your Jenkins Pipeline.

Going forward, I plan to add Docker Compose integration tests to validate that my deployed composition is ready for primetime. Eventually, I could see the CloudBees Docker Pipeline supporting this natively out of the box. Docker Compose is already an established part of the Docker ecosystem and is super helpful when organizing and deploying complex topologies. It just makes good sense to verify that the Compose deployment works before opening it up to production traffic.

If your organization would like assistance building your DevOps artifact pipeline using Jenkins, adopting CJP and CJE, Docker development or production strategy, please reach out to us.

Extra Reading

How to stop, clean up, and troubleshoot your environment.

Stopping the CloudBees Jenkins Enterprise Container

Stop the container with:
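Assuming the CJE container was started with a name such as `cje` (a hypothetical name; substitute the name or ID shown by `docker ps` on your host):

```shell
docker stop cje
```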

Remove the container (which will delete your Pipeline) with:
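Again assuming the hypothetical container name `cje`:

```shell
# Removing the container deletes its state, including the Pipeline item
# created above.
docker rm cje
```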


From the command line, I had to set the permissions for the Jenkins container to access the docker socket:
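A quick (and permissive) way to do that is to open up the socket’s mode; adding the container’s user to the host’s docker group is a tighter alternative:

```shell
# Permissive: lets any local user talk to the docker engine
sudo chmod 666 /var/run/docker.sock
```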

If you want to build the CloudBees Jenkins Enterprise Docker container

Modifying the docker engine to make sure it uses the correct services


I run a local docker swarm, so I had removed the docker socket at /var/run/docker.sock from the systemd service file and had to re-enable it:
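The re-enabled entry looked something like the fragment below. The service file path and the tcp endpoint are assumptions for illustration; keep whatever `-H tcp://` entries your swarm setup already uses:

```
# /usr/lib/systemd/system/docker.service (path varies by distribution)
[Service]
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375
```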

Then restart the docker engine with the following start command:
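With systemd, that amounts to reloading the unit files and restarting the service:

```shell
sudo systemctl daemon-reload
sudo systemctl restart docker
```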

Looking at previous JenkinsCI work for help with Groovy, I found this example helpful for getting the Pipeline working: https://github.com/jenkinsci/docker-workflow-plugin/blob/master/demo/repo/flow.groovy

Jay Johnson

Principal Consultant

Jay is an IT professional with 10+ years of experience in architecture, design, and implementation of large distributed, real-time systems across a variety of environments—focused on executing aggressive timelines by leveraging his expertise in technology, process, and best practices.
