July 26, 2016
Today’s post is about setting up your own DevOps artifact pipeline for docker containers running on CloudBees Jenkins Enterprise (CJE). For those who are new to the CloudBees Pipeline-as-code, it is software for defining your own custom build lifecycle and deployment actions. With just a few lines of Groovy code, you can be building and pushing your own tested Docker containers to your container registry.
As a developer, I want a Docker container pipeline for iterating on my Django + nginx + Slack + Sphinx docs stack that deploys using Docker Compose. This repository holds two containers (Django and nginx) and I want to have a build tool that can: build both containers, run some container tests, and then push them to a container registry using source code that I can manage. Eventually, I’ll want to validate that the Docker Compose deployment passes integration tests before pushing to my registry, but that is out of scope for today’s post.
While there are many tools available for building containers, the advantage of CJE is that we have control of where the build environment is running and can scale it with CloudBees Jenkins Platform (CJP). This is a win for organizations looking to keep their builds on-premises or within their own infrastructure. The advantages of using Jenkins free versus CloudBees Jenkins Enterprise are discussed below.
This section provides a detailed tutorial for how to setup a Docker Container Pipeline with CJE. If you’re just interested in the benefits of CJE and how it differs from the community version, skip ahead.
If you are new to Jenkins Pipeline or Groovy, these are some helpful primers:
Start the CloudBees Jenkins Enterprise Docker container.
Open the Jenkins Credentials page.
Login to the CJE instance by browsing to: http://localhost:8080/ and then register for a free trial. Once logged in, click on Credentials.
Setup your Docker Hub (or private registry) credentials.
Here you can add your Docker credentials to CJE. I am currently housing my containers in Docker Hub (the Django image and the nginx image), so I added my login credentials and named the entry **jayjohnson-DockerHub**. This will become the logical name associated with my user credentials for build jobs and items to use in the future.
Click OK, then click the **Manage Jenkins** option to install Jenkins plugins. Once it is open, click on **Manage Plugins**.
Update the **CloudBees Docker Pipeline** plugin.
Update the CloudBees Docker Pipeline plugin; restart Jenkins after it finishes installing.
From the Available tab, add the Pipeline Utility Steps plugin and restart Jenkins again.
Create a New Pipeline Item.
Once Jenkins restarts, click on New Item, enter a name, and select the **Pipeline** type.
Click the OK button once you are done.
Paste the Jenkinsfile Contents into the Pipeline Groovy Section.
Per CloudBees best practices, the Jenkinsfile is stored in the repository. In the future, we can include this Jenkinsfile automatically with the **Pipeline script from SCM** option. For now, just copy the highlighted lines in this link into the **Pipeline Definition** text box.
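As a rough sketch of what goes into that text box (the stage names, repository URL, and image name below are illustrative, not the repository's actual Jenkinsfile), a scripted Pipeline with staged Docker builds looks like this:

```groovy
// Minimal scripted Pipeline sketch - repo URL and image name are illustrative
node {
    stage 'Checkout'
    git url: 'https://github.com/your-org/your-repo.git'

    stage 'Build Django Container'
    // second argument points at the subdirectory holding the Dockerfile
    def djangoImage = docker.build('your-org/django-app:testing', 'django')

    stage 'Test Container'
    // container validation tests run here

    stage 'Push to Registry'
    docker.withRegistry('https://index.docker.io/v1/', 'jayjohnson-DockerHub') {
        djangoImage.push('testing')
    }
}
```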
Start the Build.
Now that our new Groovy Pipeline is ready to build and push the containers, we can kick off the Pipeline. Click **Build Now** to initiate the Pipeline job. This starts the Django and nginx container builds that auto-push to the Docker Hub registry. (Note: The build will fail since I am not distributing my Docker Hub credentials for this demonstration.)
It may take a few minutes to download and install the containers from scratch.
Verify the images and tags were pushed to the registry.
Open the container registry and verify the **testing** tag was pushed. For this demo, I pushed these containers to Docker Hub.
The Jenkinsfile lives in the root of the repository. For now, it uses a single node to run the Groovy code, meaning the build runs on only one node/slave. Below is a breakdown of what the Groovy code is doing. Each major section is wrapped in a **stage** declaration, which makes debugging easier when looking at the Stage View screen. In this sample code, each of the green boxes seen in the image from **Step 9** above corresponds to a defined Pipeline **stage** declaration.
This section allows you to configure the registry, login, and default build tag.
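A sketch of what that configuration might look like (the registry URL, credentials ID, and tag below are assumptions based on the Docker Hub setup in this post):

```groovy
// Illustrative configuration values - adjust for your registry and credentials
def registry_url    = 'https://index.docker.io/v1/'  // Docker Hub
def docker_creds_id = 'jayjohnson-DockerHub'         // Jenkins Credentials ID from Step 3
def build_tag       = 'testing'                      // default tag for built images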
This section defines the GitHub repo to target during the build.
This section uses the CloudBees Docker Pipeline plugin to target a container registry (Docker Hub) and the ID for my Jenkins Credentials for the registry (jayjohnson-DockerHub). Learn more about Injecting Secrets with Jenkins Builds.
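A minimal sketch of that wrapper, assuming the Docker Hub registry URL and the credentials ID created earlier:

```groovy
// All docker operations inside this closure authenticate against the registry
docker.withRegistry('https://index.docker.io/v1/', 'jayjohnson-DockerHub') {
    // build, tag, and push steps go here
}
```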
Assign the container maintainer and the container name, read in the testing Docker env file, and kick off the build. A nice feature of the CloudBees Docker Pipeline plugin is that it allows a **Dockerfile** to live in a repository subdirectory, which is why the **django** directory is passed as an argument to the docker.build() method. View the Django subdirectory, which holds the Django on CentOS 7 Dockerfile.
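That call might be sketched as follows (the maintainer and image names are illustrative, not the repository's exact values):

```groovy
// The second argument tells docker.build to use the Dockerfile
// found in the django/ subdirectory of the repository
def maintainer = 'jayjohnson'                         // illustrative
def container  = "${maintainer}/django-slack-sphinx"  // illustrative image name
def django_image = docker.build("${container}:testing", 'django')
```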
The code below starts the Django server by using the testing Docker env file, which testing-docker-compose.yml targets for integration testing using the testing tag. Using the environment file keeps our Groovy code a little cleaner than the original version, and we can run this Django server in **DEV** mode for curl testing later in our container validation steps. Being able to specify environment variables to run containers via the withRun method is great for validating that the container works as expected with Docker Compose.
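A hedged sketch of that withRun call (the image name and env-file path are assumptions for this example):

```groovy
// Start the built image with the testing env file so Django runs in DEV mode;
// validation commands run inside the closure while the container is live
def img = docker.image('jayjohnson/django-slack-sphinx:testing')
img.withRun('--env-file ./django/env/testing.env') { c ->
    // curl, docker inspect, and netstat tests go here
}
```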
Once the container starts running, we need to wait for it to initialize the Django server process. This can take a few seconds, which is why the waitUntil method is very helpful. This code block keeps retrying with an exponential backoff timer until it returns true. By using this block, we can ensure that the container is running and that the internal Django server is ready to respond to HTTP requests.
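One way to sketch that polling loop (the container name "django-test" and port are hypothetical; the file-based pattern keeps a failed curl from aborting the build mid-retry):

```groovy
waitUntil {
    // write "up" only once the in-container Django server answers on port 80
    sh 'docker exec django-test curl -s http://localhost:80/ -o /dev/null ' +
       '&& echo up > .status || echo down > .status'
    return readFile('.status').trim() == 'up'
}
```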
For simplicity, I limited this example to three container tests. By default, each test expects a return value of 0, but each test can override this expected value.
From outside the container, run a docker exec to issue a curl command. This command returns the contents of the home page, counts the number of “Welcome” occurrences, trims any newline characters, and writes the cropped output to a temporary file.
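That test might be sketched like this (the container name and temp-file path are assumptions):

```groovy
// Pull the home page, count "Welcome" occurrences, strip newlines,
// and write the cropped result to a temporary file for the assert step
sh 'docker exec django-test curl -s http://localhost:80/ ' +
   '| grep -o Welcome | wc -l | tr -d "\\n" > /tmp/curl_test_results'
def found = readFile('/tmp/curl_test_results').trim()
echo "curl test found ${found} Welcome occurrence(s)"
```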
This is a redundant test of the curl command above, but I wanted to show how to utilize docker inspect from a Groovy script. This is helpful when you need to verify that Docker Compose deployed the composition correctly before promoting the container to production. Like the test above, this counts the occurrences of port 80 being open externally and trims the results to a temporary file.
This is another demonstration of how to log into the container and verify the internal Django process is running correctly. Since nginx is not being deployed at this time and there are no incoming connections with this test, Django should be in a LISTEN state without any ESTABLISHED connections on port 80. In the future, an integration test could verify the deployed composition successfully established connectivity from nginx to the Django server during normal operation. As above, this counts occurrences of ESTABLISHED connections and writes the trimmed output to a temporary file.
After a bit of debugging, this code allows the Jenkins Pipeline to stop running tests immediately when a test fails. The code will also auto-exit with an error message if you increase MAX_TESTS beyond the three defined tests.
Each test runs this block and fails testing if the test results do not match the expected results. Make sure to remove the temporary file from the host afterwards to prevent test results from overlapping by accident.
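Each per-test assert might look roughly like this (the variable names, expected value, and file path are illustrative):

```groovy
// Fail the build when the actual result differs from the expected value,
// then remove the temp file so a later test cannot read a stale result
def expected_results = '1'   // illustrative expected value
def test_results = readFile('/tmp/test_results').trim()
if (test_results != expected_results) {
    error "Test failed: expected '${expected_results}' but found '${test_results}'"
}
sh 'rm -f /tmp/test_results'
```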
This code allows the Jenkins slave/executor to stop running tests if there is an exception (which is helpful when debugging complex Pipeline tasks).
If testing passes, this code will push the container image to the registry (Docker Hub) under the ** testing ** tag.
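The push step can be sketched as follows (registry URL, credentials ID, and image name are assumptions):

```groovy
// Push the validated image to Docker Hub under the testing tag
docker.withRegistry('https://index.docker.io/v1/', 'jayjohnson-DockerHub') {
    docker.image('jayjohnson/django-slack-sphinx:testing').push('testing')
}
```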
This section builds the nginx container from the nginx subdirectory and then pushes the built image to the registry (Docker Hub) under the **testing** tag. It is a lightweight example for building a simple docker container pipeline.
I want to parallelize this build to run faster. To do this, I will perform the Django build step and the nginx build step using the native parallel support to put the build tasks onto two slaves or multiple executors. This speeds up the builds and at the same time increases the build’s complexity for the upcoming Docker Compose integration tests I want to run. The build currently uses the slave host’s docker engine to perform the build which, if parallelized, could end up running on two separate hosts in an environment running more than one Jenkins slave/executor.
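A hypothetical parallel version might look like this (image names are illustrative; note that each node block may land on a different slave, so every build host needs a working docker engine):

```groovy
stage 'Build Containers'
parallel(
    'django' : { node { docker.build('jayjohnson/django-slack-sphinx:testing', 'django') } },
    'nginx'  : { node { docker.build('jayjohnson/django-nginx:testing', 'nginx') } }
)
```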
This fun little demo shows the power of Jenkins Pipeline. However, while it works in the simple one-developer, one-branch scenario, it breaks down when used across a team running multiple concurrent builds with a git flow, multiple-Pull-Request development model. Numerous potential bottlenecks exist around building and running containers on one Jenkins host. When I want to scale this to multiple slaves, I have to first look at where my containers are housed before trying to deploy with Docker Compose to run my integration tests. Making Docker container builds work seamlessly across large organizations is a much larger task, and some organizations will need help preventing the bottlenecks that cause teams to lose time waiting on builds and deployments.
Levvel has partnered with CloudBees to help our clients adopt DevOps practices that save them time and money. They are a great option to consider when looking to prevent Jenkins development and deployment bottlenecks.
A fair question, so let’s look at the DevOps landscape and how we got here.
Jenkins has massive adoption spanning multiple verticals and solves numerous DevOps use cases. Before Pipelines, Jenkins was already a successful continuous integration tool and a friend to many, many developers. With Pipelines, Jenkins is positioned to tackle continuous deployments at scale. When set up correctly, Pipelines are a powerful tool that will bring even more users to Jenkins. Being able to handle artifact deployments and post-build actions empowers organizations with the tools to get features in front of customers faster.
Inevitably, it is this simple and powerful combination that can lead to issues. There are inherent complexities when running scaled-out multiple Jenkins master environments. To prevent these kinds of issues and overhead, CloudBees launched the CloudBees Jenkins Platform (CJP), which offers two variants: Enterprise Edition and Private SaaS Edition (private cloud deployments on AWS or OpenStack with Jenkins running on Mesos for High Availability). By just starting with CJP, your organization gets:
All this, plus access to the best-in-the-Jenkins-business gurus for supporting your organization’s continuous integration and continuous deployment capabilities with proven tools such as the Jenkins Pipeline.
Where CloudBees provides the enterprise with a comprehensive DevOps management platform, Levvel helps organizations:
I hope to hear your thoughts about this DevOps artifact pipeline for docker containers running on CloudBees Jenkins Enterprise. I have been looking for a simple way to control where my Docker builds were running in-house. The code within this blog post is already building my containers, and I hope it helps you get started with your Jenkins Pipeline.
Going forward, I plan to add Docker Compose integration tests to validate that my deployed composition is ready for primetime. Eventually, I could see the CloudBees Docker Pipeline supporting this natively out of the box. Docker Compose is already an established part of the Docker ecosystem and is super helpful when organizing and deploying complex topologies. It just makes good sense to verify that the Compose deployment works prior to opening it up to production traffic.
If your organization would like assistance building your DevOps artifact pipeline using Jenkins, adopting CJP and CJE, Docker development or production strategy, please reach out to us.
Stop the container with:
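Assuming the CJE container was started with a name like `cje` (an assumption for this example), the stop command would be:

```
docker stop cje
```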
Remove the container (which will delete your Pipeline) with:
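Again assuming a hypothetical container name of `cje`, the removal command would be:

```
docker rm cje
```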
From the command line, I had to set the permissions for the Jenkins container to access the docker socket:
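One permissive way to do this is shown below (chmod 666 is an assumption for a local demo; tighter group-based permissions are preferable in production):

```
sudo chmod 666 /var/run/docker.sock
```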
If you want to build the CloudBees Jenkins Enterprise Docker container
Modifying the docker engine to make sure it utilizes the correct services
I run a local docker swarm, so I had removed the docker socket at /var/run/docker.sock from the systemd service file. I had to re-enable it:
Then restart the docker engine with the following start command:
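With the unix socket restored in the systemd unit file, the reload-and-restart sequence looks like this (standard systemd commands; your distribution may differ):

```
sudo systemctl daemon-reload
sudo systemctl restart docker
```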
Looking at previous JenkinsCI work for help with Groovy, I found this example helpful for getting the Pipeline working: https://github.com/jenkinsci/docker-workflow-plugin/blob/master/demo/repo/flow.groovy