Levvel Blog - Containerizing A Modern Application, Part 1: A Running Container


As many in the technology space know, Docker is an extraordinarily popular technology, and one that many tech companies are beginning to integrate into their stacks. At Levvel, we help contribute to that growing number. Since Levvel first became a Docker Authorized Consulting Partner, we have made Docker an integral part of the operations of many of our clients. Docker is part of our CI/CD and configuration management toolkits, and it plays a central role in the broader container architecture space. However, important as it is, Docker can be a difficult technology for engineers to learn. Levvel has shared some instructions on Docker in the past, but now we'd like to take it a step further. In this series, we'll focus on helping application developers who are new to containers and hoping to gain some firsthand experience.

In the following case study, we will walk through putting an app into a Docker container. The application in the case study happens to be a Ruby application, but our commentary will include guidance that is relevant regardless of your application's technology stack. Along the way, we'll explain important concepts that illuminate what makes containers so popular and powerful. Let's get started!

Step 0: Installing Docker CE

First, you'll need Docker installed on your machine to follow along. You can download Docker Community Edition for free from the Docker website. You can also sign up for a free Docker Cloud account if you like. Easy.

Step 1: Selecting a Base Image

Though Docker is flexible enough to support a wide range of workloads, such as evaluating the clustering performance of supporting services, Docker's primary use case is powering web services. To take our first step in containerizing our app, we need to answer the question: "what process is going to respond to my web requests?" In our Ruby on Rails application, that will be an application server like puma. Node.js applications might use any number of servers launched via npm start. Java applications usually pick a server like Apache Tomcat or Jetty. We want to prepare our Docker container to start this process.
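In Dockerfile terms, the answer to that question becomes the image's CMD, the command Docker runs when the container starts. A minimal sketch for the Rails case, assuming puma is configured at config/puma.rb:

```dockerfile
# The CMD answers "what process responds to my web requests?"
# Assumes a Rails app with a puma config at config/puma.rb.
CMD ["bundle", "exec", "puma", "-C", "config/puma.rb"]
```

A Node.js app would substitute something like `CMD ["npm", "start"]`, and a Java app would launch its servlet container here instead.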

In order to create a container, we’ll first need to create a container image. Docker can start any number of containers on any number of machines from the same image. The blueprint, factory, and template metaphors are all in the right ballpark for a container image. One of the underlying concepts of containers is a layered, read-only file system for these container images that allows common layers to be distributed and shared. For example, both the official Ruby and Node container images are built on top of the buildpack-deps image, which ensures that many common system dependencies are already installed regardless of the container image you choose to inherit from. So the first choice you’ll make when creating your container image is where you’ll begin to take over from official maintainers. In our case, we’ll be picking the official Ruby image that matches the version of Ruby we intend to use.

Here are some good choices among the official Ruby image variants on Docker Hub:

  • ruby:2.5, the full official image, built on buildpack-deps
  • ruby:2.5-slim, a smaller Debian-based variant with fewer preinstalled packages
  • ruby:2.5-alpine, the smallest variant, built on Alpine Linux

There's more where that came from. I recommend having a look at Docker's documentation regarding their official images program as well, to see what you're getting (and not getting) from Docker via their "official library" designation.
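Once you've picked a base image, it becomes the first line of your Dockerfile. A sketch, assuming Ruby 2.5 is the version we develop against:

```dockerfile
# Inherit from the official Ruby image matching our development version.
# It is itself built on buildpack-deps, so common system dependencies
# and build tools are already installed in shared, cached layers.
FROM ruby:2.5

# Everything from here down is specific to our application; the layers
# above are shared with every other image built on ruby:2.5.
WORKDIR /app
```

Because layers are read-only and content-addressed, any machine that has already pulled ruby:2.5 only needs to download the layers we add on top.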

Step 2: Assemble the Supporting Cast

After we’ve taken the first step towards running a process inside our container, the next step is to make sure that any supporting processes can do their jobs. What’s a supporting process? Have a look at these questions:

  • What processes support the request handling?
  • Does my application require special help to get running in the first place?
  • Are there one-time processes that support my app while it’s running?
  • Are there other supporting processes that must run alongside my application but don't require a port assignment?
  • Is there some other artifact I need to acquire so my app can start or run as intended?

Some engineers might not initially see the relevance of these questions; if you have trouble running your application later on, refer back to this section. For some lucky applications, the questions aren't relevant at all: just starting the application up is enough! If that's the case for you, feel free to skip to the next post (coming soon).

However, some readers will need to package the Java Cryptography Extension Jurisdiction Policy files. Maybe you need to download a special font. Maybe some of your background processing depends on a command-line package being installed and available for use. You never know what web services will be up to these days! In our case, like a lot of Ruby applications, we have an asset pipeline that lets us write Sass instead of CSS, CoffeeScript instead of JavaScript, and more. Since these assets need to be generated before we can service requests, they fit our criteria above.
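One-time supporting processes like this belong in the image build, so the work is done before the first request ever arrives. A sketch for the asset pipeline case, assuming a standard Rails setup:

```dockerfile
# A one-time supporting process, run at build time rather than at
# container start: compile Sass, CoffeeScript, etc. into static assets.
# Assumes a standard Rails app with the asset pipeline configured.
COPY . /app
WORKDIR /app
RUN bundle install
RUN bundle exec rake assets:precompile
```

Because this runs during `docker build`, every container started from the image already has its compiled assets baked into a layer.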

Our example can take advantage of the common ancestry of the Ruby and Node images we noted in the previous step. This Dockerfile shows how a multi-stage build allows us to bring multiple container images together. The official node image is responsible for installing all the needed binaries and libraries, and we copy them over into our image. This allows the rackup command to find the node binary in order to power Sprockets! Pretty nifty.
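The core of that technique can be sketched like this (image versions are illustrative; match them to your app). The node stage exists only to donate its binaries to the final image:

```dockerfile
# First stage: the official node image, used purely as a source of
# prebuilt binaries. Nothing from it ships except what we copy out.
FROM node:8 AS node

# Final stage: the official Ruby image our app actually runs on.
FROM ruby:2.5

# Copy the node binary and npm's global module tree out of the first
# stage, so tools like Sprockets can shell out to `node` at runtime.
COPY --from=node /usr/local/bin/node /usr/local/bin/node
COPY --from=node /usr/local/lib/node_modules /usr/local/lib/node_modules

WORKDIR /app
COPY . .
RUN bundle install
CMD ["bundle", "exec", "rackup", "--host", "0.0.0.0"]
```

Only the final stage becomes the image; the node stage is discarded after the build, keeping the result lean.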

Next Time

In our next post, we’ll look at getting your application code into the container.

Jim Van Fleet


Principal Consultant

Jim Van Fleet is a Principal Consultant at Levvel. After an early career focused on helping startups successfully grow and scale, Jim now applies those lessons to the software development lifecycles at Fortune 500 companies.
