A Docker Container Pattern – Compose Configuration

April 12, 2016

Introduction

Today I want to share how I approach building a Docker container. When I started building containers, I found it difficult to take a newly built container image and deploy it to production without rebuilding it every time. That disconnect between development and production has turned many people away from Docker, and I hope today’s post helps clear up how to build a container image that is ready for deployment to production. With this approach I have been able to take existing applications and drop them into containers in a few minutes instead of spending days waiting on docker build. I have refined and iterated on this approach while building containers with different uses: a MySQL Schema Prototyper, Redis clusters with HAProxy, RabbitMQ, Rails with Travis, Spring XD, and a Qt IDE using X11, and most of them are on my Docker Hub too.

I have been searching for a good way to describe this pattern for building containers, and so far the name I have settled on is “Compose Configuration”. Here is how it works:

  1. Build for Docker Compose
  2. Configuration driven by Environment Variables
  3. Use a Trusted Registry

How Do I Take the Same Container Image From Development Into Production?

After waiting on enough Dockerfile RUN directives (which take even longer when building on Docker Swarm), I decided to try passing in environment variables via Docker Compose so I could build a container image once and reuse it in any environment. With this approach in mind, I now develop Compose-centric containers that require a piece of configuration management for handling pre-start events based on the values of those environment variables.

For the remainder of this post I will be referencing the MySQL Schema Prototyper. This project was built for rapidly prototyping a database schema using a MySQL Docker container that deploys your own ORM schema file and populates the initial records on startup. By setting a couple of environment variables, you can provision your own Docker container with a usable MySQL instance, a browser-ready phpMyAdmin server, and your database with its tables initialized exactly how you want. I will use this example to demonstrate how Compose Configuration works even after the container has been built and pushed to Docker Hub.

Compose Configuration in Action

Here is a simple Compose Configuration workflow that does not require rebuilding the MySQL Schema Prototyper container each time there is a change to the docker-compose.yml file:
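
The commands below are a minimal sketch of that workflow, assuming the repository has been cloned and the docker-compose.yml shown later in this post is in the current directory; the container name is a placeholder, not necessarily the one the repository uses.

    # start the composition in the background
    docker-compose up -d

    # edit docker-compose.yml and flip REBUILD_DB_ON_START (for example from 0 to 1),
    # then stop and restart the composition so the new value reaches the container
    docker-compose stop
    docker-compose up -d

    # confirm the pre-start events ran by inspecting the container's start log
    docker exec schemaprototyper cat /tmp/startcontainer.log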

Using the MySQL Schema Prototyper’s phpMyAdmin

If the container is running, you can log in to the Apache-hosted phpMyAdmin instance.

  1. Find the login credentials (default login credentials: dbadmin / dbadmin123)
  2. Open a browser and navigate to: http://localhost:81/phpMyAdmin
  3. View the Stocks database and Stocks table

This repository comes with a sample Stock schema file along with MSFT and IBM data stored in CSV files, which I use as part of my machine learning experiments on the stock market.

How Does This Approach Work?

Now that we have looked at a simple workflow example, here is how the generalized approach works.

Build For Docker Compose

My affinity for Docker Compose gets stronger each time I use it in development and see it work in production. Docker Compose helps reconcile the differences between development and production environments with one container image that exposes specific, tested environment variables for changing how the application(s) start up.

What Should be an Environment Variable?

I find this question to be the hardest part of containerizing an application. There is no one-size-fits-all answer I can give, but deciding usually circles around one question:

What are the differences between my development and production environments?

  1. Are there different API endpoints (ssl vs non-ssl), resources, databases, or services?
  2. Are there different user or application credentials/keys?
  3. Are there different DNS records that require a different FQDN?
  4. Are there any connectivity differences (external and internal ports for the hosts and containers)?
  5. Does production require persistent files that are scaled across multiple instances?
  6. Do I need a different type of networking configuration for high availability?
  7. What is the container start sequence for my application(s)?
  8. What kind of pre-start events does each application need?

As a sample, here is the MySQL Schema Prototyper docker-compose.yml file:
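
The file in the repository may differ from what is shown here; this is a minimal sketch reconstructed from the environment variables and ports discussed in this post, with the image name and container paths as placeholders:

    version: "2"
    services:
      schemaprototyper:
        # placeholder image name - use the image published on Docker Hub
        image: your-dockerhub-user/mysql-schema-prototyper:latest
        container_name: schemaprototyper
        ports:
          # phpMyAdmin served by Apache inside the container
          - "81:80"
          # MySQL for external clients
          - "3306:3306"
        environment:
          # rebuild and reload the database on the next container start
          - REBUILD_DB_ON_START=1
          # database name, schema file, and record initializer (paths are placeholders)
          - DBNAME=Stocks
          - DBSCHEMA=/opt/schema/stocks.sql
          - DBINITIALIZER=/opt/data/initialize_records.sh
        volumes:
          # keep the MySQL data files on the host
          - /tmp/schemaprototyper/mysql:/var/lib/mysql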

What else can be changed in a Docker Compose file?

While the MySQL Schema Prototyper is a simple example, I use Docker Compose to help build and deploy a single container image that exposes only the environment resources needed to run the containers. For those new to Docker Compose, it also lets us cleanly define environmental dependencies that require custom settings for the following (see the sketch after this list):

  1. Mounting volumes for persistent files
  2. Networking across multiple hosts
  3. Environment Variables
  4. Ports for Connectivity
  5. Container Naming
  6. Logging integration
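
To make those items concrete, here is a generic version 2 compose sketch (the service, image, and network names are placeholders) covering the pieces the Schema Prototyper sample above does not need, such as a named volume, a user-defined network, and logging options:

    version: "2"
    services:
      myapp:
        # placeholder image name
        image: your-registry.example.com/myapp:latest
        container_name: myapp
        ports:
          - "8080:8080"
        environment:
          - APP_ENV=production
        volumes:
          # named volume for persistent files
          - myapp-data:/var/lib/myapp
        networks:
          - backend
        logging:
          driver: "json-file"
          options:
            max-size: "10m"
            max-file: "3"
    volumes:
      myapp-data: {}
    networks:
      backend:
        driver: bridge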

Configuration Driven by Environment Variables

As a DevOps fan, I want to build and deploy the same container image using tools like Travis to automatically publish to a Docker Trusted Registry once the tests finish and pass. To do this, I utilize environment variables that drive the container’s configuration management (usually only the first time the container starts).

In the “Compose Configuration in Action” section above, we used the REBUILD_DB_ON_START environment variable to change how the container worked without rebuilding it. The configuration management inside the container reads this environment variable during its pre-start events and records what it did in /tmp/startcontainer.log, so you can see how the log contents change after the container restarts. Beyond the development-versus-production cases, I also wanted to reuse this container for loading data other than Stocks and the underlying Stock schema, so I added DBNAME, DBSCHEMA, and DBINITIALIZER for these future use cases (more machine learning).
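
A minimal bash sketch of that pre-start logic (not the repository’s actual script) might look like the following; it assumes MySQL is already accepting connections when the container’s start process invokes it, and the dbadmin credentials and the DBINITIALIZER loader script are placeholders:

    #!/bin/bash
    # pre-start configuration management driven by environment variables
    LOG=/tmp/startcontainer.log

    echo "$(date) - container start, REBUILD_DB_ON_START=${REBUILD_DB_ON_START}" >> "$LOG"

    if [[ "${REBUILD_DB_ON_START}" == "1" ]]; then
        echo "$(date) - rebuilding database ${DBNAME} from ${DBSCHEMA}" >> "$LOG"
        # drop and recreate the schema, then load the initial records
        mysql -u dbadmin -pdbadmin123 -e "DROP DATABASE IF EXISTS ${DBNAME}; CREATE DATABASE ${DBNAME};" >> "$LOG" 2>&1
        mysql -u dbadmin -pdbadmin123 "${DBNAME}" < "${DBSCHEMA}" >> "$LOG" 2>&1
        # DBINITIALIZER points at a placeholder loader script for the CSV records
        bash "${DBINITIALIZER}" >> "$LOG" 2>&1
    else
        echo "$(date) - REBUILD_DB_ON_START disabled, leaving the existing database in place" >> "$LOG"
    fi

Flipping REBUILD_DB_ON_START in docker-compose.yml and restarting the composition then shows up as a different set of entries in /tmp/startcontainer.log.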

While there are many configuration management tools that work with this approach (Ansible, Chef, Salt, Puppet), for speed during development I usually start with bash and migrate to one of those tools once my application stabilizes.

Keep in mind that with this approach, if you change the environment variables in the docker-compose.yml file, you will need to stop the composition and start it again before the changes are reflected inside the container.

In general, I use environment variables to drive the configuration to help me:

  1. Handle custom application configuration during container initialization
  2. Allow Docker Compose to cleanly define custom environment resources (endpoints, mounts, network, etc.)

Use a Trusted Registry

I privately host a Docker Trusted Registry (DTR) and use Docker Hub for my open source projects. When it comes to reducing the time I spend worrying about how to take the same container image from development to production, a DTR is a good way to go. As a developer, I use DTR as my handoff point for QA validation and for storing the container image artifacts that can be rolled out to a production Docker Swarm.
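
The handoff itself is just standard image tagging and pushing; in a sketch with placeholder registry, repository, and tag names it looks like this:

    # tag the locally built image for the trusted registry (placeholder names)
    docker tag mysql-schema-prototyper dtr.example.com/dev/mysql-schema-prototyper:1.0.0

    # push it so QA and production pull the exact same artifact
    docker push dtr.example.com/dev/mysql-schema-prototyper:1.0.0

    # any downstream environment can now deploy it without rebuilding
    docker pull dtr.example.com/dev/mysql-schema-prototyper:1.0.0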

DevOps Using Compose Configuration

Here is my general DevOps workflow with this approach:

  1. Push a change (usually in my configuration management code) to the repository
  2. Repository issues an automatic webhook to a CI server (Travis) to start a regression test
  3. If the build passes, the CI server automatically pushes the container image to a Registry (DTR or Docker Hub)
  4. The container image can be deployed to QA or production environment(s) using Docker Compose
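
A minimal sketch of the CI piece with Travis might look like the following; the test script, registry hostname, and credential variables are placeholders rather than the project’s actual configuration:

    # .travis.yml (placeholder sketch)
    sudo: required
    services:
      - docker
    script:
      # build the container image and run the regression tests against it
      - docker build -t mysql-schema-prototyper .
      - ./run_regression_tests.sh
    after_success:
      # publish to the registry only when the build and tests pass
      - docker login -u "$REGISTRY_USER" -p "$REGISTRY_PASS" dtr.example.com
      - docker tag mysql-schema-prototyper dtr.example.com/dev/mysql-schema-prototyper:latest
      - docker push dtr.example.com/dev/mysql-schema-prototyper:latest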

Conclusion

Wrapping things up, here is why I develop Docker containers using Compose Configuration:

  1. Configuration – Develop one container image that uses configuration management for driving pre-start events
  2. Extensibility – Use environment variables with Docker Compose to define custom environment resources (endpoints, mounts, network, etc.)
  3. CI/CD – Build and test the container image and publish it to a Registry
  4. Deployment – Docker Compose can take the same image from a Registry and deploy it across a multi-host production Docker Swarm

Well, that’s all for now, and I hope to hear your thoughts about this approach. For me, it makes containerized applications easier to develop and harden for production at the same time. Going forward I am sure this pattern will continue to be refined, and I will update this post as I find nuances.

If your organization would like assistance determining your Docker container development and production strategy, please reach out to us at Levvel and we can get you started.

Thanks for reading!

Authored By

Jay Johnson
