April 12, 2016
Today I want to share how I approach building a Docker container. When I started building containers, I found it difficult to take a newly built container image and deploy it to production without rebuilding it every time. That disconnect between development and production has turned many people away from Docker, and I hope today’s post can help clear up how to build a container image that is ready for deployment to production. With this approach I have been able to take existing applications and drop them into containers in a few minutes, as opposed to days waiting on docker build. I have refined and iterated on this approach while building various types of containers with different uses: a MySQL Schema Prototyper, Redis Clusters with HAProxy, RabbitMQ, Rails with Travis, Spring XD, and a Qt IDE using X11. Most of them are on my Docker Hub too.
I have been searching for a good way to describe this pattern for building containers, and so far the only name I have settled on is “Compose Configuration”. So here’s how it works:
After waiting on enough Docker RUN directives (which take even longer when building on Docker Swarm), I decided to try passing in environment variables via Docker Compose so I could build a container image once and reuse it in any environment. With this approach in mind, I now develop Compose-centric containers that require a piece of configuration management for handling pre-start events based on the values of the environment variables.
For the remainder of this post I will be referencing the MySQL Schema Prototyper. This project was built for rapid-prototyping a database schema using a MySQL Docker container that deploys your own ORM schema file and populates the initial records on startup. By setting a couple of environment variables, you can provision your own Docker container with a usable MySQL instance, a browser-ready phpMyAdmin server, and your database, with its tables, initialized exactly how you want. I will be using this example to demonstrate how Compose Configuration works even after the container has been built and pushed to Docker Hub.
Here is a simple Compose Configuration workflow that does not require rebuilding the MySQL Schema Prototyper container each time there is a change to the docker-compose.yml file:
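As a rough sketch, the workflow amounts to stopping the composition, editing an environment variable in docker-compose.yml, and starting it again. The `toggle_rebuild` helper below is my own illustration; only `REBUILD_DB_ON_START` comes from the project:

```shell
#!/bin/bash
# Hypothetical helper: flip REBUILD_DB_ON_START in a compose file
# without touching the container image itself.
toggle_rebuild() {
  local file="$1" value="$2"
  sed -i.bak "s/REBUILD_DB_ON_START=[01]/REBUILD_DB_ON_START=${value}/" "$file"
}

# Typical cycle (commands shown for context, not run here):
# docker-compose stop               # stop the running composition
# toggle_rebuild docker-compose.yml 1
# docker-compose up -d              # same image, new behavior on start
```

The point is that the only thing edited between runs is the compose file; the image itself is never rebuilt.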
If the container is running, you can log in to the Apache-hosted phpMyAdmin instance.
Now that we have looked at a simple workflow example, here is how the generalized approach works.
My affinity for Docker Compose gets stronger each time I use it in development and see it work in production. Docker Compose helps solve the complexities of development and production environments with one container image that exposes specific, tested environment variables for changing how the application(s) start up.
I find this question to be the hardest part of containerizing an application. There is no one-size-fits-all advice I can give, but here are the starting points I use to decide:
As a sample, here is the MySQL Schema Prototyper docker-compose.yml file:
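The real file lives in the project itself, so what follows is only a hedged reconstruction: the service layout, image name, ports, and paths are my assumptions, while REBUILD_DB_ON_START, DBNAME, DBSCHEMA, and DBINITIALIZER are the variables this post discusses.

```yaml
# Hypothetical sketch of the docker-compose.yml; image name, ports,
# and file paths are assumptions, not the project's actual values.
mysql:
  image: example/mysql-schema-prototyper   # assumed image name
  ports:
    - "3306:3306"   # MySQL
    - "80:80"       # Apache-hosted phpMyAdmin
  environment:
    - REBUILD_DB_ON_START=1                # drop and re-create the schema on start
    - DBNAME=stocks                        # database to create
    - DBSCHEMA=/opt/schema/stocks.sql      # assumed path to the ORM schema file
    - DBINITIALIZER=/opt/schema/init.sql   # assumed path to the seed-data script
```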
While the MySQL Schema Prototyper is a simple example, I use Docker Compose to help build and deploy a single container image that exposes only the necessary resources of the environment needed to run the containers. For those who are new to Docker Compose, it also lets us cleanly define environmental dependencies requiring custom:
As a DevOps fan, I want to build and deploy the same container image using tools like Travis to automatically publish to a Docker Trusted Registry once the tests finish and pass. To do this, I use environment variables to drive the configuration management of the container (usually only the first time the container starts).
From the “Compose Configuration in Action” section above, we used the REBUILD_DB_ON_START environment variable to change how the container behaved without rebuilding it. Here is the configuration management that uses this environment variable, and how the /tmp/startcontainer.log contents changed after the container restarted. Beyond the development-versus-production cases, I also wanted to reuse this container for loading data other than Stocks and the underlying Stock Schema, so I added DBNAME, DBSCHEMA, and DBINITIALIZER for these future use cases (more machine learning).
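A minimal sketch of that pre-start logic in bash, assuming a `rebuild_db_if_requested` entry point (the function name and the commented-out mysql invocation are illustrative; the environment variables and log path are the ones mentioned above):

```shell
#!/bin/bash
# Sketch of the pre-start configuration management. The function name
# and mysql call are illustrative; REBUILD_DB_ON_START, DBNAME, and
# DBSCHEMA are the environment variables from the compose file.
LOGFILE="${LOGFILE:-/tmp/startcontainer.log}"

rebuild_db_if_requested() {
  if [ "${REBUILD_DB_ON_START}" = "1" ]; then
    echo "$(date) rebuilding database ${DBNAME} from ${DBSCHEMA}" >> "$LOGFILE"
    # mysql -u root < "${DBSCHEMA}"   # drop/re-create tables, then seed records
  else
    echo "$(date) preserving existing database ${DBNAME}" >> "$LOGFILE"
  fi
}
```

Because this runs on every start, the log gains a “rebuilding” line after a restart with REBUILD_DB_ON_START=1 and a “preserving” line otherwise, which is exactly the change visible in /tmp/startcontainer.log.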
While there are many configuration management tools that work with this approach (Ansible, Chef, Salt, Puppet), for speed during development I usually start with bash and migrate to something more robust once my application stabilizes.
Please keep in mind that with this approach, if you change the docker-compose.yml file’s environment variables, you will need to stop the composition before restarting it to see the changes reflected inside the container.
In general, I use environment variables to drive the configuration to help me:
I privately host a Docker Trusted Registry (DTR) and use Docker Hub for my open source projects. When it comes to reducing the time I spend worrying about how to take the same container image from development to production, a DTR is a good way to go. As a developer, I use DTR as my handoff point for QA validation and for storing container image artifacts that can be rolled out to a production Docker Swarm.
Here is my general DevOps workflow with this approach:
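The workflow reduces to: build once in CI, tag immutably, push to the registry, and pull that exact tag in every environment. A hedged sketch, where the registry hostname and the commit-based tagging scheme are my own assumptions:

```shell
#!/bin/bash
# Hypothetical CI publishing helper; the registry host and commit-based
# tagging scheme are assumptions, not the post's actual setup.
REGISTRY="${REGISTRY:-dtr.example.com}"

image_ref() {
  # Build an immutable image reference from a repo name and git commit.
  local repo="$1" commit="$2"
  echo "${REGISTRY}/${repo}:${commit}"
}

# In CI, after the tests pass (commands shown for context, not run here):
# docker build -t "$(image_ref mysql-schema-prototyper "$GIT_COMMIT")" .
# docker push "$(image_ref mysql-schema-prototyper "$GIT_COMMIT")"
# Production then pulls that exact tag; nothing is ever rebuilt outside CI.
```

Tagging by commit rather than `latest` is what makes the same image traceable from QA validation through to the production Docker Swarm.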
Wrapping things up, here is why I develop Docker containers using Compose Configuration:
Well, that’s all for now, and I hope to hear your thoughts about this approach. For me it makes containerized applications easier to develop and to harden for production at the same time. Going forward, I am sure this pattern will continue to be refined, and I will update this post as I find nuances.
If your organization would like assistance determining your Docker container development and production strategy, please reach out to us at Levvel and we can get you started.
Thanks for reading!