December 9, 2015
Docker is a young, powerful, and exciting technology that is garnering interest across verticals, in companies ranging from small, nimble startups to large enterprise IT organizations. Many consider using Docker in production as a way to quickly scale service-oriented applications, to isolate and manage dependencies across environments, and to stabilize their development environment. Docker is easy to use, but it can be difficult to learn to use well. There are many opportunities for Docker to add value, but it can be hard to identify where, since a large, cohesive body of best practices has yet to be established.
As the developer of a Rails application who eventually intends to deploy Docker containers into a production AWS environment, I would like to:
- edit code locally on my host machine;
- run specs, database migrations, and development/test servers inside Docker containers; and
- develop against the same Docker image I will eventually ship to production.
In order to keep development environments homogeneous, I've decided to begin with a well-tested technology for managing rapid virtualization and configuration: Vagrant.
You’ll want to download and install both Vagrant and VirtualBox on your host computer before proceeding.
The main idea here is that you edit your code locally on your host laptop, but you execute specs, run database migrations, and run development and test servers inside Docker containers running on a Vagrant VM. Vagrant allows us to use an Ubuntu VM as the OS for the Docker server (Ubuntu is Docker's main development platform, and boot2docker is deprecated), to rapidly install and configure a Docker environment, and to share our code via NFS into the Vagrant/Docker environment. Now all the heavy developmental lifting is done on the exact same Docker image that you'll run in production.
If you’d like to start with a project that’s already built, fork this repo: RailsDockerVagrantDevKit
cd into the directory and let's start by looking at the Vagrantfile, which leverages Vagrant and VirtualBox to create and configure our Docker development environment.
```ruby
# ./Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure(2) do |config|
  config.vm.box = "phusion/ubuntu-14.04-amd64"
  config.vm.synced_folder "./", "/app", type: 'nfs'
  config.vm.network "forwarded_port", guest: 3000, host: 1234
  config.vm.network "private_network", ip: "192.168.33.10"

  config.vm.provider "virtualbox" do |vb|
    vb.memory = "6000"
  end

  config.vm.provision "shell", inline: <<-SHELL
    sudo docker 2> /dev/null
    if [ $? -eq 0 ]
    then
      echo "Docker exists, skipping..."
      exit 0
    else
      sudo curl -sSL https://get.docker.com/ | sh
      sudo curl -L https://github.com/docker/compose/releases/download/1.4.2/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
      sudo chmod +x /usr/local/bin/docker-compose
    fi
  SHELL

  config.vm.provision "chef_apply" do |chef|
    chef.recipe = <<-RECIPE
      group 'docker'

      bash 'docker_sudoer' do
        code 'sudo usermod -aG docker vagrant'
      end

      service 'docker' do
        action :start
      end
    RECIPE
  end
end
```
In the Vagrantfile, we see that we are going to use a publicly available Ubuntu box to house our Docker environment, we are going to share all the source code in our local directory with the Ubuntu VM at /app via NFS with

config.vm.synced_folder "./", "/app", type: 'nfs'
and we are going to make running applications accessible locally using port forwarding with
config.vm.network "forwarded_port", guest: 3000, host: 1234
The shell provisioner block in Vagrant allows us to configure the Ubuntu VM as desired. In our case, we're installing Docker and Docker Compose in an idempotent manner.
```ruby
config.vm.provision "shell", inline: <<-SHELL
  sudo docker 2> /dev/null
  if [ $? -eq 0 ]
  then
    echo "Docker exists, skipping..."
    exit 0
  else
    sudo curl -sSL https://get.docker.com/ | sh
    sudo curl -L https://github.com/docker/compose/releases/download/1.4.2/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
    sudo chmod +x /usr/local/bin/docker-compose
  fi
SHELL
```
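One caveat worth flagging: the `sudo docker 2> /dev/null` check can misreport, because `docker` may exit non-zero for reasons other than "not installed" (for example, a daemon that isn't running yet). A slightly sturdier guard, as a sketch, checks only for the binary on $PATH:

```shell
#!/bin/sh
# Idempotency guard sketch: `command -v` succeeds only if the named
# binary is on $PATH, so it can't fail for unrelated runtime reasons.
installed() {
  command -v "$1" >/dev/null 2>&1
}

if installed docker; then
  echo "Docker exists, skipping..."
else
  echo "Docker not found, would install here..."
  # curl -sSL https://get.docker.com/ | sh
fi
```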
As the second provisioner, we use a standalone Chef variant called Chef Apply. Here we're simply creating a docker group and adding the vagrant user to it (so we don't have to sudo all of our Docker commands), then starting the Docker service.
```ruby
config.vm.provision "chef_apply" do |chef|
  chef.recipe = <<-RECIPE
    group 'docker'

    bash 'docker_sudoer' do
      code 'sudo usermod -aG docker vagrant'
    end

    service 'docker' do
      action :start
    end
  RECIPE
end
```
From the root of the cloned project, go ahead and run

```
$ vagrant up --provision
```

to get the process started.
You should begin to see some output and you might be prompted for your password in order to allow NFS to share the code into the VM.
```
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'phusion/ubuntu-14.04-amd64'...
==> default: Matching MAC address for NAT networking...
==> default: Checking if box 'phusion/ubuntu-14.04-amd64' is up to date...
==> default: Setting the name of the VM: RailsDockerVagrantDevKit_default_1449676821725_64984
==> default: Clearing any previously set forwarded ports...
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
    default: Adapter 2: hostonly
==> default: Forwarding ports...
    default: 3000 => 1234 (adapter 1)
    default: 22 => 2222 (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2222
    default: SSH username: vagrant
```
Now we have an Ubuntu VM with Docker installed and our local source code copied over. Let’s ssh in just to make sure.
```
$ vagrant ssh
$ cd /app
$ ls
$ docker ps
```
```
vagrant@ubuntu-14:~$ cd /app
vagrant@ubuntu-14:/app$ ls
app  config     db                  Dockerfile  Gemfile.lock  log     Rakefile   spec     vendor
bin  config.ru  docker-compose.yml  Gemfile     lib           public  README.md  Vagrantfile
vagrant@ubuntu-14:/app$ docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
```
Great! All of our code is here and ready to be worked with. Now let's talk about the Dockerfile, which, much like the Vagrantfile, contains all the directives for creating a Docker image from which we will build a container.
Here’s our Dockerfile.
Now, I know that if you want a basic Ruby Docker container, there are many out there, and they are dead simple to use. But I wanted to use Ubuntu 14.04 instead of the default Debian base that is offered, and I also wanted to show the granularity Docker provides for describing and creating your environment.
Basically, we install a specific version of Ruby, create some directories, copy our source code into the Docker image, and issue a directive so that whenever we start a Docker container based on this image, the command

bundle exec unicorn -p 3000

will be run.
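The repo's actual Dockerfile isn't reproduced here, but based on the description above, a minimal sketch might look like the following. The base image and the final CMD come from the post; the package list and the Ruby installation mechanism are illustrative assumptions, not the repo's exact contents:

```dockerfile
# Sketch only -- see the repo for the real Dockerfile.
FROM ubuntu:14.04

# Native build dependencies for Ruby and common gems (pg, etc.) -- assumed
RUN apt-get update && apt-get install -y \
    build-essential curl git libssl-dev libreadline-dev libpq-dev

# Install a specific Ruby version here; the post pins one, but the
# mechanism (ruby-install, a PPA, building from source) is not shown.

# Create the app directory and copy the source code into the image
RUN mkdir -p /app
WORKDIR /app
COPY . /app

# RUN bundle install   # once Ruby/Bundler are available in the image

EXPOSE 3000
CMD bundle exec unicorn -p 3000
```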
Let's build our first Docker image by running

```
docker build -t demo .
```

where demo is the name of the image you are building.
You should see something like:
```
Sending build context to Docker daemon 316.9 kB
Step 1 : FROM ubuntu:14.04
14.04: Pulling from library/ubuntu
0bf056161913: Downloading [============>        ] 16.2 MB/65.67 MB
1796d1c62d0c: Download complete
```
This process can take a little while, but once the Docker image is built, we won't need to build it again and can reuse it as we like. Additionally, updates are cheap to add: thanks to layer caching, a rebuild only reapplies the delta of changes.
Now let's see if we've got a local Docker image created by running

```
docker images
```
```
vagrant@ubuntu-14:/app$ docker images
REPOSITORY   TAG      IMAGE ID       CREATED          VIRTUAL SIZE
demo         latest   673d591cb3b9   10 seconds ago   973.4 MB
ubuntu       14.04    89d5d8e8bafb   22 hours ago     187.9 MB
ruby         2.0.0    bf45d178c137   3 days ago       706.5 MB
```
We see that Docker has built and cached the image we created, "demo", along with the two images we used as blueprints in the Dockerfile, "ubuntu" and "ruby".
Let’s run a quick test to see if our demo Docker image can run by running
docker run -dP --name demo_container -p 3000:3000 demo
Here we're telling Docker to run a container in a detached state with the name 'demo_container', and to map ports so that port 3000 inside the container is reachable from the Ubuntu VM, and from there from our local host computer.
Now let's look at our docker-compose.yml file, which describes how to pair an application Docker container with a PostgreSQL database Docker container. Conventional wisdom says you shouldn't run databases in containers, but in a development environment it's totally fine, since we don't care about true persistence.
Here is our docker-compose.yml.
```yaml
db:
  image: postgres
web:
  build: .  # builds from the Dockerfile colocated in the root of the directory
  command: bundle exec unicorn -p 3000
  volumes:
    - .:/app
  ports:
    - "3000:3000"
  links:
    - db
```
Here we see Docker Compose's native linking of containers. We've got a postgres container, pulled from a community image (saving us time), and a web container built from the local Dockerfile we've just tested.
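One piece the compose file implies but the post doesn't show: the Rails app has to reach Postgres through the link. With Compose's links, the db container is reachable from web under the hostname db, so a config/database.yml along these lines would work. This is a sketch; the database names are assumptions, and the empty password reflects the stock postgres image's default trust authentication:

```yaml
# config/database.yml (sketch -- not shown in the post)
development: &default
  adapter: postgresql
  encoding: unicode
  host: db             # the link alias from docker-compose.yml
  username: postgres
  password:            # the stock postgres image allows passwordless access
  database: app_development

test:
  <<: *default
  database: app_test
```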
Run docker-compose up and you should see:
```
vagrant@ubuntu-14:/app$ docker-compose up
Pulling db (postgres:latest)...
latest: Pulling from library/postgres
d630ddc7aec4: Pull complete
c74beba9de66: Pull complete
77a33b9160e7: Pull complete
c27cdb916a4a: Pull complete
f114dda4521b: Pull complete
11f38e247778: Pull complete
d13a3ea275ee: Pull complete
d85c3186396c: Pull complete
a6fc0a49fe73: Pull complete
edf89d70f219: Pull complete
113ae1719852: Extracting [==========================================>        ] 34.93 MB/40.84 MB
aaaba168d6ad: Download complete
```
Once all the containers are built, linked and started, the end of the output should look like this:
```
Starting app_db_1...
Starting app_web_1...
Attaching to app_db_1, app_web_1
db_1  | The files belonging to this database system will be owned by user "postgres".
db_1  | This user must also own the server process.
db_1  |
db_1  | The database cluster will be initialized with locale "en_US.utf8".
db_1  | The default database encoding has accordingly been set to "UTF8".
db_1  | The default text search configuration will be set to "english".
db_1  |
db_1  | Data page checksums are disabled.
db_1  |
db_1  | fixing permissions on existing directory /var/lib/postgresql/data ... ok
db_1  | creating subdirectories ... ok
db_1  | selecting default max_connections ... 100
db_1  | selecting default shared_buffers ... 128MB
db_1  | selecting dynamic shared memory implementation ... posix
db_1  | creating configuration files ... ok
db_1  | creating template1 database in /var/lib/postgresql/data/base/1 ... ok
db_1  | initializing pg_authid ... ok
db_1  | initializing dependencies ... ok
db_1  | creating system views ... ok
db_1  | loading system objects' descriptions ... ok
db_1  | creating collations ... ok
db_1  | creating conversions ... ok
db_1  | creating dictionaries ... ok
db_1  | setting privileges on built-in objects ... ok
db_1  | creating information schema ... ok
db_1  | loading PL/pgSQL server-side language ... ok
db_1  | vacuuming database template1 ... ok
db_1  | copying template1 to template0 ... ok
db_1  | copying template1 to postgres ... ok
db_1  | syncing data to disk ... ok
db_1  |
db_1  | WARNING: enabling "trust" authentication for local connections
db_1  | You can change this by editing pg_hba.conf or using the option -A, or
db_1  | --auth-local and --auth-host, the next time you run initdb.
db_1  |
db_1  | Success. You can now start the database server using:
db_1  |
db_1  |     postgres -D /var/lib/postgresql/data
db_1  | or
db_1  |     pg_ctl -D /var/lib/postgresql/data -l logfile start
db_1  |
db_1  | ****************************************************
db_1  | WARNING: No password has been set for the database.
db_1  |          This will allow anyone with access to the
db_1  |          Postgres port to access your database. In
db_1  |          Docker's default configuration, this is
db_1  |          effectively any other container on the same
db_1  |          system.
db_1  |
db_1  |          Use "-e POSTGRES_PASSWORD=password" to set
db_1  |          it in "docker run".
db_1  | ****************************************************
db_1  | waiting for server to start....LOG:  database system was shut down at 2015-12-09 18:51:35 UTC
db_1  | LOG:  MultiXact member wraparound protections are now enabled
db_1  | LOG:  database system is ready to accept connections
db_1  | LOG:  autovacuum launcher started
db_1  | done
db_1  | server started
db_1  | ALTER ROLE
db_1  |
db_1  | /docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
db_1  |
db_1  | LOG:  received fast shutdown request
db_1  | waiting for server to shut down....LOG:  aborting any active transactions
db_1  | LOG:  autovacuum launcher shutting down
db_1  | LOG:  shutting down
db_1  | LOG:  database system is shut down
db_1  | done
db_1  | server stopped
db_1  |
db_1  | PostgreSQL init process complete; ready for start up.
db_1  |
db_1  | LOG:  database system was shut down at 2015-12-09 18:51:36 UTC
db_1  | LOG:  MultiXact member wraparound protections are now enabled
db_1  | LOG:  database system is ready to accept connections
db_1  | LOG:  autovacuum launcher started
db_1  | LOG:  received smart shutdown request
db_1  | LOG:  autovacuum launcher shutting down
db_1  | LOG:  shutting down
db_1  | LOG:  database system is shut down
db_1  | LOG:  database system was shut down at 2015-12-09 18:53:51 UTC
db_1  | LOG:  MultiXact member wraparound protections are now enabled
db_1  | LOG:  database system is ready to accept connections
db_1  | LOG:  autovacuum launcher started
web_1 | I, [2015-12-09T18:54:34.510086 #1]  INFO -- : listening on addr=0.0.0.0:3000 fd=9
web_1 | I, [2015-12-09T18:54:34.510467 #1]  INFO -- : worker=0 spawning...
web_1 | I, [2015-12-09T18:54:34.511048 #1]  INFO -- : master process ready
web_1 | I, [2015-12-09T18:54:34.513285 #8]  INFO -- : worker=0 spawned pid=8
web_1 | I, [2015-12-09T18:54:34.513873 #8]  INFO -- : Refreshing Gem list
web_1 | I, [2015-12-09T18:54:36.870447 #8]  INFO -- : worker=0 ready
```
We see that our web service is running on port 3000. If we look at our Vagrantfile again, we know that this has been forwarded to our host at port 1234.
Let's access the site in a browser on our host machine at localhost:1234.
Now you should see a familiar “Pending migrations” error for Rails.
Let’s jump back to the console and ctrl+c the Docker process running our two containers.
Running

```
docker-compose ps
```

should show two exited processes.
```
vagrant@ubuntu-14:/app$ docker-compose ps
  Name                 Command               State   Ports
-----------------------------------------------------------
app_db_1    /docker-entrypoint.sh postgres   Exit 0
app_web_1   bundle exec unicorn -p 3000      Exit 0
```
Now let's run

```
docker-compose up -d
```

to start the containers in a detached state so we can execute commands against them.
```
vagrant@ubuntu-14:/app$ docker-compose up -d
Starting app_db_1...
Starting app_web_1...
```
Let's check the output of

```
docker-compose ps
```

to see our running containers and port mappings.
```
vagrant@ubuntu-14:/app$ docker-compose ps
  Name                 Command               State           Ports
---------------------------------------------------------------------------
app_db_1    /docker-entrypoint.sh postgres   Up      5432/tcp
app_web_1   bundle exec unicorn -p 3000      Up      0.0.0.0:3000->3000/tcp
```
First, let's migrate the database.
docker exec app_web_1 bin/rake db:migrate
And then let’s run RSpec (which we can do in Dev or repeat for Test).
docker exec app_web_1 bin/rake spec
Now we can access our basic CRUD application at http://localhost:1234/ and interact with a containerized app and a separate containerized DB.
This is great, and it gets better: you can run this exact same setup in Travis CI using a .travis.yml file.
Part of your build requirements could be running specs, executing arbitrary scripts against a Docker container, and making sure that your containers build correctly as production artifacts. All of this adds security and stability to the development environment by marrying the code to its environment very early in the development lifecycle. Being able to connect Docker to TDD, CI/CD, distributed development, and version control makes for a powerful development paradigm that lets teams clear the two big IT hurdles: increased velocity AND increased stability.
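As a sketch of what that might look like (the post doesn't include the actual .travis.yml; the service setup and script steps below are assumptions based on the commands used earlier):

```yaml
# .travis.yml (sketch only -- container names and steps are assumptions)
sudo: required
services:
  - docker

before_install:
  # Travis images of this era ship Docker but not necessarily Compose
  - curl -L "https://github.com/docker/compose/releases/download/1.4.2/docker-compose-$(uname -s)-$(uname -m)" > docker-compose
  - chmod +x docker-compose
  - sudo mv docker-compose /usr/local/bin/

script:
  - docker-compose up -d
  - docker exec app_web_1 bin/rake db:migrate
  - docker exec app_web_1 bin/rake spec
```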
If you’d like to learn more about where to go from here with your production artifacts or with more complex issues like clustering with Docker, please check out my colleague Jay Johnson’s post on Distributed Spring with Docker Swarm or RabbitMQ clustering in Docker.
If you’re interested in bringing your team up to speed in modern DevOps tools and processes, we here at Levvel would love to engage with you.