January 20, 2016
When working in large distributed environments, Docker can simplify the use of service discovery tools by packaging each service in its own container. There are multiple tools built around the Docker ecosystem that turn running Docker in your infrastructure into an advantage when it comes to service discovery.
In this post I’m going to use Consul, Terraform, Docker, and Ansible to build a small distributed web application. We’ll see how to use Terraform and Ansible to build and provision the infrastructure, and how to use Docker and Consul to make different components discover one another.
Consul has multiple components, but as a whole, it is a tool for discovering and configuring services in your infrastructure. It provides several key features:
- Service discovery
- Health checking
- A key/value store
- Multi-datacenter support
Consul is designed to be friendly to both the DevOps community and application developers, making it perfect for modern, elastic infrastructures.
Another great tool, created by Jeff Lindsay, is Registrator, which automatically registers and deregisters services for Docker containers. It inspects each Docker container and registers its address and exposed ports.
By using these tools we can solve most of the problems around service discovery with Docker containers.
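To make this concrete: once the stack described below is up, any node can discover a service through Consul's DNS interface or its HTTP API. A rough sketch (consul-ip is a placeholder for the Consul server's address, and the 8600/udp DNS port mapping comes from the Consul container configuration later in this post):
$ dig @consul-ip -p 8600 ghost.service.consul SRV +short
$ curl http://consul-ip:8500/v1/catalog/service/ghost
Both return the addresses and ports of every registered instance of the ghost service, which is exactly the information the load balancer template below consumes.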
Terraform is used to build, change, and version infrastructure effortlessly with some basic code in its configuration files, and it's a great example of infrastructure as code.
Terraform is used to create and manage different infrastructure resources, like physical machines, VMs, and even Docker containers. It supports different cloud service providers like AWS and DigitalOcean, and it can also manage low-level components such as instances, storage, and networking. It can even manage DNS entries in CloudFlare, databases in MySQL, and the Consul key/value store, all through a simple configuration file that defines resources for different services and providers. Even though Terraform is a multi-purpose tool that can manage both low- and high-level components, it doesn't offer full control over configuration management, which is where Ansible comes in.
This article assumes that you have basic knowledge about the following technologies:
In this setup we're going to deploy a simple yet powerful publishing platform called Ghost. Ghost is a Node.js application that uses a SQL database backend (PostgreSQL, MySQL, or SQLite). The application is configured through a single file, config.js, in its root directory, which sets values such as the database and mail configuration.
The setup will consist of the following:
- A load balancer server (lb1) running Nginx
- Four application servers (app1 through app4) running Ghost
- A database server (db1) running MySQL
- A Consul server
All of the applications, including Nginx, Ghost, and MySQL, are dockerized, and we'll use AWS as the cloud service provider. Each server will be created using Terraform. Each application server will need to know the database server's address, and the load balancer will need to know the Ghost application servers' addresses and ports.
Each server will contain a Registrator container that will add the services to the Consul server:
root@lb1:~# docker ps
CONTAINER ID   IMAGE                           COMMAND                  CREATED        STATUS        PORTS                         NAMES
e64708af135c   nginx:latest                    "nginx -g 'daemon off"   23 hours ago   Up 22 hours   0.0.0.0:80->80/tcp, 443/tcp   nginx-lb1
9797b30402df   gliderlabs/registrator:latest   "/bin/registrator -ip"   23 hours ago   Up 23 hours                                 registrator-lb1
Registrator will register the Nginx container with Consul, so Consul will be aware of an nginx service with port 80 open:
root@lb1:~# docker logs registrator-lb1
2016/01/08 20:42:04 Starting registrator v6 ...
2016/01/08 20:42:04 Forcing host IP to 52.25.58.215
2016/01/08 20:42:04 consul: current leader 172.17.0.2:8300
2016/01/08 20:42:04 Using consul adapter: consul://52.34.89.135:8500
2016/01/08 20:42:04 Listening for Docker events ...
2016/01/08 20:42:04 Syncing services on 1 containers
2016/01/08 20:42:04 ignored: 9797b30402df no published ports
2016/01/08 20:42:40 ignored: e64708af135c port 443 not published on host
2016/01/08 20:42:40 added: e64708af135c lb1:nginx-lb1:80
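We can confirm the registration from the Consul side as well, for example by querying the catalog for the nginx service (Registrator uses the image's base name as the service name by default; the address below is the Consul endpoint from the log above):
$ curl http://52.34.89.135:8500/v1/catalog/service/nginx
This returns a JSON document containing the load balancer's address and the published port 80.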
The servers will be built on AWS using Terraform, and the following is the configuration file that will spin up the seven servers. First it configures the AWS provider using the following section:
# Configure the AWS Provider
provider "aws" {
access_key = "${var.access_key}"
secret_key = "${var.secret_key}"
region = "us-west-2"
}
This tells Terraform that I will use AWS with the variables access_key and secret_key, and that all servers will be in the us-west-2 region.
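The access_key and secret_key variables need to be declared somewhere; the variables file isn't shown in this post, but a minimal sketch would be:
variable "access_key" {}
variable "secret_key" {}
The actual values can then be supplied through a terraform.tfvars file or -var flags so they never end up in version control.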
# Configure Security group
resource "aws_security_group" "default" {
name = "aws_default"
description = "Default security group"
# inbound access from anywhere
ingress {
from_port = 0
to_port = 65535
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
# outbound internet access
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
The last step before creating the servers is to define the authentication key that will be deployed to each server:
resource "aws_key_pair" "auth" {
key_name = "key"
public_key = "ssh-rsa xxxxxxxx"
}
The following section defines the aws_instance resource that creates the load balancer server. Note that I added a Group tag to be used later with Ansible, and each server will run Ubuntu 14.04 LTS.
# Create load balancer
resource "aws_instance" "lb" {
connection {
user = "ubuntu"
}
instance_type = "t2.micro"
tags {
Name = "lb1"
Group = "lbs"
}
security_groups = ["aws_default"]
ami = "ami-5189a661"
key_name = "key"
}
In the same manner, the following creates the application servers, the database server, and the Consul server.
# Create apps
resource "aws_instance" "app1" {
connection {
user = "ubuntu"
}
instance_type = "t2.micro"
tags {
Name = "app1"
Group = "apps"
}
security_groups = ["aws_default"]
ami = "ami-5189a661"
key_name = "key"
}
resource "aws_instance" "app2" {
connection {
user = "ubuntu"
}
instance_type = "t2.micro"
tags {
Name = "app2"
Group = "apps"
}
security_groups = ["aws_default"]
ami = "ami-5189a661"
key_name = "key"
}
resource "aws_instance" "app3" {
connection {
user = "ubuntu"
}
instance_type = "t2.micro"
tags {
Name = "app3"
Group = "apps"
}
security_groups = ["aws_default"]
ami = "ami-5189a661"
key_name = "key"
}
resource "aws_instance" "app4" {
connection {
user = "ubuntu"
}
instance_type = "t2.micro"
tags {
Name = "app4"
Group = "apps"
}
security_groups = ["aws_default"]
ami = "ami-5189a661"
key_name = "key"
}
# create db
resource "aws_instance" "db1" {
connection {
user = "ubuntu"
}
instance_type = "t2.micro"
tags {
Name = "db1"
Group = "dbs"
}
security_groups = ["aws_default"]
ami = "ami-5189a661"
key_name = "key"
}
# Create Consul
resource "aws_instance" "consul" {
connection {
user = "ubuntu"
}
instance_type = "t2.micro"
tags {
Name = "consul"
Group = "consul"
}
security_groups = ["aws_default"]
ami = "ami-5189a661"
key_name = "key"
}
Before creating the servers, you should run terraform plan, which will describe the execution plan for creating the instances and resources:
$ terraform plan
Refreshing Terraform state prior to plan...
The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed.
Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.
+ aws_instance.app1
ami: "" => "ami-5189a661"
availability_zone: "" => "<computed>"
ebs_block_device.#: "" => "<computed>"
ephemeral_block_device.#: "" => "<computed>"
instance_type: "" => "t2.micro"
key_name: "" => "key"
placement_group: "" => "<computed>"
private_dns: "" => "<computed>"
private_ip: "" => "<computed>"
public_dns: "" => "<computed>"
public_ip: "" => "<computed>"
root_block_device.#: "" => "<computed>"
security_groups.#: "" => "1"
security_groups.3578963758: "" => "aws_default"
source_dest_check: "" => "1"
subnet_id: "" => "<computed>"
tags.#: "" => "2"
tags.Group: "" => "apps"
tags.Name: "" => "app1"
tenancy: "" => "<computed>"
vpc_security_group_ids.#: "" => "<computed>"
+ aws_instance.app2
ami: "" => "ami-5189a661"
availability_zone: "" => "<computed>"
ebs_block_device.#: "" => "<computed>"
ephemeral_block_device.#: "" => "<computed>"
instance_type: "" => "t2.micro"
key_name: "" => "key"
placement_group: "" => "<computed>"
private_dns: "" => "<computed>"
private_ip: "" => "<computed>"
public_dns: "" => "<computed>"
public_ip: "" => "<computed>"
root_block_device.#: "" => "<computed>"
security_groups.#: "" => "1"
security_groups.3578963758: "" => "aws_default"
source_dest_check: "" => "1"
subnet_id: "" => "<computed>"
tags.#: "" => "2"
tags.Group: "" => "apps"
tags.Name: "" => "app2"
tenancy: "" => "<computed>"
vpc_security_group_ids.#: "" => "<computed>"
+ aws_instance.app3
ami: "" => "ami-5189a661"
availability_zone: "" => "<computed>"
ebs_block_device.#: "" => "<computed>"
ephemeral_block_device.#: "" => "<computed>"
instance_type: "" => "t2.micro"
key_name: "" => "key"
placement_group: "" => "<computed>"
private_dns: "" => "<computed>"
private_ip: "" => "<computed>"
public_dns: "" => "<computed>"
public_ip: "" => "<computed>"
root_block_device.#: "" => "<computed>"
security_groups.#: "" => "1"
security_groups.3578963758: "" => "aws_default"
source_dest_check: "" => "1"
subnet_id: "" => "<computed>"
tags.#: "" => "2"
tags.Group: "" => "apps"
tags.Name: "" => "app3"
tenancy: "" => "<computed>"
vpc_security_group_ids.#: "" => "<computed>"
+ aws_instance.app4
ami: "" => "ami-5189a661"
availability_zone: "" => "<computed>"
ebs_block_device.#: "" => "<computed>"
ephemeral_block_device.#: "" => "<computed>"
instance_type: "" => "t2.micro"
key_name: "" => "key"
placement_group: "" => "<computed>"
private_dns: "" => "<computed>"
private_ip: "" => "<computed>"
public_dns: "" => "<computed>"
public_ip: "" => "<computed>"
root_block_device.#: "" => "<computed>"
security_groups.#: "" => "1"
security_groups.3578963758: "" => "aws_default"
source_dest_check: "" => "1"
subnet_id: "" => "<computed>"
tags.#: "" => "2"
tags.Group: "" => "apps"
tags.Name: "" => "app4"
tenancy: "" => "<computed>"
vpc_security_group_ids.#: "" => "<computed>"
+ aws_instance.consul
ami: "" => "ami-5189a661"
availability_zone: "" => "<computed>"
ebs_block_device.#: "" => "<computed>"
ephemeral_block_device.#: "" => "<computed>"
instance_type: "" => "t2.micro"
key_name: "" => "key"
placement_group: "" => "<computed>"
private_dns: "" => "<computed>"
private_ip: "" => "<computed>"
public_dns: "" => "<computed>"
public_ip: "" => "<computed>"
root_block_device.#: "" => "<computed>"
security_groups.#: "" => "1"
security_groups.3578963758: "" => "aws_default"
source_dest_check: "" => "1"
subnet_id: "" => "<computed>"
tags.#: "" => "2"
tags.Group: "" => "consul"
tags.Name: "" => "consul"
tenancy: "" => "<computed>"
vpc_security_group_ids.#: "" => "<computed>"
+ aws_instance.db1
ami: "" => "ami-5189a661"
availability_zone: "" => "<computed>"
ebs_block_device.#: "" => "<computed>"
ephemeral_block_device.#: "" => "<computed>"
instance_type: "" => "t2.micro"
key_name: "" => "key"
placement_group: "" => "<computed>"
private_dns: "" => "<computed>"
private_ip: "" => "<computed>"
public_dns: "" => "<computed>"
public_ip: "" => "<computed>"
root_block_device.#: "" => "<computed>"
security_groups.#: "" => "1"
security_groups.3578963758: "" => "aws_default"
source_dest_check: "" => "1"
subnet_id: "" => "<computed>"
tags.#: "" => "2"
tags.Group: "" => "dbs"
tags.Name: "" => "db1"
tenancy: "" => "<computed>"
vpc_security_group_ids.#: "" => "<computed>"
+ aws_instance.lb
ami: "" => "ami-5189a661"
availability_zone: "" => "<computed>"
ebs_block_device.#: "" => "<computed>"
ephemeral_block_device.#: "" => "<computed>"
instance_type: "" => "t2.micro"
key_name: "" => "key"
placement_group: "" => "<computed>"
private_dns: "" => "<computed>"
private_ip: "" => "<computed>"
public_dns: "" => "<computed>"
public_ip: "" => "<computed>"
root_block_device.#: "" => "<computed>"
security_groups.#: "" => "1"
security_groups.3578963758: "" => "aws_default"
source_dest_check: "" => "1"
subnet_id: "" => "<computed>"
tags.#: "" => "2"
tags.Group: "" => "lbs"
tags.Name: "" => "lb1"
tenancy: "" => "<computed>"
vpc_security_group_ids.#: "" => "<computed>"
+ aws_key_pair.auth
fingerprint: "" => "<computed>"
key_name: "" => "key"
public_key: "" => "ssh-rsa xxxxxxxxx"
+ aws_security_group.default
description: "" => "Default security group"
egress.#: "" => "1"
egress.482069346.cidr_blocks.#: "" => "1"
egress.482069346.cidr_blocks.0: "" => "0.0.0.0/0"
egress.482069346.from_port: "" => "0"
egress.482069346.protocol: "" => "-1"
egress.482069346.security_groups.#: "" => "0"
egress.482069346.self: "" => "0"
egress.482069346.to_port: "" => "0"
ingress.#: "" => "1"
ingress.1403647648.cidr_blocks.#: "" => "1"
ingress.1403647648.cidr_blocks.0: "" => "0.0.0.0/0"
ingress.1403647648.from_port: "" => "0"
ingress.1403647648.protocol: "" => "tcp"
ingress.1403647648.security_groups.#: "" => "0"
ingress.1403647648.self: "" => "0"
ingress.1403647648.to_port: "" => "65535"
name: "" => "aws_default"
owner_id: "" => "<computed>"
vpc_id: "" => "<computed>"
Plan: 9 to add, 0 to change, 0 to destroy.
Ansible can be used with Terraform through a dynamic inventory script (terraform.py) that connects to systems by reading Terraform's .tfstate file. It currently supports AWS, and you can find the dynamic inventory script here. The script groups the servers according to the tags we specified earlier for each server.
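Like any Ansible dynamic inventory script, terraform.py can also be run by hand to inspect the groups it builds from the .tfstate file; the group names match the aws_tag_Group=... patterns used in the playbook below:
$ ./terraform.py --list
This prints the inventory as JSON, so it's a quick way to confirm that the tags set in Terraform made it through before provisioning anything.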
Before running terraform apply, we should define the Ansible roles that will provision each component on the servers. Each role in the playbook describes how to get its component up and running, and the playbook looks like this:
---
- hosts: all
  sudo: yes
  roles:
    - { role: common, tags: ["common"] }
    - { role: docker, tags: ["docker"] }

- hosts: "aws_tag_Group=consul"
  sudo: yes
  roles:
    - { role: consul, tags: ["consul"] }

- hosts: "aws_tag_Group=dbs"
  sudo: yes
  roles:
    - { role: db-run, tags: ["db-run"] }

- hosts: "aws_tag_Group=apps"
  sudo: yes
  roles:
    - { role: app-run, tags: ["app-run"] }

- hosts: "aws_tag_Group=lbs"
  sudo: yes
  roles:
    - { role: lb-run, tags: ["lb-run"] }
The common and docker roles will be applied to all the servers, and each group of servers gets its own role, i.e. the consul group gets the consul role, and so on.
The consul role will simply run Jeff Lindsay's Consul image (progrium/consul, from the gliderlabs/docker-consul project) and expose the corresponding ports of the Consul server. A single Consul server is enough here, but running a cluster of Consul servers is recommended in a production environment for high availability:
- name: Pull and run Docker consul
  docker:
    name: consul-{{ ansible_hostname }}
    image: progrium/consul
    restart_policy: always
    state: started
    command: -server -bootstrap -ui-dir /ui
    ports:
      - 8500:8500
      - 8400:8400
      - 8600:53/udp
This image provides a web UI that displays the services and nodes registered with the Consul server.
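Besides the UI, the Consul HTTP API gives a quick way to check that the server is up and has elected itself leader (consul-ip being a placeholder for the server's address):
$ curl http://consul-ip:8500/v1/status/leader
This should return the leader's address, matching the one Registrator logged earlier (172.17.0.2:8300).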
The lb-run role will essentially do three main tasks: run Registrator, run the Nginx container, and run consul-template.
The Registrator task will be common between the roles (lb-run, db-run, and app-run):
- name: Pull and run registrator container
  docker:
    name: registrator-{{ ansible_hostname }}
    net: host
    image: gliderlabs/registrator:latest
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock
    command: -ip {{ public.ip }} consul://{{ consul_url }}
Note that the Registrator image uses the consul_url variable to connect to the Consul server, and uses public.ip to tell Consul the public IP of the registered service. The second task creates the configuration directory for Nginx and pulls the Nginx configuration along with the .ctmpl file, which is the template used by consul-template:
- name: Create configuration path
  file:
    path={{ nginx_docker_path }}
    state=directory
    mode=750
    owner=root
    group=root

- name: Download Nginx configuration for the apps
  git:
    repo={{ nginx_config_repo }}
    dest={{ nginx_docker_path }}
    clone=yes
  ignore_errors: yes

- name: Pull and run Nginx Docker container
  docker:
    name: nginx-{{ ansible_hostname }}
    image: nginx:latest
    restart_policy: always
    ports:
      - 80:80
    volumes:
      - "{{ nginx_docker_path }}/nginx.conf:/etc/nginx/nginx.conf"
An important note here: the configuration file inside the Nginx container is mounted from the configuration file on the host, which ensures that consul-template can rewrite the configuration whenever any information about the services changes. The configuration template looks like the following:
worker_processes 1;
pid /run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    server_tokens off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    upstream apps {
        {{range service "ghost"}}
        server {{.Address}}:{{.Port}};{{end}}
    }

    server {
        listen 80 default_server;
        server_name ghost.me.com;

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass http://apps;
        }
    }
}
Note that using this template with the consul-template tool creates an automatic reverse proxy, restarting Nginx every time the upstream is updated. The last task is to run consul-template itself:
- name: Download consul-template
  get_url:
    url=https://releases.hashicorp.com/consul-template/0.12.0/consul-template_0.12.0_linux_amd64.zip
    dest=/opt/consul-template.zip

- name: Unarchive consul template
  unarchive: src=/opt/consul-template.zip dest=/usr/local/bin/ copy=no mode=u+rwx

- name: Run consul template
  shell: creates=/run/consul-template.pid consul-template -pid-file /run/consul-template.pid -consul {{ consul_url }} -retry "5s" -template "{{ nginx_docker_path }}/nginx.ctmpl:{{ nginx_docker_path }}/nginx.conf:docker restart nginx-{{ ansible_hostname }}" &
This uses the nginx.ctmpl file described earlier to update the nginx.conf file, which is in turn mounted inside the Docker container.
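If you want to sanity-check the template before letting it restart Nginx, consul-template can render it once to stdout. A rough sketch, assuming the /opt/nginx-ghost path used on the load balancer and that the -dry and -once flags behave as in the 0.12 release downloaded above (the preview destination path is just an illustration):
$ consul-template -consul consul-ip:8500 -template "/opt/nginx-ghost/nginx.ctmpl:/tmp/nginx.conf.preview" -dry -once
In dry mode nothing is written to the destination file; the rendered upstream block is simply printed so you can inspect it.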
The app-run role will run Registrator as well, along with the Ghost application and consul-template. The Ghost task will look like the following:
- name: Create configuration path
  file:
    path={{ ghost_docker_path }}
    state=directory
    mode=750
    owner=root
    group=root

- name: Download Ghost configuration
  git:
    repo={{ ghost_config_repo }}
    dest={{ ghost_docker_path }}
    clone=yes
  ignore_errors: yes

- name: Build Ghost image
  docker_image:
    name=denmanjd/ghost
    path={{ ghost_docker_path }}/
    state=build

- name: Pull and run Ghost Docker container
  docker:
    name: ghost-{{ ansible_hostname }}
    image: denmanjd/ghost
    restart_policy: always
    volumes:
      - "{{ ghost_docker_path }}/config.js:/var/www/ghost/config.js"
    ports:
      - 2368:2368
    env: CONSUL_URL={{ consul_url }}
This creates a directory to store the configuration for the Ghost application, builds the Docker image for it, and mounts the config.js file inside the container. Finally, the role runs consul-template, which processes the following template:
var path = require('path'),
    config;

config = {
    // ### Production
    // When running Ghost in the wild, use the production environment.
    // Configure your URL and mail settings here
    production: {
        url: 'http://denman.ghost.com',
        mail: {},
        database: {
            client: 'mysql',
            connection: {
                {{range service "mysql"}}
                host: '{{.Address}}',{{end}}
                user: 'ghost_user',
                password: '{{key "mysql/ghost_password"}}',
                database: 'ghost_db',
                charset: 'utf8'
            },
            debug: false
        },
        server: {
            host: '0.0.0.0',
            port: '2368'
        }
    }
};

module.exports = config;
Besides using .Address, it will also fetch the MySQL password from Consul's key/value store, where it is stored by the db-run role.
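You can verify the stored key at any time with a plain HTTP GET against the KV endpoint (consul-ip again being a placeholder for the Consul server's address):
$ curl http://consul-ip:8500/v1/kv/mysql/ghost_password?raw
The ?raw parameter returns the stored value directly instead of the default base64-encoded JSON response.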
The db-run role will run Registrator and then the MySQL task:
- name: Pull and run MySQL Docker container
  docker:
    name: mysql
    image: mysql:5.5
    restart_policy: always
    env: MYSQL_DATABASE=ghost_db,
         MYSQL_USER=ghost_user,
         MYSQL_PASSWORD={{ mysql_ghost_password }},
         MYSQL_ROOT_PASSWORD={{ mysql_root_password }}
    ports:
      - 3306:3306
    volumes:
      - /var/lib/mysql:/var/lib/mysql

- name: Store mysql password into consul
  command: curl -XPUT -d "{{ mysql_ghost_password }}" http://{{ consul_url }}/v1/kv/mysql/ghost_password
The MySQL ghost password and MySQL root password are generated randomly, and the ghost password is stored in Consul's key/value store.
Now it's time to run terraform apply and then run the Ansible playbook against the created servers:
$ terraform apply
$ ansible-playbook -i terraform.py -s -u ubuntu playbook.yml
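Since the MySQL passwords are meant to be random, one option (just an illustration; the roles could equally define them in their own vars) is to pass them in as extra vars at run time:
$ ansible-playbook -i terraform.py -s -u ubuntu playbook.yml -e "mysql_ghost_password=$(openssl rand -hex 16)" -e "mysql_root_password=$(openssl rand -hex 16)"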
You can make sure that everything went OK by checking the Consul web UI at the following URL:
http://consul-ip:8500/
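The HTTP API shows the same information; for example, listing every registered service:
$ curl http://consul-ip:8500/v1/catalog/services
You should see entries for nginx, ghost, and mysql alongside consul itself.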
To make sure that Nginx is aware of the four Ghost application servers, we can check the Nginx configuration, which should have been updated with the servers' IPs:
root@lb1:~# cat /opt/nginx-ghost/nginx.conf
upstream apps {
    server 52.35.119.235:2368;
    server 52.35.31.204:2368;
    server 52.32.104.9:2368;
    server 52.24.29.58:2368;
}
Finally, access the application using the load balancer's IP. We could also configure Terraform to manage the DNS records using the AWS Route 53 record resource.
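As a rough sketch of that last idea (the zone ID variable is an assumption, not part of the configuration above), a Route 53 record pointing at the load balancer could look like:
resource "aws_route53_record" "ghost" {
  zone_id = "${var.route53_zone_id}"
  name    = "ghost.me.com"
  type    = "A"
  ttl     = "300"
  records = ["${aws_instance.lb.public_ip}"]
}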
Here at Levvel, we’re excited about creating unique solutions, leveraging existing toolsets and helping our clients revolutionize the way they develop applications, manage complex infrastructure and drive business needs through the IT organization.
If you’d like to talk about how to use such tools in your team, please reach out to me at james.denman@levvel.io or at hello@levvel.io for more general questions.
Levvel is an IT consulting firm that combines the innovative DNA of a start up with the wisdom, scalability, and process rigor of a Fortune 100 company. We offer both technical implementation services as well as strategic advisory services. Levvel offers you an “unfair advantage” over your competition with comprehensive services including DevOps, Cloud, Mobile, UI/UX, Big Data, Analytics, Payment Strategy, and more.
Authored By
James Denman