January 11, 2016
In my last post about Rails + Vagrant + Docker + TravisCI, we used Docker on the Dev side of DevOps to develop and test quickly in a production-like system, to get rapid feedback from a CI pipeline (TravisCI), and to test both the code and the build artifacts. All of that set the stage for deploying Docker applications to a production environment. If you haven't read that post, a quick skim will familiarize you with some of the solutions underpinning this one.
Today we're going to build on the project we developed in my previous blog post and work with a powerhouse combination: Docker and AWS. AWS provides first-class support for Docker, and its ECS (EC2 Container Service) and ECR (EC2 Container Registry) offerings represent one of the quickest ways to start deploying production Docker containers. Check out Docker's blog post for more detailed background information.
Building on our last blog post, I’d now like to deploy my Rails app, which has been containerized using Docker, into production.
As a DevOps engineer:
I'd like to commit code into version control from my development workstation, have TravisCI build it, and, if it builds correctly, have TravisCI push the production-ready artifact (a Docker image) to DockerHub.
Since we've worked in Dev and Test using Docker Compose, we're used to spinning up a container for the Rails app and a Postgres container for persistence, but we don't really want to do that in production. I've added the below stanza to the database.yml file so that we can later pass ENV vars in production to link the app to the persistence store.
```yaml
production:
  <<: *default
  encoding: utf8
  database: <%= ENV['RDS_DB_NAME'] %>
  username: <%= ENV['RDS_USERNAME'] %>
  password: <%= ENV['RDS_PASSWORD'] %>
  host: <%= ENV['RDS_HOSTNAME'] %>
  port: <%= ENV['RDS_PORT'] %>
```
We’ve got a Docker image (denmanjd/rails_app) waiting in DockerHub and now we need to knock out a few things.
First, we’re going to create an RDS instance – specifically a Postgres DB. Make a note of the DB name, username, password, Endpoint and port (5432) as we’ll need those bits of info later in the process.
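If you prefer the CLI to the console, creating the RDS instance can be scripted along these lines. This is a sketch only: the identifier, DB name, credentials, and instance class are placeholders, and it assumes the AWS CLI is installed and configured with appropriate credentials.

```shell
# Create a small Postgres RDS instance (all names and credentials are placeholders).
aws rds create-db-instance \
  --db-instance-identifier rails-app-db \
  --db-instance-class db.t2.micro \
  --engine postgres \
  --allocated-storage 20 \
  --db-name rails_app_production \
  --master-username rails_user \
  --master-user-password change_me

# Once the instance is "available", look up its Endpoint for RDS_HOSTNAME later:
aws rds describe-db-instances \
  --db-instance-identifier rails-app-db \
  --query 'DBInstances[0].Endpoint.Address'
```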
Once the RDS instance is up and available, we’re going to create the container instance using AWS ECS (EC2 Container Service). The easiest way to get started is to login to the AWS console then navigate to https://console.aws.amazon.com/ecs/home#/firstRun.
You should see:
“Getting Started with Amazon EC2 Container Service (ECS) Select options to configure Get started by running a sample app with EC2 Container Service (ECS), setting up a private image repository with EC2 Container Registry (ECR), or both.”
Because our Docker image is already built and publicly available from DockerHub, we won’t need to use ECR here, so go ahead and uncheck that option.
Now we’re at Step 1 – Creating a Task Definition. An Amazon ECS task definition is a blueprint or recipe for containers. You can modify parameters in the task definition to suit your particular application (for example, to provide more CPU resources or change the port mappings).
You are free to name the Task and Container whatever you want, but note that the Image field is already pre-populated with httpd:2.4. We're going to replace that with our Docker image by typing denmanjd/rails_app.
If you recall from the last blog post (and as verified in our Dockerfile), we ran the Rails server using Unicorn and exposed port 3000. So let's go ahead and add a mapping from host port 3000 to container port 3000 in our port mappings.
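Behind the scenes, the wizard is generating an ECS task definition for us; the container portion ends up looking roughly like the JSON below. The family name and memory value here are illustrative, not what the wizard will necessarily produce.

```json
{
  "family": "rails-task",
  "containerDefinitions": [
    {
      "name": "rails_app",
      "image": "denmanjd/rails_app",
      "memory": 300,
      "essential": true,
      "portMappings": [
        { "hostPort": 3000, "containerPort": 3000, "protocol": "tcp" }
      ]
    }
  ]
}
```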
Additionally, we'll want to click on Advanced Options so that we can add all of our necessary ENV vars.
You’ll want to create key-value ENV vars for each of the following:
```
RDS_DB_NAME   name_of_your_rds_instance
RDS_USERNAME  username
RDS_PASSWORD  password
RDS_HOSTNAME  the_rds_endpoint
RDS_PORT      5432
```
The RDS_HOSTNAME is the Endpoint listed in the RDS instance details.
Additionally, we’ll want to pass in:
```
RAILS_ENV        production
SECRET_KEY_BASE  your_secret_key_base
```
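In task definition terms, these key-value pairs land in the container's environment array; it ends up looking something like the fragment below (all values are placeholders).

```json
"environment": [
  { "name": "RDS_DB_NAME",     "value": "name_of_your_rds_instance" },
  { "name": "RDS_USERNAME",    "value": "username" },
  { "name": "RDS_PASSWORD",    "value": "password" },
  { "name": "RDS_HOSTNAME",    "value": "the_rds_endpoint" },
  { "name": "RDS_PORT",        "value": "5432" },
  { "name": "RAILS_ENV",       "value": "production" },
  { "name": "SECRET_KEY_BASE", "value": "your_secret_key_base" }
]
```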
To generate a SECRET_KEY_BASE value, you can run `bundle exec rake secret` from your dev CLI and copy the output in. I'm sure there's a better way to do this, but for now this works.
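If you don't have the app's bundle handy, `rake secret` in this era of Rails is just `SecureRandom.hex(64)` under the hood, so openssl can generate a value of the same shape (assuming openssl is installed):

```shell
# Generate a 128-character hex string, the same shape as `rake secret` output.
SECRET_KEY_BASE=$(openssl rand -hex 64)
echo "${#SECRET_KEY_BASE}"   # → 128
```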
Steps 2, 3 and 4 are all set to sane defaults, but you might want to give yourself a Keypair so you can access the Container instance via SSH (step 3).
When you’re ready, click through and Launch Service.
You’ll be taken to a screen where a number of tasks are displayed (about 18) and these are the building blocks of the ECS Cluster. Once this is ready you can click View Service and you’ll be taken to your ECS cluster.
I named my cluster “rails” so I will click on Clusters > rails to get a detailed view of the resources on my cluster.
Select your cluster and navigate to ECS Instances which will show you the underlying EC2 instance for your ECS cluster. You can click on your EC2 instance identifier to be taken to the EC2 console where you can locate the IP address and Security Group information.
You’ll want to go ahead and modify your RDS Postgres Instance Security Groups to allow traffic from your newly minted EC2 Instance (that’s running your ECS Cluster). That’s because the Rails app inside the container that we’re about to fire up using Tasks will make a call to create and migrate the database using the values in database.yml. If it can’t connect, then the container will error out and die.
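The same security group change can be scripted. A sketch, assuming the AWS CLI is configured; the group IDs are placeholders (sg-xxxxxxxx for the RDS instance's group, sg-yyyyyyyy for the ECS container instance's group):

```shell
# Allow Postgres traffic (5432) into the RDS security group
# from the ECS container instance's security group.
aws ec2 authorize-security-group-ingress \
  --group-id sg-xxxxxxxx \
  --protocol tcp \
  --port 5432 \
  --source-group sg-yyyyyyyy
```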
Let’s go back to the ECS Cluster that we’ve just created.
I can see that there are no Tasks running yet. We created a Task Definition in the ECS First Run Wizard, so let's select Tasks > Run New Task.
In the dropdown menu, select the Task Definition you created in the wizard. If you aren't sure, click on Advanced Options to double-check that all your ENV vars are there.
Go ahead and click Run Task and your Task will be started (in a pending status), instantiating the Docker image in the cluster.
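These console steps have CLI equivalents if you'd rather script them; a sketch with placeholder names (the "rails" cluster from this walkthrough, and a hypothetical "rails-task" task definition family):

```shell
# Run one instance of the task on the cluster, then check what's running.
aws ecs run-task --cluster rails --task-definition rails-task --count 1
aws ecs list-tasks --cluster rails
```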
Now you can navigate to your_ec2_ipaddress:3000 and your Rails app should be running.
Remember that any debugging that needs to happen can occur inside the EC2 instance by SSHing in. You can run

```shell
docker ps -a
```

to see why a container might have exited early, and use

```shell
docker logs <container_id>
```

to get more detailed stacktrace info.
All in all, we’ve leveraged some powerful technologies including Ubuntu, Vagrant, TravisCI, Ruby on Rails, Chef, Docker, Docker-Compose, AWS ECS and AWS RDS. This is a great stack to work on and we here at Levvel are excited about continuing to create unique solutions, leveraging existing toolsets and helping our clients revolutionize the way they develop applications, manage complex infrastructure and drive business needs through the IT organization.
Levvel is an IT consulting firm that combines the innovative DNA of a start up with the wisdom, scalability, and process rigor of a Fortune 100 company. We offer both technical implementation services as well as strategic advisory services. Levvel offers you an “unfair advantage” over your competition with comprehensive services including DevOps, Cloud, Mobile, UI/UX, Big Data, Analytics, Payment Strategy, and more.