Build, Ship, Run with Docker Datacenter
Levvel completed the Docker Authorized Consulting Partner training this week in Chicago. Thanks to Ben and Anoop from Docker for a great technical overview of the new Docker 1.10 and the upcoming 1.11 features. Our team left impressed with Docker Datacenter. For those who have not kept up with the stream of Docker announcements, this product is hitting its stride. With a focus on “build, ship, run,” Docker Datacenter makes life easier for both operations and developers.
Containers in Production
Docker Datacenter is simple and easy to use, and it makes container deployment understandable even for those not familiar with Docker. Before Universal Control Plane (UCP) and Docker Datacenter, setting up a working Docker Swarm required real time and energy (here’s the long 1.9 way). Docker has removed much of that initial pain with a simple installer that makes the process much easier on everyone. Off the shelf, Datacenter comes with security, storage, networking, and host management ready to go. Once your environment is set up, the web app manages a distributed, multi-tenant platform capable of running on cloud providers like AWS or on your own hardware.
Docker Datacenter Component Overview
Universal Control Plane (UCP) – Website for managing Docker Datacenter
Docker Swarm – Native clustering support for Docker to run across a multi-host environment
Docker Trusted Registry (DTR) – Where to build, store, update, and manage your container images
Notary – Securely sign your Docker images for peace of mind that they are the same across any environment
LDAP Integration – Control those who need access
Labels – Tagging for helping organize your Swarm’s resources
User Namespaces – Security, Security, Security
Deployment and hosting strategy – Container placement to maximize your hardware and reduce resource waste
Nautilus (Coming Soon) – Image security scanning for vulnerability inspection
Docker Studio (Coming Soon) – Removes the VirtualBox dependency for getting started with Docker
Running Containers in Production with Docker Datacenter
In this post we want to focus on how Docker Datacenter helps an enterprise run containers in production. There are a lot of moving pieces, so we will start with the website that sits above all the underlying components and then work toward the more granular features of interest.
Setting up Docker Datacenter is easy with the 1.10 installers, and it can be deployed across as many hosts as needed on AWS or in an on-premise environment. Docker is eating its own dog food: Datacenter itself runs as containers that scale horizontally to reduce outages and single points of failure. Once the environment is running, you can start deploying containers (by default out of your Docker Hub account) or out of your Docker Trusted Registry (DTR, see below). It is refreshing to see Docker making more and more strides to bring containers into the mainstream development workflow, and this is no exception: Datacenter is a website and does not require a command line to manage the underlying Swarm.
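As an example of how little setup is now required, installing a UCP controller in 1.10 boils down to roughly a single docker run. This is a sketch: the host address is a placeholder, and the exact flags may differ by version, so check Docker's install documentation for your release:

```shell
# Run the UCP installer image on the host that will become the controller.
# Interactive mode (-i) prompts for admin credentials; <host-ip> is a placeholder.
docker run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp install -i --host-address <host-ip>
```

Compare that with the page of manual TLS, discovery, and daemon configuration the 1.9 approach required.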
Here’s how easy it is to deploy containers with Datacenter:
Just click the light blue ‘Deploy Container’ button
Deploying, starting, stopping, configuring, and quickly checking what’s running and where no longer requires a sysadmin with command line access. This is a website dashboard that helps even those who are new to Docker manage containers across the underlying Swarm deployment…and it works right out of the box.
From an architecture perspective, here is a starting point for understanding Docker Datacenter:
- UCP – running on an odd number of unique hosts (at least 3) that automatically leverage the etcd KV store
- Swarm Nodes – setup across as many unique hosting nodes as necessary for your production traffic loads (including across availability zones)
- Swarm Managers – running on at least 2 unique hosts for redundancy purposes
- Service Discovery – out of the box, Datacenter supports integration with etcd, Consul, and ZooKeeper, and these should be set up in distributed, redundant configurations to reduce outages. (As a carry-forward from the 1.9 Swarm, we are still using Consul)
- Storage volumes – Docker Datacenter supports Flocker, SolidFire, Gluster, and the original device mapper. Docker volumes should be set up to utilize persistent storage backends that are mounted inside containers that need to read, write, and store files.
- Docker Trusted Registry (DTR) – Image repository hosting that can be integrated with persistent storage (like S3). By default it runs on a single host in the Swarm with a clustered version coming soon.
- Notary – Signs container images as they are pushed to your DTR. This runs on a single host somewhere in the Swarm.
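For reference, the pre-UCP way of wiring a standalone Swarm to a Consul discovery backend looked roughly like the sketch below (the IP addresses are placeholders); this is exactly the plumbing UCP now handles for you:

```shell
# Swarm manager pointing at a Consul cluster for node discovery
docker run -d swarm manage consul://<consul-ip>:8500
# On each node: join the Swarm, advertising this host's Docker engine port
docker run -d swarm join --advertise=<node-ip>:2375 consul://<consul-ip>:8500
```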
Swarm Nodes Overview
So how does Datacenter host and run containers?
UCP provides a detailed view of your environment with the Nodes view. Here you can check the status of the hosts powering your containers and which Labels have been applied to which hosts. Here’s how it looks with a 3-node configuration:
Viewing your Docker Datacenter Swarm nodes from UCP
This is a good administration and monitoring view into how your environment is organized and where containers are getting placed (previously this required someone logging in and running docker info just to see the health and metrics across the Swarm). Each node is running the Docker Engine, Swarm Join, and any deployed containers. Previous readers know we are big fans of “the Swarm pattern” for deploying across hosts that can handle workloads in a scalable manner, and having a website to tune the configuration is a nice touch.
Container Deployment and Placement
How can I control placement of a container across my environment to ensure redundancy and fault tolerance?
Docker’s scheduler supports three placement strategies: spread, binpack, and random. Combined with the new Labels support, they become a powerful tool for any business to maximize environment utilization without over-saturation. Being able to dictate where a container is placed is incredibly valuable from a resourcing perspective, and we encourage you to take a look at what Labels can do (https://docs.docker.com/engine/userguide/labels-custom-metadata/). Being able to tag images, containers, and daemons on hosts running across your environment is a great way to control placement for your containers. By understanding your budget and technology needs, you can ensure that the Docker scheduler and Labels place your JVM containers only on those expensive memory-optimized r3 instances and your Hadoop reducers on some of those cheaper compute-optimized c4 instances. These are the kinds of configuration and budgetary problems Docker aims to solve with Datacenter, and our team at Levvel can help your organization set up a Docker Datacenter environment that reduces your operational budget while maintaining performance.
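As a sketch of the idea (the label value and image name here are hypothetical), a node’s engine can be started with a custom label, and the Swarm constraint syntax then pins containers to matching nodes:

```shell
# On an r3 instance: start the engine with a custom label
docker daemon --label instance_type=r3
# At deploy time: only schedule this JVM container on r3-labeled nodes
docker run -d -e constraint:instance_type==r3 my-jvm-app
```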
Docker Trusted Registry
Where do I store my containers and how do we ensure what image is running in an environment?
Under the hood, Datacenter can run Docker Trusted Registry (DTR) for storing your container images in an easy-to-manage repository controlled by dedicated team maintainers. DTR comes with its own website, and here’s what it looks like:
Docker Trusted Registry Overview
Previously we had been using Docker Hub, but as an organization, why would we want to wait and pay for the network bandwidth to download and upload images every time one of us makes a change? DTR brings the image repository hosting solution in-house and comes with its own CI/CD build tools to help developers focus on just pushing changes (like a git commit) to the DTR repository when they are ready to cut a new image version. Once a new tag or version is pushed, it can be signed automatically with Docker Notary. Organizations needing explicit image versioning in production can review which versions are signed and running with Notary enabled.
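A typical push to an in-house DTR might look like the following sketch (the registry hostname, repository, and tag are hypothetical); with content trust enabled, the pushed tag is signed via Notary:

```shell
# Enable content trust so image pushes and pulls are signed/verified with Notary
export DOCKER_CONTENT_TRUST=1
# Tag the local image against the in-house DTR and push it
docker tag myapp:latest dtr.example.com/dev/myapp:1.4
docker push dtr.example.com/dev/myapp:1.4
```

With the same environment variable set, a docker pull will refuse an unsigned tag, which is what protects an environment from running an image QA never validated.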
Imagine a use case where your team wants to optimize a Redis configuration for performance purposes and then hand that configuration inside an image to QA to validate during the next regression. Here’s how Docker Trusted Registry looks with Notary signing images as we iterate on our hypothetical Redis image:
Docker Trusted Registry with Notary Signing images
Being able to sign and track each new version of your Docker image is a huge upgrade from the previous tag or pull-from-latest approach. Ensuring your production environment is only hosting the exact version your QA team validated is just all around cleaner, with Notary silently protecting your shop from man-in-the-middle image attacks. From a DevOps perspective, the integration of DTR within a Docker Datacenter environment is a great workflow tool uniting Development and QA. Once an image is pushed to the DTR, it can be tested, signed, and made ready for deployment. Once it is ready for production, the same image can be deployed right out of UCP into your production environment.
Who can see my images and make changes to my image repository?
Another feature that DTR supports is the logical hierarchy of grouping users within an organization. Here’s a screenshot of setting up a hypothetical Ops organization that handles deployments across a multi-tenant Docker Datacenter environment:
Docker Trusted Registry – Organizations and Members for securing your image repositories
Users can only push to registered repositories under the organizations they have access to use and see. A company could put its Dev, QA, and Ops teams into different organizations where only restricted users could promote an image from the Dev org to the QA/Ops org. We like the ability to restrict access across organizations and control which images can be deployed where. All in all, DTR is a big win for the operational production use cases that most companies need to address when taking Docker containers to production:
- Reduce developer headaches
- Have a well-defined artifact handoff to QA
- QA can track and validate a specific version is ready for promotion
- Ops can use the DTR to deploy the newly promoted ‘gold image’ across the Swarm
Utilizing Virtual Networks for Logical Container Groupings
How can we set up service layers that only have connectivity to their required endpoints?
As we continue going from the macro to the micro features inside Docker Datacenter, we find the networking dashboard to be another great upgrade for visibility and for logically grouping containers into well-defined virtual networks. For those not familiar with how this works, we encourage taking a look at Docker overlay networks. Docker has figured out how to group and isolate containers into your own custom networks. The Docker training challenged how we looked at where our applications used to live, referring to it as a “Pets vs. Cattle” approach. In the traditional development model, Pets were things your team cared about and fixed anytime something broke; Cattle, by contrast, are interchangeable and simply replaced when they fail. In past lives, we all remember fixing that flaky QA environment running on sub-par hardware, which took time away from working on actual features.
Now with Docker Datacenter, the shift is to treat applications as ephemeral containers that can be grouped inside the Swarm and connected to their required endpoints using overlay networking. In Datacenter we can create and delete (only if not in use) overlay networks, ensuring applications are logically organized exactly how we want for cross-host container connectivity across each of our service layers. Here’s how a hypothetical overlay networks configuration could look inside UCP:
Setting up Overlay Networks for your Service Layers within UCP
This kind of fine-grained control through a web interface is a huge improvement over the time we used to spend on the command line making sure everything was right with docker inspect or manually debugging compose files. Once these networks are created, we can deploy applications into them and know each application can only connect to others on its own network (assuming there are no external DNS servers in use). Beyond debugging network configurations, we were really happy to hear 1.10 fixed the race condition where /etc/hosts was getting stomped when starting containers in a compose file that used an overlay network. This fix allows containers to scale up with docker-compose using a more appropriate solution than editing a file across all the containers at the same time and hoping for the best.
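Under the hood these are standard overlay networks, so the same grouping can be sketched from the CLI (the network and image names below are hypothetical):

```shell
# Create per-layer overlay networks that span the Swarm hosts
docker network create -d overlay app_tier
docker network create -d overlay data_tier
# The database is only reachable from the data tier
docker run -d --net=data_tier --name redis redis
# The web app starts in the app tier, then is also attached to the data tier
docker run -d --net=app_tier --name web my-web-app
docker network connect data_tier web
```

Note that docker run takes a single --net; attaching a running container to additional networks is done with docker network connect.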
What is getting stored and where?
Speaking of files, the new volumes support within Datacenter is very helpful for tracking all those volumes across a distributed set of Swarm hosts, as well as for seeing how Datacenter sets up its own volumes. Want to know where to place those new TLS certs for the Datacenter environment? Just search for ‘certs’ in the webapp.
Searching Docker Datacenter Volumes with UCP
From an operations and production side, being able to survey the environment’s volume setup and configuration for logs and other persistent files is a big win when trying to quickly debug an environment. Mounting volumes for a container to read and write is a powerful tool, and supporting that kind of dynamic mounting when docker-compose up -d is run means there needs to be a quick way to debug and track which nodes have those volumes mounted and where. This is a nice feature thrown into Datacenter.
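The same volume plumbing can be sketched from the CLI (the volume and image names are hypothetical): create a named volume, mount it into a container that needs persistence, and list what exists on a host:

```shell
# Create a named volume and mount it into a container that writes logs
docker volume create --name app_logs
docker run -d -v app_logs:/var/log/myapp my-app
# Survey the volumes present on this host
docker volume ls
```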
Taking a step back, we believe Docker Datacenter is a big win for the production and operations use cases that enterprises are trying to tackle on their own. If you do not want to be locked into AWS (or any other cloud provider), then this is an option to try out. Docker has worked to remove the command line as the only option for running containers in production, and we are interested to see if Compose becomes integrated into UCP in the future. Finding that Docker Datacenter has reduced the effort of running a distributed Docker Swarm to just a few clicks within a website was very refreshing. This product has come a long way in a very short amount of time, and that is a testament to Docker’s commitment to helping everyone not only run but also support containers in production. If your organization has an interest in setting up Docker Datacenter or a multi-host Swarm, or is concerned about in-house limitations that prevent containers in production, then please reach out to us at Levvel and we can get you started.
Thanks for reading!
If you want to learn more, here are a few sites we found helpful:
- Docker Datacenter
- Docker Swarm in Production
- Docker Trusted Registry and CI / CD Workflow
- Docker Notary
- Docker 1.10 features
- Running a Distributed Docker Swarm on AWS
Until next time,