Distributed Spring XD 1.3 in Docker Containers with Travis

November 27, 2015

Introduction

We have been looking forward to getting started with Spring XD 1.3, and today we want to talk about running it in a self-contained environment with Docker. Spring XD is described as a “unified, distributed, and extensible service for data ingestion, real time analytics, batch processing, and data export,” and it is a platform for adding and removing data streams without the headache of building and managing your own services to move and transform data in real time. The first time you use a Source with a Sink to propagate data without downtime is pretty impressive. Spring XD comes out of the box with numerous Source and Sink Streams for moving and modifying data between disjoint systems. If you are interested in microservices, it already supports an HTTP Source Stream as an interface in front of your database. Being able to transform incoming data over HTTP and drop it into RabbitMQ or Redis takes time to get right on your own (even more for a production load-balanced deployment), and these simple-to-manage deployment solutions are a big win for Spring XD.

We highly recommend the Spring XD Reference guide as a starting place for new users to get acquainted with each of the components and how they work together. This platform can do a lot of complex work without interrupting services, and for today’s post we want a consistent deployment and hosting strategy that works for multiple cloud providers and local development.
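To make the stream idea concrete, here is a small illustrative example of the kind of definition the XD shell accepts once the platform is running; the stream name and port are made up for illustration, and the http and log modules ship with Spring XD out of the box:

    xd:> stream create --name http-ingest --definition "http --port=9000 | log" --deploy
    xd:> http post --target http://localhost:9000 --data "hello spring xd"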

My previous post covered how to create a working Docker Swarm that can run locally on a single host or span multiple hosts deployed on AWS EC2 instances. Since this solution will work locally and on AWS, we will set up a distributed Spring XD 1.3 environment running inside a Docker Swarm. But before we get into application use cases or customized deployments using the new Spring XD features, we wanted to first set up a container artifact pipeline that enables us to focus on development instead of testing. We want to get new builds of the underlying Spring XD 1.3 Docker Containers out faster.

Let’s get started!

Continuous Integration And Continuous Delivery With Docker Containers

There are numerous tools for building out a DevOps CI/CD pipeline with artifact orchestration. To help narrow the decision, we wanted a tool that natively supports building, testing, deploying, and pushing a Docker Container with minimal hassle. We settled on Travis CI because of its comprehensive build customization and artifact orchestration capabilities, and personally we just really like seeing the build passing badge right inside of a GitHub repository.

I mean look at this thing:

Travis Build Status Badge

DevOps With Docker Containers And Travis

I am using the Travis free tier, which allows me to sign in with my GitHub account; Travis can then sync my available public repositories for handling git push webhook notifications. This allows Travis to perform actions when the underlying source code changes. Travis does this by running the build steps outlined in a repository’s hidden .travis.yml file. Travis supports customizing your build lifecycle events as needed, but for this post we will only be using the before_install and after_success events. Travis also has a well-documented Developer API for verifying that your builds and event handlers are working.

Building a Travis Container Artifact Pipeline

Before going into the technical details, here is a diagram for a DevOps workflow using Docker Containers:

Developer Workflow for Docker Containers using Travis CI

Here’s how this workflow can take a simple git push and extend it to build, test, and push a Docker Container image as an artifact for streamlining deployments to QA and beyond.

  1. Sign in to Travis with your GitHub user
  2. Sync Travis to your Public GitHub Repositories (Private repos using https://travis-ci.com/)
  3. Enable a Repository for Travis Builds.

Click the X icon to toggle between enable and disable. A green check indicates the repository is enabled for Travis automated builds.

  4. Add Travis Build Environment Variables

    For Travis to auto-push containers into Docker Hub, please open the project settings. From the settings page add these Environment Variables with your corresponding values:

DOCKER_EMAIL     Your Docker Hub Email
DOCKER_USERNAME  Your Docker Hub User Name
DOCKER_PASSWORD  Your Docker Hub Password

Travis will securely store these Environment Variables on your build project. To protect your Docker Hub account, please confirm that none of them have “Display value in build log” enabled. No one wants a Travis build project displaying credentials in its logs.
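For reference, this is roughly how the .travis.yml shown later consumes those variables; the -e email flag matches the Docker client versions Travis offered at the time (newer clients drop it):

    docker login -e="$DOCKER_EMAIL" -u="$DOCKER_USERNAME" -p="$DOCKER_PASSWORD"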

  5. Create the Dockerfile in your GitHub Repository

Please note, how to onboard an application into a Docker Container is outside the scope of today’s post; we will be discussing how to docker-ize an application in an upcoming post. (If you are curious, Digital Ocean has a good guide on Docker-izing Python Web Applications.)

The simplest Spring XD Dockerfile is the xd-shell component which has just one interesting line:

FROM jayjohnson/springxd-base
MAINTAINER Jay Johnson jay.p.h.johnson@gmail.com
CMD ["shell/bin/xd-shell"]

For a more complex example, take a look at the Dockerfile in the Spring XD Base repository.

  6. Create the .travis.yml file in your Repository

Testing out Travis Docker builds for the first time can be annoying when you have to wait on the entire build before Travis tries to push to Docker Hub. To make this a little easier the first time through, we found that putting the docker login attempt before the docker build is a quick way to confirm the Travis build project settings are linked to Docker Hub without waiting on a whole container to build.

Here’s a sample .travis.yml file for testing the Spring XD Base Container:
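The repository’s actual file is not reproduced here, so the following is a minimal sketch of what a Docker-enabled .travis.yml can look like on the 2015-era Travis infrastructure, using only the before_install and after_success events mentioned above (the image name is a placeholder):

    sudo: required

    services:
      - docker

    before_install:
      # Confirm the Docker Hub credentials stored in the Travis project
      # settings work before waiting on a full image build
      - docker login -e="$DOCKER_EMAIL" -u="$DOCKER_USERNAME" -p="$DOCKER_PASSWORD"
      # Build the container image from the repository's Dockerfile
      - docker build -t your-user/springxd-base .

    after_success:
      # Push the freshly built image to Docker Hub as the build artifact
      - docker push your-user/springxd-base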

  7. (Optional) Add the Travis Build Status Badge to the README.md

If you want a build status badge for your Travis build project displayed inside the GitHub repository just add the equivalent line from your build project to your GitHub repository’s README.md:

![Travis](https://travis-ci.org/YOUR_USER/BUILD_PROJECT.svg)

  8. Push the changes to the GitHub Repository

Performing a git push will cause GitHub to send a webhook notification to Travis to initiate a new project build. Travis will perform the tests laid out in your .travis.yml file and push the new Docker Container artifact into Docker Hub automatically. Travis also allows you to inspect the logs from the build project’s page: https://travis-ci.org/YOUR_USER/BUILD_PROJECT
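In practice that hand-off is nothing more than a normal commit and push, for example:

    git add Dockerfile .travis.yml
    git commit -m "Build and publish the Spring XD base container"
    git push origin master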

DevOps Improvement Considerations

The DevOps workflow above is focused on reducing the time we spend getting a new build out. We want to push a commit to GitHub and let the tools do the work so we do not have to spend time debugging a build and manually deploying an artifact to each environment. Since I am the only developer, this workflow is probably not a perfect solution for all use cases. Like most things in software, there are ways to do it well and many ways to make it worse. We prefer a DevOps workflow that allows developers to focus on developing instead of handling builds coordinated across multiple teams with customized deployments and tracking which artifact version is running where.

Building an artifact pipeline using tools like GitHub, Travis, and Docker Hub reduces the time it takes to get features in front of your customers. While services like these make it easy to get new builds out faster, it is always helpful to internally assess your team’s comfort level with this kind of solution. In prior engagements where we implemented a new DevOps strategy for an organization, a good starting point was to consider: how the internal development process currently works, how developers get a build into QA’s hands, how QA hands off to production, where the bottlenecks are, how the build works, what the existing artifact propagation process looks like, what the application needs in each environment, what the security implications are, what the production deployment and rollback solution is, and what the organization can already support today without extensive outside training and mentoring.

Additionally, here are some points to consider when trying to extend the DevOps workflow above for your organization:

  1. Support for git-flow Development Models

The current repositories use Travis to build any time there is a git push. The after_success section can be extended to customize the action based on a tag, pull request, or branch. This kind of build detection can decrease developer time spent waiting on builds, as well as allow specialized builds for internal environments and tagging a release build with a name.
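As a sketch, the push step could key off the branch and pull request variables that Travis exposes to every build, so only master builds publish an image (the image name is again a placeholder):

    after_success:
      # Only publish images for builds of master, and never for pull requests
      - if [ "$TRAVIS_BRANCH" = "master" ] && [ "$TRAVIS_PULL_REQUEST" = "false" ]; then docker push your-user/springxd-base; fi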

  2. Running Integration and System Tests

Testing beyond simple unit tests usually requires backing services for system and integration tests. Luckily, Travis supports starting multiple databases, message queues, and other tools in the build environment, which you can read more about in the Travis documentation.
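For example, declaring backing services in .travis.yml lets Travis start them before the test steps run; RabbitMQ and Redis are shown here because the Spring XD environment in this post depends on both:

    services:
      - docker
      - rabbitmq
      - redis-server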

  3. Adding Docker Container-specific Tests

If your goal is to run Docker Containers in production, then it would make sense to add Container tests for security, SSL, intrusion detection, destroying SSH keys, open ports, and more. We are interested in trying out the new CoreOS Clair for assessing container vulnerabilities, but for now Travis is using ruby for Container-specific tests (I may change that in the future).
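A container-level check can be as small as a shell script invoked from the build. The sketch below is purely illustrative (hypothetical image name and checks) and is not the ruby test suite the repositories actually use:

    #!/bin/sh
    # container_checks.sh - illustrative image hygiene checks
    IMAGE="your-user/springxd-base"

    # Fail the build if the image still contains SSH host keys
    if docker run --rm "$IMAGE" sh -c 'ls /etc/ssh/ssh_host_* >/dev/null 2>&1'; then
        echo "FAIL: image contains SSH host keys"
        exit 1
    fi

    # Print the ports the image exposes so unexpected ones are easy to spot
    docker inspect --format '{{ .Config.ExposedPorts }}' "$IMAGE"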

Deploying Docker Container Artifacts

Now that the Docker Container artifacts are stored in Docker Hub, we can deploy them using the native Docker tools and commands (docker run, docker pull, or docker-compose).
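For example, pulling the base image named earlier and poking around inside it takes only two commands (opening bash here is just for a quick inspection and assumes the image is Linux-based):

    # Pull the latest image that Travis pushed to Docker Hub
    docker pull jayjohnson/springxd-base

    # Start a throwaway container and open a shell inside it for inspection
    docker run --rm -it jayjohnson/springxd-base bash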

In my previous post, we discussed how to deploy a RabbitMQ Cluster running inside Docker Containers across a multi-host Docker Swarm on AWS. Now that the artifacts for this deployment are stored and updated in Docker Hub, the development-to-production deployment model looks something like this:

Developer Workflow for Docker Containers and Deploying with Docker Swarm

Deploying a Distributed Spring XD 1.3 Environment Using Docker Compose

With the Travis Docker CI/CD process in mind, we have created the following repositories as Docker Containers for each Spring XD 1.3 component, with automated Travis CI builds that publish to Docker Hub. The goal of this post is to prepare a distributed Spring XD environment for application testing, including running RabbitMQ HA Message Simulations, all hosted on an AWS Docker Swarm.

Spring XD 1.3 Component Docker Containers

Here are the Docker Container repositories for each Spring XD 1.3 component, their Travis build status, and their respective Docker Hub locations.

Using Docker Compose to Deploy Spring XD 1.3 With a RabbitMQ Cluster

With the Spring XD 1.3 artifacts ready in Docker Hub, we can build out an environment file and a docker-compose.yml file, and then start the environment.

  1. Create a compose.env file (an illustrative sketch of both files appears after this list)

  2. Create a docker-compose.yml file (see the same sketch below)

  3. Start up the Distributed Spring XD 1.3 Environment running across Docker Containers with the command:

    docker-compose up -d

    For Docker Swarm, the command to deploy using overlay networking is:

    docker-compose --x-networking --x-network-driver overlay up -d

  4. Confirm the Environment is running (this is from my local deployment)

  5. Connect to the Spring XD Admin using the XD Shell

  6. Confirm the Docker Container hosting the Spring XD Container instance has the same IP reported by the XD Shell

spring-xd-1.3.0.RELEASE$ docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container1 
172.17.0.9
spring-xd-1.3.0.RELEASE$
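The compose.env and docker-compose.yml contents referenced in steps 1 and 2 above are not reproduced in this post, so here is a heavily trimmed, illustrative sketch of the shape they can take. The image names for the XD components, the variable names, and the ports are assumptions for illustration only, not the exact files from the repositories; a distributed Spring XD setup needs ZooKeeper, Redis, and the RabbitMQ transport alongside the xd-admin and xd-container processes.

    # compose.env - shared settings for the Spring XD containers (illustrative)
    RABBITMQ_HOST=rabbit1
    RABBITMQ_PORT=5672
    REDIS_HOST=redis
    REDIS_PORT=6379
    ZOOKEEPER_HOST=zookeeper
    ZOOKEEPER_PORT=2181

    # docker-compose.yml - illustrative sketch, not the repository's exact file
    redis:
      image: redis
    zookeeper:
      image: jplock/zookeeper
    rabbit1:
      image: rabbitmq:3-management
    xdadmin:
      image: your-user/springxd-admin
      env_file: compose.env
      ports:
        - "9393:9393"
      links:
        - redis
        - zookeeper
        - rabbit1
    xdcontainer:
      image: your-user/springxd-container
      env_file: compose.env
      links:
        - redis
        - zookeeper
        - rabbit1

With files along these lines in place, the docker-compose commands from step 3 bring the whole environment up, and the XD shell can then point at the admin server on port 9393.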

Your distributed Spring XD 1.3 environment using Docker is now ready for local use. But what if you want to take this local deployment and run it in production?

Building a Distributed Spring XD 1.3 Environment Running in a Docker Swarm

In the next post, we will be using the following diagram as a reference topology for setting up a distributed Spring XD 1.3 environment in Docker Swarm on AWS. We want to do this so we can start running RabbitMQ High Availability tests with the Message Simulator while test driving the new Spring XD features.

Here is how we plan on setting up the distributed Spring XD 1.3 environment in a Docker Swarm:

Distributed Spring XD Environment on Docker Swarm

Please note, this is not a production deployment that will work for everyone, but since the Spring XD 1.3 components can be scaled out horizontally, it is one way to distribute the components and increase fault tolerance.

Well, that is all for now! In this post we covered how developers can hand off an artifact as a container and how Docker natively supports deploying that same artifact across different environments. Whether we are hosting a distributed Spring XD 1.3 platform or something more customized, there are always ways to reduce the time spent on everything besides developing new features.

Thanks for reading, and we hope you found this post valuable. There is a lot to discuss when trying to figure out a DevOps strategy for Docker Containers. If your organization’s goal is to reduce the time it takes to get new builds in front of your customers, then it makes sense to let your developers focus on developing new features for your business and let the tools do the rest. We are excited to hear your feedback on how this DevOps approach can help get new features in front of your customers faster. If your organization would like assistance determining your DevOps, Docker, or Spring XD strategy, please reach out to us at Levvel and we can get you started.

Until next time,

- Jay
