April 11, 2018
By now, most of us are no strangers to the CI/CD methodology and how it enables teams to continuously test and release new code to customers in “real time” without any disruption of services.
Living in the age of Platform as a Service (PaaS) infrastructures, such as Pivotal Cloud Foundry (PCF), has enabled software organizations to adopt a more automated, streamlined and even stateless approach to CI/CD. In this blog, we’ll take a look at what CI/CD looks like within the PCF ecosystem.
PCF offers developers and operators a consistent approach to deploying applications in a dependable and repeatable manner via its marketplace services, buildpacks, and underlying infrastructure. This simplifies the CD process, but what about CI?
ConcourseCI is the preferred CI engine for PCF. It uses a declarative, API-based model to set up and manage pipelines in a stateless manner, enabling developers and operators to focus on what matters most: successful deployments. Pipelines are made up of jobs that consist of one or more tasks. Jobs and tasks run as containers within the ConcourseCI infrastructure. Each task's results and artifacts are passed as conditions or triggers to the next job or task in the pipeline, and this continues until the final artifact is deployed.
Pipelines are much like they sound: a sequence of jobs that can be triggered when a developer commits code to a specified repository. Pipelines can be built to compile code, run smoke tests, and deploy an artifact to staging and eventually to production.
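For example, a later job can be gated on an earlier one with a `passed:` constraint, so that only artifact versions that made it through staging ever reach production. A minimal sketch (the job and resource names below are illustrative, not from the sample project):

```yaml
jobs:
- name: deploy-to-staging
  plan:
  - get: app-source            # illustrative resource name
    trigger: true
- name: deploy-to-production
  plan:
  - get: app-source
    trigger: true
    passed: [deploy-to-staging]   # only versions that passed staging reach prod
```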
ConcourseCI treats container images, code repositories, and storage (on-premises or cloud-provided) for what they are: resources. These resources are defined in the pipeline's YAML file. The pipeline's tasks use these resources to compile code and create intermediate artifacts without leaving a local fingerprint. Other CI engines, such as Jenkins, store and pass artifacts locally between tasks by default unless configured otherwise. ConcourseCI's model lets pipelines consume resources, such as storage and repositories, by mounting them into a container for the duration of the tasks. Once all tasks are completed, ConcourseCI destroys these containers, creating a stateless process that promotes better Disaster Recovery (DR) and business continuity.
Let us start by creating a simple blue-green deployment using ConcourseCI and PCF. For context, let us dig a little deeper into the concept of pipelines, jobs, tasks, and resources and review some of the common tools used by developers.
Fly CLI is the command line interface developers use to create pipelines, tasks, and jobs in ConcourseCI. The steps to install Fly CLI are provided through the ConcourseCI dashboard.
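If you prefer the command line to the dashboard's download links, Concourse also serves the fly binary over its API. A sketch for a Linux workstation, assuming the instance URL used later in this post:

```shell
# Download fly from the Concourse instance's /api/v1/cli endpoint
# (adjust arch/platform for your workstation).
curl -Lo fly "http://192.168.100.4:8080/api/v1/cli?arch=amd64&platform=linux"
chmod +x fly
sudo mv fly /usr/local/bin/fly
fly --version
```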
A Hello World application
A simple Java / Spring Boot Hello World application—create a fork and clone the repository.
– Navigate to the GitHub repository for the Hello World application, then fork it by clicking the Fork button in the top-right corner of the page
```shell
~$ mkdir ~/pcfdev
~$ cd ~/pcfdev
~$ git clone git@github.com:<...>/spring-hello-world.git
```
pipeline.yml and credentials.yml
– The pipeline.yml file is used to define the ConcourseCI pipeline's resources, jobs, and tasks
– The credentials.yml file is used by the Fly CLI to connect to the PCFDev instance
ConcourseCI instance
VirtualBox and Vagrant are used to run a ConcourseCI instance; follow the links below to install them.
– VirtualBox
– Vagrant
– ConcourseCI (select the New tab, under “How to use this box with Vagrant”)
PCFDev is used to locally emulate a production PCF environment. Follow the link below to install PCFDev.
Resources are Docker images, source code repositories, or storage blobs used by the ConcourseCI pipeline. The pipeline defines all of its resources in a YAML file.
Locate the pipeline.yml file in the forked Git repository; it is in the spring-hello-world/ci/ directory. Replace the <replace your id here> placeholder with your GitHub ID so that the resource points to your forked copy of the Hello World application's repository.
```yaml
resources:
- name: spring-hello-world
  type: git
  source:
    branch: master
    uri: https://github.com/<replace your id here>/spring-hello-world.git
```
As previously described, pipelines are made up of jobs and tasks. We will set up the pipeline so it is triggered when developers commit code changes to the Hello World application.
Locate the following lines in the pipeline.yml file of your cloned repository. This section refers to the “spring-hello-world” resource we defined earlier, and the “trigger: true” stanza ensures that this job runs whenever developers commit code changes to the Git repository. You can read more about the other options used here.
```yaml
jobs:
- name: maven-build-and-deploy
  serial: true
  public: true
  plan:
  - get: spring-hello-world
    trigger: true
```
A ConcourseCI job can be made up of one or more tasks. Tasks run inside containers, executing scripts or suites of tests. Once a task completes, it is assigned a pass or fail status, which can be used to alter the flow of the pipeline.
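As a sketch of how a task's status can redirect the flow, Concourse provides `on_success` and `on_failure` hooks on steps. The hook names are Concourse's own; the task names and files below are hypothetical:

```yaml
- task: run-unit-tests
  file: ci/tasks/unit-tests.yml     # hypothetical task definition
  on_failure:
    put: slack-alert                # e.g. notify the team when tests fail
  on_success:
    task: package-artifact          # continue only when the tests pass
    file: ci/tasks/package.yml
```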
Locate the following lines in the pipeline.yml file of your cloned repository. The task outlined below is used to build the Hello World application. You can read more about the other options used here.
```yaml
- task: maven-build
  config:
    platform: linux
    image_resource:
      type: docker-image
      source:
        repository: ubuntu
        tag: "xenial"
    inputs:
    - name: spring-hello-world
    outputs:
    - name: builds
    run:
      path: spring-hello-world/ci/tasks/maven-build
```
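The `run: path:` above points at a script inside the repository; the `inputs` and `outputs` are mounted as directories in the task container's working directory. We have not reproduced the project's actual script here, but a Maven build task script of this shape typically looks something like the following sketch:

```shell
#!/bin/bash
# Sketch of a Concourse task script (the real one lives at
# spring-hello-world/ci/tasks/maven-build in the sample repository).
set -e -u

# The bare xenial image has no build tools, so a real script would
# install them (or use a Maven image instead of ubuntu).
apt-get update && apt-get install -y maven openjdk-8-jdk

cd spring-hello-world
mvn -B package

# Anything copied into the "builds" output is available to later steps.
cp target/*.jar ../builds/
```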
We will create two instances of the Hello World Application in PCFDev, a blue instance and a green instance. Let us get started with the blue instance.
Open a command prompt / terminal, and from the “spring-hello-world” directory execute the following command (alter the paths and commands if you are using Windows).
```shell
~$ cd ~/pcfdev/spring-hello-world
~$ cf push hellospring-blue --hostname hellospring-prod
```
Open a command prompt / terminal and execute the following commands (alter the paths and commands if you are using Windows).
```shell
~$ cd ~/pcfdev/spring-hello-world
~$ fly -t hellospring login --concourse-url http://192.168.100.4:8080
~$ fly -t hellospring set-pipeline -c pipeline.yml -p blue-green -l credentials.yml
~$ fly -t hellospring unpause-pipeline -p blue-green
```
Start a browser and navigate to the ConcourseCI dashboard by using the URL defined above. It is usually http://192.168.100.4:8080. Once logged in, you can navigate the dashboard to view the pipeline that was created above.
Now let us create the green instance of the Hello World Application by triggering the blue-green pipeline.
```shell
~$ cd ~/pcfdev/spring-hello-world/hellospring/application/src/main/resources/static/
~$ vi index.html
```
```html
<!-- 1.) Edit section from Blue -->
<h1>Spring - Blue to Prod!</h1>
<!-- 2.) To Green -->
<h1>Spring - Green to Prod!</h1>
<!-- 3.) Close and save index.html -->
```
```shell
~$ git add index.html
~$ git commit -m "changing syntax from blue to green"
~$ git push origin master
```
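The push triggers the pipeline, which builds and deploys the green instance. Under the hood, a blue-green promotion in Cloud Foundry boils down to remapping the production route from the old instance to the new one. A sketch of the cf commands a deploy task might run (the app names match this post's setup; `local.pcfdev.io` is PCFDev's default domain, and your pipeline's deploy script may differ):

```shell
# Push the green instance on its own temporary route,
# then move the production route over and detach it from blue.
cf push hellospring-green --hostname hellospring-green
cf map-route hellospring-green local.pcfdev.io --hostname hellospring-prod
cf unmap-route hellospring-blue local.pcfdev.io --hostname hellospring-prod
```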
As applications and infrastructure evolve, CI/CD pipelines need to evolve as well to keep up with the pace at which organizations strive to deliver new features to their customers. A stateless, declarative model accomplishes this by removing the need for third-party plug-ins, CI engine backups, and operator overhead. Simply “Keep Calm and Deploy to Prod.”