Of Peas and Pods: Kubernetes and Microservices


May 4, 2018


Introduction

The confluence of complementary technologies that evolved separately is not uncommon. We all carry one textbook example of this in our pockets: the digitization of cameras combined with the miniaturization of phones. And in the not-so-distant future, we will be driven around by the by-product of machine learning, GPS, digitized maps, and improved sensors coming together: the smart car.

While microservices and containerization don’t qualify as once-in-a-generation innovations like autonomous vehicles, there is an interesting complementarity between the two technologies: they almost seem made for each other. I wouldn’t call them two peas in a pod, but microservices certainly fit very nicely into Kubernetes Pods. Together they solve problems that neither could solve alone.

Microservices

Microservices are loosely coupled, fine-grained, lightweight, and independently deployable components of an application. They can be considered a refinement of Service-Oriented Architecture (SOA).

SOA began with the same promise of simplification through componentization, but soon degenerated into complex interface specifications built on SOAP and XML, and into integration through heavy-duty middleware such as Enterprise Service Buses (ESBs). Real-life SOA deployments turned out to be no less complex than the monolithic applications they replaced. Microservice architecture, by contrast, focuses on building modular, completely self-contained components that each provide a specific business service and communicate through lightweight interfaces.

Microservices provide flexibility of deployment, agility to roll out changes quickly, scalability through loose coupling, and availability through redundancy. A natural corollary of microservice architecture, however, is a large number of components that must now be managed. An application that once consisted of a few components and a few services can now be made up of dozens of microservices. For example, a cloud platform management application at a large IT service provider consists of more than 100 microservices. And, of course, we can’t talk about microservices without mentioning Netflix, which reportedly runs “hundreds of microservices to support our global members.”

Kubernetes

As compute resource management has evolved from bare metal to virtualization to containerization, the number of resource units that require management has also increased. When virtualization went from infancy to maturity, a number of tools became available that provided management of, and visibility into, virtual machines. In the same way, as containerization technology such as Docker has become more popular, it has created the need for tools to deploy, scale, and manage containers. Kubernetes, a “portable, extensible open-source platform for managing containerized workloads,” is one such tool. It reduces or even eliminates the manual effort needed to create, deploy, scale, and dispose of containers.

Microservices Within Pods

Microservices are ideal candidates for containerization. They are designed as self-contained, deployable units that are loosely coupled and, as a result, have lifecycles independent of other components. Containers are perfect hosts for these microservices, since they can be instantiated in large numbers without the resource overhead associated with virtual machines. Kubernetes provides tools to manage the lifecycle of these containers, and therefore the lifecycle of the microservices that run within them. Kubernetes can also be used to configure the deployment, availability, and security of these microservices via the corresponding operations on their containers.

Kubernetes abstracts compute, network, and storage resources into units called Pods. Kubernetes is responsible for managing these Pods, which can be thought of as wrappers around either a single container or multiple containers that work together. Correspondingly, Kubernetes can indirectly manage a single instance of a containerized microservice, or multiple microservices that collaborate in a well-defined manner. For the remainder of this document, any reference to microservices running within Pods implies that the microservices are containerized. Figure 1 illustrates microservices running within Pods that are themselves placed within Nodes (which represent the underlying physical or virtual machine resources).
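To make this concrete, below is a minimal sketch of a Pod wrapping a single containerized microservice. The names used here (orders, example.com/orders-service, port 8080) are hypothetical placeholders, not part of any real deployment.

```yaml
# Minimal sketch: a Pod wrapping one containerized microservice.
# All names and the image are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: orders
  labels:
    app: orders               # label used later by Services to select this Pod
spec:
  containers:
    - name: orders
      image: example.com/orders-service:1.0
      ports:
        - containerPort: 8080 # the port the microservice listens on
```

In practice, Pods are rarely created directly like this; they are usually managed by higher-level controllers such as Deployments, as shown in the sections that follow.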

Figure 1: Microservices inside Kubernetes Pods

We will examine how Kubernetes can be used to manage three aspects of microservice architecture: deployment, availability, and security.

Deployment

Kubernetes and microservices complement each other in multiple ways when it comes to deployment. Consider the need to perform rolling updates of an application. Conventional monolithic applications generally cannot be started and shut down quickly without affecting a number of dependencies and potentially leaving resources in an indeterminate state. Well-designed microservices, on the other hand, are loosely coupled and establish connections with each other through a service discovery model. This gives the platform the flexibility to start and stop individual microservices with very little impact on the components that depend on them.

Kubernetes excels at creating and destroying Pods dynamically as needed. This capability can be leveraged to perform rolling updates of microservices with zero downtime. Kubernetes can be configured to ensure that a minimum number of instances of a microservice are available and that traffic is always routed to available instances during the update.
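As an illustration, here is a sketch of a Deployment that keeps three replicas of a hypothetical orders microservice running and rolls out updates one Pod at a time (the service name and image are placeholders):

```yaml
# Sketch of a Deployment performing a zero-downtime rolling update.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
spec:
  replicas: 3                 # desired number of Pod instances
  selector:
    matchLabels:
      app: orders
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1       # at most one Pod down at any point in the rollout
      maxSurge: 1             # at most one extra Pod created ahead of the rollout
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: example.com/orders-service:1.1  # changing this tag triggers the rollout
```

Applying a new image tag (for example, with kubectl apply or kubectl set image) triggers the rolling update, and traffic is routed only to Pods that report themselves ready.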

Another area where microservices and Kubernetes complement each other is resource allocation. Microservice architecture increases the number of components that need to communicate with each other over the network. Hence, the deployment model (or physical architecture) must be taken into account when placing microservices, to reduce network chatter and latency.

Kubernetes provides the ability to assign Pods to Nodes. By assigning Pods containing microservices to specific Nodes, we can optimize for resource utilization, access and proximity to storage, and network hops between communicating endpoints. Consider, for example, a microservice (or any other application component) A that communicates frequently with a microservice B. Pods containing A and B can be placed on the same Node (or on Nodes located within the same physical rack) to reduce the number of network hops required for them to communicate. Similarly, microservices can be deployed taking into account their memory and CPU usage.
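A minimal sketch of such placement-aware deployment is shown below, assuming the cluster’s Nodes have been labeled by rack beforehand; the rack label, service name, and image are hypothetical.

```yaml
# Pin a Pod to Nodes in a particular rack and declare its resource needs.
# Assumes Nodes were labeled beforehand, e.g.:
#   kubectl label node <node-name> rack=rack-1
apiVersion: v1
kind: Pod
metadata:
  name: service-a
spec:
  nodeSelector:
    rack: rack-1              # schedule only onto Nodes carrying this label
  containers:
    - name: service-a
      image: example.com/service-a:1.0
      resources:
        requests:
          cpu: 250m           # the scheduler reserves a quarter of a core
          memory: 256Mi
        limits:
          cpu: 500m           # hard ceiling enforced at runtime
          memory: 512Mi
```

The resource requests let the scheduler place Pods onto Nodes according to their actual memory and CPU needs, as described above.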


Figure 2: Placement-aware deployment of microservices

Figure 2 illustrates this concept with two racks in a datacenter. One rack is dedicated to running application components, while the second runs both application components and databases. Microservice A, which does not persist data, is placed on a Node physically located in the first rack, whereas microservice B, which requires access to a database, is placed in the second rack to reduce network latency when reading from or writing to the database.

Resilience should also be taken into consideration when determining the deployment topology of microservices or application components. Physically colocating services may not always be feasible or desirable, such as when dealing with single-instance resources like a database. If a microservice is deployed in an active-passive model, the active instance can be colocated with a resource it depends on while the passive instance is not. Similarly, if the deployment model is active-active, a higher percentage of traffic can be routed to the instance that is colocated with the dependent resource.

Availability

Kubernetes provides high availability by ensuring that a specified number of Pods is running at any given time. If a Pod terminates unexpectedly, Kubernetes creates a replacement; if too many are running, it destroys the excess. Traditional monolithic applications are not designed to take advantage of this capability because they cannot always be scaled out by simply starting another instance. Microservice architecture facilitates scaling out through self-contained, granular services with well-defined interfaces.

Complementing this ability to scale out are two other Kubernetes features: service abstraction and load balancing. Kubernetes defines an abstraction called a Service, which represents a logical set of Pods. Services shield the clients of microservices from any awareness of the number or location of the Pods that run them. Working alongside Services is another abstraction that facilitates load balancing, called Ingress. An Ingress is a collection of network routing rules that ensure traffic from clients is delivered to the correct Pods. As Pods are scaled out or back in, either manually (using the Kubernetes command line interface) or automatically (using scaling policies), Services keep track of the number of available instances and provide a single endpoint that is visible to clients.
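The sketch below shows a Service exposing the hypothetical orders Pods and an Ingress routing external HTTP traffic to it. The host name is a placeholder, and the Ingress API group shown is the beta group current at the time of writing.

```yaml
# A Service provides a single, stable endpoint for a set of Pods...
apiVersion: v1
kind: Service
metadata:
  name: orders
spec:
  selector:
    app: orders               # matches the Pods regardless of how many replicas exist
  ports:
    - port: 80                # port the Service exposes
      targetPort: 8080        # port the microservice listens on inside the Pod
---
# ...and an Ingress routes external traffic to that Service.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: orders
spec:
  rules:
    - host: orders.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: orders
              servicePort: 80
```

Scaling the underlying Deployment, for example with kubectl scale deployment orders --replicas=5, changes the number of Pods behind the Service without clients ever noticing.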

Security

Comprehensive application security requires a multi-dimensional approach, and Kubernetes contributes to certain aspects of it, such as container security and network access. Policies can be defined on Pods that limit the privileges available to processes running within a container, which limits the damage that can be inflicted through security vulnerabilities in the microservice. If a vulnerable microservice runs inside a privileged container, a hacker who gains control of the microservice may be able to perform system-level operations and extend control to other parts of the network.
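As a sketch, the Pod below runs its container with a restrictive security context so that a compromised microservice has little room to escalate; the names and image are hypothetical.

```yaml
# Restrictive security context for a containerized microservice.
apiVersion: v1
kind: Pod
metadata:
  name: orders
spec:
  containers:
    - name: orders
      image: example.com/orders-service:1.0
      securityContext:
        privileged: false                # no direct access to host devices
        runAsNonRoot: true               # refuse to start if the image would run as root
        allowPrivilegeEscalation: false  # block setuid-style privilege gains
        readOnlyRootFilesystem: true     # container cannot modify its own filesystem
```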

Network policies can also provide the required degree of isolation for a microservice. By default, Pods are non-isolated and accept all inbound and outbound traffic. When a microservice must be reachable only from specific clients, network policies can be configured to allow or deny traffic from specific IP blocks or to specific ports. Consider, for example, a microservice that provides access to Personally Identifiable Information (PII) or Sensitive Personal Information (SPI). Access to this microservice can be limited to well-known clients, and if the microservice exposes a health-check API, a policy can be configured so that the monitoring tool can access only that API, as sketched below.
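Because NetworkPolicies filter by port rather than URL path, this sketch assumes the health-check API is exposed on its own port; all labels, namespaces, and ports are hypothetical.

```yaml
# Isolate a PII-handling microservice: only the billing Pods may reach its
# service port, and only the monitoring namespace may reach its health port.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: pii-service-access
spec:
  podSelector:
    matchLabels:
      app: pii-service        # the Pods this policy protects
  policyTypes:
    - Ingress                 # restrict inbound traffic only
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: billing    # the well-known client
      ports:
        - port: 8080          # main service port
    - from:
        - namespaceSelector:
            matchLabels:
              team: monitoring
      ports:
        - port: 8081          # health-check endpoint only
```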

Conclusion

This document provides an introductory look at the flexibility provided by Kubernetes. We have not touched upon multi-container Pods, Pod affinity, and other advanced concepts that can be used to further optimize the deployment of microservices. And although we focus on Kubernetes in this discussion, other container management platforms such as Docker Swarm provide similar capabilities. Microservices, which have become popular in the last few years, are here to stay (until the next refinement of this architecture comes along), and containerization significantly reduces their complexity and increases their value to the enterprise. Organizations that adopt microservices should consider their deployment topology carefully to avoid falling into the trap of creating yet another highly complex architecture. Kubernetes provides a very flexible platform on which to deploy and manage a microservice architecture.

Authored By

Sonny Werghis, Principal Architecture Consultant, Levvel

Sonny Werghis is a Principal Architecture Consultant at Levvel where he advises clients on Payment technology. Previously, Sonny worked at IBM as a Product Manager and a Solution Architect focused on Cloud and Cognitive technology where he developed AI and Machine Learning-based business solutions for customers in various industries, including Finance, Government, Healthcare, and Transportation. Sonny is an Open Group Master Certified IT Architect and a certified Enterprise Architect.
