Getting Started with Kubernetes, Minikube, and Docker


January 16, 2017


Introduction

Docker containers are changing the landscape of how we think about building, shipping, and running applications in modern computing. One of the most promising CaaS (Container as a Service) solutions comes to us from Google and is completely open-source. It’s so consistently performant that Red Hat decided to build their OpenShift PaaS on top of enterprise-hardened Kubernetes releases.

As you may have heard, Levvel recently expanded into the Asia-Pacific region, and I’ve moved to Sydney to help lead the expansion in the role of Vice President of Technology. As I’ve started to work with clients in the region, I’m extremely encouraged to find them actively engaged in DevOps transformations with Dockerized applications at the core. The questions we usually receive are around managing such clusters in highly regulated, often government-compliant environments, while still allowing teams to leverage their technologies instead of being beholden to them. Kubernetes is one of the most battle-hardened and team-empowering tools for managing containerized applications and services in a redundant, persistent, highly available configuration, with cloud-native scaling and failover capabilities built in.

Kubernetes (pronounced koo-ber-net-ees) is an open source system for managing containerized applications across multiple hosts, providing basic mechanisms for deployment, maintenance, and scaling of applications. Kubernetes is hosted by the Cloud Native Computing Foundation (CNCF) and builds upon a decade and a half of experience at Google running production workloads at scale using a system called Borg, combined with best-of-breed ideas and practices from the community.

This blog post will show you the basics of the Kubernetes cluster orchestration system. Specifically, we’ll install the Kubernetes tools to interact with the cluster and deploy a containerized application on a cluster.

So, let’s get to know Kubernetes at the command line; the best place to actually learn a new technology. We’re going to use a lightweight tool called Minikube to experiment with a Kubernetes cluster.

What is Minikube?

Minikube is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop for users looking to try out Kubernetes or develop with it day-to-day.

  • Minikube packages and configures a Linux VM, the container runtime, and all Kubernetes components, optimised for local development.
  • Minikube supports Kubernetes features such as:
    • DNS
    • NodePorts
    • ConfigMaps and Secrets
    • Dashboards
    • Container runtimes: Docker and rkt
    • Enabling CNI (Container Network Interface)
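Depending on your minikube version, a couple of built-in subcommands make these features easy to explore once the cluster is running; a quick sketch, assuming the addons machinery is present in your release:

$ minikube dashboard     # open the Kubernetes dashboard in your browser
$ minikube addons list   # show which optional cluster addons are enabled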

Requirements

Minikube needs a hypervisor such as VirtualBox on the machine, and you’ll also want the kubectl command-line tool installed to interact with the cluster. With those in place, install Minikube (the example below fetches the macOS binary) and start it:

$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.15.0/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
$ minikube start

You should see output along the lines of:

“Starting local Kubernetes cluster…Kubectl is now configured to use the cluster.”
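The walkthrough below assumes the kubectl CLI is already installed; minikube configures it but does not install it. If you need it, one way to grab a standalone binary (the version and macOS path here are just an example from the same era as minikube v0.15.0) is:

$ curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/v1.5.1/bin/darwin/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/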

You can then run $ kubectl cluster-info to get more detailed output about your cluster:

Kubernetes master is running at https://192.168.99.100:8443
KubeDNS is running at https://192.168.99.100:8443/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at https://192.168.99.100:8443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
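Minikube itself has a few housekeeping commands that are handy while you experiment:

$ minikube status   # check whether the VM and cluster are running
$ minikube ip       # print the cluster IP (the 192.168.99.100 above)
$ minikube stop     # stop the local cluster without destroying it
$ minikube delete   # delete the cluster entirely and start fresh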

What exactly is a Kubernetes cluster?

Kubernetes coordinates a highly available cluster of computers that are connected to work as a single unit. The abstractions in Kubernetes allow you to deploy containerized applications to a cluster without tying them specifically to individual machines. To make use of this new model of deployment, applications need to be packaged in a way that decouples them from individual hosts: they need to be containerized. Containerized applications are more flexible and available than in past deployment models, where applications were installed directly onto specific machines as packages deeply integrated into the host. Kubernetes automates the distribution and scheduling of application containers across a cluster in a more efficient way.

A Kubernetes cluster consists of two types of resources:

  • The Master, which coordinates the cluster
  • Nodes, the workers that run applications

Kubernetes Cluster

Using minikube, we have a running master and a dashboard. The Kubernetes dashboard allows you to view your applications in a UI. During this tutorial, we’ll be focusing on the command line for deploying and exploring our application. To view the nodes in the cluster, run the kubectl get nodes command:

$ kubectl get nodes

This command shows all nodes that can be used to host our applications. Right now we have only one node, and we can see that its status is Ready (it is ready to accept applications for deployment).
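The exact columns vary by kubectl version, but the output should look roughly like this, with the single minikube node reporting Ready (the age will differ):

NAME       STATUS    AGE
minikube   Ready     2m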

Kubernetes Deployment

With our running Kubernetes cluster, we are able to deploy containerized Docker applications on top of it. We’ll do this by creating a Kubernetes Deployment which creates and updates instances of our application. After creating this Deployment, the Kubernetes master will schedule the app instances that the Deployment creates onto individual Nodes in the cluster.

After the creation of these app instances, a Kubernetes Deployment Controller monitors those instances and replaces an instance if the Node hosting it goes down or it is deleted. This provides a self-healing mechanism to address machine failure or maintenance.

By both creating your application instances and keeping them running across Nodes, Kubernetes Deployments provide a fundamentally different approach to application management.

Kubernetes Cluster

For our first Deployment, we’ll use a Node.js application packaged in a Docker container. The source code and the Dockerfile are available in the GitHub repository for the Kubernetes Bootcamp.

Let’s run the app using kubectl commands at the CLI:

$ kubectl run kubernetes-bootcamp --image=docker.io/jocatalin/kubernetes-bootcamp:v1 --port=8080

And we’ll see the following output:

deployment "kubernetes-bootcamp" created

With that last command, Kubernetes:

  • Searched for a suitable node where an instance of the application could be run (we have only 1 available node)
  • Scheduled the application to run on that Node
  • Configured the cluster to reschedule the instance on a new Node when needed

With the $ kubectl get deployments command, we can see information about that deployment.

$ kubectl get deployments
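You should see a single deployment with one desired and one available replica, along these lines (the age will vary):

NAME                  DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kubernetes-bootcamp   1         1         1            1           1m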

Deployed applications are only accessible inside the cluster by default. To view application output without exposing it externally, we’ll create a route between our terminal and the Kubernetes cluster using a proxy:

$ kubectl proxy

We now have a connection between our host and the Kubernetes cluster. The running proxy gives us direct access to the cluster’s API from our terminal. The app itself runs inside a Pod.
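As a quick sanity check that the proxy is working, ask the API server for the cluster version; it comes back as a small JSON document:

$ curl http://localhost:8001/version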

What is a Pod, exactly?

A pod (as in a pod of whales or a pea pod) is a group of one or more containers (such as Docker containers), the shared storage for those containers, and options about how to run them. Pods are always co-located and co-scheduled, and run in a shared context. A pod models an application-specific “logical host”: it contains one or more application containers that are relatively tightly coupled; in a pre-container world, they would have executed on the same physical or virtual machine.

While Kubernetes supports more container runtimes than just Docker, Docker is the most commonly used runtime, and it helps to describe pods in Docker terms.

Pods
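Before we grab the pod’s name with a template, it’s worth knowing the two everyday commands for looking at pods; both are standard kubectl subcommands:

$ kubectl get pods        # list the pods in the current namespace
$ kubectl describe pods   # containers, IPs, ports, and recent events for each pod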

Let’s get the name of the Pod and store it in the POD_NAME environment variable. Open up a new terminal window and type:

$ export POD_NAME=$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')

$ echo $POD_NAME

kubernetes-bootcamp-390780338-qzg89

With the proxy still running in the other terminal, we can also curl our app through the API and get some info back.

$ curl http://localhost:8001/api/v1/proxy/namespaces/default/pods/$POD_NAME

You should then see:

Hello Kubernetes bootcamp! | Running on: kubernetes-bootcamp-390780338-qzg89 | v=1
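While everything is still running, a few more standard kubectl subcommands are worth trying against this pod (output will differ on your machine); the last two demonstrate the self-healing behaviour described earlier:

$ kubectl logs $POD_NAME            # view the container’s logs
$ kubectl exec -ti $POD_NAME bash   # open a shell inside the running container
$ kubectl delete pod $POD_NAME      # delete the pod; the Deployment schedules a replacement
$ kubectl get pods                  # a new pod, with a new name, takes its place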

In our next post, we’ll dig into the application layer a bit more. Until then, have fun hacking through minikube and kubectl commands!

Authored By

James Denman
