Introduction

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications (kubernetes.io). The applications typically run in Docker containers.

Kubernetes allows you to run applications across thousands of nodes as if they were a single enormous computer (Lukša).

Architecture of a Kubernetes cluster

A Kubernetes cluster consists of:

  • the master node, which hosts the Kubernetes Control Plane that controls and manages the whole Kubernetes system
  • the worker nodes, which run your applications
Kubernetes cluster architecture. (Source: Lukša)

The master node

There can be one or more master nodes (to ensure high availability).

The master node consists of:

  • The Kubernetes API server: the front end of the control plane; it lets the user interact with the Kubernetes cluster and allows external components and parts of the cluster to communicate with each other
  • The Scheduler: assigns pods (sets of containers that should be located on the same node) to worker nodes
  • The Controller Manager: performs cluster-level functions (replicating components, keeping track of worker nodes, handling node failures, etc.)
  • etcd: a key-value store that persistently stores the cluster configuration

The worker nodes

The worker nodes run your containerized applications (Lukša).

Each worker node contains:

  • The Kubelet: talks to the Kubernetes API server and manages the containers on its node
  • The Kubernetes Service Proxy (kube-proxy): load-balances network traffic to the containers
  • The container runtime (e.g. Docker): runs your application containers

Running an application in Kubernetes

The typical steps to run an application in Kubernetes:

  1. package the application into one or more container images,
  2. push the images to an image registry,
  3. post a description of the application to the Kubernetes API server.
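As a rough sketch, the three steps above might look like the following, assuming a hypothetical application image named `kubia`, a Docker Hub account `yourname`, and a manifest file `kubia-deployment.yaml` (all names are illustrative, not from the source):

```shell
# 1. Package the application into a container image
docker build -t yourname/kubia:v1 .

# 2. Push the image to an image registry (Docker Hub in this sketch)
docker push yourname/kubia:v1

# 3. Post a description of the application to the Kubernetes API server
#    (kubectl submits the manifest to the API server)
kubectl apply -f kubia-deployment.yaml
```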

The application description

The description of the application posted to the Kubernetes API server includes:

  • the application pods/containers
  • how the containers are related to each other
  • which containers need to be co-located (put in the same pod)
  • how many copies ("replicas") of each pod should run
  • which pods provide a service to internal or external clients (and should therefore be exposed through a single IP address)
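Such a description is typically written as a YAML manifest. A minimal sketch, with hypothetical names, labels, image, and ports (none of them from the source): a Deployment runs three replicas of a pod, and a Service exposes those replicas through a single IP address.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubia
spec:
  replicas: 3                # how many copies of the pod to run
  selector:
    matchLabels:
      app: kubia
  template:                  # the pod template: containers listed here
    metadata:                # are co-located in the same pod
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: yourname/kubia:v1
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: kubia
spec:
  selector:
    app: kubia               # exposes the pods above through a single IP
  ports:
  - port: 80
    targetPort: 8080
```

Posting this manifest to the API server (e.g. with kubectl) is what step 3 above refers to; the control plane then schedules the pods onto worker nodes.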

Benefits of Kubernetes

According to Lukša, the main benefits are:

  • Simplifying application deployment: you treat the cluster as a single deployment platform and don't need to care about the individual servers that make it up. (You can specify requirements, such as requiring a container to run on a node with SSDs instead of HDDs, though.)
  • Achieving better utilization of hardware: Kubernetes chooses the most appropriate nodes to run your application, and it can move pods/containers around (across nodes) to tightly pack the worker nodes, achieving more efficient node utilization.
  • Health checking and self-healing: Kubernetes monitors the containers and the nodes they run on, and reschedules containers to a new node in case of node failure.
  • Automatic scaling: Kubernetes can be told to monitor resource usage and scale the number of pods/containers accordingly.
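Two of these points can be expressed directly in manifests. A hedged sketch (the names, labels, and thresholds are illustrative, not from the source): a nodeSelector restricts a pod to nodes labeled as having SSDs, and a HorizontalPodAutoscaler scales a Deployment based on CPU usage.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kubia-ssd
spec:
  nodeSelector:
    disktype: ssd            # only schedule onto nodes labeled disktype=ssd
  containers:
  - name: kubia
    image: yourname/kubia:v1
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: kubia
spec:
  scaleTargetRef:            # the Deployment to scale
    apiVersion: apps/v1
    kind: Deployment
    name: kubia
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80   # add/remove replicas to keep average CPU near 80%
```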

References