
Kubernetes – Terminology

If you haven’t worked with Kubernetes or any orchestration system before, getting started can feel a bit like jumping into the deep end of a pool. Most developers won’t have been exposed to the terminology spread throughout the product, which can make figuring out what is going on all the more difficult. Below I have tried to give a simple, high-level overview of the main components of Kube, and the relationships between them.

The node

This is a worker machine – or minion. It is usually a VM or a physical piece of hardware, and it is where your programs actually run.

The cluster

A cluster is a collection of nodes. It is very rare that you will be concerned with individual nodes; most of the time, when thinking about hardware, it is the cluster as a whole you care about. Kubernetes orchestration handles everything going on underneath (automagically). It’s the cluster that allows Kubernetes to handle machine failure: if one node fails, another node in the cluster can be used instead.

The container

This is where your application is packaged up into something deployable. Docker is an example of a platform that allows you to create containers. Containers are self-contained Linux environments that hold everything the node running your application will need: the application itself, plus all the runtimes and other bits and pieces it needs to function.

The pod

Pods are an encapsulating unit and contain one or more containers. Containers in the same pod can talk to each other as if they were on the same local system, and they share the same resources to boot. Running duplicate copies of a pod lets you provide increased capacity with little effort.
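
As a rough sketch, a minimal pod definition might look something like the following. The pod name, container names and images here are purely hypothetical placeholders, chosen to show a web container and a small sidecar living in the same pod:

    apiVersion: v1
    kind: Pod
    metadata:
      name: example-pod          # hypothetical pod name
    spec:
      containers:
        - name: web              # main application container
          image: nginx:1.25      # example image only
          ports:
            - containerPort: 80
        - name: log-agent        # sidecar sharing the pod's network and lifecycle
          image: busybox:1.36
          command: ["sh", "-c", "tail -f /dev/null"]

Because both containers live in the same pod, the sidecar can reach the web container over localhost and they are scheduled onto the same node together.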

The deployment

Deployments are run against a cluster and handle the distribution of pods to that cluster. A deployment can be told how many pod replicas should be present, and should something happen to a pod it manages, the deployment will recreate that pod automatically.
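
To make that concrete, a deployment manifest might look something like this sketch. It assumes a hypothetical deployment called example-deployment that keeps three replicas of a pod labelled app: example; all of the names are made up for illustration:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-deployment   # hypothetical name
    spec:
      replicas: 3                # keep three copies of the pod running at all times
      selector:
        matchLabels:
          app: example           # which pods this deployment manages
      template:                  # the pod template the deployment creates pods from
        metadata:
          labels:
            app: example
        spec:
          containers:
            - name: web
              image: nginx:1.25  # example image only
              ports:
                - containerPort: 80

If one of the three pods dies, the deployment notices the shortfall and spins up a replacement to get back to the requested replica count.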

The service

A service is a layer of abstraction that describes a set of pods. Since a deployment can create and destroy pods (and their assigned IPs) as it deems appropriate, something needs to facilitate communication with those pods wherever they may be running. A service enables this by defining a policy by which other entities can contact the pods described by the service.
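
A minimal service definition, assuming the same hypothetical app: example label used in the deployment sketch above, might look something like this:

    apiVersion: v1
    kind: Service
    metadata:
      name: example-service   # hypothetical name
    spec:
      selector:
        app: example          # matches the pods' label, however many replicas exist
      ports:
        - port: 80            # port the service exposes
          targetPort: 80      # port the containers are listening on

Clients talk to the service's stable name and address, and the service forwards traffic to whichever matching pods happen to be alive at that moment.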

The ingress

By default, your pods and the containers in them running on a cluster won’t be accessible to the outside world. You need to create an ‘ingress’ to let people in. This is handled by an ingress controller, such as NGINX or HAProxy, which routes traffic through the controller to a service.
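
As an illustrative sketch, an ingress rule that routes traffic for a hypothetical hostname (example.local) to the service defined above might look like this, using the networking.k8s.io/v1 API:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: example-ingress          # hypothetical name
    spec:
      rules:
        - host: example.local        # hypothetical hostname
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: example-service   # the service sketched earlier
                    port:
                      number: 80

The ingress controller watches for objects like this and configures itself so that requests arriving for example.local are passed on to example-service, and from there to the pods behind it.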
