Kubernetes has been variously described as a framework, a platform, a container management tool, and a container orchestration system. It is all of these. The Cloud Native Computing Foundation (CNCF), the non-profit that maintains Kubernetes, calls it an “open-source framework for automating deployment and managing applications in a containerized and clustered environment.”
While it’s not the sole platform for container management, Kubernetes is by far the most widely used framework to manage and orchestrate container systems, with a large, growing ecosystem of tools, services, and support.
How Does Kubernetes Work?
With so many containers running across services and environments, organizations need management and automation. Kubernetes exposes an API that controls where and how containers run, automating resource provisioning and management tasks and making scaling easier.
Kubernetes runs on top of an operating system, most often Linux, though worker nodes can also run Windows. When Kubernetes is deployed, it creates a cluster — which, at its most basic level, includes just worker machines called nodes and a manager called a control plane. This cluster can exist on physical machines or across virtual machines — or both — communicating with each other over a shared network. All the Kubernetes components and capabilities exist in the cluster.
Kubernetes manages these clusters and schedules containers to run most efficiently based upon the available computing resources and the requirements of each container. Managing these clusters, and the storage they require, is called orchestration.
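The scheduling decision described above can be illustrated with a small sketch: find the nodes that can still fit a container's resource requests, then pick the best one. The data structures and scoring rule below are hypothetical simplifications; the real kube-scheduler applies many more filtering and scoring steps.

```python
# Illustrative sketch of resource-based scheduling (hypothetical
# structures; the real kube-scheduler is far more sophisticated).

def pick_node(nodes, pod):
    """Return the name of a node that can fit the pod's CPU and
    memory requests, preferring the node with the most free CPU."""
    candidates = [
        n for n in nodes
        if n["free_cpu"] >= pod["cpu"] and n["free_mem"] >= pod["mem"]
    ]
    if not candidates:
        return None  # no node fits; the pod would stay pending
    # Simple scoring rule: prefer the node with the most spare CPU
    return max(candidates, key=lambda n: n["free_cpu"])["name"]

nodes = [
    {"name": "node-a", "free_cpu": 0.5, "free_mem": 512},
    {"name": "node-b", "free_cpu": 2.0, "free_mem": 4096},
]
pod = {"cpu": 1.0, "mem": 1024}
print(pick_node(nodes, pod))  # node-b
```

The key idea is that the scheduler, not the developer, decides placement: containers state their requirements, and the cluster finds room for them.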
Here are a few essential terms to help understand the layers of Kubernetes:
- Control plane: The control plane is the orchestration layer — a governing set of processes that acts as the communications director for a Kubernetes cluster. The heart of the control plane is the Kubernetes API server, which lets developers automate tasks related to resource provisioning and management. All task assignments, such as scheduling and apportioning workload, or starting applications, originate from the control plane. The Kubernetes control plane can run on one main node, or, for high-availability clusters, can exist across multiple nodes.
- Nodes: A node is a worker machine that performs operations requested by the control plane. Each node must include a container runtime, such as containerd or CRI-O, which pulls container images and runs the application, as well as a kubelet, a communication process that lets the control plane manage the node. Each node can host one or more pods.
- Pods: A pod is the smallest deployable unit in Kubernetes. It consists of one or more containers that run together and share a single IP address. Pods operate within nodes.
- Cluster: A Kubernetes cluster is a group of nodes that run containerized applications, together with a control plane.
- Services: Applications need to be exposed so that users can access them. When an application offers a capability that other applications need, developers configure that capability as a service, attaching metadata in the form of key-value pairs called labels and annotations. Using these labels, a Kubernetes service can be associated with a set of pods. This architecture offers a loosely coupled method of service discovery — the automated process of locating a necessary service.
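Label-based service discovery can be sketched in a few lines: a service selects every pod whose labels include all of the selector's key-value pairs. The pod names and labels below are hypothetical; in a real cluster this matching is performed by the control plane, not by application code.

```python
# Sketch of how a service selects pods by label (hypothetical data;
# real matching is done by the Kubernetes control plane).

def matches(selector: dict, labels: dict) -> bool:
    """A pod matches when every selector key-value pair is present
    in the pod's labels."""
    return all(labels.get(k) == v for k, v in selector.items())

pods = [
    {"name": "web-1", "labels": {"app": "web", "tier": "frontend"}},
    {"name": "web-2", "labels": {"app": "web", "tier": "frontend"}},
    {"name": "db-1",  "labels": {"app": "db",  "tier": "backend"}},
]

service_selector = {"app": "web"}
endpoints = [p["name"] for p in pods if matches(service_selector, p["labels"])]
print(endpoints)  # ['web-1', 'web-2']
```

Because the service refers to pods only through labels, pods can be added, removed, or replaced without reconfiguring anything that consumes the service.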
Why Do Organizations Choose Kubernetes and What Are Its Benefits?
Organizations choose to use Kubernetes as their need for management of microservices in containerized applications grows. Development organizations use Kubernetes to reduce operations resource needs by automating the deployment, scaling, and management of containers. Kubernetes simplifies the management and discovery of applications by grouping them into functional units, and it employs an open-source API to manage how and where containers are run.
Organizations adopt Kubernetes for the following benefits:
- Achieve faster, focused development. As developers refactor legacy applications into small functional units and build new cloud-native applications, they can use Kubernetes to release each new service as it is ready, without having to wait until the entire application is complete. Developers can change the desired state of pods to deploy new software, pause deployments, scale up, roll back, and clean up unneeded sets. This automation eliminates manual processes.
- Deploy applications anywhere. Because Kubernetes is designed to run everywhere, organizations can choose from any number of environments in which to run clusters — from on premises to public cloud, hybrid cloud, or any combination.
- Maintain service health. Kubernetes makes services available when and where they’re needed by monitoring system health. Kubernetes ensures availability and provides resiliency through its self-healing capabilities: it restarts failed containers, reschedules pods when their nodes fail, and replaces containers that stop responding to health checks.
- Run containers and applications efficiently. Through automated bin packing, developers can specify resource requirements for containers and let Kubernetes determine how to allocate them. Kubernetes automatically fits containers onto nodes, reducing manual effort and maximizing resource use. It can also scale workloads based upon demand and provide load balancing to ensure traffic is distributed evenly.
The Origin and Management of Kubernetes
Kubernetes originated at Google over a period of about 15 years as Google built an internal cluster management system, called Borg, which created billions of containers every week. In 2014, Google announced Kubernetes publicly, and in 2015, the company donated it as a seed technology to the Linux Foundation. The Linux Foundation created the CNCF, the non-profit technology consortium that maintains Kubernetes. In 2018, Google formally ceded operational control of Kubernetes to the community. According to the foundation, Kubernetes is one of the fastest-growing projects in the history of open-source software.
To further support the Kubernetes environment, CNCF offers Certified Kubernetes Administrator (CKA) and Certified Kubernetes Application Developer (CKAD) credentials. In addition, CNCF offers vendors the opportunity to certify products and services through its Certified Kubernetes Conformance Program. Through this program, vendors prove that their products and services conform to a defined set of Kubernetes APIs and work with other Kubernetes implementations.
Manage Containerized Apps Efficiently with Kubernetes
In the continuing race to build and deploy applications quickly and at scale, Kubernetes is a vital tool for development teams. To learn more about using and managing Kubernetes, take a look at our Tech Talk on this topic.