Kubernetes enables developers to deploy their applications themselves, as often as they need, without requiring any assistance from the operations (Ops) team. Kubernetes doesn’t benefit only developers; it also helps the Ops team by automatically monitoring and rescheduling those apps in the event of a hardware failure.
Kubernetes abstracts away the hardware infrastructure and exposes your whole data center as one enormous computational resource. When you have multiple servers and you deploy a multi-component application through Kubernetes, it will:
Select a server for each component
Enable easy access and communication among all application components
This straightforward component management makes Kubernetes great both for on-premises data centers and for cloud providers.
Kubernetes allows cloud providers to offer developers a simple platform for deploying and running any sort of application, without requiring the cloud provider’s own sysadmins to know anything about the tens of thousands of apps running on their hardware.
Google created a system for its internal use named “Borg”.
Later, Google replaced Borg with a new system called “Omega”.
The aim of Borg and Omega was to help both application developers and system administrators manage Google’s thousands of applications and services.
In addition to simplifying development and management, these systems also helped Google achieve a much higher utilization of its infrastructure, which is vital when your organization is that large.
In 2014, about a decade after the creation of Borg, Google introduced Kubernetes, an open-source system based on the experience gained through Borg, Omega, and other internal Google systems.
Kubernetes enables you to run your software applications on thousands of computer nodes as if all those nodes were one single computer.
It abstracts away the underlying infrastructure and, by doing so, simplifies development, deployment, and management. Deploying applications through Kubernetes is always the same, whether your cluster contains only a few nodes or thousands of them.
The size of the cluster makes no difference at all.
A Kubernetes cluster consists of a set of worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node.
The worker node(s) host the Pods that are the components of the application workload. The control plane manages the worker nodes and the Pods in the cluster. In production environments, the control plane usually runs across multiple computers and a cluster usually runs multiple nodes, providing fault tolerance and high availability.
The control plane is what controls the cluster and makes it function. It consists of multiple components, which can run on a single master node or be split across multiple nodes and replicated to ensure high availability.
etcd, a reliable distributed data store that persistently stores the key-value data of the cluster configuration.
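To picture what “key-value data of the cluster configuration” means, here is a toy sketch in Python. The keys follow the `/registry/...` layout etcd uses for real clusters, but the store itself is just a dictionary and the pod names and values are made up for illustration:

```python
# Toy sketch of a control-plane data store: cluster state kept as
# key-value pairs. Real clusters store this in etcd under keys like
# "/registry/pods/<namespace>/<name>"; this dict only illustrates the idea.

cluster_store = {}

def put(key, value):
    """Persist one piece of cluster configuration."""
    cluster_store[key] = value

def get_prefix(prefix):
    """Read every entry under a key prefix, like an etcd range read."""
    return {k: v for k, v in cluster_store.items() if k.startswith(prefix)}

put("/registry/pods/default/web-1", {"node": "node-a", "image": "nginx"})
put("/registry/pods/default/web-2", {"node": "node-b", "image": "nginx"})
put("/registry/services/default/web", {"clusterIP": "10.0.0.10"})

pods = get_prefix("/registry/pods/")
```

Prefix reads like `get_prefix` are how control plane components list, for example, all Pods in the cluster.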
The scheduler, which schedules your apps, i.e., assigns a worker node to each deployable component of your application based on the component’s resource requirements and each node’s available resources.
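The scheduler’s core decision can be sketched in a few lines of Python. This is a simplification with made-up node names and CPU numbers; the real scheduler also filters and scores nodes by many more criteria:

```python
# Minimal sketch of scheduling: assign each component to a node whose
# free resources cover its request. Nodes and requests are hypothetical.

nodes = {"node-a": {"free_cpu": 2.0}, "node-b": {"free_cpu": 4.0}}

def schedule(pod_cpu_request):
    """Pick the first node with enough free CPU, and reserve the CPU."""
    for name, node in nodes.items():
        if node["free_cpu"] >= pod_cpu_request:
            node["free_cpu"] -= pod_cpu_request
            return name
    return None  # unschedulable: no node can fit this component

placement = {pod: schedule(cpu) for pod, cpu in
             [("web", 1.5), ("db", 3.0), ("cache", 2.0)]}
```

Note how the third component ends up unschedulable once the first two have reserved most of the CPU, which is exactly the situation where adding a node to the cluster lets Kubernetes place it without any change to the application.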
The controller manager, which performs cluster-level functions, like replicating components, keeping track of worker nodes, handling node failures, and so on.
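The “replicating components” and “handling node failures” functions follow the same control-loop pattern: compare the desired state with the observed state and correct the drift. A hedged sketch, with made-up pod names and a pretend node failure:

```python
# Sketch of one controller pass: the desired replica count is compared
# against the pods actually running, and the difference is corrected.

desired_replicas = 3
running_pods = ["web-1"]  # suppose two replicas were lost to a node failure

def reconcile(desired, running):
    """One control-loop pass: create or delete pods to match the spec."""
    actions = []
    while len(running) < desired:
        new_pod = f"web-{len(running) + 1}"
        running.append(new_pod)
        actions.append(("create", new_pod))
    while len(running) > desired:
        actions.append(("delete", running.pop()))
    return actions

actions = reconcile(desired_replicas, running_pods)
```

Running this pass repeatedly is what lets Kubernetes reschedule apps after a hardware failure without anyone being paged.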
The Kubernetes API server, which you (the user) and the other control plane and worker node components communicate with.
The worker nodes are the machines that run your containerized applications.
The Kubernetes Service Proxy (kube-proxy), a networking proxy that runs on each worker node and enables pod-to-pod and pod-to-service communication. It also load-balances network traffic between application components.
Container Runtime
Docker, rkt, or another container runtime, which runs your containers and manages and takes responsibility for the containers on its node. It ensures that the containers described in the Pod specs assigned to that node are running and healthy.
Kubernetes keeps track of all the containers it manages. If multiple containers provide the same service, you can group them under one static IP address. Kubernetes then exposes that address to all applications running within the cluster, or to the outside world. The kube-proxy makes sure connections to the service are load-balanced across all the containers that provide that same service. The IP address of the service stays constant, so clients can always connect to its containers, even when they’re moved around the cluster.
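The stable-address-plus-load-balancing idea can be sketched in Python. The IPs below are invented, and the real kube-proxy works through iptables or IPVS rules rather than application code, but the behavior clients see is the same: one constant address, connections spread across the backing pods:

```python
# Toy sketch of a service proxy: a constant virtual address whose
# connections are round-robined across the pod IPs backing the service.

import itertools

class ServiceProxy:
    def __init__(self, cluster_ip, pod_ips):
        self.cluster_ip = cluster_ip           # stays constant for clients
        self._backends = itertools.cycle(pod_ips)

    def route(self):
        """Pick the next backend pod for an incoming connection."""
        return next(self._backends)

svc = ServiceProxy("10.0.0.10", ["172.17.0.4", "172.17.0.9"])
picks = [svc.route() for _ in range(4)]
```

If a pod moves to another node, only the backend list changes; `cluster_ip`, the address the clients use, never does.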
If you have Kubernetes deployed on all of your servers, the Ops team doesn’t need to deal with deploying your apps anymore.
Why? Because a containerized application already contains everything it needs to run, the system administrators don’t need to install anything to deploy and run the app.
On any node where Kubernetes is deployed, Kubernetes can run the app immediately, with no help from the sysadmins.