Kubernetes abstracts away the hardware infrastructure and exposes your whole data center as one enormous computational resource. When you have multiple servers and you're deploying a multi-component application, Kubernetes:
Selects a server for each component
Makes the components accessible to one another and enables easy communication among them
This straightforward component management makes Kubernetes a great fit for on-premises data centers as well as for cloud providers.
Google created a system for their internal use named "Borg". Later, they replaced Borg with a new system called "Omega".
The aim of Borg/Omega was to help both application developers and system administrators manage Google's thousands of applications and services.
In addition to simplifying development and management, it also helped Google achieve much higher utilization of its infrastructure, which is vital when an organization is that large.
A decade after the creation of Borg, in 2014, Google introduced Kubernetes, an open-source system built on the experience gained through Borg, Omega, and other internal Google systems.
Kubernetes enables you to run your software applications on thousands of computer nodes as if all those nodes were a single computer.
It abstracts away the underlying infrastructure and, by doing so, simplifies development, deployment, and management. Deploying an application through Kubernetes is always the same, whether your cluster contains just a couple of nodes or thousands of them.
The size of the cluster makes no difference at all.
A Kubernetes cluster consists of a set of worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node.
The worker node(s) host the Pods that are the components of the application workload. The control plane manages the worker nodes and the Pods in the cluster. In production environments, the control plane usually runs across multiple computers and a cluster usually runs multiple nodes, providing fault tolerance and high availability.
The control plane is what controls the cluster and makes it function. It consists of multiple components, which can run on a single master node or be split across multiple nodes and replicated to ensure high availability. The components are:
etcd, a reliable distributed data store that persistently stores the cluster configuration as key-value data.
The Scheduler, which schedules your apps, i.e., assigns a worker node to each deployable component of your application, depending on the app's resource requirements and each node's available resources.
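For illustration, resource requirements are declared in the Pod manifest itself; a minimal sketch (the pod name and image below are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app                 # hypothetical pod name
spec:
  containers:
  - name: my-app
    image: example/my-app:1.0  # hypothetical image
    resources:
      requests:                # the Scheduler only considers nodes with this much free capacity
        cpu: 500m              # half a CPU core
        memory: 256Mi
```

The Scheduler compares these requests against each node's unallocated capacity and binds the pod to a node that can satisfy them.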
The Controller Manager, which performs cluster-level functions, such as replicating components, keeping track of worker nodes, handling node failures, and so on.
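As a sketch of how replication is expressed, a Deployment declares a desired replica count, and the Controller Manager's control loops keep the actual state matching it (the names and image below are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                     # hypothetical name
spec:
  replicas: 3                      # desired count; controllers recreate pods lost to node failures
  selector:
    matchLabels:
      app: my-app
  template:                        # pod template used to stamp out the replicas
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: example/my-app:1.0  # hypothetical image
```

If a node dies and takes one replica with it, the controllers notice that only two pods remain and create a third on another node.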
The Kubernetes API server, which you (the user) and the other control plane and worker node components communicate with.
The worker nodes are the machines that run your containerized applications.
The Kubernetes Service Proxy (kube-proxy), a network proxy that runs on each worker node and enables pod-to-pod and pod-to-service communication. It also load-balances network traffic between application components.
Container Runtime
Docker, rkt, or another container runtime, which runs your containers on the node. (The component that manages and takes responsibility for the containers on each node is the Kubelet; it ensures that the containers described in the Pod specs are running and healthy.)
Kubernetes keeps track of all the containers it manages. If multiple containers provide the same service, you can group them behind a single static IP address. Kubernetes then exposes that address to all applications running in the cluster, or to the outside world. The kube-proxy makes sure connections to the service are load-balanced across all the containers that provide that same service. The IP address of the service stays constant, so clients can always connect to its containers, even when they're moved around the cluster.
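A minimal Service manifest sketching this grouping (names and ports are hypothetical): the selector picks out all pods carrying a given label and puts them behind one stable virtual IP and port.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app        # hypothetical service name; its cluster IP stays constant
spec:
  selector:
    app: my-app       # every pod with this label is grouped behind the service
  ports:
  - port: 80          # port clients connect to on the service IP
    targetPort: 8080  # container port kube-proxy forwards and load-balances traffic to
```

Pods matching the selector can come and go or move between nodes; clients keep using the same service IP and port throughout.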
If you have Kubernetes deployed on all of your servers, the ops team doesn't have to deal with deploying your app anymore.
Why? Because a containerized application already contains everything it needs to run, system administrators don't need to install anything to deploy and run the app.
On any node where Kubernetes is deployed, Kubernetes can run the app immediately, without any help from the sysadmins.