Kubernetes Liveness Probe - Technologies In Industry 4.0

Kubernetes Liveness Probe

Health Check

One of the main benefits of using Kubernetes is that it keeps our containers running somewhere within the cluster.
But what if one among those containers dies? What if all containers of a pod die?
If app containers crash due to a bug in your app, Kubernetes will restart your app container automatically.
But what about those situations when your app stops responding because it has fallen into an infinite loop or a deadlock?
Kubernetes provides a way to check the health of your application.
Pods can be configured to periodically check an application’s health from the outside, rather than depending on the app doing it internally.
You can specify a liveness probe for each container in the pod’s specification.
Kubernetes would periodically execute the probe and restart the container if the probe fails.
IMPORTANT POINT: “Container is restarted” means the old container is killed and a completely new container is created; it is not the same container being started again.

Probe Types

There are three types of probes:

1. HTTP GET
This type of probe sends an HTTP GET request to the container’s IP address, on the port and path you specify.
The probe is considered a failure, and the container will be automatically restarted, if:
The probe receives an error response code
The container app doesn’t respond at all
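A minimal sketch of such a probe in a container spec (the /healthz path and port 8080 are illustrative, not part of the exercise later in this article):

```yaml
livenessProbe:
  httpGet:
    path: /healthz       # illustrative health endpoint exposed by the app
    port: 8080           # illustrative container port
  initialDelaySeconds: 3
  periodSeconds: 5
```

Any HTTP status code greater than or equal to 200 and less than 400 indicates success; any other code indicates failure.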
2. TCP Socket
A TCP socket probe tries to open a TCP connection to the specified port of the container.
If the connection is established successfully, the probe is successful.
Otherwise, the container is restarted.
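A minimal sketch of a TCP socket probe in a container spec (port 8080 is illustrative):

```yaml
livenessProbe:
  tcpSocket:
    port: 8080           # illustrative port the app listens on
  initialDelaySeconds: 15
  periodSeconds: 10
```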
3. Exec Probe
An exec probe executes a command you provide inside the container and checks the command’s exit status code.
If the status code is 0, the probe is successful.
All other codes are considered failures.
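The exit-status convention the exec probe relies on can be tried directly in a shell; this sketch mirrors the cat /tmp/healthy check used in the exercise below:

```shell
# cat exits 0 when the file exists (the probe would succeed)
touch /tmp/healthy
cat /tmp/healthy
echo "exit code: $?"            # exit code: 0

# cat exits non-zero when the file is missing (the probe would fail)
rm -f /tmp/healthy
cat /tmp/healthy || echo "exit code: $?"
```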

Many applications running for long periods of time eventually transition to broken states, and cannot recover except by being restarted. Kubernetes provides liveness probes to detect and remedy such situations.

In this exercise, you create a Pod that runs a container based on the k8s.gcr.io/busybox image. Here is the configuration file for the Pod:

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5

In the configuration file, you can see that the Pod has a single container. The periodSeconds field specifies that the kubelet should perform a liveness probe every 5 seconds. The initialDelaySeconds field tells the kubelet that it should wait 5 seconds before performing the first probe. To perform a probe, the kubelet executes the command cat /tmp/healthy in the target container. If the command succeeds, it returns 0, and the kubelet considers the container to be alive and healthy. If the command returns a non-zero value, the kubelet kills the container and restarts it.

When the container starts, it executes this command:

/bin/sh -c "touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600"

For the first 30 seconds of the container’s life, there is a /tmp/healthy file. So during the first 30 seconds, the command cat /tmp/healthy returns a success code. After 30 seconds, cat /tmp/healthy returns a failure code.

Create the Pod:

kubectl apply -f https://k8s.io/examples/pods/probe/exec-liveness.yaml
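After creating the Pod, you can observe the probe in action. Once the /tmp/healthy file is removed at the 30-second mark, the probe starts failing and the kubelet restarts the container:

```shell
# View the Pod's events, including liveness probe failures
kubectl describe pod liveness-exec

# The RESTARTS column increases each time the failed container is recreated
kubectl get pod liveness-exec
```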