Kubernetes is an open-source container orchestration framework originally developed by Google. Kubernetes manages containers, which could be Docker containers or containers from any other container technology. So we can say that Kubernetes helps us manage containerized applications, and we can use it in different types of environments, be it physical servers, virtual machines, or the cloud.
What Problem Does It Solve: The rise of microservice architecture has increased the use of container technologies. Nowadays a complex enterprise application can have thousands of containers, and managing those containers with scripts or self-made tools is very tough and error-prone. Container orchestration tools like Kubernetes solve this problem.
A container orchestration tool offers the following features:
- High Availability or no downtime
- Scalability or high performance
- Disaster recovery
Basic Architecture: A Kubernetes cluster is made up of at least one master node and a couple of worker nodes connected to it. Each worker node runs a process known as the kubelet (the primary node agent). The kubelet makes it possible for the worker node to communicate with the rest of the cluster, and it executes the tasks needed to keep the containers on that node running. Each worker node can have one or multiple containers running on it.
Master nodes run several important Kubernetes processes that are necessary to run and manage the cluster properly:
- The API server, which itself runs as a container, is the entry point to the K8S cluster. Whether you use a UI, an API client, or the CLI to manage the cluster, you are talking to this API server.
- The controller manager keeps track of what is happening in the cluster, for example whether something needs to be repaired or a container died and needs to be restarted.
- The scheduler is responsible for placing pods on nodes. It keeps track of the resources available on the different worker nodes and, based on what the next pod needs, decides on which node it should be created.
- etcd is a key-value store that holds the current state of the cluster; it can be used to restore the cluster when needed.
Another important part of Kubernetes is the virtual network, which converts the whole cluster into a single unified powerful machine.
We can see that the actual applications are deployed on the worker nodes, which means most of the work is done by the worker nodes, so they are usually the bigger machines, while the master nodes run only a handful of processes. The master is nevertheless very important for the proper functioning of the whole cluster: if the master is down, we can no longer access or manage the cluster. For that reason, production environments usually have multiple master nodes, so that when one master node goes down the cluster can still work smoothly because other master nodes are available.
Basic Concepts: Let us understand a few basic but important Kubernetes concepts: pods and containers. The pod is the smallest unit that you, as a Kubernetes user, will configure. A pod is a wrapper around one or more containers. Usually you will have one pod per application; for example, the database could need one pod, the UI another pod, and the API yet another pod. So if your application needs only one container, the pod will have a single container in it; if the application needs multiple containers, the pod will have multiple containers.
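To make this concrete, here is a minimal sketch of a pod manifest. The names my-app-pod and my-app and the nginx image are illustrative placeholders, not something defined in this article:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod        # hypothetical pod name, for illustration only
  labels:
    app: my-app
spec:
  containers:
    - name: my-app        # a pod wraps one or more containers
      image: nginx:1.25   # illustrative image; any container image works
      ports:
        - containerPort: 80
```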
Each pod is like a self-contained server with its own IP address, and pods communicate with each other using these IPs. We do not create or configure containers directly in the Kubernetes cluster; we work with pods, and a pod manages the containers running inside it, so if a container dies within a pod, the pod restarts that container automatically. Pods themselves are ephemeral components, which means a pod can also die, and when it does a new one gets created in its place. This is where the notion of services comes into play: when a pod gets restarted or recreated, the new pod comes up with a new IP address. So if you were running a database in a pod and other pods used that pod's IP address to connect to the database, they would lose the connection as soon as the pod was recreated with a new IP. Using a pod's IP address directly is therefore very inconvenient. This is where another Kubernetes component called a service is used, which is basically a substitute for those IPs. A service sits in front of each pod that needs to talk to others, so if a pod behind the service dies and gets recreated, the service stays in place, because the service's lifecycle is not tied to the pod's.
A service has two main functions: it provides a permanent IP address that can be used for communication between pods, and at the same time it acts as a load balancer.
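As a hedged sketch, a service that gives the pods labeled app: my-app a permanent address might look like this (the service name and ports are assumptions for illustration):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-service    # hypothetical service name
spec:
  selector:
    app: my-app           # forwards traffic to pods carrying this label
  ports:
    - port: 80            # stable port on the service's permanent IP
      targetPort: 80      # port the container inside the pod listens on
```

Because the service selects pods by label rather than by IP, pods can die and come back with new IPs while clients keep using the same service address, and traffic is load-balanced across all matching pods.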
Example Configurations: Let us now see how we can configure different Kubernetes components. All configuration in Kubernetes goes through the master node, via the process called the API server. Kubernetes clients, which could be a UI (the Kubernetes dashboard), an API client such as a script, or the CLI, all talk to the API server and send their configuration requests to it in JSON or YAML format. In the request we send a deployment, which is nothing but a template for creating pods.
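As a hedged sketch, such a deployment could look like the following. The two replicas, the app name my-app, and the image my-image come from the description below; the environment variable's name and value are assumptions, since the original values are not shown:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 2                  # ask Kubernetes to keep two pods running
  selector:
    matchLabels:
      app: my-app
  template:                    # the pod template used to create each pod
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-image      # container image named in the description below
          env:
            - name: SOME_ENV   # hypothetical variable name and value
              value: "some-value"
```

A manifest like this is typically sent to the API server with kubectl apply, and Kubernetes then works to make the cluster match it.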
In the example above we are asking Kubernetes to create two pods for the app named my-app, based on the container image my-image, and we are also passing an environment variable.
This configuration is declarative in nature: with it we are telling Kubernetes what the desired state is. The Kubernetes master's controller component then compares the actual state of the cluster with the configuration and, if they do not match, creates pods to reach the desired state.