
A Powerful and Easy View of Kubernetes Architecture and Components

Kubernetes, also known as K8s, is an open-source system for automating the deployment, scaling, and management of containerized applications.

"Virtualization makes virtual machines, Docker makes containers, and Kubernetes makes Pods."

Benefits of Kubernetes

Deploy applications and react to changes.
Scale up and down based on demand.
Self-heal when things break.
Perform upgrades and rollbacks with zero downtime.

Kubernetes Nodes: A Kubernetes cluster is made up of two types of nodes. [A node can be a virtual machine or a physical machine.]

1. Worker Node

2. Master Node [also called the Control Node, Control Plane, or Supervisor Node]

Inside every node:

The basic unit of Kubernetes is the Pod.

👀 Informational Note: A pod is the basic execution unit of a Kubernetes application. Each pod represents a unit of the workload that runs on your cluster.

1. Each pod has its own IP address; it is an internal (cluster) IP.
2. A pod can communicate with another pod using these IPs.
3. Pods are ephemeral: if a container or app crashes, the pod dies and a new pod is created with a new IP address.
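The behaviors above can be observed with kubectl. A minimal sketch, assuming kubectl is configured against a running cluster and a Deployment is managing the pods (the pod name is a placeholder, so these commands are not runnable standalone):

```shell
# Show pods with their internal IPs and the nodes they run on
kubectl get pods -o wide

# Delete one pod; the Deployment/ReplicaSet creates a replacement
kubectl delete pod <pod-name>

# The replacement comes back with a NEW internal IP -- pods are ephemeral
kubectl get pods -o wide
```

Because pod IPs change, applications normally reach pods through a Service, which provides a stable virtual IP in front of them.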

A Pod is made up of one or more containers.

👀 Informational Note: When we pull an image onto the local machine and run it, it runs the application inside a container environment. If it's running, it's a container.

Containers are created from images. [These images are also called Docker images.]

A Docker image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.
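To make the image-vs-container distinction concrete, here is a hedged sketch using the public nginx image from Docker Hub (it assumes Docker is installed and the Docker daemon is running locally):

```shell
# Pull the image: a static, executable package of software
docker pull nginx:alpine

# Run it: the running instance of the image is the container
docker run -d --name web -p 8080:80 nginx:alpine

# List the running container, then clean up
docker ps --filter name=web
docker rm -f web
```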

👀 Informational Note: Public image repository: https://hub.docker.com/

Overview in Diagram:




MASTER AND WORKER NODES DETAIL
Let me share an exploded view:



Developer Communication Flow and Interaction with Kubernetes


Before going into detail, one should know about the kubectl client.
The kubectl client is available for both Linux and Windows. It lets us run kubectl commands to carry out different tasks.

These kubectl commands talk to the master node, which in turn coordinates the worker nodes. Best practice is to use a client version that is compatible with your Kubernetes version.

Client can be downloaded from here: https://kubernetes.io/docs/tasks/tools/

Official Kubectl CheatSheet
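As a quick sanity check after installing the client, the following commands confirm the version and that the client can reach the cluster. A sketch; it assumes a kubeconfig pointing at a reachable cluster, so it is not runnable standalone:

```shell
# Client version -- keep it within one minor version of the cluster
kubectl version --client

# Verify connectivity to the API server
kubectl cluster-info

# List the nodes the cluster is made of
kubectl get nodes -o wide
```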

Kubernetes Nodes Overview

There are two types of nodes in Kubernetes; each can be physical or virtual:

Master Node
Worker Node

The image above shows one master node and two worker nodes. Each node is equipped with the necessary components to perform its respective job.

Master Node [also called the Management Node or the Brain of the cluster]

There are four processes that run on the master node. Together they control the worker nodes and the Kubernetes cluster as a whole.
As shown in the diagram above, the breakdown is:
1. API Server
2. Scheduler
3. Controller Manager
4. etcd

Explanation:

The API Server is essentially the single entry point through which we communicate with the cluster. In fact, all worker nodes also communicate with the control plane through the API server.

The scheduler watches the API server for newly created pods with no assigned nodes, and assigns them to appropriate healthy nodes.

The controller manager implements all of the background control loops that monitor the cluster and respond to events; this logic is the heart of Kubernetes and its declarative design pattern.

etcd stores all the configuration and the desired state of the cluster.
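On many clusters (for example those built with kubeadm) these four control-plane processes themselves run as pods in the kube-system namespace, so kubectl can inspect them. A sketch, assuming such a cluster and admin access:

```shell
# The API server, scheduler, controller manager, and etcd shown as pods
kubectl get pods -n kube-system

# Inspect one component in detail, e.g. the API server
kubectl describe pod -n kube-system -l component=kube-apiserver
```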

Worker Node [all application pods run on these nodes]

There are three processes that run on each worker node:

1. Container runtime
2. Kubelet
3. Kube-proxy

Explanation

The kubelet needs a container runtime to perform container-related tasks, such as pulling images and starting and stopping containers.

👀 Informational Note:
In the early days, Kubernetes had native support for a few container runtimes such as Docker. It has since moved to a plugin model called the Container Runtime Interface (CRI).

The kubelet registers the node with the cluster and watches the API server for new work assignments.

➤ Kube-proxy makes sure each node gets its own unique IP address, and implements local IPTABLES or IPVS rules to handle routing and load balancing of traffic on the Pod network.
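The three worker-node processes can also be inspected directly on a node. A hedged sketch; the exact commands depend on the Linux distribution, which CRI runtime is installed, and whether kube-proxy runs in iptables mode, so they are not runnable standalone:

```shell
# The kubelet usually runs as a systemd service on the node
systemctl status kubelet

# Containers managed through the CRI runtime (containerd, CRI-O, ...)
crictl ps

# NAT rules programmed by kube-proxy when it runs in iptables mode
sudo iptables -t nat -L KUBE-SERVICES
```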


