
TKGS VMware/Kubernetes ReadWriteMany Functionality with NFS-CSI


ReadWriteMany Access mode in Kubernetes




When it comes to the RWX (ReadWriteMany) access mode for PVCs, TKGS supports it out of the box if you have the following:

1. Kubernetes upgraded to 1.22.9 (the version that supports this RWX functionality)

2. vSAN available in your environment (VMware uses the vSphere CSI driver, which supports RWX only with vSAN)

How to do it without vSAN:

1. Upgrade Kubernetes to version 1.22.9.

2. Deploy the NFS CSI driver, then create a new storage class to be consumed.


Workaround:

2.a: Use the link below to get the NFS CSI driver:

https://github.com/ibraraziz/csi-driver-nfs

Note: it is perfectly fine to run multiple CSI drivers/provisioners in a Kubernetes cluster (just for information).


Step 1: Go to csi-driver-nfs/deploy/v4.0.0/ and apply the YAML manifests to your environment.

This will create the NFS CSI provisioner and controller pods in the kube-system namespace, as shown below.
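Step 1 can be run as follows. This is a sketch, not the post's exact commands: it assumes you clone the repository linked above and that the manifests use the upstream csi-driver-nfs pod labels (app=csi-nfs-controller, app=csi-nfs-node).

```shell
# Clone the driver repository linked above
git clone https://github.com/ibraraziz/csi-driver-nfs.git
cd csi-driver-nfs

# Apply the v4.0.0 deployment manifests (RBAC, controller, node plugin)
kubectl apply -f deploy/v4.0.0/

# Verify the controller and node (provisioner) pods in kube-system
kubectl get pods -n kube-system -l app=csi-nfs-controller
kubectl get pods -n kube-system -l app=csi-nfs-node
```

All pods should reach the Running state before you move on to Step 2.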



Step 2: Now create the storage class. Go to the example folder csi-driver-nfs/deploy/example and change the three parameters marked below in the YAML:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi # 1. Your desired name
provisioner: nfs.csi.k8s.io
parameters:
  server: nfs-server.default.svc.cluster.local # 2. Your NFS server IP or hostname
  share: / # 3. Your NFS mount point (export path)
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - nfsvers=4.1

The storage class is created, as shown below from my environment:
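To apply and confirm the storage class yourself (assuming you saved the YAML above as storageclass-nfs.yaml, an illustrative filename):

```shell
# Create the storage class from the edited example YAML
kubectl apply -f storageclass-nfs.yaml

# Confirm it exists and uses the NFS provisioner
kubectl get storageclass nfs-csi
```

The output should list nfs-csi with PROVISIONER nfs.csi.k8s.io.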


Once done, you can use the YAML below to create a dynamic PVC (meaning there is no need to create a PV manually).

This YAML will create both the PV and the PVC.

Note: the storageClassName must be the same name you declared for your storage class.

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-dynamic
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: nfs-csi # must match the name declared when creating the storage class

The PV and PVC are created as shown below:
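To prove the ReadWriteMany behavior end to end, here is a minimal sketch of a Deployment whose two replicas mount the same PVC at once (the name nfs-rwx-demo and the busybox writer loop are illustrative, not from the original post):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-rwx-demo            # illustrative name
spec:
  replicas: 2                   # both replicas mount the same RWX volume
  selector:
    matchLabels:
      app: nfs-rwx-demo
  template:
    metadata:
      labels:
        app: nfs-rwx-demo
    spec:
      containers:
        - name: writer
          image: busybox
          # each pod appends its hostname to a shared file every 5 seconds
          command: ["sh", "-c", "while true; do echo $(hostname) >> /data/out.log; sleep 5; done"]
          volumeMounts:
            - name: shared-data
              mountPath: /data
      volumes:
        - name: shared-data
          persistentVolumeClaim:
            claimName: pvc-nfs-dynamic   # the PVC created above
```

If both pods reach Running and both hostnames show up in /data/out.log, RWX is working: two pods on (possibly) different nodes are writing to the same volume, which is exactly what the vSphere CSI driver cannot do without vSAN.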


Feel free to reach out with any queries.

By: Ibrar Aziz (Cloud Enthusiast)
https://ibraraziz-cloud.blogspot.com/
https://www.linkedin.com/in/ibraraziz/

