
TKGM PR-DR SITE ON VCLOUD DIRECTOR ARCHITECTURE

You build:

  1. vSphere + vCD + NSX-T + CSE on both sites.

  2. You deploy TKGm clusters on the primary site.

  3. You set up Velero to back up YAMLs and persistent volumes (a minimal sketch follows this list).

  4. You mirror the Harbor registry to the DR site.

  5. You test restoring a cluster on the DR site using CSE + Velero.

  6. You prepare DNS (manual or automated) to point to the DR site when needed.
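A minimal sketch of the Velero setup from step 3, assuming an S3-compatible object store reachable from both sites (the bucket name, MinIO endpoint, plugin version, and namespaces below are placeholders):

```sh
# Install Velero on the primary TKGm cluster, pointing at an S3-compatible
# object store (MinIO here is a placeholder; any S3 endpoint works).
velero install \
  --provider aws \
  --plugins velero/velero-plugin-for-aws:v1.9.0 \
  --bucket tkgm-dr-backups \
  --secret-file ./credentials-velero \
  --backup-location-config region=minio,s3ForcePathStyle=true,s3Url=https://minio.example.com:9000 \
  --use-node-agent

# Schedule a daily 02:00 backup of the application namespaces,
# including persistent volume data via file-system backup.
velero schedule create daily-apps \
  --schedule "0 2 * * *" \
  --include-namespaces app1,app2 \
  --default-volumes-to-fs-backup \
  --ttl 168h
```

The node agent (the successor to Velero's Restic integration) copies persistent volume data at the file-system level, which keeps the backups portable across the two sites' storage backends.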


Primary & DR Site Layer Comparison Table

| Layer | Component | Primary Site | DR Site | What Happens During DR? | Notes / Tools |
|---|---|---|---|---|---|
| 1️⃣ | Infrastructure | vSphere (ESXi, vCenter) | Same setup | DR vSphere takes over | Ensure hardware compatibility |
| 2️⃣ | Networking | NSX-T | Same NSX-T setup | DR NSX routes traffic | Replicate NSX segments, edge configs |
| 3️⃣ | Cloud Management | vCloud Director | vCloud Director | DR vCD deploys new VMs | Must sync templates across sites |
| 4️⃣ | K8s Provisioning | CSE (TKGm enabled) | CSE (same version) | DR CSE deploys TKGm cluster | Sync catalog/templates |
| 5️⃣ | Kubernetes Cluster | TKGm cluster (running) | TKGm cluster (rebuilt) | Apps are restored on DR cluster | Use Velero / GitOps to restore |
| 6️⃣ | Persistent Storage (PV) | CSI volumes / datastore | Restored from backup or replication | Apps regain their data | Use Velero+Restic, Zerto, or vSphere Replication |
| 7️⃣ | Container Images | Harbor registry | Mirror / backup Harbor | DR cluster pulls same images | Enable Harbor replication between sites |
| 8️⃣ | K8s Configs / YAMLs | GitOps (Flux / ArgoCD) or Velero | Same tools | Re-apply YAMLs in DR | Use Git source or Velero backup |
| 9️⃣ | DNS Failover | DNS entry points to primary | DNS updated to DR IP | DNS points to DR cluster ingress | Manual switch or automated failover (Route53, Cloudflare) |
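Putting layers 4, 5, and 8 together, a hedged sketch of the recovery flow on the DR site: CSE rebuilds the TKGm cluster from the synced catalog, then Velero restores workloads and volumes from the replicated backup bucket. The vCD host, org, cluster spec file, and backup name are placeholders:

```sh
# On the DR site: rebuild the TKGm cluster from a saved cluster spec
# (CSE 3.1+ syntax; the YAML spec was exported from the primary site).
vcd login dr-vcd.example.com myorg admin --password '<password>'
vcd cse cluster apply tkgm-dr-cluster.yaml

# Point Velero on the new cluster at the replicated backup bucket,
# then restore the latest backup taken on the primary site.
velero backup get
velero restore create dr-restore-apps \
  --from-backup daily-apps-20250101020000 \
  --include-namespaces app1,app2
velero restore describe dr-restore-apps
```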

🥶 Cold vs 🔥 Hot Standby Table

| Type | What It Means | Pros | Cons | When to Use |
|---|---|---|---|---|
| 🧊 Cold Standby | DR site is ready but not running TKGm | Cheaper | Slow failover (10–60 min) | Most common, low-cost DR |
| 🔥 Hot Standby | DR cluster runs live and in sync | Fast failover | High cost, complexity | Mission-critical workloads |

🌐 DNS Redirection Table

| Method | Description | Tools | Speed | Recommended When |
|---|---|---|---|---|
| 🛠️ Manual DNS Switch | You change the DNS IP after failover | GoDaddy, Cloudflare, etc. | Slow (a few minutes) | OK for small/low-impact apps |
| ⚙️ Automated Failover | Health check switches the IP automatically | Route53, NS1, F5 GSLB | Fast (seconds to ~1 min) | Critical apps needing <1 min downtime |
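A minimal sketch of the automated-failover row using Route53 failover routing; the zone ID, domain names, health-check settings, and IPs are all placeholders:

```sh
# Health check that probes the primary site's ingress endpoint.
aws route53 create-health-check \
  --caller-reference tkgm-primary-check-001 \
  --health-check-config '{
    "Type": "HTTPS",
    "FullyQualifiedDomainName": "apps.primary.example.com",
    "Port": 443,
    "ResourcePath": "/healthz",
    "RequestInterval": 30,
    "FailureThreshold": 3
  }'

# PRIMARY/SECONDARY records for the same name: Route53 serves the DR IP
# automatically once the primary health check fails.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0000000000EXAMPLE \
  --change-batch file://failover-records.json
```

And the change batch referenced above:

```json
{
  "Changes": [
    { "Action": "UPSERT", "ResourceRecordSet": {
        "Name": "apps.example.com", "Type": "A",
        "SetIdentifier": "primary", "Failover": "PRIMARY",
        "TTL": 60, "HealthCheckId": "<health-check-id>",
        "ResourceRecords": [ { "Value": "203.0.113.10" } ] } },
    { "Action": "UPSERT", "ResourceRecordSet": {
        "Name": "apps.example.com", "Type": "A",
        "SetIdentifier": "dr", "Failover": "SECONDARY",
        "TTL": 60,
        "ResourceRecords": [ { "Value": "198.51.100.20" } ] } }
  ]
}
```

Keeping the TTL low (60 seconds here) is what makes the cutover fast; with a long TTL, resolvers keep serving the primary IP well after the switch.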

