
What to consider when migrating a data center?

A few lessons learned for a successful data center migration, shared for knowledge purposes.




Comprehensive Migration Plan

  • Assigned Project Manager/Owner
  • Environment prerequisites
  • Hardware prerequisites
  • Software prerequisites
  • Task prerequisites
  • Key team players (who will be involved in data center relocation?)
  • Key data for every phase of your data center relocation project
  • Back-out plan (a minimal sketch for tracking these items as data follows this list)
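
As an illustration, here is a minimal Python sketch of tracking the plan items above as data, so cutover can be gated on their completion. All names (MigrationPlan, the category keys, the owner) are hypothetical and not taken from any specific tooling:

```python
# A minimal sketch: gate cutover on plan prerequisites. All names hypothetical.
from dataclasses import dataclass, field


@dataclass
class MigrationPlan:
    project_owner: str
    # Category -> signed off? Mirrors the prerequisite bullets above.
    prerequisites: dict = field(default_factory=dict)
    backout_plan_documented: bool = False

    def ready_for_cutover(self) -> bool:
        """Allow cutover only when every prerequisite is signed off
        and a back-out plan exists."""
        return self.backout_plan_documented and all(self.prerequisites.values())


plan = MigrationPlan(
    project_owner="Assigned Project Manager",
    prerequisites={
        "environment": True,
        "hardware": True,
        "software": True,
        "tasks": False,  # still open, so cutover stays blocked
    },
    backout_plan_documented=True,
)
print(plan.ready_for_cutover())  # False until every item is signed off
```

Keeping the plan as data like this makes it easy to print a go/no-go report for every phase of the relocation.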
Planning and Methodical Implementation

  • Label cabling in existing and target data centers
  • Backups complete
  • DBAs and application administrators disable services
  • Turn off and shut down all data center equipment
  • A network administrator checks cabling, switch configuration, and firewall connectivity (see the connectivity-check sketch after this list)
  • The OS administrator remotely connects to the console to reconfigure the network settings for the service (if not done before shutdown)
  • The OS administrator checks the health of the operating system, network connections, and storage
  • Define allowable-downtime requirements for servers, network, storage, and operating systems
  • DBAs and application administrators restart services
  • The QA team tests all applications and certifies the data center environment (see the smoke-test sketch after this list)
  • Physical size of doors/elevators/stairs and loading docks/ramps
  • Make sure all equipment is accounted for
  • Moving tools: forklift, dolly, flatbed
  • Space for packing and unpacking in both data centers
  • Identify your development, manufacturing, and production equipment to avoid misplacement at the new location
  • Unpacking and racking equipment at the new location
  • Time for cabling and cable-dressing needs
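
To illustrate the connectivity check referenced above, here is a minimal Python sketch that verifies key services answer on their expected TCP ports after the move. Hostnames and ports are hypothetical placeholders:

```python
# A minimal post-move reachability check. Hosts and ports are hypothetical.
import socket

SERVICES = [
    ("db01.new-dc.example.com", 1521),   # database listener
    ("app01.new-dc.example.com", 8443),  # application endpoint
    ("nfs01.new-dc.example.com", 2049),  # NFS storage
]


def reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


for host, port in SERVICES:
    print(f"{host}:{port} -> {'OK' if reachable(host, port) else 'FAILED'}")
```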
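And for the QA certification step, a similarly minimal smoke test that polls application health endpoints; the URLs and the expectation of HTTP 200 responses are assumptions for illustration:

```python
# A minimal application smoke test for certifying the new environment.
# Endpoint URLs are hypothetical.
import urllib.request

HEALTH_ENDPOINTS = [
    "https://app01.new-dc.example.com/healthz",
    "https://api.new-dc.example.com/status",
]


def healthy(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False


failures = [url for url in HEALTH_ENDPOINTS if not healthy(url)]
for url in failures:
    print(f"FAIL: {url}")
if failures:
    raise SystemExit("Environment not certified: some health checks failed")
print("All health checks passed")
```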
