
Choosing the Right OpenShift Service: Service Mesh, Submariner, or Service Interconnect?

In today’s digital world, businesses rely increasingly on interconnected applications and services to operate effectively, which makes integrating software and data across different environments essential. Achieving smooth connectivity is hard, though: differing application designs and a mix of on-premises and cloud systems often lead to inconsistencies. These gaps must be managed carefully so that workloads run reliably, risk stays under control, teams have the right skills, and security remains strong.

This article looks at three Red Hat technologies (Red Hat OpenShift Service Mesh, Red Hat Service Interconnect, and Submariner) in simple terms to help you decide which solution best fits your needs. The table below gives a high-level comparison of the three options.


| Feature | Service Mesh (Istio) | Service Interconnect | Submariner |
|---|---|---|---|
| Purpose | Manages service-to-service communication within a single cluster. | Connects services across different networks and environments at the application layer (e.g., on-premises to cloud). | Provides network-level connectivity between services in different OpenShift/Kubernetes clusters. |
| Key Benefits | Traffic management, security, observability. | Secure communication over the internet, with no VPN required. | Seamless cross-cluster communication over encrypted tunnels between cluster gateways. |
| Use Case Example | Managing microservices within one cluster. | Service A in a private data center communicates with Service B in a public cloud. | Service A in Cluster A reaches Service B in Cluster B over a shared, flat cluster-set network. |
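To make the two multicluster columns concrete, here is a minimal sketch of how each option exposes a service to another cluster. It assumes an existing `backend` Service in a `demo` namespace; the names, namespace, and port are hypothetical placeholders, and the commands follow the upstream Submariner (Lighthouse) and Skupper conventions that Red Hat Service Interconnect is based on.

```yaml
# Submariner (Lighthouse): export an existing Service so that other clusters
# in the same cluster set can discover it over the shared network.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: backend        # must match the name of the Service being exported
  namespace: demo
# Consumers in the other clusters can then resolve it as:
#   backend.demo.svc.clusterset.local
```

```bash
# Red Hat Service Interconnect (Skupper CLI): link two sites at the
# application layer; no VPN or shared network required.

# Site 1 (e.g., the public cloud cluster)
skupper init
skupper token create ~/site1.token

# Site 2 (e.g., the private data center), using the token from site 1
skupper init
skupper link create ~/site1.token
skupper expose deployment/backend --port 8080   # makes 'backend' reachable from site 1
```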


Here are the key differences between Service Mesh and Service Interconnect to help clarify your architectural decisions.


| Feature | OpenShift Service Mesh | Service Interconnect |
|---|---|---|
| Introduction | Centralized control for service interactions | Simplifies interconnection across environments |
| Architecture | Sidecar proxy model | Virtual network overlay with routers |
| Deployment | Containers on Red Hat OpenShift | Containers on Red Hat OpenShift, VMs, bare metal |
| Security | Mutual TLS for pod-to-pod communication | Mutual TLS for router-to-router communication |
| Ownership | Managed by cluster administrators | Managed by development teams |
| Observability | Advanced metrics, logs, and tracing via Kiali | Basic visualization and logging |
| Ideal Use Cases | Traffic management, observability, A/B testing | Progressive migrations, interconnecting diverse platforms |
| Federation | Supports cross-cluster connectivity | No federation; focuses on interconnections |
| Environment Support | OpenShift 4.10 and later | OpenShift, RHEL, Kubernetes, VMs, bare metal |
| Configuration Overhead | Requires an Envoy proxy per application pod | One router per namespace |
| Traffic Management | Advanced capabilities | Basic interconnection |
| Key Considerations | Team skills, workload requirements, platform | Flexibility for dev-team self-service |




Comments

Popular posts from this blog

TKGS VMware/Kubernetes ReadWriteMany Functionality with NFS-CSI

 TKGS VMware WRX Functionality with NFS CSI ReadWriteMany Access mode in Kubernetes When it come to RWX access mode in PVC, TKGS support it if we have the following: 1. Kubernetes is upgraded to 1.22.9 (This version supports this RWX functionality) 2. vSAN should be there in your environment (VMware uses the vpshere csi, which only support vSAN) How to done it without vSAN: 1. Upgrade the kubernetes to version 1.22.9 2. Use NFS-CSI and then create a new storage class to be consumed. Work Around : 2.a : Please use the below link to get the nfs-csi-driver  https://github.com/ibraraziz/csi-driver-nfs Note: It absolutely fine that we have multiple CSI drivers/provisioner in kubernetes (Just for information) Step:1 Goto csi-driver-nfs/deploy/v4.0.0/ and apply that yaml into your environment. It will create NFS csi provisioner and controller pods in namespace of kubesystem as below Step: 2 Now create storage class and goto the example folder  csi-driver- nfs/deploy/example...

PV and PVC Deletion in Kubernetes and remains stuck in terminating state

 First we need to note that :  When you need to delete both PV, PVC then you must start from PVC and then go for PV . I n case mistakenly a PV is deleted first then it goes in terminating state as shown below: Deleted the pv mistakenly Output : See the higlighted one Enlisting the desired PVC for which the PV we have deleted as highlighted  Now if we delete that particular PVC so it will also go into terminating state as shown below After deletion it also goes in terminating state. Work Around Edit the particular PVC like as shown kubectl edit pvc < pvc name> Remove that particular line just as highlighted below: Once Edit is done that Terminiating state is no more there and PVC AND PV completely deleted.👏                     Feel Free to query : Click Here   By: Ibrar Aziz (Cloud Enthusiast) https://ibraraziz-cloud.blogspot.com/ https://www.linkedin.com/in/ibraraziz/