
Kubernetes/Containerization: Network Troubleshooting Tools Packaged into a Single Container

Many times we run containers in Kubernetes in a controlled environment where many IPs are restricted and internet access is not allowed, especially in private environments. In such cases the environment does not include enough utilities, and we cannot download any packages for troubleshooting purposes.


Workaround

Use the following Git repo, which provides a special container packaged with the tools enlisted below for troubleshooting within a Kubernetes environment.

Git Repo: https://github.com/ibraraziz/netshoot

Tools included within container:

apache2-utils
bash
bind-tools
bird
bridge-utils
busybox-extras
calicoctl
conntrack-tools
ctop
curl
dhcping
drill
ethtool
file
fping
httpie
iftop
iperf
iproute2
ipset
iptables
iptraf-ng
iputils
ipvsadm
jq
libc6-compat
liboping
mtr
net-snmp-tools
netcat-openbsd
netgen
nftables
ngrep
nmap
nmap-nping
openssl
py-crypto
py2-virtualenv
python2
scapy
socat
strace
swaks
tcpdump
tcptraceroute
termshark
tshark
util-linux
vim
websocat

How to use:

1.    Create the pod with:

kubectl apply -f <yaml manifest path or URL>

Command below (note: kubectl needs the raw file URL, not the GitHub page URL):

kubectl apply -f https://raw.githubusercontent.com/ibraraziz/netshoot/master/configs/netshoot-sidecar.yaml

For Reference:

For reference, here is the YAML with changes for my environment, as I keep that YAML on my local system:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: netshoot
  labels:
    app: netshoot
spec:
  replicas: 1
  selector:
    matchLabels:
      app: netshoot
  template:
    metadata:
      labels:
        app: netshoot
    spec:
      containers:
      - name: netshoot
        image: docker.io/nicolaka/netshoot
        command: ["/bin/bash"]
        args: ["-c", "while true; do ping localhost; sleep 60; done"]

2.    Once created, list the pod as below.
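The original screenshot is not reproduced here; assuming the Deployment above, listing the pod could look like this (the label `app=netshoot` comes from the manifest):

# List pods created by the netshoot Deployment
kubectl get pods -l app=netshoot -o wide

# Optionally wait until the pod reports Ready before exec'ing into it
kubectl wait --for=condition=Ready pod -l app=netshoot --timeout=120s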


3.    Enter the pod and start using the networking utilities.
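One way to open a shell in the container (a sketch, since the pod name is generated by the Deployment and so is looked up first):

# Capture the generated pod name from the Deployment's label selector
POD=$(kubectl get pods -l app=netshoot -o jsonpath='{.items[0].metadata.name}')

# Open an interactive bash shell inside the netshoot container
kubectl exec -it "$POD" -- bash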


You are now inside the netshoot pod and can start using all of the tools packaged into it.

Sharing Sample Results:
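The original result screenshots are not reproduced here; as an illustration only, a few commands run from inside the pod might look like the following (hostnames and interface names are examples, not taken from the original post):

# DNS lookup against the cluster DNS (bind-tools)
nslookup kubernetes.default.svc.cluster.local

# Trace the network path to an external host (mtr)
mtr --report example.com

# Capture ten packets on the pod's interface (tcpdump)
tcpdump -i eth0 -c 10

# Scan common web ports on a target (nmap)
nmap -p 80,443 example.com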







So all of the tools packaged into this single container are working, and we can take advantage of them for network troubleshooting.


By: Ibrar Aziz (Cloud Enthusiast)
https://ibraraziz-cloud.blogspot.com/
https://www.linkedin.com/in/ibraraziz/
