
Open Source DevOps Tool: Jenkins Deployment on Kubernetes

Jenkins is an open source continuous integration/continuous delivery and deployment (CI/CD) automation tool written in the Java programming language. It is used to implement CI/CD workflows, called pipelines.


DevOps:

DevOps is best explained as people working together to create, build and deliver secure software at the highest speed. DevOps practices enable software developers (devs) and operations (ops) teams to accelerate delivery through automation, collaboration, rapid feedback, and iterative improvement.

DevOps represents a change in mindset for IT culture. Based on agile, lean practices and systems theory, DevOps focuses on incremental development and rapid software delivery. Success depends on the ability to create a culture of accountability, better collaboration, empathy and shared responsibility for business results.

Process flow:

A developer develops an application

The developer submits the code to GitLab

A reviewer reviews the code and commits it to the appropriate branch

Jenkins continuously monitors the relevant branch and starts building the code

Jenkins builds a container image, tags it, and pushes it to Docker Hub

Jenkins deploys the image to the Kubernetes cluster

Kubernetes deploys the updated image using a rolling update strategy


Deployment on a Kubernetes Cluster

1. Open the repo:

https://github.com/ibraraziz/kubernetes-jenkins

2. Clone the repo with a git client, as shown below:
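
A minimal sketch of the clone step, assuming HTTPS access to the repository:

git clone https://github.com/ibraraziz/kubernetes-jenkins.git
cd kubernetes-jenkins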


3. On the Kubernetes cluster, create a namespace named devops-tools:
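
For example, with kubectl:

kubectl create namespace devops-tools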

4. Create the service account using the YAML file from the cloned repo:
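
Assuming the cloned repo contains the service-account manifest as serviceAccount.yaml (the file name may differ in your copy), apply it with kubectl:

kubectl apply -f serviceAccount.yaml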
    

5. Create the PVC: replace the contents of volume.yaml with the YAML below and apply it (the command is shown after the manifest).
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pv-claim # must stay the same as the claim name referenced in the deployment
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: tkg-rwx-policy-dev # replace with a StorageClass available in your environment
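
Something like the following should apply it; since the manifest above has no namespace field, pass devops-tools explicitly so the claim lands where the deployment expects it:

kubectl apply -f volume.yaml -n devops-tools
kubectl get pvc -n devops-tools   # check the claim status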



6. Create the deployment (update the image address as per your environment; I used the Jenkins image with the 2.332.3-jdk11 tag). A sketch of the manifest is shown below.
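
The manifest below is only a minimal sketch of what deployment.yaml can look like; the service account name jenkins-admin is an assumption, so adjust it (and the image reference) to match the files in the cloned repo:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
  namespace: devops-tools
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins-server   # must match the service selector in step 7
  template:
    metadata:
      labels:
        app: jenkins-server
    spec:
      serviceAccountName: jenkins-admin   # assumed name; use the account created in step 4
      securityContext:
        fsGroup: 1000   # the official Jenkins image runs as UID 1000; this lets it write to the volume
      containers:
        - name: jenkins
          image: jenkins/jenkins:2.332.3-jdk11   # update to the image/tag for your environment
          ports:
            - name: httpport
              containerPort: 8080
            - name: jnlpport
              containerPort: 50000
          volumeMounts:
            - name: jenkins-data
              mountPath: /var/jenkins_home
      volumes:
        - name: jenkins-data
          persistentVolumeClaim:
            claimName: jenkins-pv-claim   # must match the PVC created in step 5

Apply it with kubectl apply -f deployment.yaml and wait for the pod to become Ready.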






7. Create a service like the one below to expose Jenkins through a LoadBalancer:


apiVersion: v1
kind: Service
metadata:
  name: jenkins-service
  namespace: devops-tools
  annotations:
      prometheus.io/scrape: 'true'
      prometheus.io/path:   /
      prometheus.io/port:   '8080'
spec:
  selector: 
    app: jenkins-server
  type: LoadBalancer  
  ports:
    - port: 8080
      targetPort: 8080
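
Assuming the manifest above is saved as service.yaml, apply it and read the external IP assigned by the LoadBalancer:

kubectl apply -f service.yaml
kubectl get svc jenkins-service -n devops-tools   # the EXTERNAL-IP column shows the exposed address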

8. Access Jenkins using the exposed IP 10.50.49.28 and port 8080 (the IP will differ in your environment).


 
9. Getting the initial admin password is simple; the commands are shown after these steps:

1. List the Jenkins pod

2. Exec into the pod and read the file at the path shown in the UI above


3. Copy the password and paste it into the UI as below
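
A sketch of these commands, assuming the devops-tools namespace and the default Jenkins home (the unlock screen shows the exact file path, which is typically /var/jenkins_home/secrets/initialAdminPassword):

kubectl get pods -n devops-tools
kubectl exec -it <jenkins-pod-name> -n devops-tools -- cat /var/jenkins_home/secrets/initialAdminPassword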


Congratulations: Jenkins is deployed and accessible now.


Feel free to reach out with any queries.


By: Ibrar Aziz (Cloud Enthusiast)
https://ibraraziz-cloud.blogspot.com/
https://www.linkedin.com/in/ibraraziz/


