This project documents the implementation of a CI/CD pipeline using Jenkins for Docker and Kubernetes.
The project focuses on implementing continuous delivery for Docker containers 🐳. The aim is to continuously build Docker images and deploy them to a Kubernetes cluster 🚀. This approach is commonly used in microservice architecture, but it can be applied anywhere containers are used.
With continuous code changes, there needs to be a continuous build and test process 🔨, as well as regular deployment of the containers 🚛. Deployment is typically handled by the operations team 👷♂️, who manage a container orchestration tool such as Kubernetes 🌐. However, manual deployment creates dependencies 🔗 and is time-consuming ⏰.
To address this, the project aims to automate the build and release process of container images, allowing for fast and continuous deployment as soon as code changes are made by developers 💻. This will be achieved through the implementation of a continuous delivery or deployment pipeline for Docker containers 📦.
The following events happen serially:
- A developer makes a code change and pushes it to GitHub 💻.
- Jenkins fetches the code, including the Dockerfile, Jenkinsfile, and Helm charts 📥.
- The code is tested and analyzed using Checkstyle and SonarQube scanner 🔍, with results uploaded to SonarQube Server 📈.
- If the code passes all quality gates, an artifact is built with Maven 🔨.
- A Docker build process starts to build the Docker image 🐳.
- If everything passes, the Docker image is pushed to Docker Hub 🚀.
- Jenkins uses Helm to deploy the Helm charts to the Kubernetes Cluster 🌐.
- The Helm chart deployment creates all necessary resources, such as pods, services, secrets, and volumes 📦.
- If any changes are made, such as a new image tag for an application pod, they are implemented 🔧.
Follow the README.md file in the https://github.com/SumitM01/CI-using-Jenkins–Nexus-and-Sonaqube repository to create and set up the Continuous Integration pipeline. Create instances for Jenkins and the SonarQube scanner only. Do not create an instance for Nexus artifact storage, as it is not required here.
- Install the following additional plugin on Jenkins:
- Docker Pipeline 🐳
- Log in to the Jenkins instance using SSH and install openjdk-11-jdk and openjdk-8-jdk using the following commands 🔧:
sudo apt update
sudo apt install openjdk-8-jdk -y
sudo apt install openjdk-11-jdk -y
- Configure the JDK installations on Jenkins by providing the JAVA_HOME paths.
- Configure the SonarQube scanner and the SonarQube server with a SonarQube token 🔍.
- SSH into the Jenkins instance and install Docker Engine on it using the following commands 🐳:
#!/bin/bash
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg -y
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo \
"deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
"$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
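The `echo ... | tee` step above assembles the apt source line from the machine's CPU architecture and Ubuntu codename. As a quick illustration, substituting sample values (amd64 and jammy are assumptions for this sketch) shows the line apt will read:

```shell
# Reproduce the apt source line assembled by the script above,
# with sample values standing in for the host's arch and codename.
arch=amd64      # normally: $(dpkg --print-architecture)
codename=jammy  # normally: $(. /etc/os-release && echo "$VERSION_CODENAME")
echo "deb [arch=${arch} signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu ${codename} stable"
```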
- Add the jenkins user to the docker group 🔧, then restart the Jenkins service so the change takes effect:
sudo usermod -aG docker jenkins
sudo systemctl restart jenkins
- Create a domain in GoDaddy and a hosted zone for a subdomain in Route 53 🌐.
- Copy the NS (name server) records from the Route 53 hosted zone to the GoDaddy DNS manager 📋.
- Launch the kOps server instance 🚀.
- Create an S3 bucket 🪣 in the same region as the server.
- Create an IAM user 🔐 for awscli access and store its credentials.
- Install awscli on the kOps server and configure it with the IAM credentials 🔐.
- Run the following commands to install ⬇ awscli:
sudo apt update
sudo apt install awscli -y
- Install kubectl and kops from the kubernetes site 🌐:
- Install kubectl using the following commands 🔧:
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -m 0755 kubectl /usr/local/bin/kubectl
- Install kOps from the Kubernetes site using the following commands 🔧:
curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
sudo install -m 0755 kops-linux-amd64 /usr/local/bin/kops
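The kOps download URL above is built by scraping the latest release tag out of the GitHub API response with `grep` and `cut`. Here is that extraction run against a one-line sample of the JSON (the version number is made up for illustration):

```shell
# Extract the release tag from a sample of the GitHub API JSON,
# exactly as the kOps install command above does.
sample='  "tag_name": "v1.28.4",'   # made-up sample line from the API response
echo "$sample" | grep tag_name | cut -d '"' -f 4
```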
- Generate the ssh keys on the kops server using ssh-keygen 🔑
- On GitHub, go to account settings -> SSH and GPG keys -> New SSH key -> paste the contents of the public SSH key -> save 💾
- Clone the created repo into your kops machine using SSH link 🔗.
git clone [email protected]:git_username/git_repository_name.git
- IMPORTANT: Cloning the repository on the machine validates the authentication using the created SSH keys🔑.
- Copy the contents 📋 of the vprofile-project/cicd-kube branch.
- Clone 🔗 the vprofile-project repo onto the kops machine.
git clone https://github.com/devopshydclub/vprofile-project.git
- Checkout to cicd-kube branch🔀.
git checkout cicd-kube
- Copy all the files in the root to the created repository folder 📋.
cp -r * ../your_created_repo/
- Delete the files inside your created repo that are not required: docker-db, docker-web, ansible, compose 🗑️.
rm -rf docker-db docker-web ansible compose
- Copy the Dockerfile from inside the Dockerapp folder to the root and delete the Dockerapp folder 📋.
cp Dockerapp/Dockerfile .
rm -rf Dockerapp
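The copy-and-prune steps above can be rehearsed safely on a throwaway directory tree before touching the real repositories. This sketch fakes the relevant layout (folder names follow this document; your actual repo contents will differ) and applies the same commands:

```shell
# Rehearse the restructuring steps on a temporary fake layout.
workdir=$(mktemp -d)
cd "$workdir"
mkdir -p src/Dockerapp src/docker-db src/docker-web src/ansible src/compose repo
touch src/Dockerapp/Dockerfile src/pom.xml    # stand-ins for real files
cp -r src/* repo/                             # copy everything into the created repo
cd repo
rm -rf docker-db docker-web ansible compose   # prune what is not required
cp Dockerapp/Dockerfile . && rm -rf Dockerapp # move the Dockerfile to the root
ls                                            # only Dockerfile and pom.xml remain
```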
- Delete 🗑️ the contents of the helm/vprofilecharts/templates folder 📁 and replace them with the contents of the Deploying-an-application-on-Kubernetes-cluster/Setupfiles folder 📁.
cd helm/vprofilecharts/templates
rm -rf *
cd
git clone https://github.com/SumitM01/Deploying-an-application-on-Kubernetes-cluster.git
cp -r Deploying-an-application-on-Kubernetes-cluster/Setupfiles/* your_created_repo/helm/vprofilecharts/templates/
- Create an EC2 volume using the following command🔧:
aws ec2 create-volume --availability-zone=your_preferred_zone --size=3 --volume-type=gp2
- Note down the volume ID as displayed after volume creation📝.
- Paste the copied volume ID into the vprodbdep.yml file 🔧.
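In the Kubernetes deployment, an EBS volume is referenced through an `awsElasticBlockStore` volume source. The fragment below is only a sketch of what that stanza in vprodbdep.yml roughly looks like; the volume name and the example ID are placeholders, and the actual file in the repo may differ:

```yaml
# Sketch of the volume stanza in vprodbdep.yml (names are illustrative).
volumes:
  - name: vprodb-data
    awsElasticBlockStore:
      volumeID: vol-0123456789abcdef0   # paste the ID printed by create-volume
      fsType: ext4
```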
- On AWS console
- Go to EC2 management console🖱️.
- On Navigation Pane, go to volumes 🖱️.
- Search for the created volume with the volume ID and select it🔍.
- Click on Manage tags and add the following tag to it🔧:
- Key : KubernetesCluster
- Value : your_subdomain_name
- IMPORTANT: This tag is necessary; without the cluster tag, the volume will not be attached to the instance that requires it for the database ⚡.
- Run the following command to create a Kubernetes cluster using kOps:🚀
kops create cluster --name=your_subdomain_name --state=s3://your_bucket_name --zones=your_preferred_zone --node-count=2 --node-size=t2.small --master-size=t3.medium --dns-zone=your_subdomain_name
- Run the following command to launch the created cluster using kOps:🚀
kops update cluster --name=your_subdomain_name --state=s3://your_bucket_name --yes --admin
- Wait for 10-15 minutes for the cluster to launch fully.⏳
- While you wait for the cluster to come up, install Helm on the kops server using the following commands:
cd
wget https://get.helm.sh/helm-v3.12.2-linux-amd64.tar.gz
tar -zxvf helm-v3.12.2-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm
helm --help
- Run the following command to validate the created cluster using kOps:✅
kops validate cluster --name=your_subdomain_name --state=s3://your_bucket_name
- Run the following command to remove the control-plane taint so that pods can be scheduled on all nodes on launch.
kubectl taint nodes --all node-role.kubernetes.io/control-plane:NoSchedule-
- IMPORTANT: This matters because recent Kubernetes versions do not allow workloads to be scheduled on control-plane nodes, which causes various errors during this deployment; removing the taint allows scheduling on the control-plane node as well.
- Run the following command to check the zone label of each node.⚡
kubectl get nodes -L zone
- If the ZONE column is empty, assign a zone to each node by running the following command per node.
kubectl label nodes <node-name> zone=your_preferred_zone
- IMPORTANT: This is necessary because a node must be in the same zone as the created volume in order to attach it, and in the zone specified in the deployment files, otherwise errors are raised during deployment.⚡
- On kops server
- Connect to kops-server using SSH client.
- As the ubuntu user, install openjdk-11-jdk on the server.📦
sudo apt update
sudo apt install openjdk-11-jdk -y
- Create the /opt/jenkins-slave folder and give the ubuntu user ownership of it.📁
sudo mkdir /opt/jenkins-slave
sudo chown ubuntu:ubuntu /opt/jenkins-slave
- On Jenkins server
- Configure a node with the following settings:⚙️
- Remote root directory: /opt/jenkins-slave
- Labels: KOPS
- Usage: Only build jobs with label expressions matching this node
- Launch method: Launch agents via SSH
- Host: private IP of the kops instance
- Credentials: private login key of the kops instance
- Host key verification strategy: Non verifying verification strategy
- Availability: Keep this agent online as much as possible
- On your local machine
- Write a Jenkinsfile inside your created repository by referring to the Jenkinsfile present in the vprofile-project/cicd-kube directory.
- Push the contents to the GitHub remote repository.
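As a starting point, the stages described earlier in this document can be sketched in a declarative Jenkinsfile roughly like the one below. This is only a sketch: the credential ID, Docker Hub repository name, Helm release name, and the `--set` key are assumptions to replace with your own values, and the real Jenkinsfile in vprofile-project/cicd-kube should remain your reference.

```groovy
pipeline {
    agent any
    environment {
        registry = 'your_dockerhub_username/vprofileapp'  // assumed repo name
        registryCredential = 'dockerhub'                  // assumed Jenkins credentials ID
    }
    stages {
        stage('Build Artifact') {
            steps { sh 'mvn clean install -DskipTests' }
        }
        stage('Test & Analysis') {
            steps { sh 'mvn test checkstyle:checkstyle' }
        }
        stage('Build Docker Image') {
            steps {
                script { dockerImage = docker.build("${registry}:${BUILD_NUMBER}") }
            }
        }
        stage('Push to Docker Hub') {
            steps {
                script {
                    docker.withRegistry('', registryCredential) {
                        dockerImage.push("${BUILD_NUMBER}")
                        dockerImage.push('latest')
                    }
                }
            }
        }
        stage('Deploy with Helm') {
            agent { label 'KOPS' }  // runs on the kops node configured above
            steps {
                sh "helm upgrade --install vprofile-stack helm/vprofilecharts --set appimage=${registry}:${BUILD_NUMBER}"
            }
        }
    }
}
```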
- On Jenkins
- Create a pipeline job.
- Under build triggers, choose Poll SCM and provide the schedule * * * * * (poll every minute).
- Choose Pipeline script from SCM, provide your GitHub repository, branch, and Jenkinsfile path, then save.💾
- Now commit to the repository and watch the pipeline get triggered automatically by the commit.🚨
- Wait for the pipeline to be completed successfully.✅
After the pipeline completes successfully, do the following:
- On kops server
- SSH to kops-server.
- Run the following command to list all the running services in the project.📋
kubectl get svc
- Copy the load balancer DNS name from the displayed services.
- Create a new record in the Route 53 hosted zone with the load balancer DNS name as its value.🌐
After everything is done, wait 5-10 minutes⏳, then validate the services by accessing the website through its URL.🔗
- Here we can see that the backend services have been created and configured and are running fine.
- Screenshots: User Details page (database validation) and User Details page (cache validation).
- Clean up the services one by one.
- Delete the cluster in the kops vm using the following command
kops delete cluster --name your_subdomain_name --state=s3://your_bucket_name --yes
- Take a snapshot of the entire stack and store it in an s3 bucket for future use.🔮
- Poweroff/terminate the instances
- Delete security groups.🗑️
- Delete S3 buckets if you don't require them.🗑️
- Delete the hosted zone on AWS Route53 if not required.🗑️
This project implemented a complete Continuous Integration and Continuous Deployment pipeline using Jenkins for production deployment on Docker and a Kubernetes cluster. This ensures an efficient, streamlined development process and easier maintenance of the application.
As documented in this README file, I have invested MANY MANY HOURS of my time in researching 🔎, learning 📖, debugging 👨💻 to implement this project. If you appreciate this document please give it a ⭐, share with friends and do give it a try. Thank you for reading this! 😊