DevOps-MasterPiece Project using Git, GitHub, Jenkins, Maven, JUnit, SonarQube, Jfrog Artifactory, Docker, Trivy, AWS S3, Docker Hub, GitHub CLI, EKS, ArgoCD, Prometheus, Grafana, Slack and Hashicorp Vault
In this project, I created an end-to-end, production-like CI/CD pipeline while keeping security best practices and DevSecOps principles in mind, and used Git, GitHub, Jenkins, Maven, JUnit, SonarQube, JFrog Artifactory, Docker, Trivy, AWS S3, Docker Hub, GitHub CLI, EKS, ArgoCD, Prometheus, Grafana, Slack, and HashiCorp Vault to achieve the goal.
When a commit occurs in the application code GitHub repo, a GitHub webhook notifies Jenkins and Jenkins starts the build.
Maven builds the code. If the build fails, the whole pipeline fails and Jenkins notifies the user via Slack; if the build succeeds, then
JUnit runs the unit tests. If the application passes the test cases, the pipeline moves to the next step; otherwise the whole pipeline fails and Jenkins notifies the user that the build has failed.
The SonarQube scanner scans the code and sends the report to the SonarQube server, where the report goes through the quality gate and the result is published on the web dashboard.
In the quality gate, we define conditions or rules, such as how many bugs, vulnerabilities, or code smells are allowed in the code. We also have to create a webhook to send the quality gate status back to Jenkins. If the quality gate fails, the whole pipeline fails and Jenkins notifies the user that the build has failed.
After the quality gate passes, the artifacts are sent to JFrog Artifactory. If the artifacts are pushed to Artifactory successfully, the pipeline moves to the next stage; otherwise the whole pipeline fails and Jenkins notifies the user that the build has failed.
After the artifacts are pushed to Artifactory, Docker builds the Docker image. If the Docker build fails, the whole pipeline fails and Jenkins notifies the user that the build has failed.
Trivy scans the Docker image. If it finds any vulnerability, the whole pipeline fails, the generated report is uploaded to S3 for future review, and Jenkins notifies the user that the build has failed.
After the Trivy scan, the Docker image is pushed to Docker Hub. If the push to Docker Hub fails, the pipeline fails and Jenkins notifies the user that the build has failed.
After the Docker push, Jenkins clones the Kubernetes manifest repo from the feature branch; if the repo is already present, it only pulls the latest changes. If Jenkins is unable to clone the repo, the whole pipeline fails and Jenkins notifies the user that the build has failed.
After cloning the repo, Jenkins updates the image tag in the deployment manifest. If Jenkins is unable to update the image tag, the whole pipeline fails and Jenkins notifies the user that the build has failed.
After updating the image tag, Jenkins commits the change and pushes it to the feature branch. If Jenkins is unable to push the changes, the whole pipeline fails and Jenkins notifies the user that the build has failed.
After pushing the changes to the feature branch, Jenkins creates a pull request against the main branch. If Jenkins is unable to create the pull request, the whole pipeline fails and Jenkins notifies the user that the build has failed.
After the pull request is created, a senior person from the team reviews and merges it.
After the feature branch is merged into the main branch, ArgoCD pulls the changes and deploys the application to Kubernetes.
2 t2.medium (Ubuntu) EC2 instances: one for the SonarQube and HashiCorp Vault servers, and one for JFrog Artifactory
1 t2.large (Ubuntu) EC2 instance for Jenkins, Docker, Trivy, the AWS CLI, the GitHub CLI, and Terraform
1 EKS cluster with t3.medium nodes
Push all the web application code files into GitHub
# Install Docker and allow the current user and the jenkins user to run it
sudo apt update
sudo apt install docker.io
sudo usermod -aG docker $USER
sudo usermod -aG docker jenkins
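The new group membership only takes effect for new sessions, so it can help to restart Docker and Jenkins and confirm that Docker is usable (a quick sanity check, assuming both run as systemd services):
sudo systemctl restart docker
sudo systemctl restart jenkins
docker ps   # should succeed without "permission denied"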
# Install Trivy
sudo apt-get install wget apt-transport-https gnupg lsb-release
wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | sudo apt-key add -
echo deb https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main | sudo tee -a /etc/apt/sources.list.d/trivy.list
sudo apt-get update
sudo apt-get install trivy
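To confirm the installation, you can print the Trivy version and run a quick scan against a public image (the image name here is just an illustrative example):
trivy --version
trivy image --severity HIGH,CRITICAL alpine:3.18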
# Install the AWS CLI
sudo apt install unzip
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
# Install the GitHub CLI (gh)
type -p curl >/dev/null || (sudo apt update && sudo apt install curl -y)
curl -fsSL https://cli.github.com/packages/githubcli-archive-keyring.gpg | sudo dd of=/usr/share/keyrings/githubcli-archive-keyring.gpg \
&& sudo chmod go+r /usr/share/keyrings/githubcli-archive-keyring.gpg \
&& echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" | sudo tee /etc/apt/sources.list.d/github-cli.list > /dev/null \
&& sudo apt update \
&& sudo apt install gh -y
# Install Terraform
sudo apt-get update && sudo apt-get install -y gnupg software-properties-common
wget -O- https://apt.releases.hashicorp.com/gpg | \
gpg --dearmor | \
sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg
gpg --no-default-keyring \
--keyring /usr/share/keyrings/hashicorp-archive-keyring.gpg \
--fingerprint
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] \
https://apt.releases.hashicorp.com $(lsb_release -cs) main" | \
sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update
sudo apt-get install terraform
# On the SonarQube/Vault instance: install Docker and run SonarQube as a container
sudo apt update
sudo apt install docker.io
sudo docker run -d -p 9000:9000 --name sonarqube sonarqube
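SonarQube can take a minute or two to start. You can follow the container logs and then open the UI on port 9000 (assuming the instance's security group allows inbound traffic on that port):
sudo docker logs -f sonarqube
# then browse to http://<sonarqube-instance-public-ip>:9000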
HashiCorp Vault is a secret-management tool specifically designed to control access to sensitive credentials in a low-trust environment.
# Install HashiCorp Vault
sudo curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
sudo apt update
sudo apt install vault -y
# On the Artifactory instance: install Docker and run Artifactory OSS as a container
sudo apt update
sudo apt install docker.io
sudo docker pull docker.bintray.io/jfrog/artifactory-oss:latest
sudo mkdir -p /jfrog/artifactory
sudo chown -R 1030 /jfrog/
sudo docker run --name artifactory -d -p 8081:8081 -p 8082:8082 \
-v /jfrog/artifactory:/var/opt/jfrog/artifactory \
docker.bintray.io/jfrog/artifactory-oss:latest
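Artifactory can also take a few minutes on first start. A simple way to check that it is up is to follow the container logs and then open the UI on port 8082 (again assuming the security group allows it):
sudo docker logs -f artifactory
# then browse to http://<artifactory-instance-public-ip>:8082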
Slack is a workplace communication tool, “a single place for messaging, tools and files.”
Install Slack from the official website of Slack https://slack.com/intl/en-in/downloads/linux
To create EKS Cluster using Terraform, I have put the Terraform code here - https://github.com/praveensirvi1212/medicure-project/tree/master/eks_module
Suggestion: create the EKS cluster only after the Jenkins server is fully configured, i.e. once Jenkins is able to create pull requests in the manifest repo.
Run this command after the EKS cluster is created to update or configure the .kube/config file:
aws eks --region your-region-name update-kubeconfig --name cluster-name
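After updating the kubeconfig, a quick check that the cluster is reachable:
kubectl get nodes        # should list the worker nodes in Ready state
kubectl get svc -A       # should at least show the default kubernetes service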
I am assuming that you already have a Kubernetes cluster running.
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
# Change the service type (e.g. to LoadBalancer or NodePort) to expose the ArgoCD UI outside the cluster
kubectl -n argocd edit svc argocd-server
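If you prefer not to edit the service interactively, an equivalent alternative (not from the original write-up) is to patch the service type and then read the external address:
kubectl -n argocd patch svc argocd-server -p '{"spec": {"type": "LoadBalancer"}}'
kubectl -n argocd get svc argocd-server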
# Install Helm
curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
# Install Prometheus and Grafana with the kube-prometheus-stack Helm chart
helm repo add stable https://charts.helm.sh/stable
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm search repo prometheus-community
kubectl create namespace prometheus
helm install stable prometheus-community/kube-prometheus-stack -n prometheus
kubectl get pods -n prometheus
kubectl get svc -n prometheus
# In order to make Prometheus and Grafana available outside the cluster, use LoadBalancer or NodePort instead of ClusterIP.
#Edit Prometheus Service
kubectl edit svc stable-kube-prometheus-sta-prometheus -n prometheus
#Edit Grafana Service
kubectl edit svc stable-grafana -n prometheus
kubectl get svc -n prometheus
#Access Grafana UI in the browser using load balancer or nodeport
UserName: admin
Password: prom-operator
I am assuming that your Vault server is installed and running
Open the /etc/vault.d/vault.hcl file with vi or nano and make sure it contains the following configuration:
storage "raft" {
  path    = "/opt/vault/data"
  node_id = "raft_node_1"
}

listener "tcp" {
  address     = "0.0.0.0:8200"
  tls_disable = 1
}

api_addr = "http://127.0.0.1:8200"
cluster_addr = "https://127.0.0.1:8201"
ui = true
sudo systemctl stop vault
sudo systemctl start vault
export VAULT_ADDR='http://127.0.0.1:8200'
vault operator init
Copy the unseal keys and the initial root token and save them somewhere for later use.
vault operator unseal
Paste the first unseal token here
vault operator unseal
Paste the second unseal token here
vault operator unseal
Paste the third unseal token here
vault login <Initial_Root_Token>
The <Initial_Root_Token> is found in the output of vault operator init.
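Before continuing, you can confirm that the server is unsealed and that you are logged in (a simple sanity check):
vault status         # Sealed should be false after the three unseal operations
vault token lookup   # shows details of the token you are currently using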
vault auth enable approle
vault write auth/approle/role/jenkins-role token_num_uses=0 secret_id_num_uses=0 policies="jenkins"
This AppRole will be used for the Jenkins integration.
vault read auth/approle/role/jenkins-role/role-id
Copy the role_id and store it somewhere.
vault write -f auth/approle/role/jenkins-role/secret-id
Copy the secret_id and store it somewhere.
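Optionally, you can test the AppRole by logging in with the role_id and secret_id you just copied (the values below are placeholders):
vault write auth/approle/login role_id=<role_id> secret_id=<secret_id>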
Log in to the SonarQube UI with the default credentials admin and admin, then create a webhook pointing to http://jenkins-server-url-with-port/sonarqube-webhook/ so SonarQube can send the quality gate status back to Jenkins.
Log in to the Artifactory UI with the default credentials admin and password, then create a repository, for example my-local-repo, to store the build artifacts.
If you already have a Docker Hub account, there is no need to create another one.
Note: we could create an access token in Docker Hub to integrate with Jenkins, but in this case I am using the Docker Hub username and password.
Create a Slack app with the chat:write and chat:write.customize OAuth scopes. In your Slack channel, type @your-app-name, click the send icon, and then click "Add to Channel".
Run all of these commands on the Vault server:
vault secrets enable -path=secrets kv
vault write secrets/creds/docker username=abcd password=xyz
Likewise, we can store all of the credentials in the Vault server. I have stored only the Docker credentials here, but you can store all of your credentials in the same way.
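You can read the secret back to confirm it was stored under the expected path (the kv engine was mounted at secrets/ as version 1 above, so vault read works):
vault read secrets/creds/docker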
Create a jenkins-policy.hcl file with the following policy:
path "secrets/creds/*" {
  capabilities = ["read"]
}
The policy is created with a * wildcard, which means Jenkins can read credentials from every path under secrets/creds/. There is no need to create a separate policy for each path such as secrets/creds/docker, secrets/creds/slack, and so on.
vault policy write jenkins jenkins-policy.hcl
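To double-check, you can print the policy back and confirm that the jenkins-role references it:
vault policy read jenkins
vault read auth/approle/role/jenkins-role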
The ArgoCD username is admin. To get the initial password, run:
kubectl -n argocd get secret argocd-initial-admin-secret -o yaml
echo "copied-password" | base64 -d
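Equivalently, you can decode the password in one step with jsonpath instead of copying the value by hand:
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d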
echo "your-slack-token" | base64
Edit the argocd-notifications-secret secret to add the Slack token:
kubectl -n argocd edit secret argocd-notifications-secret
Replace xxxxx-xxxxxx-xxxxxx with your base64-encoded Slack token:
apiVersion: v1
data:
  slack-token: xxxxx-xxxxxx-xxxxxx
Edit the argocd-notifications-cm ConfigMap:
kubectl -n argocd edit cm argocd-notifications-cm
apiVersion: v1
data:
  service.slack: |
    token: $slack-token
    username: argocd-bot
    icon: ":rocket:"
  template.app-sync-succeeded-slack: "message: | \n Application {{.app.metadata.name}} is now {{.app.status.sync.status}}\n"
  trigger.on-sync-succeeded: |
    - when: app.status.sync.status == 'Synced'
      send: [app-sync-succeeded-slack]
kubectl -n argocd edit application your-app-name-you-created-in-argocd
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  annotations:
    notifications.argoproj.io/subscribe.on-sync-succeeded.slack: your-slack-channel-name
  name: argocd-demo
  namespace: argocd
Use these docs to import a Grafana dashboard into Grafana: https://www.coachdevops.com/2022/05/how-to-setup-monitoring-on-kubernetes.html
pipeline {
    agent any
    // The Maven installation configured under Manage Jenkins > Tools (named 'apache-maven-3.0.1' here)
    tools {
        maven 'apache-maven-3.0.1'
    }
    stages {
        stage('Example') {
            steps {
                sh 'mvn --version'
            }
        }
    }
}
stage('Checkout git') {
    steps {
        git branch: 'main', url: 'https://github.com/praveensirvi1212/DevOps_MasterPiece-CI-with-Jenkins.git'
    }
}
stage('Build & JUnit Test') {
    steps {
        sh 'mvn install'
    }
}
In this stage, I used withSonarQubeEnv to prepare the SonarQube scanner environment and the shell step sh to run the analysis.
stage('SonarQube Analysis') {
    steps {
        withSonarQubeEnv('SonarQube-server') {
            sh '''mvn clean verify sonar:sonar \
                -Dsonar.projectKey=gitops-with-argocd \
                -Dsonar.projectName='gitops-with-argocd' \
                -Dsonar.host.url=$sonarurl \
                -Dsonar.login=$sonarlogin'''
        }
    }
}
This step pauses Pipeline execution and waits for the previously submitted SonarQube analysis to be completed and returns quality gate status. Setting the parameter abortPipeline to true will abort the pipeline if the quality gate status is not green.
stage("Quality Gate") {
    steps {
        timeout(time: 1, unit: 'HOURS') {
            waitForQualityGate abortPipeline: true
        }
    }
}
stage('Deploy to Artifactory') { // the opening stage line is missing from this excerpt; the stage name here is an assumption
    steps {
        script {
            try {
                def server = Artifactory.newServer url: 'http://13.232.95.58:8082/artifactory', credentialsId: 'jfrog-cred'
                def uploadSpec = """{
                    "files": [
                        {
                            "pattern": "target/*.jar",
                            "target": "${TARGET_REPO}/"
                        }
                    ]
                }"""
                server.upload(uploadSpec)
            } catch (Exception e) {
                error("Failed to deploy artifacts to Artifactory: ${e.message}")
            }
        }
    }
}
First, write your Dockerfile to build the Docker image; I have included my Dockerfile in the application code repo. In this stage, I used the shell command sh to build the Docker image.
stage('Docker Build') {
    steps {
        sh 'docker build -t ${IMAGE_REPO}/${NAME}:${VERSION}-${GIT_COMMIT} .'
    }
}
In this stage, I used the Trivy CLI via the shell command sh to scan the Docker image.
stage('Image Scan') {
    steps {
        sh 'trivy image --format template --template "@/usr/local/share/trivy/templates/html.tpl" -o report.html ${IMAGE_REPO}/${NAME}:${VERSION}-${GIT_COMMIT}'
    }
}
In this stage, I used the shell command sh to upload the scan report to AWS S3.
stage('Upload Scan report to AWS S3') {
    steps {
        // sh 'aws configure set aws_access_key_id "$AWS_ACCESS_KEY_ID" && aws configure set aws_secret_access_key "$AWS_ACCESS_KEY_SECRET" && aws configure set region ap-south-1 && aws configure set output "json"'
        sh 'aws s3 cp report.html s3://devops-mastepiece/'
    }
}
In this stage, I used the shell command sh to push the Docker image to Docker Hub. I stored the credentials in Vault and accessed them in Jenkins using the Vault keys. You could instead store the Docker Hub credentials in Jenkins and use them as environment variables.
stage('Docker Push') {
    steps {
        withVault(configuration: [skipSslVerification: true, timeout: 60, vaultCredentialId: 'vault-token', vaultUrl: 'http://13.232.53.209:8200'], vaultSecrets: [[path: 'secrets/creds/docker', secretValues: [[vaultKey: 'username'], [vaultKey: 'password']]]]) {
            // Single-quoted strings let the shell expand the injected variables and avoid interpolating secrets in Groovy
            sh 'docker login -u $username -p $password'
            sh 'docker push ${IMAGE_REPO}/${NAME}:${VERSION}-${GIT_COMMIT}'
            sh 'docker rmi ${IMAGE_REPO}/${NAME}:${VERSION}-${GIT_COMMIT}'
        }
    }
}
stage('Clone/Pull Repo') {
    steps {
        script {
            if (fileExists('DevOps_MasterPiece-CD-with-argocd')) {
                echo 'Cloned repo already exists - Pulling latest changes'
                dir("DevOps_MasterPiece-CD-with-argocd") {
                    sh 'git pull'
                }
            } else {
                echo 'Repo does not exist - Cloning the repo'
                sh 'git clone -b feature https://github.com/praveensirvi1212/DevOps_MasterPiece-CD-with-argocd.git'
            }
        }
    }
}
stage('Update Manifest') {
    steps {
        dir("DevOps_MasterPiece-CD-with-argocd/yamls") {
            sh 'sed -i "s#praveensirvi.*#${IMAGE_REPO}/${NAME}:${VERSION}-${GIT_COMMIT}#g" deployment.yaml'
            sh 'cat deployment.yaml'
        }
    }
}
stage('Commit & Push') {
    steps {
        withCredentials([string(credentialsId: 'GITHUB_TOKEN', variable: 'GITHUB_TOKEN')]) {
            dir("DevOps_MasterPiece-CD-with-argocd/yamls") {
                sh "git config --global user.email '[email protected]'"
                sh 'git remote set-url origin https://${GITHUB_TOKEN}@github.com/${GIT_USER_NAME}/${GIT_REPO_NAME}'
                sh 'git checkout feature'
                sh 'git add deployment.yaml'
                sh "git commit -am 'Updated image version for Build- ${VERSION}-${GIT_COMMIT}'"
                sh 'git push origin feature'
            }
        }
    }
}
The reason to create a pull request is that ArgoCD syncs automatically with GitHub; GitHub is the single source of truth for ArgoCD. If Jenkins pushed the changes directly to the main branch, ArgoCD would deploy them without anyone reviewing them, which should not happen in a production environment. That is why we create a pull request against the main branch, so a senior person from the team can review the changes and merge them; only then do the changes reach the production environment.
Here token.txt contains the GitHub token; the reason for storing the token in a text file is that gh auth login --with-token accepts the token only on STDIN.
stage('Raise PR') {
    steps {
        withCredentials([string(credentialsId: 'GITHUB_TOKEN', variable: 'GITHUB_TOKEN')]) {
            dir("DevOps_MasterPiece-CD-with-argocd/yamls") {
                sh '''
                    set +u
                    unset GITHUB_TOKEN
                    gh auth login --with-token < token.txt
                '''
                sh 'git branch'
                sh 'git checkout feature'
                sh "gh pr create -t 'image tag updated' -b 'check and merge it'"
            }
        }
    }
}
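The stage above expects a token.txt file in the working directory. One way to produce it from the same Jenkins credential, just before the gh auth login call (a sketch, not the exact original implementation), is:
echo "$GITHUB_TOKEN" > token.txt
unset GITHUB_TOKEN                     # gh auth login refuses to run while this variable is set
gh auth login --with-token < token.txt
rm -f token.txt                        # avoid leaving the token on disk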
In the post-build action I used a Slack notification: after the build, Jenkins sends a message to Slack stating whether the build succeeded or failed.
post {
    always {
        sendSlackNotification()
    }
}
sendSlackNotification function
def sendSlackNotification()
{
    if ( currentBuild.currentResult == "SUCCESS" ) {
        buildSummary = "Job_name: ${env.JOB_NAME}\n Build_id: ${env.BUILD_ID} \n Status: *SUCCESS*\n Build_url: ${BUILD_URL}\n Job_url: ${JOB_URL} \n"
        slackSend( channel: "#devops", token: 'slack-token', color: 'good', message: "${buildSummary}")
    }
    else {
        buildSummary = "Job_name: ${env.JOB_NAME}\n Build_id: ${env.BUILD_ID} \n Status: *FAILURE*\n Build_url: ${BUILD_URL}\n Job_url: ${JOB_URL}\n \n "
        slackSend( channel: "#devops", token: 'slack-token', color: "danger", message: "${buildSummary}")
    }
}
https://github.com/praveensirvi1212/DevOps_MasterPiece-CI-with-Jenkins/blob/main/Jenkinsfile
Sorry, I forgot to change the stage name while building the job, but don't worry, I have made the change in the Jenkinsfile.
SonarQube Quality gate status is green and passed.
You can apply your own custom quality gate, for example requiring zero bugs, vulnerabilities, and code smells; if your code has more than zero of any of these, the quality gate status becomes a failure (red). If the quality gate fails, the stages after the quality gate stage will also fail.