Prometheus Install on a k8s Cluster in AWS

Rangaswamy P V
21 min read · Jun 9, 2023


In this article we will install a Prometheus server to achieve the observability goals for a Kubernetes cluster. To store the Prometheus data (metrics and alerts) we are going to use external cloud-provided storage, so we will need the "ebs-csi" out-of-tree driver plugin (there are other options such as local storage or an NFS server, but in this article we will concentrate on cloud storage). All of these requirements are taken care of by the Terraform script mentioned in this article. We will also look at a hidden-gem YAML generator that can be applied universally to most k8s resources. The next article in this link walks you through the Grafana install.

The Cluster

If you do not have a k8s cluster, follow the steps below to install one. The requirements for the cluster install are network storage (I am using cloud storage) for the Prometheus data, the Cloud Controller Manager, and the "ebs-csi" controller for out-of-tree cloud storage on AWS. The following Terraform script takes care of these requirements; the link to my repo is below.
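
Before running the script, it is worth confirming the tooling on the box you will run Terraform from. A quick, hedged check (the repo also ships an installtf.sh, which appears to handle the Terraform install itself):

$ terraform version   #any recent 1.x release should work with the providers shown in the init output below
$ aws --version       #the AWS CLI is optional here; the providers authenticate with the keys in secrets.tfvars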

$ mkdir tf; cd tf; git init; git pull https://github.com/rangapv/tf.git
$ ls
Prometheus README.md ec2-simplek8s ec2-vanilla eks-aws installtf.sh
$ cd Prometheus;ls
ec2-simplek8s
~/tf/Prometheus/ec2-simplek8s$ ls
Aldo3.pem ccm2.json modify.secret.pvrangaswamy policy4.json secrets.tfvars.pvrangaswamy tf.log
README.md dns.json modify.secret.rangapv policy6.json secrets.tfvars.rangapv user1.sh
ThirdKey.pem ebspolicy.json modify.secret.rangapv08 policy7.json secrets.tfvars.rangapv08 variables.tf
YCStartup2018.pem ec2.tf modify.secret.rangapv76 remote.sh secrets.tfvars.rangapv76 vpc.tf
ccm-control.json k8sStore.json policy1.json remotecsi.sh terraform.tf webserver.pem
ccm-node.json k8sStoreOrigin.json policy2.json role.tf terraform.tfstate
ccm1.json modify.secret policy3.json secrets.tfvars terraform.tfstate.backup

Download the above repo and navigate to the Prometheus directory.

A few changes need to be made here. The first is to the "modify.secret" file: rename it to "secrets.tfvars" and make your account-specific changes as bulleted below.

$ mv modify.secret.rangapv secrets.tfvars
$ vi secrets.tfvars
accesskey = "AKI***************S3D"
secretkey = "iJ8cF*********yo/K78B"
ami = "ami-03f65b8614a860c29"
public_subnets = "172.1.0.0/16"
public_snets = "0.0.0.0/0"
region = "us-west-2"
zone = "us-west-2c"
keypath = "ThirdKey.pem"
key_name = "ThirdKey"

  1. Change the "accesskey" and "secretkey" values to your AWS account-specific ones.
  2. Change the "ami" to reflect an Ubuntu 22.04 AMI in your AWS account for the region us-west-2 (an AMI lookup example is shown a little further below).
  3. Change the values of "keypath" and "key_name" to your key in AWS. My private key is "ThirdKey.pem". It should also be in the "openssl" (PEM) format, as mentioned in Addendum1 of my earlier article; a conversion sketch is shown below. Then sftp it (the private key) to the folder on the box where you are running the Terraform script.

fyi: this is the private-key half of the pair. The public key ("key_name") is already in your AWS account from when you created the key pair, and you would have downloaded the private key through your browser at creation time (EC2 -> Key pairs -> Create key pair; if you chose to create the key in .pem format you are good to go). Just sftp this file to the Terraform box/folder from which you are running the code.
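
If you generated the key yourself rather than downloading the .pem from the console, it may be in the newer OpenSSH format instead of PEM. A minimal sketch of converting and staging it, assuming the file is named ThirdKey.pem and the repo layout shown earlier:

$ chmod 400 ThirdKey.pem
#Convert an OpenSSH-format private key to PEM in place (skip this if the header already reads "-----BEGIN RSA PRIVATE KEY-----")
$ ssh-keygen -p -m PEM -f ThirdKey.pem
#Copy it next to the Terraform files on the box you run the scripts from
$ scp ThirdKey.pem ubuntu@<terraform-box-ip>:~/tf/Prometheus/ec2-simplek8s/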

The other prerequisite is that the "ebs-csi" driver creates a dynamic volume in your AWS account in a particular region. I tried several different zones and only "us-west-2" seemed to work, so it is recommended you stick to this region and zone for Prometheus installs that use cloud storage.
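
Since the install is pinned to us-west-2, you need an Ubuntu 22.04 AMI from that region for bullet 2 above. One hedged way to look one up is with the AWS CLI; the owner ID below is Canonical's and the name filter targets the Jammy images, but the filter may need adjusting over time:

$ aws ec2 describe-images --region us-west-2 --owners 099720109477 \
    --filters "Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*" \
              "Name=state,Values=available" \
    --query 'sort_by(Images, &CreationDate)[-1].ImageId' --output text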

The rest of the scripts are good to go.

Let's run the Terraform script as follows:

$ terraform init

Initializing the backend...

Initializing provider plugins...
- Reusing previous version of hashicorp/tls from the dependency lock file
- Reusing previous version of hashicorp/cloudinit from the dependency lock file
- Reusing previous version of hashicorp/kubernetes from the dependency lock file
- Reusing previous version of hashicorp/aws from the dependency lock file
- Reusing previous version of hashicorp/random from the dependency lock file
- Using previously-installed hashicorp/random v3.4.3
- Using previously-installed hashicorp/tls v4.0.4
- Using previously-installed hashicorp/cloudinit v2.2.0
- Using previously-installed hashicorp/kubernetes v2.16.1
- Using previously-installed hashicorp/aws v4.57.1

Terraform has been successfully initialized!

and terraform plan as follows

$ terraform plan -var-file="secrets.tfvars"
aws_iam_policy.policy1: Refreshing state... [id=arn:aws:iam::988002206709:policy/terraform-20230515162426413800000001]
aws_vpc.awstfvpc: Refreshing state... [id=vpc-0cb39911b212b464c]
aws_iam_policy.policy2: Refreshing state... [id=arn:aws:iam::988002206709:policy/terraform-20230515162426415100000002]
aws_iam_role.k8s_role: Refreshing state... [id=k8s_role]
aws_iam_instance_profile.k8s_profile: Refreshing state... [id=k8s_profile]
aws_internet_gateway.IGW1: Refreshing state... [id=igw-08a40225946471c9a]
aws_subnet.publicsubnetstf: Refreshing state... [id=subnet-054f3d33b1a0ed6f8]
aws_security_group.vpc_security_tf: Refreshing state... [id=sg-03dc07ab1518c25a1]
aws_route_table.PublicRT: Refreshing state... [id=rtb-0c842bff81aa258c9]
..
..

and finally terraform apply:

$ terraform apply -var-file="secrets.tfvars"
aws_iam_policy.policy1: Refreshing state... [id=arn:aws:iam::988002206709:policy/terraform-20230515162426413800000001]
aws_vpc.awstfvpc: Refreshing state... [id=vpc-0cb39911b212b464c]
aws_iam_policy.policy2: Refreshing state... [id=arn:aws:iam::988002206709:policy/terraform-20230515162426415100000002]
aws_iam_role.k8s_role: Refreshing state... [id=k8s_role]
aws_iam_instance_profile.k8s_profile: Refreshing state... [id=k8s_profile]
aws_internet_gateway.IGW1: Refreshing state... [id=igw-08a40225946471c9a]
aws_subnet.publicsubnetstf: Refreshing state... [id=subnet-054f3d33b1a0ed6f8]
aws_security_group.vpc_security_tf: Refreshing state... [id=sg-03dc07ab151
..
..

You will have a working k8s cluster with all the required add-ons for the Prometheus components. Let us delve into the cluster.

Check cluster status

$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-172-1-104-212.us-west-2.compute.internal Ready control-plane 93m v1.27.2
ip-172-1-166-9.us-west-2.compute.internal Ready <none> 91m v1.27.2

Check Cluster Pods

$ kubectl get po --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system cloud-controller-manager-rnfkd 1/1 Running 0 92m
kube-system coredns-5d78c9869d-45kbx 1/1 Running 0 93m
kube-system coredns-5d78c9869d-7vzq2 1/1 Running 0 93m
kube-system ebs-csi-controller-689b48f644-2ppzw 5/5 Running 0 91m
kube-system ebs-csi-controller-689b48f644-6dww6 5/5 Running 0 91m
kube-system ebs-csi-node-55gmn 3/3 Running 0 91m
kube-system ebs-csi-node-6qc5k 3/3 Running 0 91m
kube-system etcd-ip-172-1-104-212.us-west-2.compute.internal 1/1 Running 0 94m
kube-system kube-apiserver-ip-172-1-104-212.us-west-2.compute.internal 1/1 Running 0 93m
kube-system kube-controller-manager-ip-172-1-104-212.us-west-2.compute.internal 1/1 Running 0 93m
kube-system kube-flannel-ds-bnsc5 1/1 Running 0 92m
kube-system kube-flannel-ds-v28h7 1/1 Running 0 93m
kube-system kube-proxy-ktgbw 1/1 Running 0 93m
kube-system kube-proxy-xcw98 1/1 Running 0 92m
kube-system kube-scheduler-ip-172-1-104-212.us-west-2.compute.internal 1/1 Running 0 94m
kubernetes-dashboard dashboard-metrics-scraper-5cb4f4bb9c-hlnkm 1/1 Running 0 93m
kubernetes-dashboard kubernetes-dashboard-6967859bff-gxhfp 1/1 Running 0 93m
ray-system kuberay-operator-5cd667675b-dwsx4 1/1 Running 0 93m

Check services

$ kubectl get svc --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 95m
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 95m
kubernetes-dashboard dashboard-metrics-scraper ClusterIP 10.104.128.0 <none> 8000/TCP 95m
kubernetes-dashboard kubernetes-dashboard NodePort 10.109.34.86 <none> 443:30002/TCP 95m
ray-system kuberay-operator ClusterIP 10.106.146.158 <none> 8080/TCP 95m

So far the Cluster is working fine and we can proceed with the Prometheus Install in the next section.

A Hidden GEM:

Now we need to install Prometheus as a Deployment via YAML. I have made life easier by creating a script that prompts for the Prometheus YAML values and creates all the necessary YAML specifications, which can then be applied to the cluster.

The script I am referring to is the YAML generator. The link to it is in my repo here. I will briefly go over the usage of the scripts and cover them at length in a separate Medium article later, so that you can reuse the script for other Kubernetes YAML generation. It is fairly universal, and more features are being added to make it more robust. This Medium article explains the underlying database in detail.

For this to work you need to clone my repo as shown below.

$ mkdir cn; cd cn; git init; git pull https://github.com/rangapv/CloudNative.git
$ ls
README.md generator helm.sh unit-test utilities ylg
$ cd generator ; ls
Prometheus fill.sh gen.sh

Navigate to the "generator" sub-folder and then to "Prometheus" underneath it. "fill.sh" and "gen.sh" are the two workhorses that generate the prompts and the finished YAMLs for the resources we need to deploy Prometheus. More on these scripts, including the architecture, can be found here.

The Prometheus directory is shown below

$ cd Prometheus
$ ls -l
total 28
drwxrwxr-x 2 ubuntu ubuntu 4096 Jun 8 14:41 configmap
drwxrwxr-x 2 ubuntu ubuntu 4096 Jun 8 14:41 deployment
drwxrwxr-x 2 ubuntu ubuntu 4096 Jun 8 14:41 namespace
drwxrwxr-x 2 ubuntu ubuntu 4096 Jun 8 14:41 pv
drwxrwxr-x 2 ubuntu ubuntu 4096 Jun 8 14:41 pvc
drwxrwxr-x 2 ubuntu ubuntu 4096 Jun 8 14:41 service
drwxrwxr-x 2 ubuntu ubuntu 4096 Jun 8 14:41 storageclass

Each of the sub-folders is a Kubernetes resource for which we can generate the YAML definition and populate it with the appropriate values. In a nutshell, everything is driven by the database "ylgdb.sh" in the "ylg" sub-folder; you can browse it once you clone the above repo.

For our Prometheus requirements we need the following

  1. Namespace
  2. StorageClass
  3. PersistentVolumeClaim
  4. ConfigMap
  5. Deployment
  6. Service

Let us go into each folder..

Note: All the resource generators are the same; they differ only in their "myindx" array values.

1. Namespace: Every folder has a generator file; here it is "ns-generator.sh". It takes the command-line argument "gen" to generate the skeletal YAML file "nsr.yaml" for the resource and an "nsv.yaml" file for the user to input values. Once you fill in the "nsv.yaml" file and re-run ns-generator.sh with "fill" as the command-line argument, it populates "nsr.yaml", which is what we need and which can be applied to the cluster with "kubectl apply", as shown below. So you need not worry about the indentation of the resource or the default values for the YAML; of course you should know the most relevant values, for example the image tag, the namespace you want to create, the relevant mount points and so on.
ubuntu@ip-172-1-104-212:~/cn/generator/Prometheus$ cd namespace/
ubuntu@ip-172-1-104-212:~/cn/generator/Prometheus/namespace$ ls
ns-generator.sh nsr.yaml nsv.yaml nsv.yaml.bkp

$ ./ns-generator.sh gen

#This is the skeletal file First time generated
$ vi nsr.yaml
apiVersion:
kind:
metadata:
  name:

#This is the generated values file that user needs to populate with his values
$ vi nsv.yaml
Version:
kind:
Name for this kind:

#This is the user filled file
$ vi nsv.yaml
Version: v1
kind: Namespace
Name for this kind: monitoring

#Re-run the generator file with "fill" command line argument as shown below
$ ./ns-generator.sh fill

#Note the YAML is auto populated below with the values you entered in the above
$ vi nsr.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: monitoring

$ kubectl apply -f ./nsr.yaml
namespace/monitoring created

Also note that if you are unsure of the default values, there is a *.bkp values file in each folder that holds defaults. You can simply copy that file over the values file and run the generator's fill command, as in the example below.
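
For instance, in the namespace folder the shortcut looks like this (a sketch using the file names from the listing above; the same pattern applies in every other folder with its own file names):

$ cp nsv.yaml.bkp nsv.yaml      #start from the shipped defaults
$ vi nsv.yaml                   #adjust anything specific to your setup
$ ./ns-generator.sh fill        #regenerate nsr.yaml from the values
$ kubectl apply -f ./nsr.yaml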

2. ConfigMap: Repeat the process as mentioned above.

$ cd configmap/
$ ls

cm-generator.sh configmap-promth-value.yaml configmap-promth-value.yaml.bkp configmap-promth.yaml

#Run the generator with gen argument
$ ./cm-generator.sh gen


#This is the skeletal file First time generated
$ vi configmap-promth.yaml

apiVersion:
kind:
metadata:
  name:
  namespace:
  annotations:
data:
  prometheus.yml: |
    global:
      scrape_interval:
      evaluation_interval:
    alerting:
      alertmanagers:
        - static_configs:
            - targets:
    rule_files:
    scrape_configs:
      - job_name:
        static_configs:
          - targets:

#This is the generated values file that user needs to populate with his values
$ vi configmap-promth-value.yaml

Version:
kind:
Name for this kind:
Namespaces that you want this kind to be deployed:
The list of annotations for this kind:
The interval in which you need the metric to be scrapped:
The nterval in which you need the evaluation:
The alertmanagers header:
The static config header:
The target value header:
The rules file value:
The name of the job:
The Host&Port detail:


#This is the user filled file
$ vi configmap-promth-value.yaml
Version: v1
kind:ConfigMap
Name for this kind:prometheus-config
Namespaces that you want this kind to be deployed:monitoring
The interval in which you need the metric to be scrapped: 15s
The nterval in which you need the evaluation: 15s
The alertmanagers header:
The static config header:
The target value header:
The rules file value:# - "example-file.yml"
The name of the job:'prometheus'
The Host&Port detail:['localhost:9090']

#Re-run the generator file with "fill" command line argument as shown below
$ ./cm-generator.sh fill

#Note the YAML is auto populated below with the values you entered in the above
$ vi configmap-promth.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: monitoring
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
      evaluation_interval: 15s
    alerting:
      alertmanagers:
        - static_configs:
            - targets:
    rule_files: # - "example-file.yml"
    scrape_configs:
      - job_name: 'prometheus'
        static_configs:
          - targets: ['localhost:9090']


$ kubectl apply -f ./configmap-promth.yaml
configmap/prometheus-config created
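
Optionally, sanity-check that the rendered prometheus.yml inside the ConfigMap looks the way you expect before wiring it into the Deployment:

$ kubectl -n monitoring get configmap
$ kubectl -n monitoring get configmap prometheus-config -o yaml   #shows the rendered prometheus.yml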

3. StorageClass: Repeat the same process as shown below.

$ cd storageclass/
ubuntu@ip-172-1-104-212:~/cn/generator/Prometheus/storageclass$ ls
scr.yaml scv.yaml scv.yaml.bkp storageclass.sh


#Run the generator with gen argument
$ ./storageclass.sh gen


#This is the skeletal file First time generated
$ vi scr.yaml

apiVersion:
kind:
metadata:
  name:
  namespace:
  annotations:
provisioner:
parameters:
  type:

#This is the generated values file that user needs to populate with his values
$vi scv.yaml

Version:
kind:
Name for this kind:
Namespaces that you want this kind to be deployed:
The list of annotations for this kind:
The kind of provisioner for aws/gcp/azure:
The storage type for aws/gcp/azure:

#This is the user filled file
$ vi scv.yaml

Version: storage.k8s.io/v1
kind: StorageClass
Name for this kind: prom1
Namespaces that you want this kind to be deployed: monitoring
The list of annotations for this kind:
The kind of provisioner for aws/gcp/azure:ebs.csi.aws.com
The storage type for aws/gcp/azure:gp3

#Re-run the generator file with "fill" command line argument as shown below
$ ./storageclass.sh fill


#Note the YAML is auto populated below with the values you entered in the above
$ vi scr.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: prom1
  namespace: monitoring
  annotations:
provisioner: ebs.csi.aws.com
parameters:
  type: gp3

$ kubectl apply -f ./scr.yaml
storageclass.storage.k8s.io/prom1 created

For the Prometheus deployment we are attaching an external cloud volume to store our data. Hence we are using a PersistentVolumeClaim in a dynamic way; when the PVC is provisioned dynamically we need not create the PV ourselves, it gets created automatically.

4. PersistentVolumeClaim: Repeat the steps shown below with your values. For example, in this case copy "pvcv.yaml.bkp" to "pvcv.yaml" and then re-run "pvc-generator.sh fill". You will have a working YAML for the "PersistentVolumeClaim" resource which can be applied via kubectl apply.


$ cd pvc
ubuntu@ip-172-1-104-212:~/cn/generator/Prometheus/pvc$ ls
pvc-generator.sh pvc.yaml pvcv.yaml pvcv.yaml.bkp


#Run the generator with gen argument
$ ./pvc-generator.sh gen


#This is the skeletal file First time generated
$ vi pvc.yaml

apiVersion:
kind:
metadata:
  name:
  namespace:
  labels:
    app:
spec:
  storageClassName:
  accessModes:
    -
  resources:
    requests:
      storage:

#This is the generated values file that user needs to populate with his values
$ vi pvcv.yaml

Version:
kind:
Name for this kind:
Namespaces that you want this kind to be deployed:
app name to be referenced:
User defined Name for the Storage Class:
The Access Modes Value needs to be mentioned Read/Write/ReadWrite etc:
The storage Details(Mb/Gb):

#This is the user filled file
$ vi pvcv.yaml

Version:v1
kind:PersistentVolumeClaim
Name for this kind:pvc1-data
Namespaces that you want this kind to be deployed:monitoring
app name to be referenced:prometheus-deployment
User defined Name for the Storage Class:prom1
The Access Modes Value needs to be mentioned Read/Write/ReadWrite etc:ReadWriteOnce
The storage Details(Mb/Gb):11Gi

#Re-run the generator file with "fill" command line argument as shown below
$ ./pvc-generator.sh fill


#Note the YAML is auto populated below with the values you entered in the above
$ vi pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1-data
  namespace: monitoring
  labels:
    app: prometheus-deployment
spec:
  storageClassName: prom1
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 11Gi

$ kubectl apply -f ./pvc.yaml
persistentvolumeclaim/pvc1-data created
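
At this point you can check the claim. Depending on the StorageClass binding mode, the PVC may bind immediately or only once the Prometheus Pod that consumes it is scheduled; either way the backing PersistentVolume (an EBS volume) appears automatically:

$ kubectl -n monitoring get pvc
$ kubectl get pv    #the dynamically provisioned volume shows up here once the claim is bound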

5. Deployment: Repeat the steps shown below. Note in the filled values file below how the volumeMounts are entered as pairs (volume-mount-name;mountpath) and how the "args" for containers are comma-separated. You can find more about the hidden gem's usage in this article; it delves in depth into the usage of this generator script with the custom "ylgdb.sh" database, in the Medium article titled "YLG Database for YAML generation".

$ cd deployment/
ubuntu@ip-172-1-104-212:~/cn/generator/Prometheus/deployment$ ls
deployment-generator.sh dgr.yaml dgv.yaml


Let us look at the generator file...

#deployment-generator.sh
#!/bin/bash

set -E

#The YAML file name for this resource; this can be changed
pvfile="dgr.yaml"
#The values file name for this resource; this can be changed
pvvfile="dgv.yaml"
#The starting indexes in the database (ylgdb.sh) for this resource; there can be multiple indexes depending on your particular use case
myindx="1,2,3(type annotations),4,20,25(mountPath),26(mountPath protocol command envFrom env workingDir volumeDevices),29,30(defaultMode)"


if [[ "$1" == "gen" ]]
then

    source <(curl -s https://raw.githubusercontent.com/rangapv/CloudNative/main/generator/gen.sh) "gen" "$pvfile" "$pvvfile" "$myindx"

elif [[ "$1" == "fill" ]]
then

    source <(curl -s https://raw.githubusercontent.com/rangapv/CloudNative/main/generator/gen.sh) "fill" "$pvfile" "$pvvfile" "$myindx"
else
    echo "usage: deployment-generator.sh gen/fill"

fi


#Run the generator with gen argument
$ ./deployment-generator.sh gen


#This is the skeletal file First time generated
$vi dgr.yaml
apiVersion:
kind:
metadata:
  name:
  namespace:
  labels:
    app:
spec:
  replicas:
  selector:
    matchLabels:
      app:
  strategy:
    rollingUpdate:
      maxSurge:
      maxUnavailable:
    type:
  revisionHistoryLimit:
  minReadySeconds:
  progressDeadlineSeconds:
  template:
    metadata:
      labels:
        app:
    spec:
      restartPolicy:
      initContainers:
        - name:
          image:
          command:
          volumeMounts:
            - name:
      containers:
        - name:
          image:
          imagePullPolicy:
          command:
          args:
          ports:
            - containerPort:
              hostIP:
              name:
              protocol:
          volumeMounts:
            - name:
              mountPropagation:
              readOnly:
              subPath:
              subPathExpr:
      volumes:
        - name:
          persistentVolumeClaim:
            claimName:
        - name:
          configMap:
            name:

#This is the generated values file that user needs to populate with his values
$ vi dgv.yaml

Version:
kind:
Name for this kind:
Namespaces that you want this kind to be deployed:
app name to be referenced:
Replicas:
The app name for the Pod to match:
The max surge number for the Pod:
The max Unavailabe details for the Pod:
The strategy type for the Pod:
Enter number of old ReplicaSets to retain to allow rollback(default is 10):
Minimum number of seconds for it to be considered available:
maximum time (defaults 600s) for a deployment before it is considered to be failed:
The label for the Pod if any:
The app name for the Pod if any:
Restart policy for all containers within the pod(Always/OnFailure/Never):
The name for the init container:
The init container image:
The command to be executed for the init container:
Name of the volume mount in init container in the format mountpoints;mountpath:
Name of the container:
Container image name:
Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided:
Arguments to the entrypoint. The container image's CMD is used if this is not provided:
Port number to expose on the pods IP address(o-65536):
host IP to bind the external port to.:
Each named port in a pod must have a unique name:
Protocol for port. (UDP/TCP/SCTP Defaults to TCP):
Name of the Volume and the mount path enter value as name;path-of-mount here:
determines how mounts are propagated from the host to container and the other way around:
Mount volume value read-only if true:
Path within the volume from which the container's volume should be mounted.(Default to root):
Expanded path within the volume from which the container's volume should be mounted.:
spec for persistent Volume claim:
The Persistent claim name:
spec for Volume Configmap Volume:
In the options you have chosen Configmap pls enter the value as name;defaultmode here:


#This is the user filled file
$ vi dgv.yaml

Version: apps/v1
kind:Deployment
Name for this kind:prometheus
Namespaces that you want this kind to be deployed:monitoring
app name to be referenced:prometheus
Replicas:1
The app name for the Pod to match:prometheus
The max surge number for the Pod:1
The max Unavailabe details for the Pod:1
The strategy type for the Pod:RollingUpdate
Enter number of old ReplicaSets to retain to allow rollback(default is 10):2
Minimum number of seconds for it to be considered available:
maximum time (defaults 600s) for a deployment before it is considered to be failed:
The label for the Pod if any:
The app name for the Pod if any:prometheus
Restart policy for all containers within the pod(Always/OnFailure/Never):Always
The name for the init container:prom-busy-bos-set-permis
The init container image:busybox
The command to be executed for the init container:["/bin/chmod","-R","777","/prometheus"]
Name of the volume mount in init container in the format mountpoints;mountpath:prometheus-storage;/prometheus
Name of the container:prometheus
Container image name:prom/prometheus:v2.44.0
Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided:
Arguments to the entrypoint. The container image's CMD is used if this is not provided:--storage.tsdb.retention=6h,--storage.tsdb.path=/prometheus,--config.file=/etc/prometheus/prometheus.yml
Port number to expose on the pods IP address(o-65536):9090
host IP to bind the external port to.:
Each named port in a pod must have a unique name:promport
Protocol for port. (UDP/TCP/SCTP Defaults to TCP):
Name of the Volume and the mount path enter value as name;path-of-mount here:prometheus-config;/etc/prometheus,prometheus-storage;/prometheus
determines how mounts are propagated from the host to container and the other way around:
Mount volume value read-only if true:
Path within the volume from which the container's volume should be mounted.(Default to root):
Expanded path within the volume from which the container's volume should be mounted.:
spec for persistent Volume claim:prometheus-storage
The Persistent claim name:pvc1-data
spec for Volume Configmap Volume:prometheus-config
In the options you have chosen Configmap pls enter the value as name;defaultmode here:prometheus-config;420

#Re-run the generator file with "fill" command line argument as shown below
$ ./deployment-generator.sh fill


#Note the YAML is auto populated below with the values you entered in the above
$ vi dgr.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus
  namespace: monitoring
  labels:
    app: prometheus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  revisionHistoryLimit: 2
  minReadySeconds:
  progressDeadlineSeconds:
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      restartPolicy: Always
      initContainers:
        - name: prom-busy-bos-set-permis
          image: busybox
          command: ["/bin/chmod","-R","777","/prometheus"]
          volumeMounts:
            - name: prometheus-storage
              mountPath: /prometheus
      containers:
        - name: prometheus
          image: prom/prometheus:v2.44.0
          command:
          args:
            - "--storage.tsdb.retention=6h"
            - "--storage.tsdb.path=/prometheus"
            - "--config.file=/etc/prometheus/prometheus.yml"
          ports:
            - containerPort: 9090
              hostIP:
              name: promport
              protocol:
          volumeMounts:
            - name: prometheus-config
              mountPath: /etc/prometheus
            - name: prometheus-storage
              mountPath: /prometheus
              mountPropagation:
              readOnly:
              subPath:
              subPathExpr:
      volumes:
        - name: prometheus-storage
          persistentVolumeClaim:
            claimName: pvc1-data
        - name: prometheus-config
          configMap:
            name: prometheus-config
            defaultMode: 420

$ kubectl apply -f ./dgr.yaml
deployment.apps/prometheus created
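
A hedged way to watch the rollout and confirm the Pod mounted both the config and the EBS-backed volume:

$ kubectl -n monitoring rollout status deployment/prometheus
$ kubectl -n monitoring get pods -l app=prometheus
$ kubectl -n monitoring describe pod -l app=prometheus   #check the Volumes and Mounts sections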

If you look closely at the generator file, there is a "myindx" variable.

myindx="1,2,3(type annotations),4,20,25(mountPath),
26(mountPath protocol command envFrom env workingDir volumeDevices),
29,30(defaultMode)"

It says that, for my Deployment YAML, the items required from the database (ylgdb.sh) are the specs at indexes 1, 2, 3, 4, 20, 25, 26, 29 and 30.

If you are wondering why there are so many values, recall that a Kubernetes Deployment is a nested YAML spec: it contains the API section and the Pod spec, which in turn has container specs. Hence we nest those specs to generate the appropriate values. Indexes 1, 2, 3 and 4 are pretty common across all k8s resources; "20" is the Pod spec, "25" the init-container spec, "26" the containers, "29" the Volume spec for a Pod, and "30" again a Pod volume but of the ConfigMap type, and so on.

Notice the entries in round brackets next to the index numbers; for example "3" has {type annotations}. This means that by default the database has these entries, but for my current requirement there is no need for type or annotations, so I am requesting they be omitted during YAML creation. There is no harm in including them in the generation and simply not filling in a value in the values file; omitting them just makes the YAML look decluttered. The same goes for container index 26 {mountPath protocol command envFrom env workingDir volumeDevices}: here I am saying that I don't need the envFrom, env, workingDir or command values in the container spec for my specific use case.

Note: Anything inside the round brackets next to an index is EXCLUDED from the final YAML generation.

You can add or omit indexes to generate the YAML your particular use case needs. If a certain Pod does not need an "initContainer", just omit 25 from the index list. And if your Pod uses other types of volumes, such as hostmap or a git repo, then look up the right starting index in the database; there is a utility script called "mul.sh" in the repo for this, run as "mul.sh dbdetails" shown below (a sample customized "myindx" follows the listing).

$ git pull https://github.com/rangapv/CloudNative.git

ubuntu@ip-172-1-104-212:~/cn/ylg$ ./mul.sh dbdetails
Version Begins at index 1
kind Begins at index 2
Metadata For this kind Begins at index 3
spec for the kind Deployment Begins at index 4
spec for the kind PV/PVC Begins at index 5
spec for the kind Service Begins at index 6
spec of configmap for Prometheus data Begins at index 7
spec of provisioner for aws/gcp/azure Begins at index 8
spec parameters of provisioner for aws/gcp/azure Begins at index 9
Spec to hold access/secret keys Begins at index 10
Spec for the container Begins at index 11
Details for PODS begin here- template Begins at index 20
spec for PODS affinity-taints- like scheduling constraints etc Begins at index 21
spec for PODS host-details- Specifies the hostname of the Pod(pods hostname will be set to a system-defined value) Begins at index 22
spec for PODS host-details- Host networking requested for this pod. Use the host's network namespace(Boolean default-false) Begins at index 23
spec for PODS secuirty-details- holds pod-level security attributes and common container settings Begins at index 24
spec for init-containers- The init container details for the Pod if any Begins at index 25
Spec for containers Begins at index 26
Spec for containers with setting resource limits- Compute Resources required by this container Begins at index 27
Spec for various Volumes in PODS- List of volumes that can be mounted by containers belonging to the pod; Persistent Volume Claim Begins at index 29
spec for Volume Configmap Volume Begins at index 30
spec for Volume hostmap Begins at index 31
spec for Volume gitrepo volume Begins at index 32
spec for NFS- NFS volume associated with pod that can be used by Container-s Begins at index 33
spec for cephfs volume assoicated with pod that can be used by Container-s Begins at index 34
spec for csi volume assoicated with pod that can be used by Container-s Begins at index 35
spec for the rd block device volume assoicated with pod that can be used by Container-s Begins at index 36
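
As a purely hypothetical illustration of customizing the index list: a Deployment whose Pod needs no init container and mounts a hostmap volume instead of a PersistentVolumeClaim could drop index 25 and swap 29 for 31, based on the listing above:

#Hypothetical myindx: no init container (25 dropped), hostmap volume (31) instead of a PVC (29)
myindx="1,2,3(type annotations),4,20,26(mountPath protocol command envFrom env workingDir volumeDevices),31"

The bracketed exclusions would of course change too, depending on which optional fields you want in the generated YAML.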

6. Service: Repeat the steps shown below. Default values are in the "svv.yaml.bkp" file for reference and can be copied to the "svv.yaml" values file before running the fill command on the generator.


$ cd Prometheus/service/
ubuntu@ip-172-1-104-212:~/cn/generator/Prometheus/service$ ls
service-generator.sh svr.yaml svv.yaml svv.yaml.bkp


#Run the generator with gen argument
$ ./service-generator.sh gen


#This is the skeletal file First time generated
$ vi svr.yaml

apiVersion:
kind:
metadata:
  name:
  namespace:
  annotations:
spec:
  selector:
    app:
  type:
  ports:
    - port:
      targetPort:
      nodePort:

#This is the generated values file that user needs to populate with his values
$ vi svv.yaml

Version:
kind:
Name for this kind:
Namespaces that you want this kind to be deployed:
The list of annotations for this kind:
App name for the selector for the kind Service:
Type for the kind Service(ClusterIP/NodePort):
Port mapping details for the Service:
The traget Port Number for the Service:
The NodePort number for the Service:

#This is the user filled file
$ vi svv.yaml

Version:v1
kind:Service
Name for this kind:prometheus-service
Namespaces that you want this kind to be deployed:monitoring
The list of annotations for this kind:prometheus.io/scrape:'true',prometheus.io/port:'9090'
App name for the selector for the kind Service:prometheus
Type for the kind Service(ClusterIP/NodePort):NodePort
Port mapping details for the Service:8080
The traget Port Number for the Service:9090
The NodePort number for the Service:30999

#Re-run the generator file with "fill" command line argument as shown below
$ ./service-generator.sh fill


#Note the YAML is auto populated below with the values you entered in the above
$ vi svr.yaml

apiVersion: v1
kind: Service
metadata:
  name: prometheus-service
  namespace: monitoring
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '9090'
spec:
  selector:
    app: prometheus
  type: NodePort
  ports:
    - port: 8080
      targetPort: 9090
      nodePort: 30999

$ kubectl apply -f ./svr.yaml
service/prometheus-service created

Let's check the pod status now…

$ kubectl get po --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system cloud-controller-manager-rnfkd 1/1 Running 0 19h
kube-system coredns-5d78c9869d-45kbx 1/1 Running 0 19h
kube-system coredns-5d78c9869d-7vzq2 1/1 Running 0 19h
kube-system ebs-csi-controller-689b48f644-2ppzw 5/5 Running 0 19h
kube-system ebs-csi-controller-689b48f644-6dww6 5/5 Running 0 19h
kube-system ebs-csi-node-55gmn 3/3 Running 0 19h
kube-system ebs-csi-node-6qc5k 3/3 Running 0 19h
kube-system etcd-ip-172-1-104-212.us-west-2.compute.internal 1/1 Running 0 19h
kube-system kube-apiserver-ip-172-1-104-212.us-west-2.compute.internal 1/1 Running 0 19h
kube-system kube-controller-manager-ip-172-1-104-212.us-west-2.compute.internal 1/1 Running 0 19h
kube-system kube-flannel-ds-bnsc5 1/1 Running 0 19h
kube-system kube-flannel-ds-v28h7 1/1 Running 0 19h
kube-system kube-proxy-ktgbw 1/1 Running 0 19h
kube-system kube-proxy-xcw98 1/1 Running 0 19h
kube-system kube-scheduler-ip-172-1-104-212.us-west-2.compute.internal 1/1 Running 0 19h
kubernetes-dashboard dashboard-metrics-scraper-5cb4f4bb9c-hlnkm 1/1 Running 0 19h
kubernetes-dashboard kubernetes-dashboard-6967859bff-gxhfp 1/1 Running 0 19h
monitoring prometheus-6cbd7ff5ff-wvfzv 1/1 Running 0 19h
ray-system kuberay-operator-5cd667675b-dwsx4 1/1 Running 0 19h

Now let's check the services…

$ kubectl get svc --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 19h
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 19h
kubernetes-dashboard dashboard-metrics-scraper ClusterIP 10.104.128.0 <none> 8000/TCP 19h
kubernetes-dashboard kubernetes-dashboard NodePort 10.109.34.86 <none> 443:30002/TCP 19h
monitoring prometheus-service NodePort 10.110.60.136 <none> 8080:30999/TCP 2m9s
ray-system kuberay-operator ClusterIP 10.106.146.158 <none> 8080/TCP 19h

And finally, for dashboard access, go to http://<node-server-ip>:30999
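
A quick way to confirm Prometheus is serving before opening the browser (a sketch; it assumes the node's security group allows inbound traffic on NodePort 30999):

$ curl http://<node-server-ip>:30999/-/healthy   #expect an HTTP 200 and a short health message
$ curl http://<node-server-ip>:30999/-/ready     #readiness endpoint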

The repo for Terraform scripts can be found here…

The YAML generator can be found here….

The database on which the YAML generation is based is here:

https://github.com/rangapv/CloudNative/blob/main/ylg/ylgdb.sh

This article walks through the YAML generator in detail:

Grafana Install:

You may also be interested in my following articles on cloud computing/DevOps/Kubernetes.

If you have any issues, you can open an issue on the GitHub repo page, or alternatively contact me at rangapv@yahoo.com or find me on Twitter @rangapv.
