Grafana Install on a k8s Cluster in AWS

Rangaswamy P V
Jul 8, 2023


In this article we will look at installing Grafana as part of the observability stack for your microservices on a Kubernetes cluster in AWS. It is a continuation of the Prometheus install on a k8s cluster covered in my earlier article. You can also use the same steps on any other cloud that offers Kubernetes.

If you do not yet have a Kubernetes cluster, you can quickly install one with my earlier scripts, explained in detail in this article.

With both Kubernetes and Prometheus installed and running, let's dive into the Grafana install.

Assumption: I am using cloud storage for storing Prometheus & Grafana alerts. If you are using ‘nfs’, your Deployment YAML may change slightly. The YLG YAML generator (the YAML generator for the k8s resources to install Prometheus & Grafana) will still work fine; you just need to supply the right values for the “myindx” array, a.k.a. the starting indexes, when generating the YAML, as explained in the YAML generator article (this article). The ConfigMap and Service YAMLs should not change. The PVC YAML may change if you are using a different cloud provider. All in all, if you have been following along from cluster creation through the Prometheus install, you can simply clone my Grafana repo (in this link) as well and get it up and running!

Once you have the repo cloned, here is what we need to do.

For Grafana we need the following resources in the Kubernetes cluster:

  1. Namespace
  2. StorageClass
  3. PersistentVolumeClaim
  4. ConfigMap
  5. Deployment
  6. Service

First, let's pull the code from the repo:

ubuntu@ip-172-1-166-9:~/cn$ git pull https://github.com/rangapv/CloudNative.git
ubuntu@ip-172-1-166-9:~/cn$ cd generator/
ubuntu@ip-172-1-166-9:~/cn/generator$ cd Grafana/

As before we will make use of the YLG Database (details in this article) to create the YAMLs and also populate them with the values.

Since we already created the Namespace and StorageClass while installing the Prometheus service (details in this article), we can skip those two and create the rest of the resources.

1 & 2: These resources are cluster-wide and were already created during the Prometheus install, so we skip to the next resource.

3. PersistentVolumeClaim: The YAML generator file format is shown below. To make life easy and avoid having to refer to the documentation (good luck with that), I have designed an automated script for YAML code generation for any k8s resource.

Note: This code generator is universal for any k8s resource. The only thing that changes is the starting index array “myindx” in the “resource-generator.sh” file.

ubuntu@ip-172-1-166-9:~/cn/generator/Grafana$ cd pvc/
ubuntu@ip-172-1-166-9:~/cn/generator/Grafana/pvc$ ls
pvc-generator.sh pvc.yaml pvcv.yaml pvcv.yaml.bkp

ubuntu@ip-172-1-166-9:~/cn/generator/Grafana/pvc$ vi pvc-generator.sh

#!/bin/bash

set -E

#The YAML file name for this resource ; this can be changed
pvfile="pvc.yaml"
#The Values file name for this resource ; this can be changed
pvvfile="pvcv.yaml"
#The starting Index in the database(ylgdb.sh) for this resource ; this CANNOT be changed
myindx="1,2,3(type annotations),5(capacity nfs csi)"


if [[ "$1" == "gen" ]]
then
source <(curl -s https://raw.githubusercontent.com/rangapv/CloudNative/main/generator/gen.sh) "gen" "$pvfile" "$pvvfile" "$myindx"

elif [[ "$1" == "fill" ]]
then
source <(curl -s https://raw.githubusercontent.com/rangapv/CloudNative/main/generator/gen.sh) "fill" "$pvfile" "$pvvfile" "$myindx"
else
echo "usage: pvc-generator.sh gen/fill"

fi

myindx="1,2,3(type annotations),5(capacity nfs csi)"

Here I am telling the generator to omit type & annotations in the YAML generation for the versioning section, and also to omit capacity, nfs, and csi in my current YAML generation since I am using cloud-based storage.
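Illustrative only: here is one way an entry like `3(type annotations)` can be split into its starting index and the space-separated fields to omit. The real parsing lives inside gen.sh; this sketch just demonstrates the convention.

```shell
# One "myindx" entry: index before the "(", fields to omit inside "(...)".
# This is a demonstration of the convention, NOT the actual gen.sh parser.
entry='3(type annotations)'
idx=${entry%%"("*}                                         # text before "("
omit=$(printf '%s' "$entry" | sed -n 's/.*(\(.*\))/\1/p')  # text inside "(...)"
echo "index=$idx omits=[$omit]"
```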

You can follow the same approach for generating any other k8s resource. To find the starting index number for other specs, run the utility in the cloned repo as shown below:

$ ./mul.sh dbdetails

This file (pvc-generator.sh, and any other resource generator) takes the command line argument “gen” to generate the skeletal YAML file “pvc.yaml” for the resource, along with a “pvcv.yaml” for the user to input values. Once you fill in “pvcv.yaml” with the appropriate values and re-run pvc-generator.sh with “fill” as the command line argument, it populates “pvc.yaml”, which is what we need, and it can then be applied to the cluster with the “kubectl apply” command, as shown below. So you need not worry about the indentation or default values of the YAML. Of course, you should know the most relevant values: for example, the “image-tag” in the case of a Deployment YAML, the “namespace” you want to create it in, the relevant mount points, and so on.
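The gen → fill round trip can be sketched with a toy merge (this is NOT the real gen.sh, just the idea): the skeleton holds empty “key:” lines, the values file holds your answers, and “fill” pairs the two files line by line.

```shell
# Toy version of the gen/fill workflow; file names mirror the article but the
# merge logic is a simplified stand-in for gen.sh.
set -e
workdir=$(mktemp -d)

# skeleton, as the "gen" step would leave it
cat > "$workdir/pvc.yaml" <<'EOF'
kind:
name:
storage:
EOF

# values file after the user has answered the prompts
cat > "$workdir/pvcv.yaml" <<'EOF'
kind: PersistentVolumeClaim
name: pvc1-store
storage: 2Gi
EOF

# "fill": line N of the skeleton gets the value from line N of the values file
awk 'NR==FNR { sub(/^[^:]*: */, ""); val[FNR] = $0; next }
     { print $0 " " val[FNR] }' \
    "$workdir/pvcv.yaml" "$workdir/pvc.yaml" > "$workdir/pvc-filled.yaml"
cat "$workdir/pvc-filled.yaml"
```

The real generator additionally derives the skeleton and the prompts from the ylgdb.sh database; the toy above only illustrates the two-phase flow.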


ubuntu@ip-172-1-166-9:~/cn/generator/Grafana/pvc$ ./pvc-generator.sh gen
ubuntu@ip-172-1-166-9:~/cn/generator/Grafana/pvc$ vi pvc.yaml

#This is the skeletal file First time generated
apiVersion:
kind:
metadata:
  name:
  namespace:
  labels:
    app:
spec:
  storageClassName:
  accessModes:
  -
  resources:
    requests:
      storage:

#This is the generated values file that the user needs to populate with their values

ubuntu@ip-172-1-166-9:~/cn/generator/Grafana/pvc$ vi pvcv.yaml
Version:
kind:
Name for this kind:
Namespaces that you want this kind to be deployed:
app name to be referenced:
User defined Name for the Storage Class:
The Access Modes Value needs to be mentioned Read/Write/ReadWrite etc:
The storage Details(Mb/Gb):

#This is the user filled file
Version: v1
kind: PersistentVolumeClaim
Name for this kind:pvc1-store
Namespaces that you want this kind to be deployed: monitoring
app name to be referenced: grafana-deployment
User defined Name for the Storage Class:prom1
The Access Modes Value needs to be mentioned Read/Write/ReadWrite etc:ReadWriteOnce
The storage Details(Mb/Gb):2Gi

#Re-run the generator file with "fill" command line argument as shown below
ubuntu@ip-172-1-166-9:~/cn/generator/Grafana/pvc$ ./pvc-generator.sh fill

#Note the YAML is auto populated below with the values you entered in the above
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1-store
  namespace: monitoring
  labels:
    app: grafana-deployment
spec:
  storageClassName: prom1
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi

#Let's apply the YAML on the Kubernetes cluster
ubuntu@ip-172-1-166-9:~/cn/generator/Grafana/pvc$ kubectl apply -f ./pvc.yaml
persistentvolumeclaim/pvc1-store created

4. ConfigMap: This resource is very specific to what we are deploying, so we cannot fully generate the YAML yet. The generator script works fine for all resources with predefined Key: Value pairs, but a ConfigMap is partially JSON, so we will fill in those values manually (at least for now).

ubuntu@ip-172-1-166-9:~/cn/generator/Grafana/configmap$ vi cm-generator.sh

#!/bin/bash

set -E

#The YAML file name for this resource ; this can be changed
pvfile="configmap-grafa.yaml"
#The Values file name for this resource ; this can be changed
pvvfile="configmap-grafa-value.yaml"
#The starting Index in the database(ylgdb.sh) for this resource ; this CANNOT be changed
myindx="1,2,3(labels annotations),7(global alerting scrape_configs rule_files)"

if [[ "$1" == "gen" ]]
then
source <(curl -s https://raw.githubusercontent.com/rangapv/CloudNative/main/generator/gen.sh) "gen" "$pvfile" "$pvvfile" "$myindx"

elif [[ "$1" == "fill" ]]
then
source <(curl -s https://raw.githubusercontent.com/rangapv/CloudNative/main/generator/gen.sh) "fill" "$pvfile" "$pvvfile" "$myindx"
else
echo "usage: cm-generator.sh gen/fill"

fi

ubuntu@ip-172-1-166-9:~/cn/generator/Grafana/configmap$ ./cm-generator.sh gen
ubuntu@ip-172-1-166-9:~/cn/generator/Grafana/configmap$ vi configmap-grafa.yaml


#This is the skeletal file First time generated
apiVersion:
kind:
metadata:
  name:
  namespace:
  annotations:
data:
  prometheus.yml: |

#This is the generated values file that the user needs to populate with their values
Version:
kind:
Name for this kind:
Namespaces that you want this kind to be deployed:
The list of annotations for this kind:

#This is the user filled file
Version: v1
kind:ConfigMap
Name for this kind:grafana-config
Namespaces that you want this kind to be deployed:monitoring

#Re-run the generator file with "fill" command line argument as shown below
ubuntu@ip-172-1-166-9:~/cn/generator/Grafana/configmap$ ./cm-generator.sh fill


#Note the YAML is auto populated below with the values you entered in the above
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-config
  namespace: monitoring
data:
  prometheus.yml: |


#Now add these to the above file
{
  "apiVersion": 1,
  "datasources": [
    {
      "access": "proxy",
      "editable": true,
      "name": "prometheus",
      "orgId": 1,
      "type": "prometheus",
      "url": "http://prometheus-service.monitoring.svc:8080",
      "version": 1
    }
  ]
}


ubuntu@ip-172-1-166-9:~/cn/generator/Grafana/configmap$ vi configmap-grafa.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-config
  namespace: monitoring
data:
  prometheus.yml: |
    {
      "apiVersion": 1,
      "datasources": [
        {
          "access": "proxy",
          "editable": true,
          "name": "prometheus",
          "orgId": 1,
          "type": "prometheus",
          "url": "http://prometheus-service.monitoring.svc:8080",
          "version": 1
        }
      ]
    }

#Let's apply the YAML on the Kubernetes cluster
ubuntu@ip-172-1-166-9:~/cn/generator/Grafana/configmap$ kubectl apply -f ./configmap-grafa.yaml
configmap/grafana-config created
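Since the datasource block is hand-pasted JSON inside the ConfigMap, a quick validity check before applying can save a debugging round trip: a stray comma or quote means the datasource never shows up in Grafana. A sketch (the temp-file approach is my own, not part of the repo):

```shell
# Write the datasource block to a scratch file and let python's json.tool
# confirm it parses before pasting it into configmap-grafa.yaml.
dsfile=$(mktemp)
cat > "$dsfile" <<'EOF'
{
  "apiVersion": 1,
  "datasources": [
    {
      "access": "proxy",
      "editable": true,
      "name": "prometheus",
      "orgId": 1,
      "type": "prometheus",
      "url": "http://prometheus-service.monitoring.svc:8080",
      "version": 1
    }
  ]
}
EOF
python3 -m json.tool "$dsfile" > /dev/null && echo "datasource JSON is valid"
```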

5. Deployment: Follow the same steps as before for the resource creation.

Let's look at the generator file for the most important resource in k8s.

#!/bin/bash

set -E

#The YAML file name for this resource ; this can be changed
pvfile="dgr.yaml"
#The Values file name for this resource ; this can be changed
pvvfile="dgv.yaml"
#The starting Index in the database(ylgdb.sh) for this resource ; this can have multiple indexes depending on your particular use case
myindx="1,2,3(type annotations),4,20(nodeSelector nodeName),25(mountPath),26(mountPath protocol command envFrom env workingDir volumeDevices),29,30(defaultMode)"


if [[ "$1" == "gen" ]]
then
#source /home/ubuntu/cn/generator/gen.sh "gen" "$pvfile" "$pvvfile" "$myindx"
source <(curl -s https://raw.githubusercontent.com/rangapv/CloudNative/main/generator/gen.sh) "gen" "$pvfile" "$pvvfile" "$myindx"

elif [[ "$1" == "fill" ]]
then
#source /home/ubuntu/cn/generator/gen.sh "fill" "$pvfile" "$pvvfile" "$myindx"
source <(curl -s https://raw.githubusercontent.com/rangapv/CloudNative/main/generator/gen.sh) "fill" "$pvfile" "$pvvfile" "$myindx"
else
echo "usage: deployment-generator.sh gen/fill"

fi

The reason we have included the following indexes…

myindx="1,2,3(type annotations),4,20(nodeSelector nodeName),25(mountPath),26(mountPath protocol command envFrom env workingDir volumeDevices),29,30(defaultMode)"


A Deployment consists of the following objects:

  1. API/metadata: indexes 1 through 4
  2. Pod spec: index 20
  3. initContainer spec: index 25
  4. container spec: index 26
  5. Indexes 29 & 30 are the PVC & ConfigMap specs for the Pod volumes

We have included some entries in round brackets in myindx, e.g. 3(type annotations), 4, 20(nodeSelector nodeName). This instructs the workhorse scripts not to include those entries during YAML generation. People familiar with k8s will recognize that the "envFrom" variable is typically used in a container spec; I am simply saying I don't need it in my current use case, and likewise for the other entries in this array.
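The structure of the index array is easier to see one entry per line. Splitting on commas is an assumption about how gen.sh consumes it; the grouping (1-4 metadata, 20 Pod spec, 25 initContainer, 26 container, 29/30 volumes) comes from the text above.

```shell
# Enumerate the deployment's myindx entries, one per line.
myindx="1,2,3(type annotations),4,20(nodeSelector nodeName),25(mountPath),26(mountPath protocol command envFrom env workingDir volumeDevices),29,30(defaultMode)"
echo "$myindx" | tr ',' '\n'
```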

Now to generate the YAMLs themselves…

ubuntu@ip-172-1-166-9:~/cn/generator/Grafana$ cd deployment/
ubuntu@ip-172-1-166-9:~/cn/generator/Grafana/deployment$ ./deployment-generator.sh gen

#This is the skeletal file First time generated
ubuntu@ip-172-1-166-9:~/cn/generator/Grafana/deployment$ vi dgr.yaml


apiVersion:
kind:
metadata:
  name:
  namespace:
  labels:
    app:
spec:
  replicas:
  selector:
    matchLabels:
      app:
  strategy:
    rollingUpdate:
      maxSurge:
      maxUnavailable:
    type:
  revisionHistoryLimit:
  minReadySeconds:
  progressDeadlineSeconds:
  template:
    metadata:
      labels:
        app:
    spec:
      restartPolicy:
      initContainers:
      - name:
        image:
        command:
        volumeMounts:
        - name:
      containers:
      - name:
        image:
        imagePullPolicy:
        command:
        args:
        ports:
        - containerPort:
          hostIP:
          name:
          protocol:
        volumeMounts:
        - name:
          mountPropagation:
          readOnly:
          subPath:
          subPathExpr:
      volumes:
      - name:
        persistentVolumeClaim:
          claimName:
      - name:
        configMap:
          name:

#This is the generated values file that the user needs to populate with their values
ubuntu@ip-172-1-166-9:~/cn/generator/Grafana/deployment$ vi dgv.yaml


Version:
kind:
Name for this kind:
Namespaces that you want this kind to be deployed:
app name to be referenced:
Replicas:
The app name for the Pod to match:
app name for the container:
The name for the init container:
The init container image:
The command to be executed for the init container:
Name of the volume mount in init container in the format mountpoints;mountpath :
Name of this Container:
image tag:
args that may be needed to for the container to start/run:
containerports:
Name of the container mount points;mountpath:
Name of the Volume(configMapName):
Name of this Configmap;DefaultMode:
Name of the Volume(PersistentVolumeClaim):
The Claim name of PersistentVolumeClaim:

#This is the user filled file
Version:apps/v1
kind:Deployment
Name for this kind:grafana
Namespaces that you want this kind to be deployed:monitoring
app name to be referenced:grafana
Replicas:1
The app name for the Pod to match:grafana
The max surge number for the Pod:
The max Unavailable details for the Pod:
The strategy type for the Pod:
Enter number of old ReplicaSets to retain to allow rollback(default is 10):
Minimum number of seconds for it to be considered available:
maximum time (defaults 600s) for a deployment before it is considered to be failed:
The label for the Pod if any:
The app name for the Pod if any: grafana
Restart policy for all containers within the pod(Always/OnFailure/Never):Always
The name for the init container:prom-busy-bos-set-permis
The init container image:busybox
The command to be executed for the init container:["/bin/chmod","-R","777","/var/lib/grafana"]
Name of the volume mount in init container in the format mountpoints;mountpath:grafana-storage;/var/lib/grafana
Name of the container:grafana
Container image name:grafana/grafana:latest
Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided:
Arguments to the entrypoint. The container image's CMD is used if this is not provided:
Port number to expose on the pod's IP address(0-65535):3000
host IP to bind the external port to.:
Each named port in a pod must have a unique name:grafp
Protocol for port. (UDP/TCP/SCTP Defaults to TCP):
Name of the Volume and the mount path enter value as name;path-of-mount here:grafana-storage;/var/lib/grafana,grafana-config;/etc/grafana/provisioning/datasources
determines how mounts are propagated from the host to container and the other way around:
Mount volume value read-only if true:
Path within the volume from which the container's volume should be mounted.(Default to root):
Expanded path within the volume from which the container's volume should be mounted.:
spec for persistent Volume claim:grafana-storage
The Persistent claim name:pvc1-store
spec for Volume Configmap Volume:grafana-config
In the options you have chosen Configmap pls enter the value as name;defaultmode here:grafana-config;420


#Re-run the generator file with "fill" command line argument as shown below
ubuntu@ip-172-1-166-9:~/cn/generator/Grafana/deployment$ ./deployment-generator.sh fill


#Note the YAML is auto populated below with the values you entered in the above

apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
  namespace: monitoring
  labels:
    app: grafana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  strategy:
    rollingUpdate:
      maxSurge:
      maxUnavailable:
    type:
  revisionHistoryLimit:
  minReadySeconds:
  progressDeadlineSeconds:
  template:
    metadata:
      labels:
        app: grafana
    spec:
      restartPolicy: Always
      initContainers:
      - name: prom-busy-bos-set-permis
        image: busybox
        command: ["/bin/chmod","-R","777","/var/lib/grafana"]
        volumeMounts:
        - name: grafana-storage
          mountPath: /var/lib/grafana
      containers:
      - name: grafana
        image: grafana/grafana:latest
        command:
        args:
        ports:
        - containerPort: 3000
          hostIP:
          name: grafp
          protocol:
        volumeMounts:
        - name: grafana-storage
          mountPath: /var/lib/grafana
        - name: grafana-config
          mountPath: /etc/grafana/provisioning/datasources
          mountPropagation:
          readOnly:
          subPath:
          subPathExpr:
      volumes:
      - name: grafana-storage
        persistentVolumeClaim:
          claimName: pvc1-store
      - name: grafana-config
        configMap:
          name: grafana-config
          defaultMode: 420



#Let's apply the YAML on the Kubernetes cluster
ubuntu@ip-172-1-166-9:~/cn/generator/Grafana/deployment$ kubectl apply -f ./dgr.yaml
deployment.apps/grafana created

COOL!

Note: In the final YAML for the Grafana Deployment there are “selector” & “strategy” fields in the spec section. If you do not wish to have them, go ahead and add them to the omit array in the generator .sh file and you will not see them in the generated YAMLs. There is no harm if you leave them out of the array and simply do not populate them in the final YAML.

If you observed keenly, we have “initContainers”, which basically grants write permissions on the external cloud storage (gp2/gp3) that we created for the alerts to be stored for the Prometheus & Grafana deployments. If you are using ‘nfs’ as storage you may not need it, or maybe you do! I haven't tried that, but this code works for AWS, Prometheus, Grafana, and csi-based cloud storage.
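What the initContainer does can be reproduced locally: it recursively opens up the Grafana data directory so the non-root Grafana process can write to it. (On k8s, a pod securityContext with fsGroup is a common alternative to chmod 777, though I have not tested it with this setup.)

```shell
# Recreate the initContainer's chmod step on a scratch directory.
d=$(mktemp -d)/grafana
mkdir -p "$d"
chmod -R 777 "$d"
stat -c '%a' "$d"   # octal mode; stat -c is GNU coreutils (differs on macOS)
```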

Also, for the Deployment I am reusing the ConfigMap name and the PVC name that I generated earlier. All these values go into the values file (dgv.yaml) during the generation of the Deployment YAML.

Let us check the pods…

ubuntu@ip-172-1-166-9:~/cn/generator/Grafana/deployment$ kubectl get po -n monitoring
NAME                          READY   STATUS    RESTARTS      AGE
grafana-755fb74f56-cq6f2      1/1     Running   0             112s
prometheus-6cbd7ff5ff-wvfzv   1/1     Running   9 (84m ago)   29d

6. Service: Repeat the steps shown below.

$ vi service-generator.sh

#!/bin/bash

set -E

#The YAML file name for this resource ; this can be changed
pvfile="svr.yaml"
#The Values file name for this resource ; this can be changed
pvvfile="svv.yaml"
#The starting Index in the database(ylgdb.sh) for this resource ; this CANNOT be changed
myindx="1,2,3(labels),6"


if [[ "$1" == "gen" ]]
then
source <(curl -s https://raw.githubusercontent.com/rangapv/CloudNative/main/generator/gen.sh) "gen" "$pvfile" "$pvvfile" "$myindx"

elif [[ "$1" == "fill" ]]
then
source <(curl -s https://raw.githubusercontent.com/rangapv/CloudNative/main/generator/gen.sh) "fill" "$pvfile" "$pvvfile" "$myindx"
else
echo "usage: service-generator.sh gen/fill"

fi


ubuntu@ip-172-1-166-9:~/cn/generator/Grafana/service$ ls
service-generator.sh svr.yaml svv.yaml svv.yaml.bkp

ubuntu@ip-172-1-166-9:~/cn/generator/Grafana/service$ ./service-generator.sh gen
ubuntu@ip-172-1-166-9:~/cn/generator/Grafana/service$ vi svr.yaml

#This is the skeletal file First time generated
apiVersion:
kind:
metadata:
  name:
  namespace:
  annotations:
spec:
  selector:
    app:
  type:
  ports:
  - port:
    targetPort:
    nodePort:

ubuntu@ip-172-1-166-9:~/cn/generator/Grafana/service$ vi svv.yaml
#This is the generated values file that the user needs to populate with their values
Version:
kind:
Name for this kind:
Namespaces that you want this kind to be deployed:
The list of annotations for this kind:
App name for the selector for the kind Service:
Type for the kind Service(ClusterIP/NodePort):
Port mapping details for the Service:
The target Port Number for the Service:
The NodePort number for the Service:

#This is the user filled file
Version:v1
kind:Service
Name for this kind:grafana-service
Namespaces that you want this kind to be deployed:monitoring
The list of annotations for this kind:prometheus.io/scrape:'true',prometheus.io/port:'3000'
App name for the selector for the kind Service:grafana
Type for the kind Service(ClusterIP/NodePort):NodePort
Port mapping details for the Service:3000
The target Port Number for the Service:3000
The NodePort number for the Service:31999


#Re-run the generator file with "fill" command line argument as shown below
ubuntu@ip-172-1-166-9:~/cn/generator/Grafana/service$ ./service-generator.sh fill

#Note the YAML is auto populated below with the values you entered in the above
apiVersion: v1
kind: Service
metadata:
  name: grafana-service
  namespace: monitoring
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '3000'
spec:
  selector:
    app: grafana
  type: NodePort
  ports:
  - port: 3000
    targetPort: 3000
    nodePort: 31999


#Let's apply the YAML on the Kubernetes cluster
ubuntu@ip-172-1-166-9:~/cn/generator/Grafana/service$ kubectl apply -f ./svr.yaml
service/grafana-service created

Let's check all the services:


ubuntu@ip-172-1-166-9:~/cn/generator/Grafana/service$ kubectl get svc -n monitoring
NAME                 TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
grafana-service      NodePort   10.98.87.130    <none>        3000:31999/TCP   28s
prometheus-service   NodePort   10.110.60.136   <none>        8080:30999/TCP   29d

ALSO:
ubuntu@ip-172-1-166-9:~/cn/generator/Grafana/service$ kubectl get all -n monitoring

NAME                              READY   STATUS    RESTARTS       AGE
pod/grafana-755fb74f56-cq6f2      1/1     Running   1 (12m ago)    56m
pod/prometheus-6cbd7ff5ff-wvfzv   1/1     Running   10 (12m ago)   29d

NAME                         TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/grafana-service      NodePort   10.98.87.130    <none>        3000:31999/TCP   112s
service/prometheus-service   NodePort   10.110.60.136   <none>        8080:30999/TCP   29d

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/grafana      1/1     1            1           61m
deployment.apps/prometheus   1/1     1            1           29d

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/grafana-755fb74f56      1         1         1       56m
replicaset.apps/prometheus-6cbd7ff5ff   1         1         1       29d

Screenshots: Go to http://nodeip:31999

Note: we use port 31999 since that is the NodePort we set in the Service YAML.
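With type NodePort, the dashboard is reachable on any node's IP at that port. The IP below is a placeholder (a TEST-NET address); substitute the public IP of one of your worker nodes, and remember that the node's AWS security group must allow inbound TCP 31999 for the page to load in your browser.

```shell
# Build the Grafana URL from a node IP; NODE_IP is a placeholder value.
NODE_IP="203.0.113.10"
echo "Grafana UI: http://${NODE_IP}:31999"
```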

Some of the screenshots are shown below.

If you have any issues, you can open an issue on the GitHub repo page, or alternatively contact me at rangapv@yahoo.com; you can also find me on Twitter @rangapv.
