Deploying an NGINX Ingress Controller (F5) onto a k8s Cluster to expose a service to the web
The goal of any Kubernetes deployment should be to get to Ingress. Let’s see how to make this happen.
This is a continuation of the earlier article in which we deployed a Kubernetes cluster on the AWS cloud and configured an out-of-tree cloud controller manager. In this post we will see how to expose a service running in the cluster to the outside world using the NGINX Ingress Controller, which provisions a Layer 4 Network LoadBalancer from the cloud provider, AWS.
Log into the cluster we created in the earlier setup, or create a new cluster from the repo link in either “Create a k8s Cluster on AWS with IaC tool Terraform” or “Create a Kubernetes(k8s) Cluster using “kubeadm”, with out-of-tree Cloud Controller Manager for AWS”. The YAML configuration definitions for NGINX Ingress (the official version from F5) are in my GitHub repo; clone it and navigate to the “nginx-ingress:2.4.1” directory (or the latest, “nginx-ingress:3.3.0”).
$ git clone https://github.com/rangapv/ingress-nginx.git
$ cd ingress-nginx
$ ls
Readme ingress1.yaml ingress2.yaml nginx-ingress:2.4.1 nginx-ingress:3.3.0 postdragon predragon
$ cd nginx-ingress:2.4.1
$ ls
common daemonset deployment kubeapply.sh rbac service
I will list the Ingress YAML here.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress1
spec:
  ingressClassName: nginx
  rules:
  - host: www.simplek8s.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web1-service
            port:
              number: 80
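For this Ingress to route traffic, the web1-service backend and the Deployment behind it must exist. As a rough sketch (the repo’s deployment and service directories hold the actual versions; the container image and the app label here are illustrative assumptions), they would look something like:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web1-pod-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web1             # label name is an assumption; match your pod template
  template:
    metadata:
      labels:
        app: web1
    spec:
      containers:
      - name: web1
        image: nginx:1.23   # any container serving on port 80 works here
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web1-service        # must match the backend service name in the Ingress
spec:
  selector:
    app: web1
  ports:
  - port: 80
    targetPort: 80
```

Note that the backend Service stays ClusterIP (the default); only the Ingress controller’s own service needs to be of type LoadBalancer.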
You need to apply all the YAMLs from these subdirectories. The YAML configs have been customized for the AWS cloud and the NGINX Ingress Controller (F5).
Execute the ./kubeapply.sh Bash script present in the directory; it will apply all the YAMLs required to set up the NGINX Ingress Controller in the cluster.
$ ./kubeapply.sh
secret/default-server-secret created
ingressclass.networking.k8s.io/nginx created
configmap/nginx-config created
namespace/nginx-ingress unchanged
serviceaccount/nginx-ingress unchanged
daemonset.apps/nginx-ingress created
ingress.networking.k8s.io/nginx-ingress1 created
deployment.apps/web1-pod-deployment created
service/web1-service created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-app-protect created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-app-protect created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-app-protect-dos created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-app-protect-dos created
clusterrole.rbac.authorization.k8s.io/nginx-ingress created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress created
service/nginx-ingress created
the Total file applied are 12
the Total Directory traversed are 5
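From the output above you can see what kubeapply.sh does: it walks the subdirectories (common, daemonset, deployment, rbac, service), applies every YAML it finds, and prints the totals. A minimal sketch of such a helper (not the repo’s actual script; the KUBECTL variable is overridable so the function can be dry-run without a cluster):

```shell
# Sketch of a kubeapply.sh-style helper: walk every subdirectory of a
# root, apply each *.yaml it finds, and report the totals. Override
# KUBECTL (e.g. KUBECTL="echo kubectl apply -f") to dry-run.
apply_all() {
  local root="$1" files=0 dirs=0 dir yaml
  for dir in "$root"/*/; do
    [ -d "$dir" ] || continue
    dirs=$((dirs + 1))
    for yaml in "$dir"*.yaml; do
      [ -f "$yaml" ] || continue
      ${KUBECTL:-kubectl apply -f} "$yaml"
      files=$((files + 1))
    done
  done
  echo "applied $files files across $dirs directories"
}
```

For example, `KUBECTL="echo kubectl apply -f" apply_all nginx-ingress:2.4.1` prints the apply commands without touching the cluster.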
Check on the pods and services that we just created
$ kubectl get daemonset --all-namespaces
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system cloud-controller-manager 1 1 1 1 1 node-role.kubernetes.io/control-plane= 24d
kube-system kube-flannel-ds 2 2 2 2 2 <none> 25d
kube-system kube-proxy 2 2 2 2 2 kubernetes.io/os=linux 25d
nginx-ingress nginx-ingress 1 1 1 1 1 <none> 29h
$ kubectl get po -n nginx-ingress
NAME READY STATUS RESTARTS AGE
nginx-ingress-9f8wd 1/1 Running 1 (6h30m ago) 29h
$ kubectl get svc -n nginx-ingress
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-ingress LoadBalancer 10.101.123.138 a3bdea24770e244dd8dfc17f8990cb50-a964598d7c2fb772.elb.us-east-2.amazonaws.com 80:30839/TCP,443:31784/TCP 29h
$ kubectl get po -n default
NAME READY STATUS RESTARTS AGE
web1-pod-deployment-7847b8958c-k9gks 1/1 Running 8 (6h30m ago) 3d22h
$ kubectl get svc -n default
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
web1-service ClusterIP 10.97.135.126 <none> 80/TCP 3d22h
$ kubectl get ing
NAME CLASS HOSTS ADDRESS PORTS AGE
nginx-ingress1 nginx www.simplek8s.com 80 2d
From the above we see that all the Pods and Services are up and running.
Note: We have a service of type LoadBalancer with the external cloud provider LoadBalancer up and running!
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-ingress LoadBalancer 10.98.35.254 a3e9de441756442739e6dbc6980d9562-5ae3dce1e5cc8eaf.elb.us-west-2.amazonaws.com 80:31906/TCP,443:31961/TCP 17h
We have also got a DNS name for the Layer 4 LoadBalancer on the AWS cloud. Now there are certain configurations that need to be done manually on the LoadBalancer, like populating the target groups behind the LoadBalancer listeners with the EC2 instances that are running the pods, and enabling the Proxy Protocol v2 attribute. Some screenshots here.
Note: Populating the target group and enabling Proxy Protocol have to be done manually; the NGINX Ingress Controller will not do it, at least not as of version 2.4.1.
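Besides the console screenshots, the same two manual steps can be done from the AWS CLI. A hedged sketch follows: the target-group ARN and instance IDs are placeholders for your own values, and RUN is an “echo” stub so the commands are printed rather than executed; remove the stub to run them for real. (For an NLB, Proxy Protocol v2 is set as a target-group attribute.)

```shell
# Placeholders: substitute the target-group ARN and the instance IDs of
# the worker nodes from your own AWS account. RUN="echo" makes this a
# dry run; set RUN="" to actually execute the aws CLI calls.
RUN="echo"
TG_ARN="arn:aws:elasticloadbalancing:us-east-2:111111111111:targetgroup/example/0123456789abcdef"

# 1) Register the worker-node EC2 instances with the LB's target group.
$RUN aws elbv2 register-targets \
  --target-group-arn "$TG_ARN" \
  --targets Id=i-0aaaabbbbccccdddd Id=i-0eeeeffff11112222

# 2) Enable Proxy Protocol v2 on the target group.
$RUN aws elbv2 modify-target-group-attributes \
  --target-group-arn "$TG_ARN" \
  --attributes Key=proxy_protocol_v2.enabled,Value=true
```

If you enable Proxy Protocol on the target group, the Ingress controller’s nginx config must also expect Proxy Protocol, which is what the customized configs in the repo set up.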
Now, for the test domain (in my case www.simplek8s.com), we need to add a CNAME record at the domain registrar pointing to the DNS name of the LoadBalancer.
The Ingress controller creates the cloud LoadBalancer automatically, and what we get back is the LoadBalancer’s DNS name, not an IP address, which is why we use a CNAME record rather than an A record.
CNAME www a3bdea24770e244dd8dfc17f8990cb50-a964598d7c2fb772.elb.us-east-2.amazonaws.com. 1 Hour
To summarize: when a client types www.simplek8s.com in the browser, the request is forwarded to my LoadBalancer on the AWS cloud, which routes it to the right k8s node running the pod, where it hits the service configured in the Ingress rules.
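That chain can be checked with curl even before the registrar’s CNAME propagates: curl’s --connect-to option sends the request to the LoadBalancer’s DNS name while still presenting the hostname the Ingress rule expects. The hostname and LB name below are the ones from this article (substitute your own), and RUN is an “echo” stub since there is no live cluster here; remove it to really connect.

```shell
# Dry-run sanity check of the routing chain. --connect-to HOST:PORT:LB:PORT
# makes curl connect to the LB while keeping the Host header as HOST,
# so the Ingress rule for www.simplek8s.com still matches.
RUN="echo"
HOST="www.simplek8s.com"
LB="a3bdea24770e244dd8dfc17f8990cb50-a964598d7c2fb772.elb.us-east-2.amazonaws.com"

$RUN curl -i --connect-to "$HOST:80:$LB:80" "http://$HOST/"
```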
BINGO!
Just checking the nginx-ingress pod log…
2022-11-23T13:07:53.279462632Z stdout F 49.207.230.143 - - [23/Nov/2022:13:07:53 +0000] "GET / HTTP/1.1" 200 615 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36" "-"
Have fun! If you have any queries, kindly open an issue at the GitHub repo and I will address it ASAP. You can also reach me at rangapv@yahoo.com or on Twitter @rangapv.
You may also be interested in the following: