Istio Gateway API for your Kubernetes Cluster to enable a Cloud-based Load Balancer (Classic) in AWS
With the Gateway API you no longer need to create Ingress resources (as in the old days) to route traffic into your cluster from the outside world. Instead, new resources such as "httpRoute" and "Gateway" objects are created, and they take care of it.
Pre-requisites:
- Kubernetes cluster : If you don’t have a k8s cluster, you can follow this link, which walks through creating a Kubernetes cluster with the cloud controller manager enabled; the cloud controller manager talks to the external cloud API and in turn provisions the cloud load balancer. If you already have a cluster with the cloud controller manager enabled, you can follow along.
- Istio CRDs : Just run this script from my GitHub repo and the CRD installation is taken care of. It installs "istio" with a minimal config along with "istioctl".
$ git clone https://github.com/rangapv/CloudNative.git cn
$ cd cn/istio
$ ./istio-install.sh
- A domain to test your settings : www.kubedom.com is the sample domain that I registered with a hosting provider to test this service.
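Before we look at the pods, it does not hurt to sanity-check the install itself. A minimal sketch, assuming istioctl ended up on your PATH and that the install script also applied the Gateway API CRDs:
# Confirm the control plane and client versions
$ istioctl version
# Confirm the Gateway API CRDs are present
$ kubectl get crd | grep gateway.networking.k8s.io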
Once we have the CRDs installed let us verify…
ubuntu@ip-172-1-104-212:~/cn$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default httpbin-86869bccff-ts7w6 1/1 Running 9 (68m ago) 3d2h
istio-ingress gateway-istio-659f6c875d-kxzmd 1/1 Running 14 (68m ago) 3d4h
istio-system istiod-977466b69-g62sh 1/1 Running 10 (68m ago) 3d4h
kube-system cloud-controller-manager-cftnw 1/1 Running 41 (68m ago) 37d
kube-system coredns-5d78c9869d-45kbx 1/1 Running 57 (68m ago) 70d
kube-system coredns-5d78c9869d-7vzq2 1/1 Running 57 (68m ago) 70d
kube-system ebs-csi-controller-689b48f644-2ppzw 5/5 Running 280 (68m ago) 70d
kube-system ebs-csi-controller-689b48f644-6dww6 5/5 Running 280 (68m ago) 70d
kube-system ebs-csi-node-55gmn 3/3 Running 170 (68m ago) 70d
kube-system ebs-csi-node-6qc5k 3/3 Running 171 (68m ago) 70d
kube-system etcd-ip-172-1-104-212.us-west-2.compute.internal 1/1 Running 57 (68m ago) 70d
kube-system kube-apiserver-ip-172-1-104-212.us-west-2.compute.internal 1/1 Running 57 (68m ago) 70d
kube-system kube-controller-manager-ip-172-1-104-212.us-west-2.compute.internal 1/1 Running 57 (68m ago) 70d
kube-system kube-flannel-ds-bnsc5 1/1 Running 57 (68m ago) 70d
kube-system kube-flannel-ds-v28h7 1/1 Running 57 (68m ago) 70d
kube-system kube-proxy-ktgbw 1/1 Running 57 (68m ago) 70d
kube-system kube-proxy-xcw98 1/1 Running 56 (68m ago) 70d
kube-system kube-scheduler-ip-172-1-104-212.us-west-2.compute.internal 1/1 Running 57 (68m ago) 70d
kubernetes-dashboard dashboard-metrics-scraper-5cb4f4bb9c-hlnkm 1/1 Running 56 (68m ago) 70d
kubernetes-dashboard kubernetes-dashboard-6967859bff-gxhfp 1/1 Running 56 (68m ago) 70d
All well and good so far.
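Since the Gateway object we create later references the "istio" GatewayClass, it is also worth confirming that the class exists. A quick check (the class name "istio" is assumed to come from the minimal Istio install):
$ kubectl get gatewayclass istio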
Now, to test the ingress routing, I am using a sample app called “httpbin” by Kenneth Reitz.
# Copyright Istio Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##################################################################################################
# httpbin service
##################################################################################################
apiVersion: v1
kind: ServiceAccount
metadata:
  name: httpbin
---
apiVersion: v1
kind: Service
metadata:
  name: httpbin
  labels:
    app: httpbin
    service: httpbin
spec:
  ports:
  - name: http
    port: 8000
    targetPort: 80
  selector:
    app: httpbin
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin
      version: v1
  template:
    metadata:
      labels:
        app: httpbin
        version: v1
    spec:
      serviceAccountName: httpbin
      containers:
      - image: docker.io/kong/httpbin
        imagePullPolicy: IfNotPresent
        name: httpbin
        ports:
        - containerPort: 80
We can kubectl apply the above file, as shown below.
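Assuming the manifest above was saved as httpbin.yaml (the file name is your choice), the apply and a quick readiness check look like this:
$ kubectl apply -f httpbin.yaml
# Wait for the pod to come up and confirm the Service is exposing port 8000
$ kubectl get pods -l app=httpbin
$ kubectl get svc httpbin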
Next we need to create the “Gateway” object and also the “httpRoute”.
We will make use of the YLGDB.sh database to create both of these resources, as detailed below. A quick reminder: we are using a custom database and scripts, open to public use, to generate all the k8s resource YAMLs. More details can be found in this link.
Continuing with the YLGDB database and scripts, let’s generate the “Gateway.yaml” and “httpRoute.yaml”.
First let us prepare the code. Navigate to the “~/cn/generator/GatewayAPI/istio” directory as shown below:
ubuntu@ip-172-1-104-212:~$ cd cn
ubuntu@ip-172-1-104-212:~/cn$ git pull https://github.com/rangapv/CloudNative.git
ubuntu@ip-172-1-104-212:~/cn$ ls
README.md generator helm.sh istio unit-test utilities ylg
ubuntu@ip-172-1-104-212:~/cn$ cd generator/
ubuntu@ip-172-1-104-212:~/cn/generator$ ls
GatewayAPI Grafana Prometheus fill.sh fill.sh.070723 fill.sh.090623 gen.sh gen.sh.070723
ubuntu@ip-172-1-104-212:~/cn/generator$ cd GatewayAPI/
ubuntu@ip-172-1-104-212:~/cn/generator/GatewayAPI$ ls
Class Gate httpRoute istio
ubuntu@ip-172-1-104-212:~/cn/generator/GatewayAPI$ cd istio/
ubuntu@ip-172-1-104-212:~/cn/generator/GatewayAPI/istio$ ls
Class Gate httpRoute
ubuntu@ip-172-1-104-212:~/cn/generator/GatewayAPI/istio$
We have 3 directories under “istio”. Let us generate the Gateway object first.
- Gateway: We use the generator script described in the article on YLGDB.sh for k8s resource generators.
$ cd Gate
$ ls
Gate.sh dgr.yaml dgv.yaml dgv.yaml.bkp
$ vi Gate.sh
#!/usr/bin/env bash
set -E
#The YAML file name for this resource; this can be changed
pvfile="dgr.yaml"
#The Values file name for this resource; this can be changed
pvvfile="dgv.yaml"
#The starting index in the database (ylgdb.sh) for this resource; this can have multiple indexes depending on your particular use case
myindx="1,2,3(labels type annotations),38(addresses port protocol selector tls kinds)"
if [[ "$1" == "gen" ]]
then
#source /home/ubuntu/cn/generator/gen.sh "gen" "$pvfile" "$pvvfile" "$myindx"
source <(curl -s https://raw.githubusercontent.com/rangapv/CloudNative/main/generator/gen.sh) "gen" "$pvfile" "$pvvfile" "$myindx"
elif [[ "$1" == "fill" ]]
then
#source /home/ubuntu/cn/generator/gen.sh "fill" "$pvfile" "$pvvfile" "$myindx"
source <(curl -s https://raw.githubusercontent.com/rangapv/CloudNative/main/generator/gen.sh) "fill" "$pvfile" "$pvvfile" "$myindx"
else
echo "usuage: Gate.sh gen/fill"
fi
As you are aware, in the generator script we need to give the starting index for the Gateway YAML definitions in the database, along with any attributes that need to be omitted, as shown below:
myindx="1,2,3(labels type annotations),38(addresses port protocol selector tls kinds)"
Now let us run the script with “gen” command line argument.
ubuntu@ip-172-1-104-212:~/cn/generator/GatewayAPI/istio/Gate$ ./Gate.sh gen
Generating Values file for this resource
ubuntu@ip-172-1-104-212:~/cn/generator/GatewayAPI/istio/Gate$
#Generated YAML file
$ vi dgr.yaml
apiVersion:
kind:
metadata:
  name:
  namespace:
spec:
  gatewayClassName:
  listeners:
  - name:
    allowedRoutes:
      namespaces:
        from:
    hostname:
#Generated values file that needs to be filled with specific values
$ vi dgv.yaml
Version:
kind:
Name for this kind:
Namespaces that you want this kind to be deployed:
This is the name of a GatewayClass resource:
The combo of (name;port;protocol FOR more than 1 listener seperate by comma):
Namespaces indicates namespaces from which Routes may be attached to this Listener:
indicates where Routes will be selected for this Gateway(All/Same/Selector):
specifies the virtual hostname to match for protocol types that define this concept:
#filled values file
$ vi dgv.yaml
Version:gateway.networking.k8s.io/v1beta1
kind:Gateway
Name for this kind:gateway
Namespaces that you want this kind to be deployed:istio-ingress
This is the name of a GatewayClass resource:istio
The combo of (name;port;protocol FOR more than 1 listener seperate by comma):default;80;HTTP
Namespaces indicates namespaces from which Routes may be attached to this Listener:
indicates where Routes will be selected for this Gateway(All/Same/Selector):All
specifies the virtual hostname to match for protocol types that define this concept:"www.kubedom.com"
#Now let us re-run the generator with the "fill" command line argument
$ ./Gate.sh fill
$ vi dgr.yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: gateway
  namespace: istio-ingress
spec:
  gatewayClassName: istio
  listeners:
  - name: default
    port: 80
    protocol: HTTP
    allowedRoutes:
      namespaces:
        from: All
    hostname: "www.kubedom.com"
$ kubectl apply -f ./dgr.yaml
gateway.gateway.networking.k8s.io/gateway configured
Let us verify the Gateway object..
ubuntu@ip-172-1-104-212:~/cn/generator/GatewayAPI/istio/Gate$ kubectl get gtw -n istio-ingress
NAME CLASS ADDRESS PROGRAMMED AGE
gateway istio a914c423b44c447eeb4a598825c24bf4-1315498576.us-west-2.elb.amazonaws.com True 5d4h
If you look at the output above, the ADDRESS field is already populated with the hostname of the AWS cloud-based load balancer, since the Kubernetes cluster we created has the cloud controller manager enabled.
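Under the hood, the Istio gateway deployment also created a Kubernetes Service of type LoadBalancer (in my setup it lives in the istio-ingress namespace), and it is that Service which the cloud controller manager backed with the classic ELB. A quick way to see it and the same ELB hostname:
$ kubectl get svc -n istio-ingress
# The EXTERNAL-IP column should show the same *.elb.amazonaws.com hostname as the Gateway ADDRESS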
Now let’s create the “httpRoute”:
$ cd httpRoute
$ ls
$ vi httpRoute.sh
#!/usr/bin/env bash
set -E
#The YAML file name for this resource; this can be changed
pvfile="dgr.yaml"
#The Values file name for this resource; this can be changed
pvvfile="dgv.yaml"
#The starting index in the database (ylgdb.sh) for this resource; this can have multiple indexes depending on your particular use case
myindx="1,2,3(type annotations labels app),39(headers queryParams method set remove value responseHeaderModifier requestMirror requestRedirect urlRewrite extensionRef)"
if [[ "$1" == "gen" ]]
then
#source /home/ubuntu/cn/generator/gen.sh "gen" "$pvfile" "$pvvfile" "$myindx"
source <(curl -s https://raw.githubusercontent.com/rangapv/CloudNative/main/generator/gen.sh) "gen" "$pvfile" "$pvvfile" "$myindx"
elif [[ "$1" == "fill" ]]
then
#source /home/ubuntu/cn/generator/gen.sh "fill" "$pvfile" "$pvvfile" "$myindx"
source <(curl -s https://raw.githubusercontent.com/rangapv/CloudNative/main/generator/gen.sh) "fill" "$pvfile" "$pvvfile" "$myindx"
else
echo "usuage: httpRoute.sh gen/fill"
fi
Observe the “myindx” variable value for httpRoute.
myindx="1,2,3(type annotations labels app),39(headers queryParams method set remove responseHeaderModifier requestMirror requestRedirect urlRewrite extensionRef)"
The values that are next to the integer 39(..) mentions the starting index for “httpRoute” resource with the default attributes in the database minus the ones I don’t need for my current use case which are mentioned inside the round brackets.
Now let us run the script
ubuntu@ip-172-1-104-212:~/cn/generator/GatewayAPI/istio/httpRoute$ ./httpRoute.sh gen
Generating Values file for this resource
ubuntu@ip-172-1-104-212:~/cn/generator/GatewayAPI/istio/httpRoute$
# The generated YAML file, which is not yet populated.
$ vi dgr.yaml
apiVersion:
kind:
metadata:
  name:
  namespace:
spec:
  parentRefs:
  - name:
    kind:
    port:
    sectionName:
    namespace:
    group:
  hostnames:
  rules:
  - backendRefs:
    - name:
      weight:
      port:
      kind:
      group:
  - matches:
    - path:
        type:
  - filters:
    - type:
      requestHeaderModifier:
        add:
        - name:
# The Values file...
ubuntu@ip-172-1-104-212:~/cn/generator/GatewayAPI/istio/httpRoute$ vi dgv.yaml
Version:
kind:
Name for this kind:
Namespaces that you want this kind to be deployed:
THe name for this parentrefs:
The valide values are(Gateway/Service):
is the network port this Route targets:
name of a section in a Kubernetes resource(invalid values are /):
The namespaces that needs to appear:
When unspecified (gateway.networking.k8s.io) is inferred:
Valid hostnames:
The name for this backend refs:
Weight specifies the proportion of requests forwarded to the referenced backend:
The port number for backendRefs:
The kind for the backendRefs:
The group for the backendRefs:
How to match(Exact/PathPrefix/Regular-expresssion) enter (type;value) for more than one seperate by comma:
Such as (Core/Extended/Implementation-specific/URL-rewrite/ResponseHeaderModifier etc):
HTTP Header to be matched for add in requestHeaderModifier filter for httpRoute (enter as name;value) as pairs:
#the filled values file
$ vi dgv.yaml
Version: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
Name for this kind: http
Namespaces that you want this kind to be deployed: default
THe name for this parentrefs: gateway
The valide values are(Gateway/Service):
is the network port this Route targets:
name of a section in a Kubernetes resource(invalid values are /):
The namespaces that needs to appear: istio-ingress
When unspecified (gateway.networking.k8s.io) is inferred:
Valid hostnames: ["www.kubedom.com"]
The name for this backend refs: httpbin
Weight specifies the proportion of requests forwarded to the referenced backend:
The port number for backendRefs:8000
The kind for the backendRefs:
The group for the backendRefs:
How to match(Exact/PathPrefix/Regular-expresssion) enter (type;value) for more than one seperate by comma:PathPrefix;/,PathPrefix;/ip,PathPrefix;/headers
Such as (Core/Extended/Implementation-specific/URL-rewrite/ResponseHeaderModifier etc):RequestHeaderModifier
HTTP Header to be matched for add in requestHeaderModifier filter for httpRoute (enter as name;value) as pairs:my-added-header;added-value
Now re-run the script with the "fill" command line argument:
$ ./httpRoute.sh fill
#Filled file dgr.yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: http
  namespace: default
spec:
  parentRefs:
  - name: gateway
    kind:
    port:
    sectionName:
    namespace: istio-ingress
    group:
  hostnames: ["www.kubedom.com"]
  rules:
  - backendRefs:
    - name: httpbin
      weight:
      port: 8000
      kind:
      group:
  - matches:
    - path:
        type: PathPrefix
        value: /
    - path:
        type: PathPrefix
        value: /ip
    - path:
        type: PathPrefix
        value: /headers
  - filters:
    - type: RequestHeaderModifier
      requestHeaderModifier:
        add:
        - name: my-added-header
          value: added-value
$ kubectl apply -f dgr.yaml
httproute.gateway.networking.k8s.io/http configured
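Let us verify the httpRoute as well. A quick check that the route was accepted and bound to the parent Gateway (the Accepted and ResolvedRefs conditions show up under the route status):
$ kubectl get httproute http -n default
$ kubectl describe httproute http -n default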
Update the CNAME in the hosting provider’s DNS records:
Using the load balancer hostname from above, update the CNAME DNS record in the hosting provider’s control panel:
CNAME www a914c423b44c447eeb4a598825c24bf4-1315498576.us-west-2.elb.amazonaws.com 1 Hour
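Once the record has propagated, you can confirm that the domain resolves to the ELB before opening it in the browser; a quick check from any machine with dig installed:
$ dig +short www.kubedom.com CNAME
# should print the a914c423b44c447eeb4a598825c24bf4-1315498576.us-west-2.elb.amazonaws.com hostname
$ dig +short www.kubedom.com
# should print the IP address(es) the ELB currently resolves to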
You also need to add the cluster instances to the load balancer that got created, in the AWS console as shown below.
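If you prefer the CLI over the console, the same registration can be done with the AWS CLI for a classic load balancer; a sketch, assuming the AWS CLI is configured and with the load balancer name and instance IDs below replaced by your own (they are placeholders):
$ aws elb register-instances-with-load-balancer \
    --load-balancer-name <your-classic-elb-name> \
    --instances <instance-id-1> <instance-id-2>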
Go to: http://www.kubedom.com
BINGO: We have successfully made our app accessible from the outside world!
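If you prefer testing from the command line, a couple of curl calls against the routes we configured will confirm both the routing and the request-header filter (httpbin’s /headers endpoint echoes the request headers back, so the added header should show up in the response):
$ curl -s http://www.kubedom.com/ip
$ curl -s http://www.kubedom.com/headers
# look for "My-Added-Header": "added-value" in the echoed headers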
Some screen captures…
If you have any issues, you can open an issue on the GitHub repo page, or alternatively contact me at rangapv@yahoo.com; you can also find me on X.com (Twitter) @rangapv.
Some of my other related writings: