Create a Kubernetes (k8s) Cluster using “kubeadm”, with the out-of-tree Cloud Controller Manager for AWS

Rangaswamy P V · 7 min read · Nov 1, 2022


This article walks you through the steps to create a Kubernetes cluster on EC2 instances in the AWS cloud and install the latest out-of-tree Cloud Controller Manager, with containerd as the container runtime and Flannel for networking.

I have used the EC2 “t2.medium” instance type with Ubuntu 22.04 as the base operating system. Create two such instances from the AWS console or your Ansible/Terraform scripts, one for the Control Plane and another for the Worker Node.

Pre-requisites:

Ubuntu 22.04 prompts for user input during package upgrades (via needrestart), so I pass the following “user-data” at instance creation time to suppress those prompts while the packages install. The user data is shown below.

#!/bin/bash
# Make needrestart non-interactive so package installs do not prompt for restarts.
filenf="/etc/needrestart/needrestart.conf"
nrf1=$(sudo cat $filenf | grep "\$nrconf{restart} = 'a';")
nrf1s="$?"
if [[ ( -z $nrf1 ) && (( $nrf1s -ne 0 )) ]]
then
    # Append "$nrconf{restart} = 'a';" right after the commented default "... = 'i'" line.
    linenf="\$nrconf{restart}\ \=\ 'a';"
    linemat="\$nrconf{restart} = 'i'"
    sudo sed -i "/$linemat/a$linenf\n" $filenf
fi
# Enable public-key authentication and allow ssh-rsa keys in sshd.
f1="/etc/ssh/sshd_config"
line20="\#PubkeyAuthentication\ yes"
line2="PubkeyAuthentication\ yes"
line21="PubkeyAcceptedKeyTypes\ \+ssh\-rsa"
line22="HostKeyAlgorithms\ \+ssh\-rsa"
line23="HostbasedAcceptedKeyTypes\ \+ssh\-rsa"
line24="AuthorizedKeysFile\ \ \ \ \ \.ssh\/authorized\_keys\ \.ssh\/authorized\_keys2"
sudo sed -i "/$line20/a$line2\n$line21\n$line22\n$line23\n$line24" $f1
sudo systemctl restart ssh

The above needs to be added when creating the “ec2” instance, in the Advanced Details -> User Data section.
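If you prefer launching the instances from the CLI instead of the console, the user data can be passed at creation time as well. The snippet below is only a hedged sketch: the AMI ID, key-pair name, security group, subnet and instance-profile name are placeholders for your own values, and it assumes the user data above is saved as userdata.sh.

# Hypothetical values shown; substitute your own AMI, key pair, security group,
# subnet and IAM instance profile (the profile is covered in the next section).
aws ec2 run-instances \
  --image-id ami-0xxxxxxxxxxxxxxxx \
  --instance-type t2.medium \
  --key-name my-keypair \
  --security-group-ids sg-0xxxxxxxxxxxxxxxx \
  --subnet-id subnet-0xxxxxxxxxxxxxxxx \
  --iam-instance-profile Name=k8s-control-plane-profile \
  --user-data file://userdata.sh \
  --count 1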

You will also need an IAM Policy (attached via an IAM Role and instance profile) associated with the instances. I am giving the policy permissions below; for creating and attaching the role, refer to the AWS documentation on IAM Roles & Policies. A hedged CLI sketch also follows the two policy documents below.

For the Control Plane node, the following permissions are required:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:DescribeAutoScalingGroups",
        "autoscaling:DescribeLaunchConfigurations",
        "autoscaling:DescribeTags",
        "ec2:DescribeInstances",
        "ec2:DescribeAccountAttributes",
        "ec2:DescribeRegions",
        "ec2:DescribeRouteTables",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeSubnets",
        "ec2:DescribeVolumes",
        "ec2:CreateSecurityGroup",
        "ec2:CreateTags",
        "ec2:CreateVolume",
        "ec2:ModifyInstanceAttribute",
        "ec2:ModifyVolume",
        "ec2:AttachVolume",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:CreateRoute",
        "ec2:DeleteRoute",
        "ec2:DeleteSecurityGroup",
        "ec2:DeleteVolume",
        "ec2:DetachVolume",
        "ec2:RevokeSecurityGroupIngress",
        "ec2:DescribeVpcs",
        "elasticloadbalancing:AddTags",
        "elasticloadbalancing:AttachLoadBalancerToSubnets",
        "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer",
        "elasticloadbalancing:CreateLoadBalancer",
        "elasticloadbalancing:CreateLoadBalancerPolicy",
        "elasticloadbalancing:CreateLoadBalancerListeners",
        "elasticloadbalancing:ConfigureHealthCheck",
        "elasticloadbalancing:DeleteLoadBalancer",
        "elasticloadbalancing:DeleteLoadBalancerListeners",
        "elasticloadbalancing:DescribeLoadBalancers",
        "elasticloadbalancing:DescribeLoadBalancerAttributes",
        "elasticloadbalancing:DetachLoadBalancerFromSubnets",
        "elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
        "elasticloadbalancing:ModifyLoadBalancerAttributes",
        "elasticloadbalancing:RegisterInstancesWithLoadBalancer",
        "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer",
        "elasticloadbalancing:CreateListener",
        "elasticloadbalancing:CreateTargetGroup",
        "elasticloadbalancing:DeleteListener",
        "elasticloadbalancing:DeleteTargetGroup",
        "elasticloadbalancing:DescribeListeners",
        "elasticloadbalancing:DescribeLoadBalancerPolicies",
        "elasticloadbalancing:DescribeTargetGroups",
        "elasticloadbalancing:DescribeTargetHealth",
        "elasticloadbalancing:ModifyListener",
        "elasticloadbalancing:ModifyTargetGroup",
        "elasticloadbalancing:RegisterTargets",
        "elasticloadbalancing:DeregisterTargets",
        "elasticloadbalancing:SetLoadBalancerPoliciesOfListener",
        "iam:CreateServiceLinkedRole",
        "kms:DescribeKey"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}

For the Worker Node, the following permissions are required:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "ec2:DescribeAccountAttributes",
        "ec2:DescribeRegions",
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:GetRepositoryPolicy",
        "ecr:DescribeRepositories",
        "ecr:ListImages",
        "ecr:BatchGetImage"
      ],
      "Resource": "*"
    }
  ]
}
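As mentioned above, the policies reach the instances through an IAM role wrapped in an instance profile. The commands below are a hedged sketch of one way to do that with the AWS CLI; the role, profile and file names (k8s-control-plane-role, k8s-control-plane-profile, control-plane-policy.json, ec2-trust-policy.json) are placeholders, not something the Simplek8s scripts create for you.

# Trust policy letting EC2 assume the role (placeholder file name).
cat > ec2-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Create the role with the control-plane permissions above saved as control-plane-policy.json.
aws iam create-role --role-name k8s-control-plane-role \
  --assume-role-policy-document file://ec2-trust-policy.json
aws iam put-role-policy --role-name k8s-control-plane-role \
  --policy-name k8s-control-plane-policy \
  --policy-document file://control-plane-policy.json

# Wrap the role in an instance profile and attach it to the instance
# (or pass the profile at launch time as in the run-instances sketch earlier).
aws iam create-instance-profile --instance-profile-name k8s-control-plane-profile
aws iam add-role-to-instance-profile --instance-profile-name k8s-control-plane-profile \
  --role-name k8s-control-plane-role
aws ec2 associate-iam-instance-profile --instance-id i-0xxxxxxxxxxxxxxxx \
  --iam-instance-profile Name=k8s-control-plane-profile

Repeat the same steps with the worker-node policy for the worker instance.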

After the instances are created, log in to the Control Plane EC2 instance using your SSH key pair. I have written bash scripts which help create the k8s cluster; they are in my GitHub repo. Run the script like $./simpleccm.sh AWS_ACCESS_KEY AWS_SECRET_KEY to bootstrap the Control Plane.

$mkdir simplek8s; cd simplek8s
$git init; git pull https://github.com/rangapv/Simplek8s.git
$ls
README.md masterflannel.sh nodeflannel.sh simpleccm.sh simplenodeccm.sh
$./simpleccm.sh AWS_ACCESS_KEY AWS_SECRET_KEY

When you run the above command, your Control Plane is created with all the required k8s components, including the out-of-tree Cloud Controller Manager for AWS. There are lots of nitty-gritty details involved, like the hostname having to be in a particular format for AWS, a “cloud.conf” file with the zone information, setting up “awscli”, creating secrets and uploading them to the Cloud Controller container, etc., all of which is taken care of by the scripts underneath, hence the name Simplek8s!
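To give a feel for what the script handles, the snippet below is a rough, hedged sketch of two of those steps, not the script’s actual contents; it assumes the instance metadata service is reachable at 169.254.169.254 without an IMDSv2 token.

# 1. The node name must match the EC2 private DNS name so the AWS cloud provider
#    can map the Kubernetes node to its EC2 instance.
sudo hostnamectl set-hostname \
  "$(curl -s http://169.254.169.254/latest/meta-data/local-hostname)"

# 2. A minimal cloud.conf carrying the zone information for the controller.
ZONE="$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)"
cat > cloud.conf <<EOF
[Global]
Zone = ${ZONE}
EOF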

For the Worker Node, transfer the “kubeconfig” file from the master to the worker node via sftp (port 22, the same as the SSH port), and make sure the pre-requisites mentioned above, along with the user-data and the Role, are applied to this “ec2” instance as well.
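For example, from the Control Plane node (a hedged sketch; the key-pair path and destination path are placeholders, so drop the file wherever simplenodeccm.sh expects it):

# Copy the admin kubeconfig to the worker over the regular SSH port (22).
sudo cp /etc/kubernetes/admin.conf /home/ubuntu/config && sudo chown ubuntu /home/ubuntu/config
scp -i ~/.ssh/my-keypair.pem /home/ubuntu/config ubuntu@<worker-private-ip>:~/config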

Clone my Simplek8s repo as shown above and run this script instead for the worker node setup:
$./simplenodeccm.sh AWS_ACCESS_KEY AWS_SECRET_KEY

Once the script runs to completion, enter the JOIN command that we got after running the Control Plane script (on the Control Plane ec2 instance, the file flag.txt in the install directory has the JOIN command), and you will see the Node joining the cluster.

$ sudo kubeadm join 172.31.45.61:6443 --token 2pi38b.n*****fj31bm --discovery-token-ca-cert-hash sha256:51fc684235*************653ce12 &

The node listing (kubectl get nodes) for the cluster in a running state is shown below…

NAME                                          STATUS   ROLES           AGE     VERSION
ip-172-31-33-232.us-east-2.compute.internal   Ready    <none>          39h     v1.25.3
ip-172-31-34-72.us-east-2.compute.internal    Ready    control-plane   2d15h   v1.25.3

The pods in the system …

NAMESPACE              NAME                                                                  READY   STATUS    RESTARTS      AGE
kube-system            cloud-controller-manager-gccb5                                        1/1     Running   2 (21m ago)   17h
kube-system            coredns-565d847f94-f7cpm                                              1/1     Running   7 (21m ago)   2d15h
kube-system            coredns-565d847f94-n5622                                              1/1     Running   7 (21m ago)   2d15h
kube-system            etcd-ip-172-31-34-72.us-east-2.compute.internal                       1/1     Running   7 (21m ago)   2d15h
kube-system            kube-apiserver-ip-172-31-34-72.us-east-2.compute.internal             1/1     Running   7 (21m ago)   2d15h
kube-system            kube-controller-manager-ip-172-31-34-72.us-east-2.compute.internal    1/1     Running   7 (21m ago)   2d15h
kube-system            kube-flannel-ds-5x4fx                                                 1/1     Running   7 (21m ago)   2d15h
kube-system            kube-flannel-ds-wgd22                                                 1/1     Running   6 (21m ago)   39h
kube-system            kube-proxy-7lhwk                                                      1/1     Running   5 (21m ago)   39h
kube-system            kube-proxy-8ts4g                                                      1/1     Running   7 (21m ago)   2d15h
kube-system            kube-scheduler-ip-172-31-34-72.us-east-2.compute.internal             1/1     Running   8 (21m ago)   2d15h
kubernetes-dashboard   dashboard-metrics-scraper-59fc748957-cwql5                            1/1     Running   5 (21m ago)   2d15h
kubernetes-dashboard   kubernetes-dashboard-7474c85698-d6r8c                                 1/1     Running   5 (21m ago)   2d15h
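A quick way to confirm the cloud controller has actually processed the nodes is to check that it has set a providerID on each node spec, and to look at its logs. This is a general check, not something the scripts print for you:

# Each node should show an aws:///<zone>/<instance-id> style providerID once initialized.
kubectl get nodes -o custom-columns=NAME:.metadata.name,PROVIDERID:.spec.providerID
# The DaemonSet logs (one pod per control-plane node) show the AWS calls being made.
kubectl -n kube-system logs daemonset/cloud-controller-manager --tail=50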

The important thing to note here is the Cloud Controller “DaemonSet” YAML shown below. I referred to the kubernetes.io documentation, looked at the cloud-provider GitHub repo (including the Helm charts and values files), and with some engineering arrived at a working Cloud Controller. The controller image I have used is “image: registry.k8s.io/provider-aws/cloud-controller-manager:v1.23.0-alpha.0”; the version might change with time.

# admin/cloud/ccm.yaml
# This is an example of how to set up cloud-controller-manager as a DaemonSet in your cluster.
# It assumes that your masters can run pods and have the role node-role.kubernetes.io/master.
# Note that this DaemonSet will not work straight out of the box for your cloud; it is
# meant to be a guideline.
#
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cloud-controller-manager
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:cloud-controller-manager
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: ""
  kind: ServiceAccount
  name: cloud-controller-manager
  namespace: kube-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    k8s-app: cloud-controller-manager
  name: cloud-controller-manager
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: cloud-controller-manager
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      name: cloud-controller-manager
      labels:
        k8s-app: cloud-controller-manager
    spec:
      dnsPolicy: Default
      hostNetwork: true
      serviceAccountName: cloud-controller-manager
      containers:
      - name: cloud-controller-manager
        # for in-tree providers we use k8s.gcr.io/cloud-controller-manager
        # this can be replaced with any other image for out-of-tree providers
        #image: k8s.gcr.io/cloud-controller-manager:v1.8.0
        image: registry.k8s.io/provider-aws/cloud-controller-manager:v1.23.0-alpha.0
        imagePullPolicy: Always
        args:
        - --cloud-provider=aws   # Add your own cloud provider here!
        - --cloud-config=/tmp/cloud.conf
        - --leader-elect=true
        - --use-service-account-credentials
        # these flags will vary for every cloud provider
        - --allocate-node-cidrs=true
        - --configure-cloud-routes=true
        - --cluster-cidr=172.17.0.0/16
        - --kubeconfig=/tmp/config
        - --v=9
        env:
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              name: aws-access-1
              key: AWS_ACCESS_KEY_ID
        - name: AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: aws-access-2
              key: AWS_SECRET_ACCESS_KEY
        volumeMounts:
        - mountPath: /tmp
          name: ccmawssecret
          readOnly: true
        - mountPath: /etc/ssl
          name: ssl
          readOnly: true
      volumes:
      - name: ccmawssecret
        secret:
          secretName: aws-ccm-secret
      - name: ssl
        hostPath:
          path: /etc/ssl
      tolerations:
      # this is required so CCM can bootstrap itself
      - key: node.cloudprovider.kubernetes.io/uninitialized
        value: "true"
        effect: NoSchedule
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      #- key: "CriticalAddonsOnly"
      #  operator: Exists
      # this is to have the daemonset runnable on master nodes
      # the taint may vary depending on your cluster setup
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      # this is to restrict CCM to only run on master nodes
      # the node selector may vary depending on your cluster setup
      nodeSelector:
        node-role.kubernetes.io/control-plane: ""

The above YAML can be kept as a baseline, with only the secrets mentioned above changed to match your environment, to get the out-of-tree Cloud Controller Manager working. The things that may change are the secrets “aws-ccm-secret”, “aws-access-1” and “aws-access-2”, based on your particular keys. But if you ran my script with your access and secret keys as shown above, then you can sit back and not worry a bit. It will work out of the box, with no changes needed to the “DaemonSet” YAML.
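For reference, the secrets referenced by the DaemonSet could be created by hand along these lines. This is a hedged sketch of what the script does for you; it assumes the cloud.conf and kubeconfig sit in the current directory and in ~/.kube/config respectively:

# Credentials consumed via the env section of the DaemonSet.
kubectl -n kube-system create secret generic aws-access-1 \
  --from-literal=AWS_ACCESS_KEY_ID="$AWS_ACCESS_KEY_ID"
kubectl -n kube-system create secret generic aws-access-2 \
  --from-literal=AWS_SECRET_ACCESS_KEY="$AWS_SECRET_ACCESS_KEY"
# aws-ccm-secret is mounted at /tmp, so its keys appear as /tmp/cloud.conf and /tmp/config.
kubectl -n kube-system create secret generic aws-ccm-secret \
  --from-file=cloud.conf=./cloud.conf \
  --from-file=config="$HOME/.kube/config"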

BINGO!

As a fun side project, there is also a bash script to probe the cluster and get the details of the components installed, along with versions, runtimes and install paths. You DON’T NEED a KUBECONFIG file for this to work. Check it out here:

$git pull https://github.com/rangapv/kubestatus.git
$./ks
< This is to inform "kubernetes-cluster-status" in this box >
------------------------------------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
-------------------
Core-Statistics
-----------------
-------- ------------
Process Install Path
------- -------------
kubeadm /usr/bin/kubeadm
kubelet /usr/bin/kubelet
kubectl /usr/bin/kubectl
-------------------
Runtime
-----------------
-------------------
Container-Runtime
-----------------
This box is using "Containerd" as the runtime
-------------------
Cloud-Environment
-----------------
IT is AWS Cloud
-------------------
Component-Statistics
-----------------
-------------------
Running-Components
-----------------
kubelet kube-apiserver kube-controller-manager cloud-controller-manager kube-scheduler etcd kube-proxy
Total component is 7
-------------------
Component-Version
-----------------
Component Install-Path Version-Info
------------ ------------ ------------
kubelet /usr/bin/kubelet Kubernetes v1.25.3
containerd /usr/bin/containerd containerd containerd.io 1.6.9 1c90a442489720eec95342e1789ee8a5e1b9536f
-------------------
Core-Component-Configfiles
-----------------
---------- -----------
Component Config-file
---------- -----------
kubelet /etc/kubernetes/kubelet.conf
kube-scheduler /etc/kubernetes/scheduler.conf
kube-apiserver /etc/kubernetes/admin.conf
kube-controller-manager /etc/kubernetes/controller-manager.conf
Component    Version-Info
------------ ------------------------
kubectl      Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.3", GitCommit:"434bfd82814af038ad94d62ebe59b133fcb50506", GitTreeState:"clean", BuildDate:"2022-10-12T10:57:26Z", GoVersion:"go1.19.2", Compiler:"gc", Platform:"linux/amd64"}
             Kustomize Version: v4.5.7
             Server Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.3", GitCommit:"434bfd82814af038ad94d62ebe59b133fcb50506", GitTreeState:"clean", BuildDate:"2022-10-12T10:49:09Z", GoVersion:"go1.19.2", Compiler:"gc", Platform:"linux/amd64"}
kubeadm      kubeadm version: &version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.3", GitCommit:"434bfd82814af038ad94d62ebe59b133fcb50506", GitTreeState:"clean", BuildDate:"2022-10-12T10:55:36Z", GoVersion:"go1.19.2", Compiler:"gc", Platform:"linux/amd64"}
kubelet      Kubernetes v1.25.3
helm         version.BuildInfo{Version:"v3.10.1", GitCommit:"9f88ccb6aee40b9a0535fcc7efea6055e1ef72c9", GitTreeState:"clean", GoVersion:"go1.18.7"}
-------------------
Cluster-CNI
-----------------
flannel is up and running
-------------------
Node-Status
-----------------
All the core components ("kubeadm kubelet kubectl") of k8s are installed
There are a total "7" components of k8s running on this Box
Looks like this is the Master Node !!

Have fun! If you have any queries, you can reach out to me at rangapv@yahoo.com or on Twitter @rangapv.
