Install a 3-node single-master Kubernetes cluster

Posted by Xiping Hu on March 14, 2023

A. Prerequisites

Three CentOS 7 nodes with SELinux, swap, and the firewall disabled:

|hostname|role|ip address|
|-|-|-|
|k8s01|master|192.168.12.21|
|k8s02|node1|192.168.12.22|
|k8s03|node2|192.168.12.23|
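
A minimal sketch of the disabling steps, to run on all three nodes (assuming firewalld is the active firewall, as on a stock CentOS 7 install):

# disable SELinux now and across reboots
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config

# disable swap now and across reboots
swapoff -a
sed -i '/\sswap\s/s/^/#/' /etc/fstab

# stop and disable firewalld
systemctl disable --now firewalld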

B. Install Docker Engine

On all of the nodes:

yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
systemctl enable --now docker
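
Recent kubeadm releases configure the kubelet with the systemd cgroup driver by default, while Docker defaults to cgroupfs; to avoid a driver mismatch you may want to switch Docker to systemd as well. A minimal sketch, assuming no existing /etc/docker/daemon.json:

# make Docker use the systemd cgroup driver
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker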

C. Install kubeadm, kubectl and kubelet

On all nodes:

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable kubelet
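
A quick sanity check that all three tools were installed at matching versions:

kubeadm version -o short
kubelet --version
kubectl version --client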

D. Install cri-dockerd

On all nodes (the worker nodes need it too, since kubeadm join is pointed at the cri-dockerd socket later):

yum install -y git wget
git clone https://github.com/Mirantis/cri-dockerd.git
wget https://storage.googleapis.com/golang/getgo/installer_linux
chmod +x ./installer_linux
./installer_linux
source ~/.bash_profile
cd cri-dockerd
mkdir bin
go build -o bin/cri-dockerd
mkdir -p /usr/local/bin
install -o root -g root -m 0755 bin/cri-dockerd /usr/local/bin/cri-dockerd
cp -a packaging/systemd/* /etc/systemd/system
sed -i -e 's,/usr/bin/cri-dockerd,/usr/local/bin/cri-dockerd,' /etc/systemd/system/cri-docker.service
systemctl daemon-reload
systemctl enable cri-docker.service
systemctl start cri-docker.service
systemctl enable --now cri-docker.socket
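
To verify that cri-dockerd is running and listening on the socket that kubeadm will be pointed at later:

systemctl status cri-docker.service --no-pager
ls -l /var/run/cri-dockerd.sock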

E. Enable kernel parameters and kernel modules

On all nodes:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply sysctl params without reboot
sudo sysctl --system
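
To confirm that the modules are loaded and the parameters took effect:

lsmod | grep -E 'overlay|br_netfilter'
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward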

F. Initialize the control-plane node

On node master:

kubeadm init --cri-socket=unix:///var/run/cri-dockerd.sock --pod-network-cidr=10.244.0.0/16
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

If any errors occur, use this command to roll back:

kubeadm reset --cri-socket=unix:///var/run/cri-dockerd.sock
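
Once init succeeds, the control-plane pods can be inspected. Note that the coredns pods stay Pending and the node reports NotReady until a CNI plugin is installed in the next step:

kubectl get pods -n kube-system
kubectl get nodes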

G. Install flannel as Container Network Interface (CNI)

On node master:

kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
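
The --pod-network-cidr=10.244.0.0/16 passed to kubeadm init above matches flannel's default network, so the manifest can be applied as-is. To watch the flannel pods come up (recent manifests deploy them into the kube-flannel namespace):

kubectl get pods -n kube-flannel -w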

H. Joining the nodes

On the master node, run the following to get the bootstrap token:

kubeadm token list
TOKEN                     TTL         EXPIRES                USAGES                   DESCRIPTION                                                EXTRA GROUPS
rofj8h.4q1prnb6o2bdcj23   23h         2023-03-14T13:24:52Z   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token

Then run the following to get the discovery token CA certificate hash:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
a46f9aec0a82fcb87095ab9d8dd2d23d5f8e1ea41b68ac2f2d67a397645991d0

On nodes node1 and node2, run the following to join the cluster:

kubeadm join 192.168.12.21:6443 --token rofj8h.4q1prnb6o2bdcj23 --discovery-token-ca-cert-hash sha256:a46f9aec0a82fcb87095ab9d8dd2d23d5f8e1ea41b68ac2f2d67a397645991d0 --cri-socket=unix:///var/run/cri-dockerd.sock

Then, on the master node, wait a few minutes and run:

kubectl get nodes

until both nodes become Ready.
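
As a shortcut, kubeadm can also print a ready-made join command; note that you still have to append the --cri-socket flag for cri-dockerd yourself:

kubeadm token create --print-join-command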

I. Install Kubernetes Dashboard

On master node:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

Check the deployment status with:

kubectl describe pods -n kubernetes-dashboard

Then expose the service using NodePort:

kubectl --namespace kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec": {"type": "NodePort"}}'
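
If you prefer a predictable port over a randomly assigned one, the same patch can pin the nodePort explicitly. A sketch, assuming 32443 is unused and within the default NodePort range (30000-32767):

kubectl --namespace kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec": {"type": "NodePort", "ports": [{"port": 443, "nodePort": 32443}]}}'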

Then get the exposed port:

kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.111.88.187    <none>        8000/TCP        10m
kubernetes-dashboard        NodePort    10.101.252.146   <none>        443:32228/TCP   10m

Then create an admin user with a token. Create the following three YAML files. srv-account.yml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

role-bind.yml:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

token.yml:

apiVersion: v1
kind: Secret
metadata:
  name: admin-user-token
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/service-account.name: admin-user
type: kubernetes.io/service-account-token

Apply them:

kubectl apply -f srv-account.yml
kubectl apply -f role-bind.yml
kubectl apply -f token.yml

Get the token with:

kubectl describe secret admin-user-token -n kubernetes-dashboard
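
On clusters running Kubernetes v1.24 or newer, you can alternatively request a short-lived token directly instead of reading it from the Secret:

kubectl -n kubernetes-dashboard create token admin-user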

Visit https://192.168.12.21:32228 (the master node's IP plus the NodePort from the output above) and log in with the token.
