Installing K8S

  • Architecture

    • Single-node
      With a single-node deployment, all the components run on the same server. This is great for testing, learning, and developing around Kubernetes.
    • Single head node, multiple workers
      With a single head node and multiple workers, the setup typically consists of a single-node etcd instance running on the head node alongside the API server, the scheduler, and the controller-manager.
    • Multiple head nodes with HA, multiple workers
      Multiple head nodes in an HA configuration plus multiple workers add durability to the cluster. The API server is fronted by a load balancer, and the scheduler and controller-manager elect a leader (configured via flags). The etcd setup can still be single node.
    • HA etcd, HA head nodes, multiple workers
      The most advanced and resilient setup is an HA etcd cluster with HA head nodes and multiple workers: etcd runs as a true cluster, providing HA, on nodes separate from the Kubernetes head nodes (a configuration sketch follows below).
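      For this variant, a minimal sketch of the corresponding kubeadm ClusterConfiguration, assuming a hypothetical load balancer VIP 10.2.0.100 and three hypothetical external etcd nodes:

      apiVersion: kubeadm.k8s.io/v1beta3
      kind: ClusterConfiguration
      kubernetesVersion: v1.27.1
      controlPlaneEndpoint: "10.2.0.100:6443"   # the load balancer fronting all API servers
      etcd:
        external:                               # etcd cluster outside the head nodes
          endpoints:
            - https://10.2.0.11:2379
            - https://10.2.0.12:2379
            - https://10.2.0.13:2379
          caFile: /etc/kubernetes/pki/etcd/ca.crt
          certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
          keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key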
  • Install K8S components on all nodes (workers and master)
    The aim is to install K8S 1.27, so check version compatibility first.
    Linux: Ubuntu 20.04
    Containerd: see https://containerd.io/releases/ for compatibility between containerd and Kubernetes.
    Disable swap

    sudo swapoff -a       # turn off swap now
    vi /etc/fstab         # comment out the swap entry so it stays off after reboot
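    To make this persistent without an editor, a one-liner sketch (assumes the swap entry in /etc/fstab contains the word swap):

    sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab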

    containerd prerequisites

    sudo modprobe overlay
    sudo modprobe br_netfilter

    Persist the module loading and set the required sysctl params; both persist across reboots

    cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
    overlay
    br_netfilter
    EOF
    cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
    net.bridge.bridge-nf-call-iptables  = 1
    net.ipv4.ip_forward                 = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    EOF
    sudo sysctl --system
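    Optionally confirm the values took effect:

    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward net.bridge.bridge-nf-call-ip6tables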

    Install containerd

    sudo apt-get update
    sudo apt-get install -y containerd

    Configure containerd

    sudo mkdir -p /etc/containerd
    containerd config default | sudo tee /etc/containerd/config.toml

    Using the systemd cgroup driver

    sudo vi /etc/containerd/config.toml
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    ...
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
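    Alternatively, the same edit non-interactively (a sketch assuming the default generated config.toml, where SystemdCgroup = false appears once under the runc options):

    sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml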

    Restart containerd with the new configuration

    sudo systemctl restart containerd

    Add Google's apt repository GPG key

    curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

    Note: apt-key and the apt.kubernetes.io repository are deprecated; see https://facsiaginsa.com/kubernetes/install-kubernetes-single-master or the sketch after the next block.
    Add the kubernetes apt repository (legacy method)

    sudo bash -c 'cat <<EOF>/etc/apt/sources.list.d/kubernetes.list
    deb https://apt.kubernetes.io/ kubernetes-xenial main 
    EOF'
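    Since the legacy repository has been shut down, a sketch of the replacement setup using the community-owned pkgs.k8s.io repository (per minor version, here v1.27; note its package versions use a different suffix, e.g. 1.27.1-1.1 rather than 1.27.1-00):

    sudo mkdir -p /etc/apt/keyrings
    curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.27/deb/Release.key \
      | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
    echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.27/deb/ /' \
      | sudo tee /etc/apt/sources.list.d/kubernetes.list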

    Update the package list and use apt-cache policy to inspect the versions available in the repository

    sudo apt-get update
    apt-cache policy kubelet | head -n 20

    Install the required packages; if needed, we can pin a specific version

    VERSION=1.27.1-00
    echo $VERSION
    sudo apt-get install -y kubelet=$VERSION kubeadm=$VERSION kubectl=$VERSION
    sudo apt-mark hold kubelet kubeadm kubectl containerd

    Check the status of our kubelet and container runtime; containerd should be active. The kubelet will restart in a loop until kubeadm init/join runs, which is expected.

    sudo systemctl status kubelet.service
    sudo systemctl status containerd.service
  • Install K8S components on the master node

    • Configure Cluster

Kubeadm configuration file

kubeadm config print init-defaults | tee ClusterConfiguration.yaml

Set the IP endpoint for the API server, localAPIEndpoint.advertiseAddress (here the master's address, 10.2.0.2)

sed -i 's/ advertiseAddress: 1.2.3.4/ advertiseAddress: 10.2.0.2/' ClusterConfiguration.yaml

Change nodeRegistration.criSocket from docker to containerd

sed -i 's/  criSocket: \/var\/run\/dockershim\.sock/  criSocket: \/run\/containerd\/containerd\.sock/' ClusterConfiguration.yaml

Set the cgroup driver for the kubelet to systemd; it's not set in this file yet, and the default is cgroupfs

cat <<EOF >> ClusterConfiguration.yaml
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF

Check the file before init to verify all the modifications are in place

vi ClusterConfiguration.yaml
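Before running init, the modified fields should look roughly like this (a sketch; note that recent kubeadm releases already default criSocket to a containerd socket in unix:// form, so the sed above may be a no-op; just verify the socket points at containerd):

localAPIEndpoint:
  advertiseAddress: 10.2.0.2
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd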

Init the cluster with the configuration file

sudo kubeadm init \
  --config=ClusterConfiguration.yaml

Note: the kubeadm init output ends with a kubeadm join command; run it on each worker to join it to the master. Then set up kubectl access for your user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • Configure and Install Calico
    Pull down Calico and choose the version compatible with K8S

    curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml -o calico.yaml

    Edit the calico-node DaemonSet to configure the initial IP pool: find the pod network address range setting CALICO_IPV4POOL_CIDR, uncomment it, and set it to 10.48.0.0/24 (the pod range)

    vi calico.yaml    # set CALICO_IPV4POOL_CIDR to 10.48.0.0/24
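    After the edit, the env entry in the calico-node DaemonSet should look like this (it ships commented out in the manifest):

    - name: CALICO_IPV4POOL_CIDR
      value: "10.48.0.0/24"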

    Apply the calico manifest

    kubectl apply -f calico.yaml
  • Join a worker to the master (if you didn't run the join command earlier)

Ensure both services are enabled to start when the system boots

sudo systemctl enable kubelet.service
sudo systemctl enable containerd.service

If you didn't keep the token from the master node,
go to the master and run these commands

kubeadm token list

If there is no token, or it has expired (the default TTL is 24 hours), create a new one

kubeadm token create

or

kubeadm token create --print-join-command

On the control plane you can compute the CA cert hash

openssl x509 -pubkey \
-in /etc/kubernetes/pki/ca.crt | openssl rsa \
-pubin -outform der 2>/dev/null | openssl dgst \
-sha256 -hex | sed 's/^.* //'

On the worker, join the control plane by executing this command (token and hash come from the steps above)

sudo kubeadm join 10.2.0.2:6443 --token 5u4jqt.w6xzvlx3zh7qf9gx \
    --discovery-token-ca-cert-hash sha256:249324363466a62b77472d633b88e600c58f319e9690a8c0ca699c3e8942deee

Return to the master and run the commands below
to check that the new node exists

kubectl get nodes

Watch the pods until everything is Running and the nodes report Ready

kubectl get pods --all-namespaces --watch
  • Check installation

check if all pods are running

kubectl get pods --all-namespaces
# or watch continuously with
kubectl get pods --all-namespaces --watch
kubectl get nodes

check systemd units

sudo systemctl status kubelet.service

check out the static pod manifests on the control plane node

ls /etc/kubernetes/manifests

look more closely at the etcd and API server manifests

sudo more /etc/kubernetes/manifests/etcd.yaml
sudo more /etc/kubernetes/manifests/kube-apiserver.yaml

check out the directory where the kubeconfig files live for each of the control plane pods

ls /etc/kubernetes
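The directory should contain something like:

admin.conf  controller-manager.conf  kubelet.conf  manifests  pki  scheduler.conf

Each .conf file is a kubeconfig the corresponding component uses to talk to the API server.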
  • Example commands on a worker node

    ip link
    sudo modprobe overlay
    sudo modprobe br_netfilter
    cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
    overlay
    br_netfilter
    EOF
    cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
    net.bridge.bridge-nf-call-iptables  = 1
    net.ipv4.ip_forward                 = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    EOF
    sudo sysctl --system
    sudo apt-get update
    sudo apt-get install -y containerd
    ls /etc/containerd
    sudo mkdir -p /etc/containerd
    containerd config default | sudo tee /etc/containerd/config.toml
    sudo vi /etc/containerd/config.toml    # set SystemdCgroup = true
    sudo systemctl restart containerd
    ps -e | grep containerd
    curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
    sudo bash -c 'cat <<EOF>/etc/apt/sources.list.d/kubernetes.list
    deb https://apt.kubernetes.io/ kubernetes-xenial main
    EOF'
    sudo apt-get update
    apt-cache policy kubelet | head -n 20
    VERSION=1.27.1-00
    echo $VERSION
    sudo apt-get install -y kubelet=$VERSION kubeadm=$VERSION kubectl=$VERSION
    sudo apt-mark hold kubelet kubeadm kubectl containerd
    sudo systemctl status kubelet.service
    sudo systemctl status containerd.service
    sudo kubeadm join 100.92.88.11:6443 --token nlu90p.5m55dmv3zcfpfmkc --discovery-token-ca-cert-hash sha256:b89ad12d8803ba97588701c684b2b36aa287d28161e060167a865b6ed68deb4c
    kubectl get node
  • Install the nginx Ingress Controller
    Check compatibility before installing:

    https://github.com/kubernetes/ingress-nginx

Install the ingress controller with the command below, or see: https://github.com/kubernetes/ingress-nginx/blob/main/docs/deploy/index.md#bare-metal-clusters

    kubectl create namespace ingress-nginx
    kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/cloud/deploy.yaml
    kubectl get pods --namespace=ingress-nginx
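    Optionally wait until the controller pod is ready, as in the upstream docs:

    kubectl wait --namespace ingress-nginx \
      --for=condition=ready pod \
      --selector=app.kubernetes.io/component=controller \
      --timeout=120s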

To expose services via the ingress controller, see:

https://github.com/kubernetes/ingress-nginx/blob/main/docs/deploy/baremetal.md

The YAML to expose the ingress controller via a Service is below. Note: the Service must be created in the ingress-nginx namespace so its selector can match the controller pods:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  selector:
    app.kubernetes.io/component: controller
  externalIPs:
    - 100.57.100.100
  type: NodePort
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
    - name: https
      port: 443
      protocol: TCP
      targetPort: 443
EOF
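To verify end to end, a minimal Ingress sketch routing to a hypothetical Service web-svc on port 8080 (both the host and the Service are assumptions, not created above):

kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: web.example.local   # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-svc   # hypothetical backend Service
                port:
                  number: 8080
EOF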
