Installing Kubernetes with kubeadm

This article provides a step-by-step guide to installing a Kubernetes cluster using `kubeadm` on dedicated servers. This method is ideal for setting up a production-ready cluster from scratch, offering significant control and flexibility. For reliable infrastructure, consider dedicated servers with full root access from PowerVPS.

Prerequisites

Before you begin, ensure your dedicated servers meet the following requirements:

  • **Operating System**: A fresh installation of a supported Linux distribution. Ubuntu 20.04 LTS or later is highly recommended.
  • **Hardware**:
   *   **Control Plane Node**: Minimum 2 vCPUs, 2GB RAM.
   *   **Worker Node(s)**: Minimum 1 vCPU, 2GB RAM per worker node.
   *   **Network**: All nodes must be able to communicate with each other over the network. A static IP address for each node is crucial.
  • **Software**:
   *   **SSH Access**: Root or sudo privileges on all nodes.
   *   **Internet Access**: Required to download packages.
   *   **Unique Hostnames**: Each node needs a unique hostname.
   *   **Swap Disabled**: Kubernetes requires swap to be disabled.
   *   **Container Runtime**: A CRI-compatible runtime such as containerd or CRI-O (since Kubernetes 1.24, Docker Engine additionally requires the `cri-dockerd` shim). We will use containerd for this guide and install it below.
   *   **`kubeadm`, `kubelet`, and `kubectl`**: The core Kubernetes tooling; installation is covered below.
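
If the nodes cannot resolve each other through DNS, a common approach is to pin the hostnames to their static IPs in `/etc/hosts` on every node. The names and addresses below are placeholders, not values your network will have; adjust them to your setup:

```
# /etc/hosts on every node -- example addresses only
192.168.1.100  k8s-control-plane
192.168.1.101  k8s-worker-1
192.168.1.102  k8s-worker-2
```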

Preparing the Nodes

This section outlines the essential steps to prepare each node in your cluster. These commands should be executed on *all* nodes (control plane and worker nodes).

1. Update System Packages

It's always a good practice to start with an up-to-date system.

sudo apt update && sudo apt upgrade -y

2. Disable Swap

Kubernetes requires swap to be disabled for `kubelet` to function correctly.

sudo swapoff -a
# To disable swap permanently, comment out the swap line in /etc/fstab
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
*Why this matters*: `kubelet` relies on memory management that can be unpredictable with swap enabled, potentially leading to application instability.
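
To see exactly what the `sed` command above does without touching your real `/etc/fstab`, you can run the same substitution against a scratch copy (a sketch; the file contents here are made up):

```shell
# Write a sample fstab with one swap entry to a scratch file
cat <<'EOF' > /tmp/fstab.sample
UUID=abcd-1234 /         ext4 defaults 0 1
/swapfile      none swap sw   0        0
EOF

# Same substitution the guide applies: comment out any line containing " swap "
sed -i '/ swap / s/^\(.*\)$/#\1/g' /tmp/fstab.sample
cat /tmp/fstab.sample
```

Only the swap line gains a leading `#`; the root filesystem entry is left untouched.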

3. Configure Kernel Modules and Sysctl Settings

Ensure necessary kernel modules are loaded and network traffic is handled correctly.

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# sysctl params required by the setup; these persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply sysctl params without reboot
sudo sysctl --system
*Why this matters*: These settings are crucial for container networking and the bridge networking used by Kubernetes. `net.bridge.bridge-nf-call-iptables` ensures that iptables rules are applied to bridged traffic, which is essential for network policies and service routing. `net.ipv4.ip_forward` allows the node to act as a router for traffic between pods and the external network.
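
Each of these sysctl keys is backed by a file under `/proc/sys`, so you can confirm the live values directly. On a fully prepared node (modules loaded, `sudo sysctl --system` applied), all three should read `1`:

```shell
# Read the live kernel value; the file holds a single 0/1 flag
cat /proc/sys/net/ipv4/ip_forward

# This path only exists once the br_netfilter module is loaded
cat /proc/sys/net/bridge/bridge-nf-call-iptables 2>/dev/null \
  || echo "br_netfilter module not loaded yet"
```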

4. Install Containerd

We'll install `containerd`, a popular container runtime.

# Install dependencies
sudo apt install -y apt-transport-https ca-certificates curl gnupg lsb-release

# Add Docker's official GPG key (the containerd.io package is distributed via Docker's repositories)
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

# Set up the stable repository
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install containerd
sudo apt update
sudo apt install -y containerd.io

# Configure containerd to use systemd cgroup driver
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml

# Restart containerd
sudo systemctl restart containerd
sudo systemctl enable containerd
*Why this matters*: `containerd` is responsible for running your containers. Configuring it to use `systemd`'s cgroup driver ensures compatibility with `kubelet`, which also uses `systemd` for process management. This avoids potential conflicts and ensures consistent resource allocation.
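
The `sed` one-liner above flips a single key inside a large generated file. A self-contained sketch of the same substitution, run on a made-up fragment that mirrors the relevant part of the default `config.toml`:

```shell
# Sample fragment of the runc runtime options from a default config.toml
cat <<'EOF' > /tmp/containerd-config.sample
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = false
EOF

# Same substitution the guide applies to /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /tmp/containerd-config.sample
grep SystemdCgroup /tmp/containerd-config.sample   # now reads: SystemdCgroup = true
```

After running the real command, `grep SystemdCgroup /etc/containerd/config.toml` is a quick way to confirm the change before restarting the service.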

5. Install Kubernetes Components

Now, install `kubeadm`, `kubelet`, and `kubectl`.

# Add the Kubernetes package repository signing key
# (the legacy apt.kubernetes.io / packages.cloud.google.com repositories have been
# deprecated and frozen; pkgs.k8s.io is the current community-owned repository)
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

# Add the Kubernetes repository (replace v1.29 with the minor version you want to install)
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Install components
sudo apt update
sudo apt install -y kubelet kubeadm kubectl

# Hold packages to prevent accidental upgrades
sudo apt-mark hold kubelet kubeadm kubectl
*Why this matters*: These are the core components of Kubernetes. `kubelet` runs on each node and ensures containers are running in pods. `kubeadm` is used to bootstrap the cluster, and `kubectl` is the command-line tool for interacting with the cluster. Holding the packages prevents them from being updated to versions that might break your cluster configuration.

Initializing the Control Plane Node

This section details how to set up your first node as the Kubernetes control plane. This command should only be run on the designated control plane node.

1. Initialize the Cluster

Use `kubeadm init` to create the control plane. Replace `192.168.1.100` with your control plane node's IP address. The `--pod-network-cidr` value below (`10.244.0.0/16`) is the default range expected by Flannel, the CNI plugin installed later in this guide; adjust it if you choose a different plugin.

sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.1.100
*Expected Output Example*:
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [pod-network-configuration.yaml]"
*Why this matters*: This command bootstraps the control plane components (API server, etcd, scheduler, controller manager) and configures the kubelet on the control plane node. The `--pod-network-cidr` flag is essential for the chosen Container Network Interface (CNI) plugin to allocate IP addresses to pods. The `--apiserver-advertise-address` ensures the API server is reachable on the specified IP.

2. Configure `kubectl` for Regular User

To interact with the cluster using `kubectl` as a non-root user, follow these steps:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
*Why this matters*: This copies the administrative configuration file to your user's home directory, allowing `kubectl` to authenticate with the Kubernetes API server.

3. Install a Pod Network

Kubernetes requires a network plugin (CNI) to enable pods to communicate. We'll use Flannel as an example.

# Apply the Flannel manifest
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
*Expected Output Example* (exact resources vary by Flannel release; recent versions deploy into their own `kube-flannel` namespace):
namespace/kube-flannel created
serviceaccount/flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
*Why this matters*: Without a CNI plugin, pods cannot communicate with each other, even on the same node. Flannel provides a simple overlay network that allows pods across different nodes to exchange traffic.

4. Verify Cluster Status

Check if the control plane components are running and if your nodes are ready.

kubectl get nodes
*Expected Output Example (after installing Flannel)*:
NAME               STATUS   ROLES           AGE   VERSION
your-master-node   Ready    control-plane   2m    v1.25.0
kubectl get pods -n kube-system
*Expected Output Example*:
NAME                                       READY   STATUS    RESTARTS   AGE
coredns-564dff919d-abcde                   1/1     Running   0          3m
coredns-564dff919d-fghij                   1/1     Running   0          3m
etcd-your-master-node                      1/1     Running   0          3m
kube-apiserver-your-master-node            1/1     Running   0          3m
kube-controller-manager-your-master-node   1/1     Running   0          3m
kube-proxy-xyz12                           1/1     Running   0          3m
kube-scheduler-your-master-node            1/1     Running   0          3m

Depending on your Flannel version, the `kube-flannel-ds` pods appear here or in a dedicated `kube-flannel` namespace.
*Why this matters*: This is your first check to ensure the cluster is operational. `kubectl get nodes` shows the status of your cluster nodes, and `kubectl get pods -n kube-system` verifies that the core Kubernetes components are running.

Joining Worker Nodes to the Cluster

This section explains how to add your worker nodes to the Kubernetes cluster. You will perform these steps on each worker node.

1. Get the Join Command

On your control plane node, run the following command to get the `kubeadm join` command.

sudo kubeadm token create --print-join-command
*Expected Output Example*:
kubeadm join 192.168.1.100:6443 --token abcdef.1234567890abcdef --discovery-token-ca-cert-hash sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef

This command will output a `kubeadm join` command that includes a token and a CA certificate hash. You will need this for each worker node.

2. Join the Worker Node

On each worker node, execute the `kubeadm join` command obtained from the control plane.

sudo kubeadm join 192.168.1.100:6443 --token abcdef.1234567890abcdef --discovery-token-ca-cert-hash sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef
*Why this matters*: This command uses the generated token and CA certificate hash to securely authenticate the worker node with the control plane's API server. It then configures the `kubelet` on the worker node to join the cluster.

3. Verify Worker Node Joining

On your control plane node, check the status of your nodes again.

kubectl get nodes
*Expected Output Example (after joining a worker node)*:
NAME               STATUS   ROLES           AGE   VERSION
your-master-node   Ready    control-plane   5m    v1.25.0
your-worker-node   Ready    <none>          1m    v1.25.0
*Why this matters*: This confirms that your worker node has successfully joined the cluster and is ready to accept workloads.

Troubleshooting

  • **`kubelet` not starting**:
   *   Check `journalctl -u kubelet` for detailed error messages.
   *   Ensure swap is disabled (`sudo swapoff -a` and check `/etc/fstab`).
   *   Verify `containerd` is running (`sudo systemctl status containerd`).
   *   Check `/etc/containerd/config.toml` for `SystemdCgroup = true`.
  • **`kubectl get nodes` shows `NotReady`**:
   *   Ensure the pod network (Flannel in this guide) is installed correctly (`kubectl apply -f ...`).
   *   Check the status of the `kube-flannel-ds` pods (`kubectl get pods -A`); depending on the Flannel version they run in the `kube-flannel` or `kube-system` namespace.
   *   Verify network connectivity between nodes. Ensure firewalls are not blocking required ports (e.g., 6443 for the API server, 2379-2380 for etcd, 10250 for the kubelet, 10257 for the controller manager, and 10259 for the scheduler).
  • **`kubeadm join` fails**:
   *   Double-check the token and CA certificate hash. Tokens expire after 24 hours. If it has expired, generate a new one on the control plane: `sudo kubeadm token create --print-join-command`.
   *   Ensure the control plane's API server is reachable from the worker node (try `telnet 192.168.1.100 6443`).
   *   Check `/etc/kubernetes/kubelet.conf` on the worker node for any configuration issues.
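
If `telnet` is not installed on the worker node, bash can probe the API server port by itself through its built-in `/dev/tcp` pseudo-device. A sketch, using `192.168.1.100` (the example control-plane IP from this guide; substitute your own):

```shell
# Probe TCP port 6443 with a 3-second cap; prints one line either way
if timeout 3 bash -c 'exec 3<>/dev/tcp/192.168.1.100/6443' 2>/dev/null; then
  echo "API server port 6443 is reachable"
else
  echo "cannot reach 192.168.1.100:6443"
fi
```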

Conclusion

You have successfully installed a Kubernetes cluster using `kubeadm`. This setup provides a solid foundation for deploying and managing containerized applications. For robust and scalable deployments, consider using dedicated servers from PowerVPS to ensure optimal performance and reliability.