Installing Kubernetes with kubeadm
This article provides a step-by-step guide to installing a Kubernetes cluster using `kubeadm` on dedicated servers. This method is ideal for setting up a production-ready cluster from scratch, offering significant control and flexibility. For reliable infrastructure, consider dedicated servers with full root access from PowerVPS.
Prerequisites
Before you begin, ensure your dedicated servers meet the following requirements:
- **Operating System**: A fresh installation of a supported Linux distribution. Ubuntu 20.04 LTS or later is highly recommended.
- **Hardware**:
  - **Control Plane Node**: Minimum 2 vCPUs, 2 GB RAM.
  - **Worker Node(s)**: Minimum 1 vCPU, 2 GB RAM per worker node.
- **Network**: All nodes must be able to communicate with each other over the network. A static IP address for each node is crucial.
- **Software**:
  - **SSH Access**: Root or sudo privileges on all nodes.
  - **Internet Access**: Required to download packages.
  - **Unique Hostnames**: Each node needs a unique hostname.
  - **Swap Disabled**: Kubernetes requires swap to be disabled.
  - **Container Runtime**: Docker or containerd must be installed. We will use containerd in this guide.
  - **`kubeadm`, `kubelet`, and `kubectl`**: These Kubernetes components must be installed on every node.
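Before proceeding, you can run a quick sanity check on each node. This is an illustrative sketch, not an official tool; the thresholds simply mirror the minimums listed above:

```shell
# Per-node sanity check (sketch; thresholds mirror the minimums above)
cpus=$(nproc)
mem_mb=$(awk '/MemTotal/ {print int($2/1024)}' /proc/meminfo)
swap_kb=$(awk '/SwapTotal/ {print $2}' /proc/meminfo)

echo "host=$(hostname 2>/dev/null || echo unknown) cpus=${cpus} mem_mb=${mem_mb} swap_kb=${swap_kb}"

[ "$cpus" -ge 2 ]      || echo "WARN: a control plane node needs at least 2 vCPUs"
[ "$mem_mb" -ge 1700 ] || echo "WARN: nodes need roughly 2 GB of RAM"
[ "$swap_kb" -eq 0 ]   || echo "WARN: swap must be disabled before running kubeadm"
```

Run it on every node and resolve any warnings before continuing.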
Preparing the Nodes
This section outlines the essential steps to prepare each node in your cluster. These commands should be executed on *all* nodes (control plane and worker nodes).
1. Update System Packages
It's always a good practice to start with an up-to-date system.

```bash
sudo apt update && sudo apt upgrade -y
```
2. Disable Swap
Kubernetes requires swap to be disabled for `kubelet` to function correctly.

```bash
sudo swapoff -a

# To disable swap permanently, comment out the swap line in /etc/fstab
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
```
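You can confirm that no swap device remains active; `/proc/swaps` should list only its header line, and `SwapTotal` should read 0 kB once swap is off:

```shell
# Active swap devices; only the header line should remain after swapoff
cat /proc/swaps

# Should report 0 kB after swap is disabled
grep '^SwapTotal' /proc/meminfo
```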
3. Configure Kernel Modules and Sysctl Settings
Ensure the necessary kernel modules are loaded and that bridged network traffic is visible to iptables.

```bash
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# sysctl params required by setup; these persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply sysctl params without reboot
sudo sysctl --system
```
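To confirm the modules loaded and the forwarding setting took effect, a quick check (guarded so it only warns, since the bridge sysctls exist only after `br_netfilter` is loaded):

```shell
# Both modules should be listed; a warning here means the modprobe step failed
lsmod | grep -E '^(overlay|br_netfilter)' || echo "WARN: overlay/br_netfilter not loaded"

# Should print 1 on a prepared node
cat /proc/sys/net/ipv4/ip_forward || echo "WARN: could not read net.ipv4.ip_forward"
cat /proc/sys/net/bridge/bridge-nf-call-iptables 2>/dev/null \
  || echo "WARN: bridge sysctls unavailable (is br_netfilter loaded?)"
```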
4. Install Containerd
We'll install `containerd`, a popular container runtime. The `containerd.io` package is distributed from Docker's apt repository.

```bash
# Install dependencies
sudo apt install -y apt-transport-https ca-certificates curl gnupg lsb-release

# Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

# Set up the stable repository
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install containerd
sudo apt update
sudo apt install -y containerd.io

# Configure containerd to use the systemd cgroup driver
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml

# Restart and enable containerd
sudo systemctl restart containerd
sudo systemctl enable containerd
```
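It's worth verifying that the cgroup-driver edit actually landed before moving on. The check below only warns, in case you run it before the configuration step:

```shell
# The config should now contain "SystemdCgroup = true"
grep -q 'SystemdCgroup = true' /etc/containerd/config.toml 2>/dev/null \
  && echo "cgroup driver OK" \
  || echo "WARN: SystemdCgroup not enabled in /etc/containerd/config.toml"

# containerd should be running
systemctl is-active containerd 2>/dev/null || echo "WARN: containerd is not active"
```

A mismatched cgroup driver between containerd and kubelet is a common cause of nodes failing to become Ready.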
5. Install Kubernetes Components
Now, install `kubeadm`, `kubelet`, and `kubectl`. Note that the legacy `apt.kubernetes.io` repository has been shut down; the community-owned `pkgs.k8s.io` repository replaces it.

```bash
# Add the Kubernetes GPG key (replace v1.30 with the minor version you want to track)
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /usr/share/keyrings/kubernetes-archive-keyring.gpg

# Add the Kubernetes repository
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Install components
sudo apt update
sudo apt install -y kubelet kubeadm kubectl

# Hold packages to prevent accidental upgrades
sudo apt-mark hold kubelet kubeadm kubectl
```
Initializing the Control Plane Node
This section details how to set up your first node as the Kubernetes control plane. This command should only be run on the designated control plane node.
1. Initialize the Cluster
Use `kubeadm init` to create the control plane. Replace `192.168.1.100` with your control plane node's IP address; the Pod CIDR `10.244.0.0/16` matches the Flannel default used later in this guide.

```bash
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.1.100
```
On success, the output ends with lines similar to:

```
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [pod-network-configuration.yaml]"
```
2. Configure `kubectl` for Regular User
To interact with the cluster using `kubectl` as a non-root user, follow these steps:

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
3. Install a Pod Network
Kubernetes requires a network plugin (CNI) to enable pods to communicate. We'll use Flannel as an example.

```bash
# Apply the Flannel manifest
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
```
Example output:

```
podsecuritypolicy.policy.kubernetes.io/privileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
configmap/kube-flannel-cfg created
```
4. Verify Cluster Status
Check that the control plane components are running and your node is Ready.

```bash
kubectl get nodes
```

```
NAME               STATUS   ROLES           AGE   VERSION
your-master-node   Ready    control-plane   2m    v1.25.0
```

```bash
kubectl get pods -n kube-system
```

```
NAME                                       READY   STATUS    RESTARTS   AGE
coredns-564dff919d-abcde                   1/1     Running   0          3m
coredns-564dff919d-fghij                   1/1     Running   0          3m
etcd-your-master-node                      1/1     Running   0          3m
kube-apiserver-your-master-node            1/1     Running   0          3m
kube-controller-manager-your-master-node   1/1     Running   0          3m
kube-flannel-ds-abcde                      1/1     Running   0          1m
kube-proxy-xyz12                           1/1     Running   0          3m
kube-scheduler-your-master-node            1/1     Running   0          3m
```
Joining Worker Nodes to the Cluster
This section explains how to add your worker nodes to the Kubernetes cluster. You will perform these steps on each worker node.
1. Get the Join Command
On your control plane node, run the following command to print a fresh `kubeadm join` command.

```bash
sudo kubeadm token create --print-join-command
```
```
kubeadm join 192.168.1.100:6443 --token abcdef.1234567890abcdef --discovery-token-ca-cert-hash sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef
```
This command outputs a `kubeadm join` command that includes a bootstrap token and a CA certificate hash. Tokens expire after 24 hours by default, so generate a new one if needed. You will run this command on each worker node.
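Before running the join, you can check from the worker that the API server port is reachable. This is an optional sketch; `192.168.1.100` is the example control plane address used throughout this guide, and 6443 is kubeadm's default API server port:

```shell
# Probe the control plane API port from the worker node (warns rather than fails)
nc -zv -w 5 192.168.1.100 6443 \
  || echo "WARN: cannot reach 192.168.1.100:6443 - check firewalls and routing"
```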
2. Join the Worker Node
On each worker node, execute the `kubeadm join` command obtained from the control plane.

```bash
sudo kubeadm join 192.168.1.100:6443 --token abcdef.1234567890abcdef --discovery-token-ca-cert-hash sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef
```
3. Verify Worker Node Joining
On your control plane node, check the status of your nodes again.

```bash
kubectl get nodes
```

```
NAME               STATUS   ROLES           AGE   VERSION
your-master-node   Ready    control-plane   5m    v1.25.0
your-worker-node   Ready    <none>          1m    v1.25.0
```
Conclusion
You have successfully installed a Kubernetes cluster using `kubeadm`. This setup provides a solid foundation for deploying and managing containerized applications. For robust and scalable deployments, consider using dedicated servers from PowerVPS to ensure optimal performance and reliability.
Category:Containerization Category:Kubernetes Category:System Administration