<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://serverrental.store/index.php?action=history&amp;feed=atom&amp;title=Installing_Kubernetes_with_kubeadm</id>
	<title>Installing Kubernetes with kubeadm - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://serverrental.store/index.php?action=history&amp;feed=atom&amp;title=Installing_Kubernetes_with_kubeadm"/>
	<link rel="alternate" type="text/html" href="https://serverrental.store/index.php?title=Installing_Kubernetes_with_kubeadm&amp;action=history"/>
	<updated>2026-04-14T21:27:31Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.36.1</generator>
	<entry>
		<id>https://serverrental.store/index.php?title=Installing_Kubernetes_with_kubeadm&amp;diff=5838&amp;oldid=prev</id>
		<title>Admin: New server guide</title>
		<link rel="alternate" type="text/html" href="https://serverrental.store/index.php?title=Installing_Kubernetes_with_kubeadm&amp;diff=5838&amp;oldid=prev"/>
		<updated>2026-04-14T10:01:14Z</updated>

		<summary type="html">&lt;p&gt;New server guide&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;This article provides a step-by-step guide to installing a Kubernetes cluster using `kubeadm` on dedicated servers. This method is ideal for setting up a production-ready cluster from scratch, offering significant control and flexibility. For reliable infrastructure, consider dedicated servers with full root access from [https://powervps.net/?from=32 PowerVPS].&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
Before you begin, ensure your dedicated servers meet the following requirements:&lt;br /&gt;
&lt;br /&gt;
*   **Operating System**: A fresh installation of a supported Linux distribution. Ubuntu 20.04 LTS or later is highly recommended.&lt;br /&gt;
*   **Hardware**:&lt;br /&gt;
    *   **Control Plane Node**: Minimum 2 vCPUs, 2GB RAM.&lt;br /&gt;
    *   **Worker Node(s)**: Minimum 1 vCPU, 2GB RAM per worker node.&lt;br /&gt;
    *   **Network**: All nodes must be able to communicate with each other over the network. A static IP address for each node is crucial.&lt;br /&gt;
*   **Software**:&lt;br /&gt;
    *   **SSH Access**: Root or sudo privileges on all nodes.&lt;br /&gt;
    *   **Internet Access**: Required to download packages.&lt;br /&gt;
    *   **Unique Hostnames**: Each node needs a unique hostname.&lt;br /&gt;
    *   **Swap Disabled**: Kubernetes requires swap to be disabled.&lt;br /&gt;
    *   **Container Runtime**: A CRI-compatible runtime such as containerd or CRI-O. Docker Engine alone is no longer sufficient since dockershim was removed in Kubernetes 1.24; this guide uses containerd.&lt;br /&gt;
    *   **`kubeadm`, `kubelet`, and `kubectl`**: These Kubernetes components need to be installed.&lt;br /&gt;
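&lt;br /&gt;
The checklist above can be scripted as a quick preflight sketch to run on every node before installing anything. It only reads standard Linux interfaces (`nproc`, `/proc/meminfo`, `/proc/swaps`); the thresholds are the kubeadm minimums listed above.&lt;br /&gt;
&lt;br /&gt;
```shell
#!/bin/sh
# Preflight sanity checks for a kubeadm node.

# CPU count: the control plane needs 2+, workers 1+.
cpus=$(nproc)
echo "CPUs: $cpus"

# Memory: kubeadm expects roughly 2GB per node (value is in kB).
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
echo "Memory (kB): $mem_kb"

# Swap must be off: /proc/swaps has only its header line when it is.
swap_entries=$(tail -n +2 /proc/swaps | wc -l)
echo "Active swap devices: $swap_entries"

# Hostnames must be unique across the cluster.
echo "Hostname: $(hostname)"
```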
&lt;br /&gt;
== Preparing the Nodes ==&lt;br /&gt;
&lt;br /&gt;
This section outlines the essential steps to prepare each node in your cluster. These commands should be executed on *all* nodes (control plane and worker nodes).&lt;br /&gt;
&lt;br /&gt;
=== 1. Update System Packages ===&lt;br /&gt;
It's always a good practice to start with an up-to-date system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sudo apt update &amp;amp;&amp;amp; sudo apt upgrade -y&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 2. Disable Swap ===&lt;br /&gt;
Kubernetes requires swap to be disabled for `kubelet` to function correctly.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sudo swapoff -a&lt;br /&gt;
# To disable swap permanently, comment out the swap line in /etc/fstab&lt;br /&gt;
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Why this matters*: `kubelet` relies on memory management that can be unpredictable with swap enabled, potentially leading to application instability.&lt;br /&gt;
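&lt;br /&gt;
If you want to see exactly what the `sed` one-liner does before pointing it at the real `/etc/fstab`, it can be rehearsed on a scratch copy. The sample fstab lines below are invented for illustration.&lt;br /&gt;
&lt;br /&gt;
```shell
# Rehearse the fstab edit on a temporary copy, not the live file.
tmpfstab=$(mktemp)
printf '%s\n' \
  'UUID=abcd-1234 / ext4 defaults 0 1' \
  '/swapfile none swap sw 0 0' | tee "$tmpfstab"

# Same command as in the guide, aimed at the copy.
sed -i '/ swap / s/^\(.*\)$/#\1/g' "$tmpfstab"
cat "$tmpfstab"

# Only the swap line should now start with '#'.
commented=$(grep -c '^#' "$tmpfstab")
echo "commented lines: $commented"
```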
&lt;br /&gt;
=== 3. Configure Kernel Modules and Sysctl Settings ===&lt;br /&gt;
Ensure necessary kernel modules are loaded and network traffic is handled correctly.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cat &amp;lt;&amp;lt;EOF | sudo tee /etc/modules-load.d/k8s.conf&lt;br /&gt;
overlay&lt;br /&gt;
br_netfilter&lt;br /&gt;
EOF&lt;br /&gt;
&lt;br /&gt;
sudo modprobe overlay&lt;br /&gt;
sudo modprobe br_netfilter&lt;br /&gt;
&lt;br /&gt;
# sysctl params required by Kubernetes networking; files in /etc/sysctl.d persist across reboots&lt;br /&gt;
cat &amp;lt;&amp;lt;EOF | sudo tee /etc/sysctl.d/k8s.conf&lt;br /&gt;
net.bridge.bridge-nf-call-iptables  = 1&lt;br /&gt;
net.bridge.bridge-nf-call-ip6tables = 1&lt;br /&gt;
net.ipv4.ip_forward                 = 1&lt;br /&gt;
EOF&lt;br /&gt;
&lt;br /&gt;
# Apply sysctl params without reboot&lt;br /&gt;
sudo sysctl --system&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Why this matters*: These settings are crucial for container networking and the bridge networking used by Kubernetes. `net.bridge.bridge-nf-call-iptables` ensures that iptables rules are applied to bridged traffic, which is essential for network policies and service routing. `net.ipv4.ip_forward` allows the node to act as a router for traffic between pods and the external network.&lt;br /&gt;
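&lt;br /&gt;
After `sysctl --system`, you can confirm a value took effect with `sysctl -n net.ipv4.ip_forward` (it should print `1`). The file contents can also be sanity-checked; the sketch below recreates the `k8s.conf` contents in a scratch file so it is safe to run anywhere.&lt;br /&gt;
&lt;br /&gt;
```shell
# Recreate the k8s.conf contents in a scratch file.
conf=$(mktemp)
printf '%s\n' \
  'net.bridge.bridge-nf-call-iptables  = 1' \
  'net.bridge.bridge-nf-call-ip6tables = 1' \
  'net.ipv4.ip_forward                 = 1' | tee "$conf"

# Count keys explicitly set to 1; all three must be present.
enabled=$(awk -F= '$2+0 == 1' "$conf" | wc -l)
echo "keys enabled: $enabled"
```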
&lt;br /&gt;
=== 4. Install Containerd ===&lt;br /&gt;
We'll install `containerd`, a popular container runtime.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Install dependencies&lt;br /&gt;
sudo apt install -y apt-transport-https ca-certificates curl gnupg lsb-release&lt;br /&gt;
&lt;br /&gt;
# Add Docker's official GPG key (the containerd.io package is distributed via Docker's repository)&lt;br /&gt;
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg&lt;br /&gt;
&lt;br /&gt;
# Set up the stable repository&lt;br /&gt;
echo \&lt;br /&gt;
  &amp;quot;deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \&lt;br /&gt;
  $(lsb_release -cs) stable&amp;quot; | sudo tee /etc/apt/sources.list.d/docker.list &amp;gt; /dev/null&lt;br /&gt;
&lt;br /&gt;
# Install containerd&lt;br /&gt;
sudo apt update&lt;br /&gt;
sudo apt install -y containerd.io&lt;br /&gt;
&lt;br /&gt;
# Configure containerd to use systemd cgroup driver&lt;br /&gt;
sudo mkdir -p /etc/containerd&lt;br /&gt;
containerd config default | sudo tee /etc/containerd/config.toml&lt;br /&gt;
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml&lt;br /&gt;
&lt;br /&gt;
# Restart containerd&lt;br /&gt;
sudo systemctl restart containerd&lt;br /&gt;
sudo systemctl enable containerd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Why this matters*: `containerd` is responsible for running your containers. Configuring it to use `systemd`'s cgroup driver ensures compatibility with `kubelet`, which also uses `systemd` for process management. This avoids potential conflicts and ensures consistent resource allocation.&lt;br /&gt;
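&lt;br /&gt;
The `SystemdCgroup` flip can likewise be rehearsed before touching the real file. The sketch below runs the same `sed` substitution against a two-line excerpt of `config.toml` (the file generated by `containerd config default` is much longer).&lt;br /&gt;
&lt;br /&gt;
```shell
# A trimmed stand-in for /etc/containerd/config.toml.
toml=$(mktemp)
printf '%s\n' \
  '[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]' \
  '            SystemdCgroup = false' | tee "$toml"

# Same substitution as in the guide.
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' "$toml"
grep 'SystemdCgroup' "$toml"
```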
&lt;br /&gt;
=== 5. Install Kubernetes Components ===&lt;br /&gt;
Now, install `kubeadm`, `kubelet`, and `kubectl`.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Add the Kubernetes GPG key (the legacy apt.kubernetes.io repository was shut down in 2024; use pkgs.k8s.io)&lt;br /&gt;
sudo mkdir -p /etc/apt/keyrings&lt;br /&gt;
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.25/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg&lt;br /&gt;
&lt;br /&gt;
# Add the Kubernetes repository (replace v1.25 with the minor release you want to track)&lt;br /&gt;
echo &amp;quot;deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.25/deb/ /&amp;quot; | sudo tee /etc/apt/sources.list.d/kubernetes.list&lt;br /&gt;
&lt;br /&gt;
# Install components&lt;br /&gt;
sudo apt update&lt;br /&gt;
sudo apt install -y kubelet kubeadm kubectl&lt;br /&gt;
&lt;br /&gt;
# Hold packages to prevent accidental upgrades&lt;br /&gt;
sudo apt-mark hold kubelet kubeadm kubectl&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Why this matters*: These are the core components of Kubernetes. `kubelet` runs on each node and ensures containers are running in pods. `kubeadm` is used to bootstrap the cluster, and `kubectl` is the command-line tool for interacting with the cluster. Holding the packages prevents them from being updated to versions that might break your cluster configuration.&lt;br /&gt;
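&lt;br /&gt;
A related reason to hold the packages is Kubernetes' version-skew policy: `kubelet` must never be newer than the API server and may lag it by up to three minor versions (as of Kubernetes 1.28; two before that). The sketch below checks two version strings against that rule; the version values are placeholders.&lt;br /&gt;
&lt;br /&gt;
```shell
# Extract the minor component from a vMAJOR.MINOR.PATCH string.
minor() { echo "$1" | cut -d. -f2; }

apiserver="v1.25.4"
kubelet="v1.24.9"

skew=$(( $(minor "$apiserver") - $(minor "$kubelet") ))
echo "minor-version skew: $skew"

# Allowed: kubelet equal to, or at most a few minors behind, the API server.
if [ "$skew" -ge 0 -a "$skew" -le 3 ]; then echo "skew OK"; else echo "skew too large"; fi
```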
&lt;br /&gt;
== Initializing the Control Plane Node ==&lt;br /&gt;
&lt;br /&gt;
This section details how to set up your first node as the Kubernetes control plane. This command should only be run on the designated control plane node.&lt;br /&gt;
&lt;br /&gt;
=== 1. Initialize the Cluster ===&lt;br /&gt;
Use `kubeadm init` to create the control plane. Replace `192.168.1.100` with your control plane node's IP address. The `--pod-network-cidr` value must match the Pod CIDR your CNI plugin expects; `10.244.0.0/16` is Flannel's default.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.1.100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Expected Output Example*:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Your Kubernetes control-plane has initialized successfully!&lt;br /&gt;
&lt;br /&gt;
To start using your cluster, you need to run the following as a regular user:&lt;br /&gt;
&lt;br /&gt;
  mkdir -p $HOME/.kube&lt;br /&gt;
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config&lt;br /&gt;
  sudo chown $(id -u):$(id -g) $HOME/.kube/config&lt;br /&gt;
&lt;br /&gt;
You should now deploy a pod network to the cluster.&lt;br /&gt;
Run &amp;quot;kubectl apply -f [pod-network-configuration.yaml]&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Why this matters*: This command bootstraps the control plane components (API server, etcd, scheduler, controller manager) and configures the kubelet on the control plane node. The `--pod-network-cidr` flag is essential for the chosen Container Network Interface (CNI) plugin to allocate IP addresses to pods. The `--apiserver-advertise-address` ensures the API server is reachable on the specified IP.&lt;br /&gt;
&lt;br /&gt;
=== 2. Configure `kubectl` for Regular User ===&lt;br /&gt;
To interact with the cluster using `kubectl` as a non-root user, follow these steps:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mkdir -p $HOME/.kube&lt;br /&gt;
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config&lt;br /&gt;
sudo chown $(id -u):$(id -g) $HOME/.kube/config&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Why this matters*: This copies the administrative configuration file to your user's home directory, allowing `kubectl` to authenticate with the Kubernetes API server.&lt;br /&gt;
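&lt;br /&gt;
This file carries cluster-admin credentials, so it should stay readable only by its owner. (As a one-off alternative, root can simply run `export KUBECONFIG=/etc/kubernetes/admin.conf`.) The permission check below is demonstrated on a scratch file; on a real node point it at `$HOME/.kube/config`.&lt;br /&gt;
&lt;br /&gt;
```shell
# Tighten and verify permissions (scratch file stands in for ~/.kube/config).
cfg=$(mktemp)
chmod 600 "$cfg"
perms=$(stat -c '%a' "$cfg")
echo "permissions: $perms"
```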
&lt;br /&gt;
=== 3. Install a Pod Network ===&lt;br /&gt;
Kubernetes requires a network plugin (CNI) to enable pods to communicate. We'll use Flannel as an example.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Apply the Flannel manifest&lt;br /&gt;
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Expected Output Example*:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
namespace/kube-flannel created&lt;br /&gt;
serviceaccount/flannel created&lt;br /&gt;
clusterrole.rbac.authorization.k8s.io/flannel created&lt;br /&gt;
clusterrolebinding.rbac.authorization.k8s.io/flannel created&lt;br /&gt;
configmap/kube-flannel-cfg created&lt;br /&gt;
daemonset.apps/kube-flannel-ds created&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Why this matters*: Without a CNI plugin, pods cannot communicate with each other, even on the same node. Flannel provides a simple overlay network that allows pods across different nodes to exchange traffic.&lt;br /&gt;
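&lt;br /&gt;
Flannel pins its Pod network in the `net-conf.json` section of its manifest, and that value must match the `--pod-network-cidr` passed to `kubeadm init`. The sketch below extracts the CIDR from a reproduction of that fragment (the fragment shown is Flannel's shipped default).&lt;br /&gt;
&lt;br /&gt;
```shell
# The net-conf.json fragment from kube-flannel.yml, reproduced for illustration.
netconf=$(mktemp)
printf '%s\n' \
  '{' \
  '  "Network": "10.244.0.0/16",' \
  '  "Backend": {' \
  '    "Type": "vxlan"' \
  '  }' \
  '}' | tee "$netconf"

# Pull out the CIDR and compare it with what kubeadm init was given.
cidr=$(awk -F'"' '/Network/ {print $4}' "$netconf")
echo "Flannel Pod CIDR: $cidr"
```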
&lt;br /&gt;
=== 4. Verify Cluster Status ===&lt;br /&gt;
Check if the control plane components are running and if your nodes are ready.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
kubectl get nodes&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Expected Output Example (after installing Flannel)*:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
NAME             STATUS   ROLES           AGE   VERSION&lt;br /&gt;
your-master-node   Ready    control-plane   2m    v1.25.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
kubectl get pods -n kube-system&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Expected Output Example*:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
NAME                                     READY   STATUS    RESTARTS   AGE&lt;br /&gt;
coredns-564dff919d-abcde                 1/1     Running   0          3m&lt;br /&gt;
coredns-564dff919d-fghij                 1/1     Running   0          3m&lt;br /&gt;
etcd-your-master-node                    1/1     Running   0          3m&lt;br /&gt;
kube-apiserver-your-master-node          1/1     Running   0          3m&lt;br /&gt;
kube-controller-manager-your-master-node 1/1     Running   0          3m&lt;br /&gt;
kube-flannel-ds-abcde                    1/1     Running   0          1m&lt;br /&gt;
kube-proxy-xyz12                         1/1     Running   0          3m&lt;br /&gt;
kube-scheduler-your-master-node          1/1     Running   0          3m&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Why this matters*: This is your first check to ensure the cluster is operational. `kubectl get nodes` shows the status of your cluster nodes, and `kubectl get pods -n kube-system` verifies that the core Kubernetes components are running.&lt;br /&gt;
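&lt;br /&gt;
On a live cluster this check is easy to script with `kubectl get nodes --no-headers`, whose STATUS column is the second field. The parsing is sketched below against a copy of the sample output above.&lt;br /&gt;
&lt;br /&gt;
```shell
# Stand-in for 'kubectl get nodes --no-headers' output.
nodes=$(mktemp)
printf '%s\n' \
  'your-master-node   Ready    control-plane   2m    v1.25.0' | tee "$nodes"

# Count nodes whose STATUS (column 2) is not Ready.
notready=$(awk '$2 != "Ready"' "$nodes" | wc -l)
echo "nodes not Ready: $notready"
```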
&lt;br /&gt;
== Joining Worker Nodes to the Cluster ==&lt;br /&gt;
&lt;br /&gt;
This section explains how to add your worker nodes to the Kubernetes cluster. You will perform these steps on each worker node.&lt;br /&gt;
&lt;br /&gt;
=== 1. Get the Join Command ===&lt;br /&gt;
On your control plane node, run the following command to get the `kubeadm join` command.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sudo kubeadm token create --print-join-command&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Expected Output Example*:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
kubeadm join 192.168.1.100:6443 --token abcdef.1234567890abcdef --discovery-token-ca-cert-hash sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This command will output a `kubeadm join` command that includes a token and a CA certificate hash. You will need this for each worker node.&lt;br /&gt;
&lt;br /&gt;
=== 2. Join the Worker Node ===&lt;br /&gt;
On each worker node, execute the `kubeadm join` command obtained from the control plane.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sudo kubeadm join 192.168.1.100:6443 --token abcdef.1234567890abcdef --discovery-token-ca-cert-hash sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Why this matters*: This command uses the generated token and CA certificate hash to securely authenticate the worker node with the control plane's API server. It then configures the `kubelet` on the worker node to join the cluster.&lt;br /&gt;
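&lt;br /&gt;
If you have a token but lost the hash, the discovery hash can be recomputed from the control plane's CA certificate. The sketch below demonstrates the pipeline on a throwaway self-signed certificate so it can run anywhere; on a real control plane, substitute `/etc/kubernetes/pki/ca.crt` for `demo-ca.crt`.&lt;br /&gt;
&lt;br /&gt;
```shell
# Generate a throwaway cert so the pipeline can be demonstrated anywhere.
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demo' \
  -keyout demo-ca.key -out demo-ca.crt -days 1

# Hash of the cert's public key, as used by --discovery-token-ca-cert-hash.
hash=$(openssl x509 -pubkey -in demo-ca.crt | \
  openssl pkey -pubin -outform der | \
  openssl dgst -sha256 | sed 's/^.* //')
echo "sha256:$hash"
```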
&lt;br /&gt;
=== 3. Verify Worker Node Joining ===&lt;br /&gt;
On your control plane node, check the status of your nodes again.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
kubectl get nodes&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Expected Output Example (after joining a worker node)*:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
NAME             STATUS   ROLES           AGE   VERSION&lt;br /&gt;
your-master-node   Ready    control-plane   5m    v1.25.0&lt;br /&gt;
your-worker-node   Ready    &amp;lt;none&amp;gt;          1m    v1.25.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Why this matters*: This confirms that your worker node has successfully joined the cluster and is ready to accept workloads.&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
&lt;br /&gt;
*   **`kubelet` not starting**:&lt;br /&gt;
    *   Check `journalctl -u kubelet` for detailed error messages.&lt;br /&gt;
    *   Ensure swap is disabled (`sudo swapoff -a` and check `/etc/fstab`).&lt;br /&gt;
    *   Verify `containerd` is running (`sudo systemctl status containerd`).&lt;br /&gt;
    *   Check `/etc/containerd/config.toml` for `SystemdCgroup = true`.&lt;br /&gt;
&lt;br /&gt;
*   **`kubectl get nodes` shows `NotReady`**:&lt;br /&gt;
    *   Ensure the pod network (Flannel in this guide) is installed correctly (`kubectl apply -f ...`).&lt;br /&gt;
    *   Check the status of the `kube-flannel-ds` pods in the `kube-system` namespace (`kubectl get pods -n kube-system`).&lt;br /&gt;
    *   Verify network connectivity between nodes. Ensure firewalls are not blocking the required ports (e.g., 6443, 2379-2380, 10250, 10257, 10259 on the control plane; 10250 and the NodePort range 30000-32767 on workers).&lt;br /&gt;
&lt;br /&gt;
*   **`kubeadm join` fails**:&lt;br /&gt;
    *   Double-check the token and CA certificate hash. Tokens expire after 24 hours. If it has expired, generate a new one on the control plane: `sudo kubeadm token create --print-join-command`.&lt;br /&gt;
    *   Ensure the control plane's API server is reachable from the worker node (try `telnet 192.168.1.100 6443`).&lt;br /&gt;
    *   Check `/etc/kubernetes/kubelet.conf` on the worker node for any configuration issues.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
You have successfully installed a Kubernetes cluster using `kubeadm`. This setup provides a solid foundation for deploying and managing containerized applications. For robust and scalable deployments, consider using dedicated servers from [https://powervps.net/?from=32 PowerVPS] to ensure optimal performance and reliability.&lt;br /&gt;
&lt;br /&gt;
[[Category:Containerization]]&lt;br /&gt;
[[Category:Kubernetes]]&lt;br /&gt;
[[Category:System Administration]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
</feed>