Installing K3s Lightweight Kubernetes

From Server rental store
Revision as of 15:57, 12 April 2026 by Admin (talk | contribs) (New server guide)


K3s is a highly available, certified Kubernetes distribution designed for IoT, edge computing, and resource-constrained environments. It's a single binary that's easy to install and manage, making it an excellent choice for developers and system administrators looking for a lightweight Kubernetes solution. This guide will walk you through the installation of K3s on a typical Linux server.

Prerequisites

Before you begin, ensure you have the following:

  • At least one Linux server (e.g., Ubuntu, CentOS, Debian). K3s can run on a single node or multiple nodes for a high-availability cluster.
  • SSH access to your server(s) with a user that has sudo privileges.
  • Internet connectivity on your server(s) to download K3s and its dependencies.
  • Basic understanding of Linux command-line operations.
  • Understanding of networking concepts, especially firewall rules.

For demanding workloads, consider using GPU servers available at Immers Cloud, offering a range of GPUs from $0.23/hr for inference to $4.74/hr for H200.

Installing a K3s Server Node

K3s offers a very simple installation script. You can download and run it directly on your server.

  1. SSH into your server:
ssh your_user@your_server_ip
  2. Download and run the K3s installation script:

This command downloads the latest stable version of K3s and installs it as a systemd service.

curl -sfL https://get.k3s.io | sh -
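If you need reproducible installs, the script also honors an `INSTALL_K3S_VERSION` environment variable that pins a specific release instead of tracking the latest stable channel. The version string below is only an example; pick a release from the K3s releases page.

```shell
# Pin a specific K3s release (the version shown is illustrative).
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.29.4+k3s1" sh -
```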
  3. Verify the installation:

After the script completes, K3s should be running. You can check its status:

sudo systemctl status k3s

You should see output indicating that the `k3s.service` is active and running.

  4. Check the K3s version:

To confirm the installed version:

sudo k3s --version
  5. Accessing the Kubernetes API:

K3s automatically configures `kubectl` for you. The configuration file is typically located at `/etc/rancher/k3s/k3s.yaml`. To use `kubectl` without `sudo`, you can copy this file to your user's `.kube` directory:

mkdir -p $HOME/.kube
sudo cp /etc/rancher/k3s/k3s.yaml $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
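Instead of copying the file, you can also point `kubectl` at the K3s kubeconfig in place via the `KUBECONFIG` environment variable. Note that `/etc/rancher/k3s/k3s.yaml` is readable only by root by default, so this approach requires root access or relaxed file permissions (a security trade-off on shared machines).

```shell
# Use the K3s kubeconfig in place (requires read access to the file).
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get nodes
```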

Now you can run `kubectl` commands:

kubectl get nodes

You should see your single server node listed.

Installing K3s Agent Nodes (Joining a Cluster)

To create a multi-node cluster, you'll need to install K3s on your worker nodes and join them to the server node.

  1. On your K3s server node, get the node token:

The token is required for agent nodes to securely join the cluster.

sudo cat /var/lib/rancher/k3s/server/node-token

Copy this token, as you'll need it for the agent installations.

  2. On each agent node, SSH in and run the K3s installer with the server IP and token:

Replace `YOUR_SERVER_IP` with the IP address of your K3s server node and `YOUR_NODE_TOKEN` with the token you just retrieved.

curl -sfL https://get.k3s.io | K3S_URL=https://YOUR_SERVER_IP:6443 K3S_TOKEN=YOUR_NODE_TOKEN sh -

This command installs K3s in agent mode and configures it to connect to your server node.
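If you have several agents to join, the same command can be run over SSH in a loop. The hostnames below and the locally saved token file are assumptions for illustration.

```shell
# Sketch: join multiple agents from a workstation. agent1..agent3 and
# ./node-token are placeholders -- adjust for your environment.
SERVER_IP="YOUR_SERVER_IP"
TOKEN="$(cat ./node-token)"
for host in agent1 agent2 agent3; do
  ssh "your_user@${host}" \
    "curl -sfL https://get.k3s.io | K3S_URL=https://${SERVER_IP}:6443 K3S_TOKEN=${TOKEN} sh -"
done
```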

  3. Verify agent node status:

On the agent node, you can check the K3s service status:

sudo systemctl status k3s-agent

It should be active and running.

  4. On the K3s server node, verify the agent node has joined:

Run `kubectl get nodes` again on your server node. You should now see both the server node and the new agent node listed.

kubectl get nodes

High Availability (HA) Setup

For production environments, a single server node is a single point of failure. K3s supports High Availability configurations. This typically involves:

  • An external datastore (e.g., PostgreSQL, MySQL, or etcd), or K3s's embedded etcd spanning the server nodes.
  • Multiple K3s server nodes.
  • A load balancer in front of the K3s server nodes.

Refer to the official K3s documentation for detailed instructions on setting up HA clusters, as it involves more advanced networking and storage considerations.
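As a rough sketch of the embedded-etcd variant (the commands follow the pattern documented by the K3s project; the IP and token are placeholders):

```shell
# First server: initialize the embedded etcd cluster.
curl -sfL https://get.k3s.io | sh -s - server --cluster-init

# Additional servers (use an odd total, e.g. 3): join via the first
# server's address and its node token.
curl -sfL https://get.k3s.io | sh -s - server \
  --server https://FIRST_SERVER_IP:6443 \
  --token YOUR_NODE_TOKEN
```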

Firewall Configuration

K3s requires certain ports to be open for communication between nodes and for clients to access the API.

  • Server Nodes:
   *   TCP port 6443 (Kubernetes API server)
   *   TCP ports 2379-2380 (etcd client and peer communication, only when using embedded etcd in HA mode)
   *   TCP port 10250 (Kubelet API)
   *   UDP port 8472 (Flannel VXLAN)
   *   TCP port 5432 (PostgreSQL, if using an external PostgreSQL datastore)
  • Agent Nodes:
   *   TCP port 10250 (Kubelet API)
   *   UDP port 8472 (Flannel VXLAN)

If you are using `ufw` on Ubuntu/Debian:

# On server nodes
sudo ufw allow 6443/tcp
sudo ufw allow 10250/tcp
sudo ufw allow 8472/udp
sudo ufw enable

# On agent nodes
sudo ufw allow 10250/tcp
sudo ufw allow 8472/udp
sudo ufw enable
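On CentOS/RHEL-family systems, `firewalld` is the usual frontend instead of `ufw`; the equivalent server-node rules look like this:

```shell
# Open the K3s server ports with firewalld, then apply the rules.
sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --permanent --add-port=8472/udp
sudo firewall-cmd --reload
```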

Troubleshooting

  • K3s service not starting:
   Check the K3s logs for errors:
    sudo journalctl -u k3s -f
    
   or for agent nodes:
    sudo journalctl -u k3s-agent -f
    
   Common issues include port conflicts, incorrect network configuration, or insufficient permissions.
  • Agent nodes not joining the cluster:
   *   Verify the `K3S_URL` and `K3S_TOKEN` are correct.
   *   Ensure the server node is reachable from the agent node (check firewalls and network connectivity).
   *   Check the K3s agent logs (`sudo journalctl -u k3s-agent -f`) for specific error messages.
  • `kubectl` commands failing:
   *   Ensure your `~/.kube/config` file is correctly set up and points to the correct server IP and port.
   *   Verify that the K3s server service is running.
   *   Check if the `KUBECONFIG` environment variable is set correctly if you're not using the default location.
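A quick way to distinguish a firewall or network problem from a K3s problem is to test whether the API port is reachable at all from the agent. `YOUR_SERVER_IP` is a placeholder; the check below uses bash's built-in `/dev/tcp`, so it needs no extra tools.

```shell
# Probe the Kubernetes API port from an agent node.
if timeout 3 bash -c 'cat < /dev/null > /dev/tcp/YOUR_SERVER_IP/6443' 2>/dev/null; then
  echo "API port 6443 is reachable"
else
  echo "cannot reach YOUR_SERVER_IP:6443 -- check firewall and routing"
fi
```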

Further Reading