= Installing K3s Lightweight Kubernetes =

K3s is a highly available, certified Kubernetes distribution designed for IoT, edge computing, and resource-constrained environments. It ships as a single binary that is easy to install and manage, making it an excellent choice for developers and system administrators who want a lightweight Kubernetes solution. This guide walks you through installing K3s on a typical Linux server.
== Prerequisites ==

Before you begin, ensure you have the following:

- At least one Linux server (e.g., Ubuntu, CentOS, Debian). K3s can run on a single node or on multiple nodes for a high-availability cluster.
- SSH access to your server(s) with a user that has sudo privileges.
- Internet connectivity on your server(s) to download K3s and its dependencies.
- Basic understanding of Linux command-line operations.
- Understanding of networking concepts, especially firewall rules.
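The checks above can be sketched as a quick preflight script. This is a hypothetical helper, not part of K3s; the architecture list and messages are illustrative assumptions:

```shell
# Hypothetical preflight sketch: verify a node meets the basics listed above.
# (Illustrative only; see the K3s documentation for official requirements.)
arch=$(uname -m)
case "$arch" in
  x86_64|aarch64|armv7l) echo "architecture: $arch (supported)" ;;
  *) echo "architecture: $arch (check K3s support)" ;;
esac

# The installer is fetched with curl, so it must be present
if command -v curl >/dev/null 2>&1; then
  echo "curl: found"
else
  echo "curl: missing - install it first"
fi
```

Run this on each prospective node before installing; anything it flags is cheaper to fix now than mid-install.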
== Installing a K3s Server Node ==
K3s offers a very simple installation script that you can download and run directly on your server.

# SSH into your server:
ssh your_user@your_server_ip
# Download and run the K3s installation script. This command downloads the latest stable version of K3s and installs it as a systemd service:
curl -sfL https://get.k3s.io | sh -
# Verify the installation. After the script completes, K3s should be running; check its status:
sudo systemctl status k3s
You should see output indicating that the `k3s.service` is active and running.
# Check K3s version: To confirm the installed version:
sudo k3s --version
# Accessing the Kubernetes API: K3s automatically configures `kubectl` for you. The configuration file is typically located at `/etc/rancher/k3s/k3s.yaml`. To use `kubectl` without `sudo`, you can copy this file to your user's `.kube` directory:
mkdir -p $HOME/.kube
sudo cp /etc/rancher/k3s/k3s.yaml $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Now you can run `kubectl` commands:
kubectl get nodes
You should see your single server node listed.
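Once the node reports Ready, you can optionally run a short smoke test. These are standard `kubectl` commands; the pod name `hello` is just an example:

```shell
# Launch a throwaway pod to confirm the cluster can schedule and pull images
kubectl run hello --image=nginx --restart=Never
# Wait until it is Ready (up to two minutes), then clean up
kubectl wait --for=condition=Ready pod/hello --timeout=120s
kubectl delete pod hello
```

If the pod never becomes Ready, check the Troubleshooting section below before adding more nodes.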
== Installing K3s Agent Nodes (Joining a Cluster) ==
To create a multi-node cluster, you'll need to install K3s on your worker nodes and join them to the server node.

# On your K3s server node, get the node token. The token is required for agent nodes to securely join the cluster:
sudo cat /var/lib/rancher/k3s/server/node-token
Copy this token, as you'll need it for the agent installations.
# On each agent node, SSH in and run the K3s installer with the server URL and token. Replace `YOUR_SERVER_IP` with the IP address of your K3s server node and `YOUR_NODE_TOKEN` with the token you just retrieved:
curl -sfL https://get.k3s.io | K3S_URL=https://YOUR_SERVER_IP:6443 K3S_TOKEN=YOUR_NODE_TOKEN sh -
This command installs K3s in agent mode and configures it to connect to your server node.
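To avoid pasting the token inline on every agent, you can assemble the join command from variables first. A minimal sketch; the IP and token below are placeholders, not real values:

```shell
# Example values only - substitute your server's IP and the token read from
# /var/lib/rancher/k3s/server/node-token
SERVER_IP="192.0.2.10"
TOKEN="K10abc::server:example"

# Build the agent join command shown above
JOIN_CMD="curl -sfL https://get.k3s.io | K3S_URL=https://${SERVER_IP}:6443 K3S_TOKEN=${TOKEN} sh -"
echo "$JOIN_CMD"
```

Review the printed command, then run it on each agent node.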
# Verify agent node status. On the agent node, check the K3s service status:
sudo systemctl status k3s-agent
It should be active and running.
# On the K3s server node, verify the agent node has joined: Run `kubectl get nodes` again on your server node. You should now see both the server node and the new agent node listed.
kubectl get nodes
== High Availability (HA) Setup ==
For production environments, a single server node is a single point of failure. K3s supports High Availability configurations. This typically involves either three or more server nodes using K3s' embedded etcd, or multiple server nodes backed by an external datastore (such as MySQL, PostgreSQL, or etcd). Refer to the official K3s documentation for detailed instructions on setting up HA clusters, as it involves more advanced networking and storage considerations.
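As a rough sketch, an embedded-etcd HA cluster can be bootstrapped as follows. The `--cluster-init` and `--server` flags are documented K3s server options; the IP and token are placeholders:

```shell
# On the first server node: initialize a new cluster with embedded etcd
curl -sfL https://get.k3s.io | sh -s - server --cluster-init

# On the second and third server nodes: join the existing cluster
curl -sfL https://get.k3s.io | K3S_TOKEN=YOUR_NODE_TOKEN \
  sh -s - server --server https://FIRST_SERVER_IP:6443
```

Embedded etcd needs an odd number of server nodes (three or more) to maintain quorum, which is why a two-server setup is not an HA configuration.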
== Firewall Configuration ==
K3s requires certain ports to be open for communication between nodes and for clients to access the API: 6443/tcp for the Kubernetes API server, 10250/tcp for the kubelet, and 8472/udp for the Flannel VXLAN overlay. If you are using `ufw` on Ubuntu/Debian:

# On server nodes
sudo ufw allow 6443/tcp
sudo ufw allow 10250/tcp
sudo ufw allow 8472/udp
sudo ufw enable

# On agent nodes
sudo ufw allow 10250/tcp
sudo ufw allow 8472/udp
sudo ufw enable
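On CentOS/RHEL systems that use `firewalld` instead of `ufw`, the equivalent rules can be sketched as follows (same ports as above; agent nodes can skip 6443/tcp):

```shell
# Open the K3s ports permanently, then reload the firewall
for port in 6443/tcp 10250/tcp 8472/udp; do
  sudo firewall-cmd --permanent --add-port="$port"
done
sudo firewall-cmd --reload
```

Using `--permanent` ensures the rules survive a reboot; the `--reload` applies them to the running firewall immediately.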
== Troubleshooting ==

If K3s fails to start or nodes do not join, inspect the service logs:
sudo journalctl -u k3s -f
or for agent nodes:
sudo journalctl -u k3s-agent -f
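Beyond the logs, a few standard commands help narrow things down (`check-config` is K3s' built-in environment check):

```shell
kubectl get nodes -o wide     # node status, internal IPs, runtime versions
kubectl get pods -A           # system pods (coredns, traefik) should be Running
sudo k3s check-config         # verify kernel and cgroup configuration
```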
Common issues include port conflicts, incorrect network configuration, or insufficient permissions.

== Further Reading ==

- Official K3s documentation: https://docs.k3s.io
[[Category:Containerization]] [[Category:Kubernetes]] [[Category:Cloud Computing]]