<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://serverrental.store/index.php?action=history&amp;feed=atom&amp;title=Installing_K3s_Lightweight_Kubernetes</id>
	<title>Installing K3s Lightweight Kubernetes - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://serverrental.store/index.php?action=history&amp;feed=atom&amp;title=Installing_K3s_Lightweight_Kubernetes"/>
	<link rel="alternate" type="text/html" href="https://serverrental.store/index.php?title=Installing_K3s_Lightweight_Kubernetes&amp;action=history"/>
	<updated>2026-04-15T15:07:15Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.36.1</generator>
	<entry>
		<id>https://serverrental.store/index.php?title=Installing_K3s_Lightweight_Kubernetes&amp;diff=5742&amp;oldid=prev</id>
		<title>Admin: New server guide</title>
		<link rel="alternate" type="text/html" href="https://serverrental.store/index.php?title=Installing_K3s_Lightweight_Kubernetes&amp;diff=5742&amp;oldid=prev"/>
		<updated>2026-04-12T15:57:40Z</updated>

		<summary type="html">&lt;p&gt;New server guide&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;= Installing K3s Lightweight Kubernetes =&lt;br /&gt;
K3s is a highly available, certified Kubernetes distribution designed for [[Internet of Things|IoT]], edge computing, and resource-constrained environments. It's a single binary that's easy to install and manage, making it an excellent choice for developers and system administrators looking for a lightweight Kubernetes solution. This guide will walk you through the installation of K3s on a typical Linux server.&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
Before you begin, ensure you have the following:&lt;br /&gt;
&lt;br /&gt;
*   At least one Linux server (e.g., Ubuntu, CentOS, Debian). K3s can run on a single node or multiple nodes for a high-availability cluster.&lt;br /&gt;
*   SSH access to your server(s) with a user that has sudo privileges.&lt;br /&gt;
*   Internet connectivity on your server(s) to download K3s and its dependencies.&lt;br /&gt;
*   Basic understanding of Linux command-line operations.&lt;br /&gt;
*   Understanding of networking concepts, especially firewall rules.&lt;br /&gt;
&lt;br /&gt;
== Installing a K3s Server Node ==&lt;br /&gt;
K3s offers a very simple installation script. You can download and run it directly on your server.&lt;br /&gt;
&lt;br /&gt;
# '''SSH into your server:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ssh your_user@your_server_ip&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# '''Download and run the K3s installation script:'''&lt;br /&gt;
This command downloads the latest stable version of K3s and installs it as a systemd service.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
curl -sfL https://get.k3s.io | sh -&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# '''Verify the installation:'''&lt;br /&gt;
After the script completes, K3s should be running. You can check its status:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sudo systemctl status k3s&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You should see output indicating that the `k3s.service` is active and running.&lt;br /&gt;
&lt;br /&gt;
# '''Check K3s version:'''&lt;br /&gt;
To confirm the installed version:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sudo k3s --version&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# '''Accessing the Kubernetes API:'''&lt;br /&gt;
K3s automatically configures `kubectl` for you. The configuration file is typically located at `/etc/rancher/k3s/k3s.yaml`. To use `kubectl` without `sudo`, you can copy this file to your user's `.kube` directory:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mkdir -p $HOME/.kube&lt;br /&gt;
sudo cp /etc/rancher/k3s/k3s.yaml $HOME/.kube/config&lt;br /&gt;
sudo chown $(id -u):$(id -g) $HOME/.kube/config&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now you can run `kubectl` commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
kubectl get nodes&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You should see your single server node listed.&lt;br /&gt;
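&lt;br /&gt;
As a quick smoke test of the new cluster, you can deploy and then remove a throwaway workload (this sketch assumes the node can pull the public `nginx` image; the deployment name `hello` is arbitrary):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Create a test deployment and wait for it to become ready&lt;br /&gt;
kubectl create deployment hello --image=nginx&lt;br /&gt;
kubectl rollout status deployment/hello&lt;br /&gt;
&lt;br /&gt;
# Confirm the pod is running, then clean up&lt;br /&gt;
kubectl get pods -o wide&lt;br /&gt;
kubectl delete deployment hello&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;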
&lt;br /&gt;
== Installing K3s Agent Nodes (Joining a Cluster) ==&lt;br /&gt;
To create a multi-node cluster, you'll need to install K3s on your worker nodes and join them to the server node.&lt;br /&gt;
&lt;br /&gt;
# '''On your K3s server node, get the node token:'''&lt;br /&gt;
The token is required for agent nodes to securely join the cluster.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sudo cat /var/lib/rancher/k3s/server/node-token&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Copy this token, as you'll need it for the agent installations.&lt;br /&gt;
&lt;br /&gt;
# '''On each agent node, SSH in and run the K3s installer with the server IP and token:'''&lt;br /&gt;
Replace `YOUR_SERVER_IP` with the IP address of your K3s server node and `YOUR_NODE_TOKEN` with the token you just retrieved.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
curl -sfL https://get.k3s.io | K3S_URL=https://YOUR_SERVER_IP:6443 K3S_TOKEN=YOUR_NODE_TOKEN sh -&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This command installs K3s in agent mode and configures it to connect to your server node.&lt;br /&gt;
&lt;br /&gt;
# '''Verify agent node status:'''&lt;br /&gt;
On the agent node, you can check the K3s service status:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sudo systemctl status k3s-agent&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
It should be active and running.&lt;br /&gt;
&lt;br /&gt;
# '''On the K3s server node, verify the agent node has joined:'''&lt;br /&gt;
Run `kubectl get nodes` again on your server node. You should now see both the server node and the new agent node listed.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
kubectl get nodes&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== High Availability (HA) Setup ==&lt;br /&gt;
For production environments, a single server node is a single point of failure. K3s supports High Availability configurations. This typically involves:&lt;br /&gt;
&lt;br /&gt;
*   Embedded etcd (requires three or more server nodes) or an external datastore such as MySQL, PostgreSQL, or etcd.&lt;br /&gt;
*   Multiple K3s server nodes.&lt;br /&gt;
*   A load balancer in front of the K3s server nodes.&lt;br /&gt;
&lt;br /&gt;
Refer to the official K3s documentation for detailed instructions on setting up HA clusters, as it involves more advanced networking and storage considerations.&lt;br /&gt;
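&lt;br /&gt;
As a rough sketch of the embedded-etcd approach, the first server is started with `--cluster-init` and additional servers join it (replace `YOUR_SECRET_TOKEN` and `FIRST_SERVER_IP` with your own values, and verify the flags against the official docs for your K3s version):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# On the first server node: initialize the embedded etcd cluster&lt;br /&gt;
curl -sfL https://get.k3s.io | K3S_TOKEN=YOUR_SECRET_TOKEN sh -s - server --cluster-init&lt;br /&gt;
&lt;br /&gt;
# On each additional server node: join the existing cluster&lt;br /&gt;
curl -sfL https://get.k3s.io | K3S_TOKEN=YOUR_SECRET_TOKEN sh -s - server --server https://FIRST_SERVER_IP:6443&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;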
&lt;br /&gt;
== Firewall Configuration ==&lt;br /&gt;
K3s requires certain ports to be open for communication between nodes and for clients to access the API.&lt;br /&gt;
&lt;br /&gt;
*   '''Server Nodes:'''&lt;br /&gt;
    *   TCP port 6443 (Kubernetes API server)&lt;br /&gt;
    *   TCP ports 2379-2380 (etcd client and peer APIs, if using embedded etcd)&lt;br /&gt;
    *   TCP port 10250 (Kubelet API)&lt;br /&gt;
    *   UDP port 8472 (Flannel VXLAN)&lt;br /&gt;
    *   TCP port 5432 (PostgreSQL, if using an external PostgreSQL datastore)&lt;br /&gt;
&lt;br /&gt;
*   '''Agent Nodes:'''&lt;br /&gt;
    *   TCP port 10250 (Kubelet API)&lt;br /&gt;
    *   UDP port 8472 (Flannel VXLAN)&lt;br /&gt;
&lt;br /&gt;
If you are using `ufw` on Ubuntu/Debian:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# On server nodes&lt;br /&gt;
sudo ufw allow 6443/tcp&lt;br /&gt;
sudo ufw allow 10250/tcp&lt;br /&gt;
sudo ufw allow 8472/udp&lt;br /&gt;
sudo ufw enable&lt;br /&gt;
&lt;br /&gt;
# On agent nodes&lt;br /&gt;
sudo ufw allow 10250/tcp&lt;br /&gt;
sudo ufw allow 8472/udp&lt;br /&gt;
sudo ufw enable&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
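&lt;br /&gt;
If your server uses `firewalld` instead (common on CentOS/RHEL), the equivalent rules might look like the following sketch; adjust it to your zone configuration:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# On server nodes&lt;br /&gt;
sudo firewall-cmd --permanent --add-port=6443/tcp&lt;br /&gt;
sudo firewall-cmd --permanent --add-port=10250/tcp&lt;br /&gt;
sudo firewall-cmd --permanent --add-port=8472/udp&lt;br /&gt;
sudo firewall-cmd --reload&lt;br /&gt;
&lt;br /&gt;
# On agent nodes, omit the 6443/tcp rule&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;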
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
*   '''K3s service not starting:'''&lt;br /&gt;
    Check the K3s logs for errors:&lt;br /&gt;
    &amp;lt;pre&amp;gt;&lt;br /&gt;
    sudo journalctl -u k3s -f&lt;br /&gt;
    &amp;lt;/pre&amp;gt;&lt;br /&gt;
    or for agent nodes:&lt;br /&gt;
    &amp;lt;pre&amp;gt;&lt;br /&gt;
    sudo journalctl -u k3s-agent -f&lt;br /&gt;
    &amp;lt;/pre&amp;gt;&lt;br /&gt;
    Common issues include port conflicts, incorrect network configuration, or insufficient permissions.&lt;br /&gt;
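&lt;br /&gt;
    For example, to check whether another process is already bound to the API port (assumes the `ss` utility from iproute2 is installed):&lt;br /&gt;
    &amp;lt;pre&amp;gt;&lt;br /&gt;
    sudo ss -ltnp | grep 6443&lt;br /&gt;
    &amp;lt;/pre&amp;gt;&lt;br /&gt;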
&lt;br /&gt;
*   '''Agent nodes not joining the cluster:'''&lt;br /&gt;
    *   Verify the `K3S_URL` and `K3S_TOKEN` are correct.&lt;br /&gt;
    *   Ensure the server node is reachable from the agent node (check firewalls and network connectivity).&lt;br /&gt;
    *   Check the K3s agent logs (`sudo journalctl -u k3s-agent -f`) for specific error messages.&lt;br /&gt;
&lt;br /&gt;
*   '''`kubectl` commands failing:'''&lt;br /&gt;
    *   Ensure your `~/.kube/config` file is correctly set up and points to the correct server IP and port.&lt;br /&gt;
    *   Verify that the K3s server service is running.&lt;br /&gt;
    *   Check if the `KUBECONFIG` environment variable is set correctly if you're not using the default location.&lt;br /&gt;
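&lt;br /&gt;
    For example, to point `kubectl` explicitly at the copied K3s kubeconfig and confirm which API endpoint it targets:&lt;br /&gt;
    &amp;lt;pre&amp;gt;&lt;br /&gt;
    export KUBECONFIG=$HOME/.kube/config&lt;br /&gt;
    kubectl config view --minify | grep server&lt;br /&gt;
    kubectl get nodes&lt;br /&gt;
    &amp;lt;/pre&amp;gt;&lt;br /&gt;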
&lt;br /&gt;
== Further Reading ==&lt;br /&gt;
*   [[Kubernetes Basics]]&lt;br /&gt;
*   [[Docker Installation]]&lt;br /&gt;
*   [https://k3s.io/ Official K3s Documentation]&lt;br /&gt;
&lt;br /&gt;
[[Category:Containerization]]&lt;br /&gt;
[[Category:Kubernetes]]&lt;br /&gt;
[[Category:Cloud Computing]]&lt;br /&gt;
&lt;br /&gt;
{{Exchange Box}}&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
</feed>