Memory Management in Linux
This guide provides an overview of key memory management concepts in Linux, including swap space, the Out-Of-Memory (OOM) killer, huge pages, and memory overcommit. Understanding these mechanisms is crucial for optimizing server performance and stability.
Prerequisites
- A Linux server (e.g., Ubuntu, CentOS, Debian)
- Root or sudo privileges
- Basic understanding of the Linux command line
Understanding RAM and Swap
Random Access Memory (RAM) is volatile, high-speed memory used by the CPU for active processes. When the system runs out of physical RAM, it can use a portion of the hard drive or SSD as virtual memory, known as swap space. This allows the system to continue operating but at a significantly slower pace.
Checking Swap Usage
You can check your current swap usage with the following commands:
sudo swapon --show
This will list all active swap devices and their sizes.
free -h
The `free -h` command provides a human-readable overview of total, used, and free RAM and swap. Look at the 'Swap' row for usage statistics.
Managing Swap Space
If you need to add swap space, you can create a swap file.
1. Create a swap file: Replace `4G` with your desired swap size (e.g., `1G`, `8G`). Note that on some filesystems `fallocate` can produce a file with holes that `swapon` later rejects; if that happens, create the file with `sudo dd if=/dev/zero of=/swapfile bs=1M count=4096` instead.
sudo fallocate -l 4G /swapfile
2. Set correct permissions: Only the root user should be able to read and write to the swap file.
sudo chmod 600 /swapfile
3. Format the file as swap:
sudo mkswap /swapfile
4. Enable the swap file:
sudo swapon /swapfile
5. Make it permanent: Add an entry to `/etc/fstab` so the swap file is activated on boot.
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
To disable swap:
sudo swapoff /swapfile
And then remove the line from `/etc/fstab`.
The Out-Of-Memory (OOM) Killer
The OOM killer is a Linux kernel mechanism that activates when the system is critically low on memory. Its purpose is to prevent a complete system crash by terminating one or more processes to free up memory. The kernel picks its victim using a per-process badness score (`oom_score`), which is driven primarily by how much of the system's memory the process is using, adjusted by the user-settable `oom_score_adj` value.
Understanding OOM Scores
OOM scores are tracked per process rather than in a single system-wide file.
To see the OOM score for a specific process, find its PID first:
pgrep <process_name>
Then check its score:
cat /proc/<PID>/oom_score
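To rank every process by its current score at once, a short loop over `/proc` works (a sketch; the exact output varies by system):

```shell
# List the 10 processes with the highest OOM scores (most likely victims first).
for pid in /proc/[0-9]*; do
  score=$(cat "$pid/oom_score" 2>/dev/null) || continue  # process may have exited
  comm=$(cat "$pid/comm" 2>/dev/null)
  printf '%s\t%s\t%s\n' "$score" "${pid##*/}" "$comm"
done | sort -rn | head -10
```

Columns are score, PID, and command name; the top entries are the processes the OOM killer would target first.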
Configuring OOM Killer Behavior
You can adjust a process's `oom_score_adj` (range -1000 to 1000) to influence whether it's a target for the OOM killer. Positive values make it more likely to be killed; negative values make it less likely.
To make a process less likely to be killed (e.g., a critical database):
echo -500 | sudo tee /proc/<PID>/oom_score_adj
To make a process more likely to be killed:
echo 500 | sudo tee /proc/<PID>/oom_score_adj
To disable OOM killer for a specific process (use with extreme caution!):
echo -1000 | sudo tee /proc/<PID>/oom_score_adj
For persistent changes across reboots, you would typically use systemd service files or other init system configurations.
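For a service managed by systemd, the adjustment can be made persistent with the `OOMScoreAdjust=` directive in a drop-in file (the unit name below is a placeholder for your service):

```ini
# /etc/systemd/system/mydb.service.d/oom.conf  (hypothetical unit name)
[Service]
OOMScoreAdjust=-500
```

Run `sudo systemctl daemon-reload` and restart the service for the setting to take effect.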
HugePages
HugePages allow the Linux kernel to use much larger memory pages (typically 2 MB or 1 GB) than the standard 4 KB pages. This can significantly improve performance for applications that work with large amounts of memory, such as databases and virtual machines: fewer, larger pages mean fewer page table entries for the CPU to manage, which reduces TLB (Translation Lookaside Buffer) misses and speeds up memory access.
Checking HugePages Configuration
You can check the current status of HugePages:
grep HugePages_ /proc/meminfo
This will show you the total number of HugePages available and the number currently free.
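The page count alone doesn't tell you how much memory is reserved; this small sketch combines it with the page size from `/proc/meminfo` (it prints 0 MB when no HugePages are configured):

```shell
# Report how much memory is set aside for HugePages.
pages=$(awk '/^HugePages_Total/ {print $2}' /proc/meminfo)
size_kb=$(awk '/^Hugepagesize/ {print $2}' /proc/meminfo)
echo "HugePages reserved: $(( ${pages:-0} * ${size_kb:-0} / 1024 )) MB"
```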
Configuring HugePages
The configuration of HugePages is typically done via kernel boot parameters or by writing to `/proc/sys/vm/`. The exact method can vary between distributions.
For example, to reserve 1024 pages of 2 MB each (2 GB in total), add the following line to `/etc/sysctl.conf` or a file in `/etc/sysctl.d/`:
vm.nr_hugepages = 1024
Then apply the changes:
sudo sysctl -p
Important: You need to ensure that the reserved HugePages are actually contiguous physical memory. If the system is heavily fragmented, it might not be able to allocate the requested HugePages. It's often best to configure this at boot time.
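For reservations that must be contiguous (especially 1 GB pages), the allocation can be requested on the kernel command line so it happens before memory fragments. A sketch assuming a GRUB-based system; the sizes are examples:

```shell
# /etc/default/grub -- append to the existing kernel command line:
GRUB_CMDLINE_LINUX_DEFAULT="... default_hugepagesz=2M hugepagesz=2M hugepages=1024"
```

Then regenerate the GRUB configuration (`sudo update-grub` on Debian/Ubuntu, or `sudo grub2-mkconfig -o /boot/grub2/grub.cfg` on RHEL-family systems) and reboot.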
Memory Overcommit
Memory overcommit is a memory management technique where the kernel allows processes to reserve more memory than is physically available. When a process requests memory using `mmap()` or `brk()`, the kernel grants virtual address space without backing it with physical pages; a page is only backed by real memory the first time it is touched. Because most programs allocate more memory than they actually use, overcommit lets the system run larger workloads than strict accounting would allow.
Understanding Overcommit Behavior
The `vm.overcommit_memory` kernel parameter controls this behavior:
- `0`: Default behavior. The kernel performs a heuristic check to estimate if the memory request is likely to be granted.
- `1`: Always overcommit. The kernel always grants memory requests, regardless of available memory. This can lead to the OOM killer being invoked more frequently if processes actually try to use the overcommitted memory.
- `2`: Don't overcommit. The kernel denies requests that would push total commitments beyond the commit limit: swap plus a configurable percentage of physical RAM (see `vm.overcommit_ratio` below).
You can check the current setting:
cat /proc/sys/vm/overcommit_memory
Configuring Overcommit Behavior
To change the overcommit setting, you can use `sysctl`:
To set always overcommit (use with caution):
sudo sysctl vm.overcommit_memory=1
To set not to overcommit:
sudo sysctl vm.overcommit_memory=2
To make these changes permanent, add them to `/etc/sysctl.conf` or a file in `/etc/sysctl.d/`:
echo 'vm.overcommit_memory = 2' | sudo tee -a /etc/sysctl.conf
Then apply the changes:
sudo sysctl -p
The `vm.overcommit_ratio` parameter (used when `vm.overcommit_memory` is `2`) sets the percentage of physical RAM counted toward the commit limit: CommitLimit = swap + RAM * overcommit_ratio / 100. The default is 50.
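The kernel's commit accounting is visible in `/proc/meminfo`: `CommitLimit` is the ceiling (only enforced in mode `2`), and `Committed_AS` is the amount of address space currently promised to processes:

```shell
# Show the commit limit and the address space currently committed.
grep -E '^(CommitLimit|Committed_AS)' /proc/meminfo
```

If `Committed_AS` regularly approaches `CommitLimit` under mode `2`, allocations will start failing and you may need more swap or a higher `vm.overcommit_ratio`.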
Troubleshooting
- System is slow or unresponsive: Check `free -h` for high swap usage. Consider adding more swap or optimizing memory-hungry applications.
- Processes are being killed unexpectedly: This is likely the OOM killer. Check `dmesg | grep -i oom` for messages. You may need to increase RAM, add swap, or tune `oom_score_adj` for critical processes.
- Applications fail to start with "Cannot allocate memory" errors: This could be due to strict overcommit settings (`vm.overcommit_memory=2`) or actual memory exhaustion. Review your overcommit settings and application memory requirements.
- HugePages not being allocated: Ensure you have enough contiguous free memory. Reconfiguring HugePages at boot time is often more reliable.