
Installing NVIDIA and CUDA Drivers on Ubuntu 22.04 for LLM Models

The **GTX 1080** GPU (here paired with a Core i7-7700 and 64 GB of RAM) supports **CUDA 12.x** and can be used for running large language models (LLMs) on **Ubuntu 22.04**, provided the driver and CUDA toolkit are installed correctly.

1. Check GPU Compatibility

To check if your NVIDIA GPU is recognized by the system, run:

nvidia-smi

If it fails with an error such as "NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver", the driver is either not installed correctly or not loaded.

2. Verify CUDA Installation

To confirm the CUDA toolkit version:

nvcc --version

This should display the installed toolkit version (e.g., `CUDA 12.1`). If the command is not found, the toolkit is either not installed or not on your `PATH` (see section 6). Note that the CUDA version reported by `nvidia-smi` is the highest version the driver supports, not necessarily the installed toolkit version.

3. Reinstallation Steps for NVIDIA Drivers and CUDA

Step 1: Clean Up Old Installations

Remove previous versions of NVIDIA drivers and CUDA:

sudo apt-get --purge remove "*nvidia*"
sudo apt-get --purge remove "cuda*"
sudo apt-get autoremove
sudo apt-get autoclean

Step 2: Add NVIDIA Driver Repository

Ensure the latest NVIDIA driver repository is added:

sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt-get update

Step 3: Install the Recommended NVIDIA Driver

To list the drivers available for your GPU:

ubuntu-drivers devices

For the **GTX 1080**, the recommended driver is **535**:

sudo apt install nvidia-driver-535

Step 4: Install CUDA 12.x Toolkit

Add NVIDIA's CUDA repository using the `cuda-keyring` package (the older `apt-key` method is deprecated on Ubuntu 22.04), then install the toolkit. Installing `cuda-toolkit-12-2` instead of the `cuda` metapackage avoids pulling in a second driver alongside the one installed in Step 3:

wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt-get update
sudo apt-get install cuda-toolkit-12-2

Reboot the system after installation:

sudo reboot

4. Post-Installation Verification

After rebooting, verify that the drivers and CUDA toolkit are installed:

nvidia-smi
nvcc --version

5. Testing in Python

To ensure that CUDA is recognized by PyTorch:

import torch
print(torch.cuda.is_available())  # Should return True
print(torch.cuda.get_device_name(0))  # Should display "NVIDIA GeForce GTX 1080"
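
As an additional sanity check, the following sketch runs a small matrix multiplication on the GPU; any remaining driver or toolkit problem usually surfaces here as a runtime error:

import torch

# Allocate a tensor directly on the GPU and multiply it by itself.
x = torch.randn(1024, 1024, device="cuda")
y = x @ x
torch.cuda.synchronize()  # wait for the GPU kernel to finish
print(y.device)  # should print "cuda:0"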

6. Environment Variable Configuration (If Needed)

If the CUDA toolkit is not being found (for example, `nvcc` is not on your `PATH`), try the following (a quick verification sketch follows the list):

  • Add the following lines to `~/.bashrc`:
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
  • Apply changes:
source ~/.bashrc
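
To verify that `LD_LIBRARY_PATH` now resolves the CUDA runtime, you can try loading it from Python (a sketch that assumes a CUDA 12.x toolkit, whose runtime library soname is `libcudart.so.12`):

import ctypes

# Attempt to load the CUDA runtime shared library by its soname.
# Success means the dynamic linker can locate the toolkit libraries.
try:
    ctypes.CDLL("libcudart.so.12")
    print("CUDA runtime library found")
except OSError as exc:
    print(f"CUDA runtime library not found: {exc}")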

7. VPS-Specific Considerations

  • Confirm that the VPS provider allows full GPU passthrough.
  • Some providers restrict access to low-level GPU features, which prevents CUDA from functioning; a quick way to check whether the GPU driver is visible from inside the VPS is sketched below.
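
The following sketch only reads the standard `/proc/driver/nvidia/version` file that the NVIDIA kernel module exposes when it is loaded, so it works without any extra packages:

from pathlib import Path

# This file exists only when the NVIDIA kernel driver is loaded,
# i.e. when the VPS actually exposes the GPU to the guest.
version_file = Path("/proc/driver/nvidia/version")
if version_file.exists():
    print(version_file.read_text())
else:
    print("NVIDIA kernel driver not visible -- the VPS may not pass the GPU through.")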

8. Useful Commands for Debugging

  • **Monitor GPU usage:**
watch -n 1 nvidia-smi
  • **Reinstall NVIDIA driver:**
sudo apt install --reinstall nvidia-driver-535

By following these steps, your **NVIDIA GTX 1080** GPU on **Ubuntu 22.04** should be ready to run LLMs; keep in mind that the card's 8 GB of VRAM limits it to smaller or quantized models.
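
As a final end-to-end test, the sketch below loads a small model with the Hugging Face `transformers` library (assumed to be installed via `pip install transformers`); GPT-2 is used only as a placeholder that fits easily in the GTX 1080's 8 GB of VRAM:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small model in half precision and move it to the GPU.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2", torch_dtype=torch.float16).to("cuda")

# Generate a short continuation to confirm inference runs on the GPU.
inputs = tokenizer("Ubuntu 22.04 with CUDA 12 is", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))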