= Running Stable Diffusion on a GPU Server =
This guide provides a comprehensive walkthrough for setting up and running Stable Diffusion on a Linux-based GPU server. We will cover installation, VRAM requirements, and optimization techniques to ensure a smooth and efficient experience.
== Prerequisites ==
Before you begin, ensure you have the following:
- A Linux Server with a Compatible GPU: NVIDIA GPUs are strongly recommended because most Stable Diffusion tooling depends on CUDA. Ensure your GPU has sufficient VRAM (see the VRAM Requirements section). You can find powerful GPU servers at Immers Cloud, with options starting from $0.23/hr for inference.
- SSH Access: You'll need to connect to your server via SSH.
- Basic Linux Command-Line Proficiency: Familiarity with commands like `cd`, `ls`, `sudo`, `apt`, and `git`.
- NVIDIA Drivers Installed: Ensure the correct NVIDIA drivers are installed and functioning. You can check this by running:
nvidia-smi
If this command doesn't show your GPU information, you'll need to install the drivers first. Refer to your Linux distribution's documentation or NVIDIA's official website for installation instructions.
== Understanding VRAM Requirements ==
Stable Diffusion's VRAM (Video Random Access Memory) requirements depend largely on the model size, the resolution of generated images, and the batch size. As a rough guide, SD v1.5 can generate 512x512 images on about 4 GB of VRAM with memory optimizations enabled, while SDXL generally needs 8 GB or more. For demanding tasks, consider renting a dedicated GPU server from providers like Immers Cloud, which offers a range of GPUs from consumer-grade cards to enterprise-level H200s.
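One way to turn these rough guidelines into a concrete choice is a small helper that maps available VRAM to the launch flags used in the configuration examples later in this guide. This is a sketch, not part of the Web UI itself; the thresholds are the ones suggested in this guide and are a starting point, not hard rules.

```python
# Sketch: pick COMMANDLINE_ARGS based on available VRAM, using the
# rough thresholds from this guide. Adjust for your own workload.

def recommend_flags(vram_gb: float) -> str:
    """Return suggested AUTOMATIC1111 launch flags for a given amount of VRAM."""
    if vram_gb >= 10:
        return "--xformers --listen"
    elif vram_gb >= 8:
        return "--xformers --medvram --listen"
    else:
        return "--xformers --lowvram --listen"

if __name__ == "__main__":
    # You could feed this the value reported by:
    #   nvidia-smi --query-gpu=memory.total --format=csv,noheader
    print(recommend_flags(8))  # --xformers --medvram --listen
```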
== Installation Steps ==
This guide will focus on installing Stable Diffusion via AUTOMATIC1111's Stable Diffusion Web UI, a popular and feature-rich interface.
=== Step 1: Install Python and Git ===
Ensure you have Python 3.10.6 (the version recommended by the Web UI project) and Git installed. If not, install them using your distribution's package manager. For Debian/Ubuntu-based systems:
sudo apt update
sudo apt install python3 python3-venv git -y
You can verify the installed version with `python3 --version`.
=== Step 2: Clone the Stable Diffusion Web UI Repository ===
Create a directory for your Stable Diffusion installation and clone the repository:
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui
=== Step 3: Download Stable Diffusion Models ===
You need to download the Stable Diffusion model checkpoints (files ending in `.ckpt` or `.safetensors`). The most common ones are SD v1.5 and SDXL. Place the downloaded model files into the `stable-diffusion-webui/models/Stable-diffusion` directory. If the directory doesn't exist, create it:
mkdir -p models/Stable-diffusion
You can typically find download links for these models on Hugging Face (e.g., search for "runwayml/stable-diffusion-v1-5" or "stabilityai/stable-diffusion-xl-base-1.0"). After downloading, move the `.ckpt` or `.safetensors` files into the `models/Stable-diffusion` folder.
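On a headless server it is usually easiest to download the weights directly with `wget`. The file name and URL path below are assumptions based on the Hugging Face repository naming convention; confirm them on the model's Hugging Face page before downloading (some models require logging in or an access token).

```shell
# Sketch: download SD v1.5 weights directly on the server.
# Verify the exact file name and URL on the model's Hugging Face page.
cd stable-diffusion-webui
mkdir -p models/Stable-diffusion
wget -O models/Stable-diffusion/v1-5-pruned-emaonly.safetensors \
  "https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors"
```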
=== Step 4: Configure the Web UI Script ===
The `webui-user.sh` script launches the Stable Diffusion Web UI. You can modify it to set environment variables or command-line arguments. Edit the file:
nano webui-user.sh
Find the line that starts with `export COMMANDLINE_ARGS=` and uncomment it. You can add arguments here to control performance and features. Some useful arguments include:
- `--xformers`: enables the xFormers library for faster, more memory-efficient attention.
- `--medvram`: reduces VRAM usage at some cost in speed, useful for GPUs with around 8GB.
- `--lowvram`: more aggressive VRAM savings for smaller GPUs, with a larger speed penalty.
- `--listen`: binds the server to 0.0.0.0 so it can be reached from other machines.
- `--port <number>`: changes the port from the default 7860.
A good starting configuration for a server with 10GB+ VRAM would be:
export COMMANDLINE_ARGS="--xformers --listen"
For servers with less VRAM (e.g., 8GB), you might try:
export COMMANDLINE_ARGS="--xformers --medvram --listen"
Save and exit the editor (Ctrl+X, Y, Enter in nano).
=== Step 5: Launch Stable Diffusion ===
Now, run the launch script:
bash webui-user.sh
The first time you run this, it will download and install all necessary Python dependencies. This can take a considerable amount of time. Subsequent launches will be much faster.
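Because the first launch can take a long time, it is worth running the script so that it survives an SSH disconnect. A minimal sketch using `nohup` (alternatives such as `tmux` or `screen` work equally well):

```shell
# Sketch: keep the Web UI running after you disconnect from SSH.
# Launch in the background and send all output to a log file:
nohup bash webui-user.sh > webui.log 2>&1 &
# Follow the install/launch progress:
tail -f webui.log
```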
Once the dependencies are installed and the server is ready, you will see a message indicating that the web UI is running and providing a URL, typically:
Running on local URL: http://127.0.0.1:7860
If you used the `--listen` argument, you can access it from your local machine's browser by navigating to `http://<your-server-ip>:7860`. Make sure port 7860 is open in your server's firewall.
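If you'd rather not expose port 7860 to the internet, you can omit `--listen` and forward the port over SSH instead. The user and hostname below are placeholders for your own server details:

```shell
# Sketch: access the UI without opening port 7860 publicly.
# Run this on your LOCAL machine; replace user@your-server accordingly.
ssh -L 7860:127.0.0.1:7860 user@your-server
# While the tunnel is open, browse to http://127.0.0.1:7860 locally.
```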
== Optimization Tips ==
- Use `--xformers` for faster, more memory-efficient attention.
- On GPUs with limited VRAM, `--medvram` (or `--lowvram` on very small cards) trades speed for lower memory use.
- Generate at lower resolutions and upscale afterwards to reduce VRAM pressure.
- Reduce the batch size if generations are slow or memory-constrained.
== Troubleshooting ==
- CUDA out-of-memory errors: lower the resolution or batch size, or add `--medvram`/`--lowvram` to `COMMANDLINE_ARGS`.
- `nvidia-smi` not found or shows no GPU: reinstall the NVIDIA drivers (see Prerequisites).
- Cannot reach the web UI remotely: confirm you launched with `--listen` and that port 7860 is open in your firewall.
Category:AI and GPU
Category:Server Administration
Category:Stable Diffusion