InvokeAI Server Configuration
This article details the recommended and minimum server configurations for running the InvokeAI image generation platform. It is intended for system administrators and those familiar with server deployment. Before proceeding, ensure you have a basic understanding of Linux server administration and Python virtual environments.
Overview
InvokeAI is a resource-intensive application that relies heavily on a powerful GPU for efficient image generation. This document outlines the hardware and software requirements needed for a smooth, performant experience. Proper configuration is crucial both for single-user deployments and for serving multiple users. We'll cover hardware specifications, operating system considerations, and essential software dependencies.
Hardware Requirements
The following table outlines suggested hardware configurations. These are guidelines, and actual performance may vary based on model complexity and usage patterns.
Component | Recommended | High-End
---|---|---
CPU | 16-core Intel/AMD processor | 24+ core Intel/AMD processor
RAM | 64 GB DDR4 | 128 GB+ DDR4 ECC
GPU | NVIDIA GeForce RTX 3090 24 GB VRAM | NVIDIA RTX A6000 48 GB VRAM or NVIDIA H100
Storage | 1 TB NVMe SSD | 2 TB+ NVMe SSD RAID 0
Network | 10 Gbps Ethernet | 25 Gbps+ Ethernet
Note: VRAM (Video RAM) is *critical*. InvokeAI heavily utilizes GPU memory. Insufficient VRAM will result in errors or extremely slow performance. Consider using multiple GPUs for increased throughput. For production environments, a dedicated network switch is highly recommended.
Software Requirements
InvokeAI relies on several software components. This section details the required versions and configuration.
Operating System
- Linux: Ubuntu 20.04 or 22.04 LTS are the officially supported distributions. Other distributions *may* work, but are not officially supported. Ensure your server has a fully updated kernel. See Ubuntu server installation for details.
- Python: Python 3.9 or 3.10 are officially supported. Using a Python virtual environment is *strongly* recommended to isolate InvokeAI's dependencies; a setup sketch follows this list.
- CUDA Toolkit: The appropriate CUDA Toolkit version must be installed to support your NVIDIA GPU. Check the NVIDIA documentation for compatibility.
- cuDNN: cuDNN is a GPU-accelerated library for deep neural networks. It is required by InvokeAI. Download and install the version compatible with your CUDA Toolkit.
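As a concrete sketch of the virtual-environment recommendation above, the commands below assume Ubuntu 22.04 with Python 3.10; the environment path `~/invokeai-venv` is an example, not a required location.

```bash
# Install the venv module for Python 3.10 (Ubuntu 22.04)
sudo apt update && sudo apt install -y python3.10-venv

# Create an isolated environment for InvokeAI (the path is only an example)
python3.10 -m venv ~/invokeai-venv

# Activate the environment for the current shell session
source ~/invokeai-venv/bin/activate

# Upgrade pip inside the environment before installing anything else
pip install --upgrade pip
```

Activate this environment in every shell session (or service unit) that runs InvokeAI so its dependencies stay isolated from the system Python.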
Dependencies
The following table lists essential Python packages and their recommended versions. These are typically installed via `pip` within a virtual environment; a sample install command follows the table.
Package | Recommended Version
---|---
torch | 1.13.1 or later
torchvision | 0.14.1 or later
transformers | 4.28.1 or later
diffusers | 0.17.0 or later
accelerate | 0.18.0 or later
Pillow | 9.0.0 or later
xformers | 0.0.20 or later (optional, for faster inference)
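As an illustration only, installing the packages from the table inside the activated virtual environment might look like the following; treat the version floors as examples and defer to the pins in the official InvokeAI documentation.

```bash
# Install the core dependencies at or above the versions listed in the table
pip install \
  "torch>=1.13.1" \
  "torchvision>=0.14.1" \
  "transformers>=4.28.1" \
  "diffusers>=0.17.0" \
  "accelerate>=0.18.0" \
  "Pillow>=9.0.0"

# Optional: xformers for faster inference on supported NVIDIA GPUs
pip install "xformers>=0.0.20"
```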
Important: Always refer to the official InvokeAI documentation for the most up-to-date dependency list. Using outdated or incompatible packages can lead to instability or errors. See the Python package management article for more details.
Server Configuration Details
GPU Configuration
- Ensure the NVIDIA drivers are correctly installed and recognized by the system. Use the `nvidia-smi` command to verify.
- Configure CUDA environment variables correctly. The `CUDA_HOME` and `LD_LIBRARY_PATH` variables must point to the CUDA Toolkit installation directory.
- For multi-GPU setups, ensure the GPUs are properly configured and recognized by the system. The `torch.cuda.device_count()` function can be used to verify, as shown in the sketch after this list.
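The checks above can be combined into a short verification pass. This is a sketch: `/usr/local/cuda` is the typical default Toolkit location and may differ on your system, and the PyTorch check assumes the InvokeAI virtual environment is active.

```bash
# Confirm the NVIDIA driver is loaded and all GPUs are visible
nvidia-smi

# Point the CUDA environment variables at the Toolkit installation
# (/usr/local/cuda is the common default; adjust if installed elsewhere)
export CUDA_HOME=/usr/local/cuda
export LD_LIBRARY_PATH="$CUDA_HOME/lib64:$LD_LIBRARY_PATH"

# Verify that PyTorch sees every GPU (run inside the InvokeAI virtual environment)
python -c "import torch; print('GPUs visible to PyTorch:', torch.cuda.device_count())"
```

To make the environment variables persistent, add the two `export` lines to the shell profile or the service unit that launches InvokeAI.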
Storage Configuration
- Use an NVMe SSD for optimal performance. Avoid using traditional hard drives (HDDs) as they will significantly slow down image generation.
- Consider using a RAID configuration (RAID 0 for increased speed or RAID 1 for redundancy); a rough example follows this list.
- Ensure sufficient disk space is available for the InvokeAI installation, models, and generated images.
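As a rough example of the RAID suggestion above, a software RAID 1 mirror can be created with `mdadm`; the device names and mount point below are placeholders, and creating the array destroys any existing data on those drives.

```bash
# Build a RAID 1 mirror across two NVMe drives (placeholders; wipes existing data)
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1

# Format and mount the array for models and generated images (mount point is an example)
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /srv/invokeai
sudo mount /dev/md0 /srv/invokeai

# Check remaining free space after copying models
df -h /srv/invokeai
```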
Network Configuration
- Configure a static IP address for the server.
- Ensure the firewall allows access to the ports used by the InvokeAI web interface and API (see the example after this list).
- For multi-user deployments, consider using a reverse proxy (such as Nginx or Apache) to handle incoming requests and distribute the load across multiple InvokeAI instances. See Reverse proxy configuration for details.
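For example, with `ufw` on Ubuntu the firewall rules might look like the sketch below. Port 9090 is assumed here as InvokeAI's default web/API port; confirm the port your instance actually listens on and adjust accordingly.

```bash
# Allow remote administration and the InvokeAI web interface
sudo ufw allow OpenSSH
sudo ufw allow 9090/tcp   # assumed default InvokeAI port; change if configured differently

# Enable the firewall and review the resulting rules
sudo ufw enable
sudo ufw status verbose
```

When a reverse proxy terminates client connections, expose only the proxy's HTTP/HTTPS ports publicly and keep the InvokeAI port restricted to localhost or the internal network.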
Security Considerations
- Keep the operating system and all software packages up to date with the latest security patches.
- Configure a strong firewall to restrict access to the server.
- Use strong passwords for all user accounts.
- Consider using SSH key authentication instead of passwords for remote access, as sketched after this list.
- Regularly back up the server and its data.
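A minimal sketch of switching to SSH key authentication, assuming an Ubuntu server and a non-root administrative account (the user and host names are placeholders):

```bash
# On your workstation: generate a key pair and copy the public key to the server
ssh-keygen -t ed25519
ssh-copy-id admin@server.example.com   # placeholder user and host

# On the server: disable password logins once key-based access is confirmed
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo systemctl restart ssh
```

Verify that a key-based login works in a second session before disabling password authentication, so you are not locked out.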
Troubleshooting
If you encounter issues during installation or operation, consult the InvokeAI troubleshooting guide. Common problems include CUDA errors, out-of-memory errors, and dependency conflicts. Checking the server logs is often the first step in diagnosing problems.
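As a starting point, the commands below check GPU memory headroom and recent service logs; the systemd unit name `invokeai` is a placeholder for however you run the service, and the PyTorch check assumes the virtual environment is active.

```bash
# Check driver state and current VRAM usage (out-of-memory errors usually show up here)
nvidia-smi

# Follow recent logs if InvokeAI runs as a systemd service (unit name is a placeholder)
journalctl -u invokeai -n 100 --no-pager

# Confirm the Python environment still resolves CUDA correctly
python -c "import torch; print('CUDA available:', torch.cuda.is_available())"
```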
Intel-Based Server Configurations
Configuration | Specifications | CPU Benchmark
---|---|---
Core i7-6700K/7700 Server | 64 GB DDR4, 2 x 512 GB NVMe SSD | 8046
Core i7-8700 Server | 64 GB DDR4, 2 x 1 TB NVMe SSD | 13124
Core i9-9900K Server | 128 GB DDR4, 2 x 1 TB NVMe SSD | 49969
Core i9-13900 Server (64GB) | 64 GB RAM, 2 x 2 TB NVMe SSD |
Core i9-13900 Server (128GB) | 128 GB RAM, 2 x 2 TB NVMe SSD |
Core i5-13500 Server (64GB) | 64 GB RAM, 2 x 500 GB NVMe SSD |
Core i5-13500 Server (128GB) | 128 GB RAM, 2 x 500 GB NVMe SSD |
Core i5-13500 Workstation | 64 GB DDR5 RAM, 2 NVMe SSD, NVIDIA RTX 4000 |
AMD-Based Server Configurations
Configuration | Specifications | CPU Benchmark
---|---|---
Ryzen 5 3600 Server | 64 GB RAM, 2 x 480 GB NVMe | 17849
Ryzen 7 7700 Server | 64 GB DDR5 RAM, 2 x 1 TB NVMe | 35224
Ryzen 9 5950X Server | 128 GB RAM, 2 x 4 TB NVMe | 46045
Ryzen 9 7950X Server | 128 GB DDR5 ECC, 2 x 2 TB NVMe | 63561
EPYC 7502P Server (128GB/1TB) | 128 GB RAM, 1 TB NVMe | 48021
EPYC 7502P Server (128GB/2TB) | 128 GB RAM, 2 TB NVMe | 48021
EPYC 7502P Server (128GB/4TB) | 128 GB RAM, 2 x 2 TB NVMe | 48021
EPYC 7502P Server (256GB/1TB) | 256 GB RAM, 1 TB NVMe | 48021
EPYC 7502P Server (256GB/4TB) | 256 GB RAM, 2 x 2 TB NVMe | 48021
EPYC 9454P Server | 256 GB RAM, 2 x 2 TB NVMe |
*Note: All benchmark scores are approximate and may vary based on configuration.*