AI Best Practices: Server Configuration

This article outlines best practices for server configuration when deploying and running Artificial Intelligence (AI) workloads on our MediaWiki infrastructure. These guidelines are designed to maximize performance, stability, and scalability, and to give newcomers a clear overview of the key considerations.

1. Hardware Considerations

AI workloads, particularly those involving Machine Learning (ML), are computationally intensive, so choosing the right hardware is paramount: a solid hardware foundation is crucial for a successful deployment.

The following table lists minimum and recommended specifications. The minimums are exactly that; performance improves with additional resources. Always consult the system administrators before making hardware changes.

Component | Minimum Specification                    | Recommended Specification
CPU       | Intel Xeon Silver 4210 or AMD EPYC 7282  | Intel Xeon Gold 6248R or AMD EPYC 7763
RAM       | 64 GB DDR4-2666                          | 256 GB DDR4-3200
Storage   | 1 TB NVMe SSD                            | 4 TB NVMe SSD (RAID 1 recommended)
GPU       | NVIDIA Tesla T4 (16 GB)                  | NVIDIA A100 (80 GB) or equivalent AMD Instinct MI250X
Network   | 10 Gbps Ethernet                         | 40 Gbps InfiniBand or 25 Gbps Ethernet

Consider the type of AI workload. Deep Learning (DL) benefits massively from GPU acceleration, while Natural Language Processing (NLP) workloads may be more CPU-bound but still benefit from fast storage and ample RAM.
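The minimums in the table above can be checked programmatically before scheduling a workload. The following is a minimal sketch, not an official tool; the function name, thresholds, and units are assumptions chosen for illustration, and the caller is expected to supply the host's measured values.

```python
# Minimum thresholds taken from the hardware table above.
# (Hypothetical helper script; names and units are illustrative.)
MINIMUMS = {
    "ram_gib": 64,        # 64 GB DDR4 minimum
    "storage_gib": 1000,  # 1 TB NVMe SSD minimum
    "nic_gbps": 10,       # 10 Gbps Ethernet minimum
}

def check_host(ram_gib, storage_gib, nic_gbps, has_gpu):
    """Return a list of shortfalls; an empty list means the host qualifies."""
    problems = []
    if ram_gib < MINIMUMS["ram_gib"]:
        problems.append(f"RAM {ram_gib} GiB is below the {MINIMUMS['ram_gib']} GiB minimum")
    if storage_gib < MINIMUMS["storage_gib"]:
        problems.append(f"storage {storage_gib} GiB is below the {MINIMUMS['storage_gib']} GiB minimum")
    if nic_gbps < MINIMUMS["nic_gbps"]:
        problems.append(f"network {nic_gbps} Gbps is below the {MINIMUMS['nic_gbps']} Gbps minimum")
    if not has_gpu:
        problems.append("no GPU present; the minimum is an NVIDIA Tesla T4 (16 GB)")
    return problems

# A host matching the recommended column passes cleanly:
print(check_host(ram_gib=256, storage_gib=4000, nic_gbps=25, has_gpu=True))  # → []
```

Returning a list of human-readable shortfalls, rather than a bare boolean, makes it easy to surface every deficiency at once in a provisioning report.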

2. Operating System & Software Stack

We currently standardize on Ubuntu Server 22.04 LTS for AI workloads. This provides a stable base with excellent package availability. Other distributions may be considered with prior approval from the system administrators, per policy.
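Compliance with the Ubuntu 22.04 baseline can be verified by parsing `/etc/os-release`. The sketch below is a hypothetical check, not an official tool; it accepts the file's contents as a string so it can be exercised without touching the host.

```python
# Approved baseline from this guide: Ubuntu Server 22.04 LTS.
# (Hypothetical helper; function name is an assumption.)
APPROVED = {"id": "ubuntu", "version_id": "22.04"}

def is_approved_os(os_release_text):
    """Parse KEY=value pairs from /etc/os-release-style text and compare
    the ID and VERSION_ID fields against the approved baseline."""
    fields = {}
    for line in os_release_text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            fields[key.strip().lower()] = value.strip().strip('"')
    return (fields.get("id") == APPROVED["id"]
            and fields.get("version_id") == APPROVED["version_id"])

sample = 'NAME="Ubuntu"\nID=ubuntu\nVERSION_ID="22.04"\n'
print(is_approved_os(sample))  # → True
```

In practice the string would come from `open("/etc/os-release").read()`; keeping the parser pure makes it trivial to unit-test.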

The following software is essential:
