AI in Stoke-on-Trent: Server Configuration
This article details the server configuration supporting Artificial Intelligence (AI) initiatives within Stoke-on-Trent. It is intended as a technical guide for new system administrators and developers contributing to the project. This deployment focuses on providing robust computational resources for model training, inference, and data processing. We will cover hardware specifications, software stack, network configuration, and security considerations. Understanding these components is crucial for maintaining a stable and efficient AI infrastructure. See also our Data Storage Policy and Network Security Guide.
Hardware Overview
The core of our AI infrastructure consists of a cluster of high-performance servers housed within the secure data centre at [Location Redacted]. These servers are designed for parallel processing, crucial for the computationally intensive tasks associated with AI. The following table outlines the specifications for each server node. Further details on Server Room Access can be found on the internal wiki.
Server Component | Specification
---|---
CPU | Dual Intel Xeon Gold 6338 (32 cores / 64 threads per CPU)
RAM | 512 GB DDR4 ECC Registered, 3200 MHz
GPU | 4 x NVIDIA A100 80 GB PCIe 4.0
Storage (OS) | 1 TB NVMe SSD
Storage (Data) | 16 TB NVMe SSD (RAID 0)
Network Interface | Dual 100GbE QSFP28
Power Supply | 2 x 2000 W redundant power supplies
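For capacity planning it can help to tally the per-node figures above into cluster-wide totals. The sketch below is illustrative only: the per-node values come from the table, but the node count is an assumed placeholder, not a figure from this article.

```python
# Per-node specs taken from the hardware table above.
# NODE_COUNT is a placeholder assumption, not a figure from this article.
NODE_COUNT = 4  # hypothetical cluster size

node = {
    "cpu_cores": 2 * 32,    # dual Xeon Gold 6338, 32 cores each
    "cpu_threads": 2 * 64,  # 64 threads per CPU
    "ram_gb": 512,
    "gpus": 4,
    "gpu_mem_gb": 4 * 80,   # 4 x A100 80 GB
    "data_storage_tb": 16,
}

# Scale every per-node figure by the (assumed) node count.
totals = {key: value * NODE_COUNT for key, value in node.items()}
print(f"cluster GPU memory: {totals['gpu_mem_gb']} GB")
print(f"cluster CPU cores:  {totals['cpu_cores']}")
```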
We also utilize a dedicated storage server for large datasets. Its specifications are detailed below. This server is critical for supporting the Data Pipeline.
Component | Specification
---|---
CPU | AMD EPYC 7763 (64 cores)
RAM | 1 TB DDR4 ECC Registered
Storage | 512 TB NVMe SSD (RAID 6)
Network Interface | 4 x 40GbE QSFP+
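Note that the table does not state whether the 512 TB figure is raw or usable capacity. In RAID 6, two drives' worth of space is consumed by parity, so usable capacity is (N − 2) × drive size. The helper below illustrates the arithmetic; the 16 × 32 TB drive layout is a hypothetical example, not the array's actual configuration.

```python
def raid6_usable(drive_count: int, drive_size_tb: float) -> float:
    """Usable capacity of a RAID 6 array in TB.

    RAID 6 dedicates two drives' worth of space to parity,
    so usable capacity is (N - 2) * drive size.
    """
    if drive_count < 4:
        raise ValueError("RAID 6 requires at least 4 drives")
    return (drive_count - 2) * drive_size_tb

# Hypothetical layout: 16 drives of 32 TB each (512 TB raw).
print(raid6_usable(16, 32), "TB usable")
```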
Finally, a smaller cluster of edge servers is deployed at various locations throughout Stoke-on-Trent for real-time inference. These servers have reduced specifications but are optimized for low latency. Refer to the Edge Deployment Guide.
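One minimal way to quantify "low latency" when validating an edge node is to sample request timings and report percentiles. The sketch below uses a stand-in workload; the function name and request count are illustrative, not part of the deployed tooling, and a real check would time calls to the deployed model endpoint.

```python
import statistics
import time

def measure_latency_ms(infer, n_requests: int = 100) -> dict:
    """Time n_requests calls to `infer` and report latency percentiles in ms."""
    samples = []
    for _ in range(n_requests):
        start = time.perf_counter()
        infer()
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    return {
        "p50_ms": statistics.median(samples),
        "p95_ms": samples[int(0.95 * len(samples)) - 1],
        "max_ms": samples[-1],
    }

# Stand-in workload; substitute a call to the real inference endpoint.
stats = measure_latency_ms(lambda: sum(range(10_000)))
print(stats)
```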
Software Stack
The servers run Ubuntu Server 22.04 LTS. The core software stack consists of the following components:
- **CUDA Toolkit:** 11.8 – provides the necessary libraries and tools for GPU acceleration. See the CUDA Installation Guide.
- **cuDNN:** 8.6 – a library for deep neural networks.
- **TensorFlow:** 2.12 – a popular open-source machine learning framework.
- **PyTorch:** 2.0 – another widely used machine learning framework.
- **Docker:** 20.10 – for containerization and application deployment. Read the Docker Best Practices.
- **Kubernetes:** 1.26 – for container orchestration.
- **NFS:** For shared storage access. See NFS Configuration.
- **Prometheus & Grafana:** For monitoring and alerting.
The following table summarizes the software versions installed on the primary server nodes.
Software | Version
---|---
Ubuntu Server | 22.04 LTS
CUDA Toolkit | 11.8
cuDNN | 8.6
TensorFlow | 2.12
PyTorch | 2.0
Docker | 20.10
Kubernetes | 1.26
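As a quick post-install sanity check, the stack above can be probed from Python to confirm that PyTorch actually sees the A100s. This is a generic sketch, not a script from this deployment; it degrades gracefully on hosts without the CUDA stack.

```python
def gpu_status() -> str:
    """Report whether PyTorch can see CUDA devices on this host."""
    try:
        import torch
    except ImportError:
        return "pytorch-missing"
    if not torch.cuda.is_available():
        return "no-cuda-device"
    # List every visible device, e.g. "NVIDIA A100 80GB PCIe".
    names = [torch.cuda.get_device_name(i)
             for i in range(torch.cuda.device_count())]
    return "ok: " + ", ".join(names)

print(gpu_status())
```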
Network Configuration
The server cluster is connected to the internal network via a dedicated 100GbE network. A firewall is in place to restrict access to the servers. All traffic is monitored for security threats. The network is segmented to isolate the AI infrastructure from other systems. See Firewall Rules for details. We utilize a private IP address range of 192.168.10.0/24 for internal communication. DNS resolution is handled by our internal DNS servers. Consider reading the Network Troubleshooting Guide.
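The 192.168.10.0/24 range above provides 256 internal addresses. When writing tooling or firewall checks, membership in the cluster range can be tested with Python's standard ipaddress module; the helper name below is illustrative.

```python
import ipaddress

# The AI cluster's internal range, per the network configuration above.
AI_SUBNET = ipaddress.ip_network("192.168.10.0/24")

def in_ai_subnet(addr: str) -> bool:
    """True if `addr` falls inside the cluster's private range."""
    return ipaddress.ip_address(addr) in AI_SUBNET

print(in_ai_subnet("192.168.10.42"))   # a cluster-internal address
print(in_ai_subnet("192.168.11.42"))   # outside the /24
```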
Security Considerations
Security is paramount. Access to the servers is restricted to authorized personnel only. Multi-factor authentication is required for all administrative access. Regular security audits are conducted to identify and address vulnerabilities. Data encryption is used both in transit and at rest. All software is kept up-to-date with the latest security patches. We adhere to the Data Security Policy. Intrusion detection systems are in place to monitor for malicious activity. Review the Incident Response Plan in case of security breaches.
Future Expansion
We plan to expand the AI infrastructure in the coming months by adding more GPU servers and increasing the storage capacity. We are also exploring the use of new technologies, such as federated learning and reinforcement learning. See Future Project Roadmap.