AI in Worthing: Server Configuration Documentation
This document details the server configuration for the "AI in Worthing" project and serves as a centralised resource for understanding the infrastructure behind our artificial intelligence initiatives in the Worthing area. It is written for new contributors and for the system administrators who manage the project's servers.
Overview
The "AI in Worthing" project utilizes a distributed server architecture to handle the computational demands of training and deploying various AI models. This architecture consists of three primary server types: Data Ingestion Servers, Processing Servers (GPU-accelerated), and Serving Servers. Each server type is configured with specific hardware and software to optimise performance for its designated role. Server architecture is a key consideration in maintaining the project's scalability. We utilize Linux server administration best-practices throughout.
Data Ingestion Servers
These servers collect, clean, and pre-process data from various sources, including local sensors, public datasets, and APIs. They are the entry point for every data pipeline in the project: all incoming data is cleaned here and then staged for the Processing Servers.
| Specification | Value |
|---|---|
| Operating System | Ubuntu Server 22.04 LTS |
| CPU | Intel Xeon Silver 4310 (12 cores) |
| RAM | 64 GB DDR4 ECC |
| Storage | 4 x 4 TB SATA HDD (RAID 10) + 1 x 500 GB NVMe SSD (OS) |
| Network Interface | 10GbE |
| Database | PostgreSQL 14 |
Installed software includes Python, Pandas, Scikit-learn, and the drivers required to access external data sources. All ingestion code is kept under version control, and data security is enforced through firewall configuration.
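As an illustration, a minimal ingestion script on these servers might look like the sketch below. The source URL, connection string, and table name are hypothetical placeholders, not the project's real endpoints.

```python
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical source and staging targets -- placeholders only.
SOURCE_CSV = "https://example.org/sensors/latest.csv"
STAGING_DB = "postgresql://ingest:secret@localhost:5432/staging"

def ingest():
    # Collect: pull the latest batch from an external source.
    df = pd.read_csv(SOURCE_CSV)

    # Clean: drop incomplete rows and normalise column names.
    df = df.dropna()
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]

    # Stage: write to PostgreSQL for the Processing Servers to consume.
    engine = create_engine(STAGING_DB)
    df.to_sql("sensor_readings", engine, if_exists="append", index=False)

if __name__ == "__main__":
    ingest()
```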
Processing Servers
These are the workhorses of the project. They handle the computationally intensive work of training and evaluating AI models, and are equipped with high-end GPUs to accelerate it.
| Specification | Value |
|---|---|
| Operating System | Ubuntu Server 22.04 LTS |
| CPU | AMD EPYC 7763 (64 cores) |
| RAM | 128 GB DDR4 ECC |
| Storage | 2 x 2 TB NVMe SSD (RAID 1) |
| GPU | 4 x NVIDIA A100 (80 GB) |
| Network Interface | 100GbE InfiniBand |
Software includes TensorFlow, PyTorch, CUDA, and cuDNN. We use Docker containers to isolate model dependencies and keep training runs reproducible, and server monitoring is handled by Nagios.
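By way of example, a training script on these machines would typically detect the available CUDA devices and spread work across the four A100s. The sketch below uses `torch.nn.DataParallel` for brevity (production multi-GPU training more commonly uses `DistributedDataParallel`), and the model and data are toy stand-ins rather than the project's real workloads.

```python
import torch
import torch.nn as nn

# Toy model -- a stand-in for the project's real architectures.
model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Using {torch.cuda.device_count()} GPU(s)")

# Spread batches across all visible GPUs (4x A100 on these servers).
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One dummy training step to show the device handling.
inputs = torch.randn(64, 512).to(device)
targets = torch.randint(0, 10, (64,)).to(device)

optimizer.zero_grad()
loss = criterion(model(inputs), targets)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")
```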
Serving Servers
These servers deploy and serve trained AI models to end users, and are optimised for low latency and high throughput.
| Specification | Value |
|---|---|
| Operating System | Ubuntu Server 22.04 LTS |
| CPU | Intel Xeon Gold 6338 (32 cores) |
| RAM | 64 GB DDR4 ECC |
| Storage | 1 x 1 TB NVMe SSD |
| Network Interface | 10GbE |
| Web Server | Nginx |
Software includes Flask, Gunicorn, and the libraries needed for model inference. The servers sit behind a load balancer managed by HAProxy, regular security audits are performed to guard against vulnerabilities, and API documentation is maintained for every deployed model.
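A minimal serving endpoint in this stack might look like the sketch below, assuming a pickled scikit-learn-style regression model; the model file, route, and feature format are hypothetical placeholders. In production it would be launched under Gunicorn (e.g. `gunicorn --workers 4 app:app`) behind Nginx and the HAProxy load balancer rather than with Flask's development server.

```python
import pickle
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical model artefact -- real models and formats are
# project-specific.
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON body like {"features": [0.1, 0.2, ...]}.
    payload = request.get_json(force=True)
    features = payload["features"]
    result = model.predict([features])[0]
    # float() keeps numpy scalars JSON-serialisable.
    return jsonify({"prediction": float(result)})

if __name__ == "__main__":
    # Development server only; use Gunicorn in production.
    app.run(host="0.0.0.0", port=8000)
```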
Network Configuration
All servers are connected via a dedicated VLAN with a 100GbE backbone, and internal DNS is managed by BIND9. Network segmentation isolates the different server types from one another to improve security, and network monitoring is performed with Prometheus and Grafana.
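The document doesn't specify how each service exports its metrics, but as one assumption-laden sketch, a Python service can expose custom metrics for Prometheus to scrape via the `prometheus_client` library (the metric name and port here are illustrative):

```python
import random
import time

from prometheus_client import Gauge, start_http_server

# Hypothetical metric -- illustrative only.
QUEUE_DEPTH = Gauge(
    "ingestion_queue_depth",
    "Number of batches waiting for the Processing Servers",
)

if __name__ == "__main__":
    # Prometheus scrapes this endpoint (here http://host:8001/metrics).
    start_http_server(8001)
    while True:
        QUEUE_DEPTH.set(random.randint(0, 50))  # stand-in for a real reading
        time.sleep(15)
```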
Security Considerations
All servers are hardened using industry best practices, including regular security updates, strong password policies, and intrusion detection systems. Access is restricted to authorised personnel via SSH key authentication, and data is encrypted both in transit and at rest.
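As an illustration of the key-only access policy from the client side, an automated administrative task could connect with the `paramiko` library as sketched below; the hostname, user, and key path are hypothetical.

```python
import paramiko

# Hypothetical host and key path -- replace with real values.
HOST = "ingest01.worthing.internal"
USER = "deploy"
KEY_PATH = "/home/deploy/.ssh/id_ed25519"

client = paramiko.SSHClient()
# Verify the server against known host keys; do not auto-trust new hosts.
client.load_system_host_keys()
client.connect(HOST, username=USER, key_filename=KEY_PATH)

stdin, stdout, stderr = client.exec_command("uptime")
print(stdout.read().decode().strip())
client.close()
```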
Future Expansion
We anticipate expanding the server infrastructure to accommodate the project's growing demands. This will likely involve adding more Processing Servers and potentially exploring cloud platforms such as Amazon Web Services or Google Cloud Platform. Regular capacity planning ensures sufficient resources remain available, and server maintenance is scheduled weekly.
Intel-Based Server Configurations
| Configuration | Specifications | Benchmark |
|---|---|---|
| Core i7-6700K/7700 Server | 64 GB DDR4, 2 x 512 GB NVMe SSD | CPU Benchmark: 8046 |
| Core i7-8700 Server | 64 GB DDR4, 2 x 1 TB NVMe SSD | CPU Benchmark: 13124 |
| Core i9-9900K Server | 128 GB DDR4, 2 x 1 TB NVMe SSD | CPU Benchmark: 49969 |
| Core i9-13900 Server (64GB) | 64 GB RAM, 2 x 2 TB NVMe SSD | |
| Core i9-13900 Server (128GB) | 128 GB RAM, 2 x 2 TB NVMe SSD | |
| Core i5-13500 Server (64GB) | 64 GB RAM, 2 x 500 GB NVMe SSD | |
| Core i5-13500 Server (128GB) | 128 GB RAM, 2 x 500 GB NVMe SSD | |
| Core i5-13500 Workstation | 64 GB DDR5 RAM, 2 x NVMe SSD, NVIDIA RTX 4000 | |
AMD-Based Server Configurations
| Configuration | Specifications | Benchmark |
|---|---|---|
| Ryzen 5 3600 Server | 64 GB RAM, 2 x 480 GB NVMe | CPU Benchmark: 17849 |
| Ryzen 7 7700 Server | 64 GB DDR5 RAM, 2 x 1 TB NVMe | CPU Benchmark: 35224 |
| Ryzen 9 5950X Server | 128 GB RAM, 2 x 4 TB NVMe | CPU Benchmark: 46045 |
| Ryzen 9 7950X Server | 128 GB DDR5 ECC, 2 x 2 TB NVMe | CPU Benchmark: 63561 |
| EPYC 7502P Server (128GB/1TB) | 128 GB RAM, 1 TB NVMe | CPU Benchmark: 48021 |
| EPYC 7502P Server (128GB/2TB) | 128 GB RAM, 2 TB NVMe | CPU Benchmark: 48021 |
| EPYC 7502P Server (128GB/4TB) | 128 GB RAM, 2 x 2 TB NVMe | CPU Benchmark: 48021 |
| EPYC 7502P Server (256GB/1TB) | 256 GB RAM, 1 TB NVMe | CPU Benchmark: 48021 |
| EPYC 7502P Server (256GB/4TB) | 256 GB RAM, 2 x 2 TB NVMe | CPU Benchmark: 48021 |
| EPYC 9454P Server | 256 GB RAM, 2 x 2 TB NVMe | |
*Note: all benchmark scores are approximate and may vary based on configuration.*