AI in Plymouth: Server Configuration
This article details the server configuration supporting the "AI in Plymouth" project, a local initiative focused on deploying artificial intelligence solutions for city management and public services. This guide is intended for new system administrators and engineers contributing to the project. It outlines the hardware, software, and networking components involved.
Overview
The "AI in Plymouth" project relies on a distributed server infrastructure to handle large datasets, complex model training, and real-time inference. The core infrastructure is housed within the Plymouth Data Centre, with supplemental resources provisioned through a hybrid cloud model using Cloud Services. The system is designed for scalability, redundancy, and security. This document details the core on-premise server configuration. Refer to the Cloud Integration page for details on cloud resources.
Hardware Specifications
The primary servers are based on a modular design, allowing for easy upgrades and maintenance. The following table details the specifications of the core AI processing servers:
| Component | Specification | Quantity |
|---|---|---|
| CPU | Dual Intel Xeon Gold 6338 (32 cores / 64 threads per CPU) | 8 |
| RAM | 512 GB DDR4 ECC Registered 3200 MHz | 8 |
| GPU | NVIDIA A100 80 GB PCIe 4.0 | 8 |
| Storage (OS/Boot) | 1 TB NVMe PCIe Gen4 SSD | 8 |
| Storage (Data) | 16 TB SAS 12 Gbps 7.2K RPM HDD (RAID 6) | 4 arrays |
| Network Interface | Dual 100 Gbps Ethernet (Mellanox ConnectX-6) | 8 |
| Power Supply | Redundant 2000 W 80+ Platinum | 8 |
Data storage is handled by a separate cluster of servers, detailed below:
| Component | Specification | Quantity |
|---|---|---|
| CPU | Intel Xeon Silver 4310 (12 cores / 24 threads) | 4 |
| RAM | 128 GB DDR4 ECC Registered 3200 MHz | 4 |
| Storage | 64 TB SAS 12 Gbps 7.2K RPM HDD (RAID 6) | 2 arrays |
| Network Interface | Quad 10 Gbps Ethernet | 4 |
Finally, a small cluster of metadata servers manages the data lake:
| Component | Specification | Quantity |
|---|---|---|
| CPU | Intel Xeon E-2336 (8 cores / 16 threads) | 3 |
| RAM | 64 GB DDR4 ECC Registered 3200 MHz | 3 |
| Storage | 2 TB NVMe PCIe Gen4 SSD (RAID 1) | 3 |
| Network Interface | Dual 10 Gbps Ethernet | 3 |
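For capacity planning against the RAID 6 arrays above, usable space is the raw array size minus two disks' worth of parity. A minimal sketch of that arithmetic (the per-array disk counts are an assumption for illustration; the tables above do not state them):

```python
def raid6_usable_tb(disks: int, disk_tb: float) -> float:
    """Usable capacity of a RAID 6 array in TB.

    RAID 6 dedicates the equivalent of two disks to parity,
    so usable capacity is (disks - 2) * per-disk size.
    """
    if disks < 4:
        raise ValueError("RAID 6 requires at least 4 disks")
    return (disks - 2) * disk_tb


# Example: a hypothetical 12-disk array of 16 TB drives.
print(raid6_usable_tb(12, 16))  # 160.0 TB usable
```

The same function applies to the storage-cluster arrays by substituting their drive size and count.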
Software Stack
The servers run a customized Linux distribution based on Ubuntu Server 22.04 LTS. The core software components are:
- Operating System: Ubuntu Server 22.04 LTS, kernel 5.15.
- Containerization: Docker and Kubernetes are used for application deployment and orchestration.
- AI Frameworks: TensorFlow 2.12, PyTorch 2.0, and Scikit-learn 1.2.
- Data Storage: Ceph is used for distributed storage and data replication.
- Message Queue: RabbitMQ for asynchronous task processing.
- Monitoring: Prometheus and Grafana for system monitoring and alerting. See the Monitoring Dashboard page for details.
- Version Control: Git with GitHub for code management.
- Security: iptables and Fail2Ban for firewall and intrusion prevention.
Networking Configuration
The server infrastructure is segmented into three logical networks:
- Management Network: 192.168.1.0/24 – Used for server administration and monitoring.
- Data Network: 10.0.0.0/16 – Used for data transfer between servers.
- Public Network: Connected to the internet via a dedicated 1Gbps fiber connection. Access is controlled via a Firewall Configuration.
Network redundancy is achieved through link aggregation and multiple network paths. All servers have access to a dedicated DNS Server.
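Administration scripts can classify an address against these segments with the standard-library `ipaddress` module. A minimal sketch using the CIDR ranges above (the public segment has no fixed range in this document, so anything outside the two internal networks is treated as public/other):

```python
import ipaddress

# Segment ranges from the networking configuration above.
MANAGEMENT = ipaddress.ip_network("192.168.1.0/24")
DATA = ipaddress.ip_network("10.0.0.0/16")


def classify(addr: str) -> str:
    """Return which logical network an IP address belongs to."""
    ip = ipaddress.ip_address(addr)
    if ip in MANAGEMENT:
        return "management"
    if ip in DATA:
        return "data"
    return "public/other"


print(classify("192.168.1.42"))  # management
print(classify("10.0.3.7"))      # data
print(classify("203.0.113.9"))   # public/other
```

This kind of check is useful when validating firewall rules or auditing which interface a service is bound to.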
Security Considerations
Security is a paramount concern. Key security measures include:
- Regular security audits and vulnerability scanning.
- Role-Based Access Control (RBAC) using LDAP.
- Data encryption at rest and in transit.
- Intrusion detection and prevention systems.
- Regular backups and disaster recovery procedures (see Backup Procedures).
- Network segmentation and firewall rules.
Future Expansion
Planned future expansions include:
- Adding more GPU servers to increase processing capacity.
- Implementing a dedicated Hadoop Cluster for large-scale data processing.
- Integrating with additional cloud services for burst capacity.
- Exploring the use of Federated Learning techniques.