AI in Peterborough
AI in Peterborough: Server Configuration Documentation
This document details the server configuration powering the "AI in Peterborough" project, a local initiative utilizing artificial intelligence for urban planning and resource management. This guide is intended for new system administrators and developers contributing to the project. It covers hardware, software, networking, and security aspects of the server infrastructure.
Overview
The "AI in Peterborough" project relies on a distributed server architecture to handle the computationally intensive tasks associated with machine learning models and data processing. The core infrastructure consists of three primary servers: a data ingestion server, a model training server, and a serving/inference server. These are supplemented by a dedicated database server. All servers are located within a secure, climate-controlled data center managed by Peterborough City Council. [Data Center Access] procedures must be followed for physical access. This documentation assumes familiarity with basic Linux server administration and networking concepts. Consult the [Linux Fundamentals] page for a refresher.
Hardware Specifications
The following tables outline the hardware specifications for each server. With the exception of the database server, which uses mirrored SAS hard drives, all servers use NVMe solid-state drives (SSDs) for optimal performance.
Data Ingestion Server
| Component | Specification |
| --- | --- |
| CPU | Intel Xeon Gold 6248R (24 cores) |
| RAM | 128GB DDR4 ECC Registered |
| Storage | 2 x 2TB NVMe SSD (RAID 1) |
| Network Interface | Dual 10GbE |
| Power Supply | 1200W Redundant |
This server is responsible for collecting, cleaning, and preparing data from various sources, including [Sensor Networks], [City Databases], and public APIs. See the [Data Pipeline] documentation for more details.
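As a minimal sketch of the ingestion pattern, the snippet below fetches a JSON feed and normalizes it with pandas. The URL, field handling, and cleaning steps are illustrative assumptions, not the project's actual sources or logic; the real configuration lives in the [Data Pipeline] documentation.

```python
import requests
import pandas as pd

# Hypothetical public API endpoint used only for illustration; the project's
# real data sources are listed in the Data Pipeline documentation.
SOURCE_URL = "https://api.example.gov.uk/air-quality/readings"

def fetch_readings(url: str) -> pd.DataFrame:
    """Fetch raw sensor readings and return them as a DataFrame."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return pd.DataFrame(response.json())

def clean_readings(df: pd.DataFrame) -> pd.DataFrame:
    """Basic cleaning: drop duplicate rows and rows with missing values."""
    return df.drop_duplicates().dropna()

if __name__ == "__main__":
    prepared = clean_readings(fetch_readings(SOURCE_URL))
    print(f"Prepared {len(prepared)} rows for the pipeline")
```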
Model Training Server
| Component | Specification |
| --- | --- |
| CPU | 2 x AMD EPYC 7763 (64 cores each) |
| RAM | 256GB DDR4 ECC Registered |
| GPU | 4 x NVIDIA A100 (80GB VRAM each) |
| Storage | 4 x 4TB NVMe SSD (RAID 0) |
| Network Interface | Dual 10GbE |
| Power Supply | 1600W Redundant |
The Model Training Server uses its four NVIDIA A100 GPUs to train complex machine learning models. [TensorFlow] and [PyTorch] are the primary frameworks employed. Access to this server is restricted to authorized data scientists. Refer to the [Model Training Procedures] document.
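The example below is a minimal sketch of the GPU training pattern in PyTorch; the model, data, and hyperparameters are placeholders rather than the project's actual configuration, which is described in [Model Training Procedures]. It simply shows a model and batch being moved to an available GPU and optimized.

```python
import torch
import torch.nn as nn

# Use a GPU if one is available; the training server exposes four A100s.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Placeholder model and dummy batch standing in for the project's real
# models and data loaders.
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

inputs = torch.randn(32, 16, device=device)
targets = torch.randn(32, 1, device=device)

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```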
Serving/Inference Server
| Component | Specification |
| --- | --- |
| CPU | Intel Xeon Silver 4210 (10 cores) |
| RAM | 64GB DDR4 ECC Registered |
| Storage | 1 x 1TB NVMe SSD |
| Network Interface | Dual 1GbE |
| Power Supply | 750W Redundant |
This server hosts the trained models and provides real-time inference capabilities for applications such as [Traffic Prediction] and [Resource Allocation]. It is designed for high availability and low latency. See the [API Documentation] for details on accessing the inference endpoints.
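The snippet below is a hedged sketch of calling an inference endpoint from Python. The route, payload fields, and identifiers are assumptions for illustration only; the authoritative endpoints and schemas are defined in the [API Documentation].

```python
import requests

# Hypothetical route on the serving/inference server (192.168.10.12);
# the real endpoints are listed in the API Documentation.
ENDPOINT = "http://192.168.10.12/api/v1/traffic/predict"

payload = {
    "junction_id": "junction-001",        # illustrative identifier
    "timestamp": "2024-05-01T08:00:00Z",  # illustrative timestamp
}

response = requests.post(ENDPOINT, json=payload, timeout=5)
response.raise_for_status()
print(response.json())
```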
Database Server
This server hosts the PostgreSQL database containing all project data.
| Component | Specification |
| --- | --- |
| CPU | Intel Xeon E-2224 (6 cores) |
| RAM | 64GB DDR4 ECC Registered |
| Storage | 2 x 4TB SAS HDD (RAID 1) |
| Network Interface | 1GbE |
| Power Supply | 600W Redundant |
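For ad-hoc queries, a connection to the database server can be opened with psycopg2. The database name, user, and password below are placeholders, not the project's actual credentials; see [PostgreSQL Administration] for the authoritative connection details.

```python
import psycopg2

# Placeholder credentials and database name for illustration only.
conn = psycopg2.connect(
    host="192.168.10.13",
    dbname="ai_peterborough",
    user="readonly_user",
    password="change-me",
)

with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])

conn.close()
```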
Software Configuration
All servers run Ubuntu Server 20.04 LTS. The following software packages are installed and configured:
- Docker: For containerization of applications and dependencies. See [Docker Usage Guide].
- Kubernetes: For orchestration of Docker containers. Configuration details are available in the [Kubernetes Cluster Setup].
- PostgreSQL: The primary database for storing project data. [PostgreSQL Administration] provides detailed instructions.
- Nginx: As a reverse proxy and load balancer. [Nginx Configuration] details the setup.
- Prometheus: For monitoring server performance. [Prometheus Monitoring] explains the monitoring dashboard; a minimal instrumentation sketch follows this list.
- Grafana: For visualizing metrics collected by Prometheus.
- Python 3.8: The primary programming language for data science and machine learning.
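As an illustration of how application-level metrics can be exposed for Prometheus to scrape and Grafana to visualize, the sketch below uses the prometheus_client library. The metric name and port are assumptions; the project's actual exporters and scrape targets are documented in [Prometheus Monitoring].

```python
import random
import time

from prometheus_client import Gauge, start_http_server

# Hypothetical metric used for illustration only.
INFERENCE_LATENCY = Gauge(
    "inference_latency_seconds",
    "Simulated latency of model inference requests",
)

if __name__ == "__main__":
    # Expose metrics on port 8001 for Prometheus to scrape.
    start_http_server(8001)
    while True:
        INFERENCE_LATENCY.set(random.uniform(0.01, 0.2))
        time.sleep(15)
```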
Networking
The servers are connected via a dedicated VLAN within the Peterborough City Council network.
- Data Ingestion Server: 192.168.10.10
- Model Training Server: 192.168.10.11
- Serving/Inference Server: 192.168.10.12
- Database Server: 192.168.10.13
All servers have static IP addresses assigned. [Network Diagram] provides a visual representation of the network topology. Firewall rules are configured to restrict access to necessary ports only. See [Firewall Rules] for details.
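A quick way to verify connectivity from an administration host is a simple socket check against each server. The ports below are assumptions for illustration (SSH, HTTPS, and PostgreSQL defaults); consult [Firewall Rules] for the ports actually open.

```python
import socket

# Assumed ports: 22 (SSH), 443 (inference API), 5432 (PostgreSQL).
HOSTS = {
    "data-ingestion": ("192.168.10.10", 22),
    "model-training": ("192.168.10.11", 22),
    "inference": ("192.168.10.12", 443),
    "database": ("192.168.10.13", 5432),
}

for name, (ip, port) in HOSTS.items():
    try:
        with socket.create_connection((ip, port), timeout=3):
            print(f"{name}: {ip}:{port} reachable")
    except OSError as exc:
        print(f"{name}: {ip}:{port} unreachable ({exc})")
```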
Security Considerations
Security is paramount. The following measures are in place:
- Regular Security Audits: Conducted quarterly by the IT Security team. [Audit Reports] are available upon request.
- Intrusion Detection System (IDS): Monitors network traffic for malicious activity.
- Role-Based Access Control (RBAC): Restricts access to resources based on user roles. See the [RBAC Policy].
- Data Encryption: All sensitive data is encrypted at rest and in transit.
- Regular Backups: Automated backups are performed daily and stored offsite. [Backup and Recovery Procedures] are documented; a minimal backup sketch follows this list.
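The sketch below illustrates the general shape of a nightly database dump. It is not the production backup job: the schedule, credentials, paths, and offsite transfer steps are defined in [Backup and Recovery Procedures], and the database name, backup user, and directory shown here are placeholders.

```python
import datetime
import subprocess

# Placeholders for illustration; real values live in the backup procedures.
DB_HOST = "192.168.10.13"
DB_NAME = "ai_peterborough"
BACKUP_DIR = "/var/backups/postgres"

def run_backup() -> str:
    """Create a timestamped custom-format dump with pg_dump.

    Assumes authentication is handled out of band (e.g. via ~/.pgpass).
    """
    stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
    outfile = f"{BACKUP_DIR}/{DB_NAME}_{stamp}.dump"
    subprocess.run(
        ["pg_dump", "-h", DB_HOST, "-U", "backup_user",
         "-F", "c", "-f", outfile, DB_NAME],
        check=True,
    )
    return outfile

if __name__ == "__main__":
    print(f"Backup written to {run_backup()}")
```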