# AI in Peterborough: Server Configuration Documentation

This document details the server configuration powering the "AI in Peterborough" project, a local initiative utilizing artificial intelligence for urban planning and resource management. This guide is intended for new system administrators and developers contributing to the project. It covers hardware, software, networking, and security aspects of the server infrastructure.

## Overview

The "AI in Peterborough" project relies on a distributed server architecture to handle the computationally intensive tasks associated with machine learning models and data processing. The core infrastructure consists of three primary servers: a data ingestion server, a model training server, and a serving/inference server. These are supplemented by a dedicated database server. All servers are located within a secure, climate-controlled data center managed by Peterborough City Council. [Data Center Access] procedures must be followed for physical access. This documentation assumes familiarity with basic Linux server administration and networking concepts. Consult the [Linux Fundamentals] page for a refresher.

## Hardware Specifications

The following tables outline the hardware specifications for each server. All servers except the database server, which uses SAS HDDs, are equipped with solid-state drives (SSDs) for optimal performance.

### Data Ingestion Server

| Component | Specification |
| --- | --- |
| CPU | Intel Xeon Gold 6248R (24 cores) |
| RAM | 128GB DDR4 ECC Registered |
| Storage | 2 x 2TB NVMe SSD (RAID 1) |
| Network Interface | Dual 10GbE |
| Power Supply | 1200W Redundant |

This server is responsible for collecting, cleaning, and preparing data from various sources, including [Sensor Networks], [City Databases], and public APIs. See the [Data Pipeline] documentation for more details.
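To illustrate the cleaning stage, here is a minimal sketch of validating incoming sensor records. The record schema and the `validate_reading` helper are hypothetical illustrations, not taken from the project's actual pipeline; see the [Data Pipeline] documentation for the real implementation.

```python
from datetime import datetime, timezone

def validate_reading(record):
    """Drop malformed sensor records and normalize the rest.

    Hypothetical schema: {'sensor_id': str, 'value': float-like,
    'timestamp': ISO-8601 str}.
    """
    try:
        value = float(record["value"])
        ts = datetime.fromisoformat(record["timestamp"])
    except (KeyError, TypeError, ValueError):
        return None  # reject records with missing fields or bad types
    if ts.tzinfo is None:
        ts = ts.replace(tzinfo=timezone.utc)  # assume UTC when unzoned
    return {
        "sensor_id": str(record["sensor_id"]),
        "value": value,
        "timestamp": ts.isoformat(),
    }

raw = [
    {"sensor_id": "pb-017", "value": "21.4", "timestamp": "2024-05-01T12:00:00"},
    {"sensor_id": "pb-018", "value": "n/a", "timestamp": "2024-05-01T12:00:00"},
]
clean = [r for r in (validate_reading(x) for x in raw) if r is not None]
```

Rejecting rather than repairing bad records keeps the ingestion stage simple; in practice rejected records would be logged for later inspection.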

### Model Training Server

| Component | Specification |
| --- | --- |
| CPU | 2 x AMD EPYC 7763 (64 cores total) |
| RAM | 256GB DDR4 ECC Registered |
| GPU | 4 x NVIDIA A100 (80GB VRAM each) |
| Storage | 4 x 4TB NVMe SSD (RAID 0) |
| Network Interface | Dual 10GbE |
| Power Supply | 1600W Redundant |

The Model Training Server's GPUs are used to train complex machine learning models, with [TensorFlow] and [PyTorch] as the primary frameworks. Access to this server is restricted to authorized data scientists. Refer to the [Model Training Procedures] document.
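For readers new to the training workflow, the loop that TensorFlow and PyTorch automate can be sketched in plain Python as gradient descent on a one-parameter linear model. The data and learning rate below are illustrative only, not project values:

```python
def train(data, lr=0.01, epochs=200):
    """Fit y = w * x by gradient descent on mean-squared error."""
    w = 0.0
    n = len(data)
    for _ in range(epochs):
        # dL/dw for L = (1/n) * sum((w*x - y)^2)
        grad = sum(2 * (w * x - y) * x for x, y in data) / n
        w -= lr * grad  # step against the gradient
    return w

data = [(x, 3.0 * x) for x in range(1, 6)]  # synthetic data, true w = 3
w = train(data)
```

The frameworks add automatic differentiation, batching, and GPU execution, but the structure (forward pass, loss, gradient, parameter update) is the same.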

### Serving/Inference Server

| Component | Specification |
| --- | --- |
| CPU | Intel Xeon Silver 4210 (10 cores) |
| RAM | 64GB DDR4 ECC Registered |
| Storage | 1 x 1TB NVMe SSD |
| Network Interface | Dual 1GbE |
| Power Supply | 750W Redundant |

This server hosts the trained models and provides real-time inference capabilities for applications such as [Traffic Prediction] and [Resource Allocation]. It is designed for high availability and low latency. See the [API Documentation] for details on accessing the inference endpoints.
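The shape of such an inference endpoint can be sketched with the standard library alone. The `predict` function below is a hypothetical stand-in for a real model, and the route and port are assumptions; consult the [API Documentation] for the actual endpoints:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical stand-in for a loaded, trained model: a fixed linear scorer.
WEIGHTS = [0.4, 0.6]

def predict(features):
    """Score a feature vector with the model (here, a dot product)."""
    return sum(w * x for w, x in zip(WEIGHTS, features))

class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Expects a JSON body like {"features": [1.0, 1.0]}.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps({"score": predict(payload["features"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To serve: HTTPServer(("0.0.0.0", 8080), InferenceHandler).serve_forever()
```

A production deployment would sit behind a proper WSGI/ASGI server with authentication and health checks; this sketch only shows the request/response contract.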

### Database Server

This server hosts the PostgreSQL database containing all project data.

| Component | Specification |
| --- | --- |
| CPU | Intel Xeon E-2224 (6 cores) |
| RAM | 64GB DDR4 ECC Registered |
| Storage | 2 x 4TB SAS HDD (RAID 1) |
| Network Interface | 1GbE |
| Power Supply | 600W Redundant |
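Applications typically reach a PostgreSQL server through a libpq-style connection URI. A minimal sketch of assembling one, with credentials percent-encoded; the hostname, database name, and credentials below are hypothetical placeholders, not the project's actual settings:

```python
from urllib.parse import quote

def postgres_uri(host, dbname, user, password, port=5432):
    """Build a libpq-style connection URI, percent-encoding credentials."""
    return (f"postgresql://{quote(user)}:{quote(password, safe='')}"
            f"@{host}:{port}/{dbname}")

# All values below are illustrative placeholders.
uri = postgres_uri("db.internal", "ai_peterborough", "app_user", "s3cret!")
```

In practice credentials should come from a secrets store or environment variables, never from source code.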

## Software Configuration

All servers run Ubuntu Server 20.04 LTS. The following software packages are installed and configured:
