AI in Gravesend: Server Configuration

This article details the server configuration for the “AI in Gravesend” project, a research initiative utilizing machine learning to analyze historical data pertaining to the town of Gravesend, Kent. This document is aimed at newcomers to the server infrastructure and provides a comprehensive overview of the hardware and software components. Understanding this setup is crucial for developers, data scientists, and system administrators involved in the project. See also Server Administration Guide and Data Security Protocols.

Overview

The “AI in Gravesend” project requires significant computational resources for data processing, model training, and serving predictions. The server infrastructure comprises three primary nodes: a data ingestion node, a processing/training node, and a serving node, interconnected via a dedicated 10 Gbps network. Detailed network diagrams are available at Network Topology. All nodes run Ubuntu Server 22.04 LTS. Regular backups are performed as described in Backup and Recovery Procedures.

Hardware Specifications

The following tables detail the hardware specifications for each server node.

Data Ingestion Node

This node is responsible for collecting, validating, and storing raw data from various sources, including historical records, census data, and local archives. See Data Sources for more information on the data itself.

Component           Specification
------------------  -----------------------------------------------------------------------
CPU                 Intel Xeon Silver 4310 (12 cores, 2.1 GHz)
RAM                 64 GB DDR4 ECC Registered
Storage             2 x 8 TB SAS 7.2K RPM HDDs (RAID 1) + 1 x 1 TB NVMe SSD (OS & metadata)
Network Interface   10 Gbps Ethernet
Power Supply        850 W Redundant
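As a rough illustration of the validation step this node performs before storing raw data, the sketch below checks a hypothetical census record for required fields and plausible values. The field names and bounds are assumptions made for this example, not the project's actual schema; see Data Sources for the real data formats.

```python
# Hypothetical sketch of a record-validation step on the ingestion node.
# Field names and value ranges are illustrative assumptions, not the real schema.

REQUIRED_FIELDS = {"record_id", "year", "source", "population"}

def validate_record(record: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the record is valid."""
    errors = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - record.keys())]
    if not errors:
        if not (1801 <= record["year"] <= 2025):  # assumed plausible range for census-era records
            errors.append(f"implausible year: {record['year']}")
        if record["population"] < 0:
            errors.append("population must be non-negative")
    return errors

good = {"record_id": "GRV-0001", "year": 1901, "source": "census", "population": 27196}
bad = {"record_id": "GRV-0002", "year": 3000, "source": "archive", "population": -5}

print(validate_record(good))  # []
print(validate_record(bad))
```

Records that fail validation would typically be quarantined for manual review rather than dropped, so that gaps in the historical data remain visible.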

Processing/Training Node

This is the most computationally intensive node, dedicated to training and evaluating machine learning models. GPU acceleration is critical for reducing training times. Refer to Machine Learning Algorithms Used for specifics.

Component           Specification
------------------  -----------------------------------------------------------------------
CPU                 AMD EPYC 7763 (64 cores, 2.45 GHz)
RAM                 256 GB DDR4 ECC Registered
GPU                 2 x NVIDIA A100 (80 GB HBM2e)
Storage             4 x 4 TB NVMe SSDs (RAID 0)
Network Interface   10 Gbps Ethernet
Power Supply        1600 W Redundant
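To give a sense of why the 2 x 80 GB of GPU memory matters, the back-of-envelope estimate below computes the training-time memory footprint of a model from its parameter count, counting weights, gradients, and Adam optimizer state under mixed precision. The model size and per-parameter byte counts are illustrative assumptions, not figures from the project; activations add further overhead on top of this.

```python
# Rough per-parameter memory estimate for mixed-precision Adam training.
# Byte counts are common conventions (fp16 weights/grads, fp32 Adam moments),
# assumed here for illustration.

def training_memory_gb(n_params: float,
                       bytes_weights: int = 2,   # fp16 weights
                       bytes_grads: int = 2,     # fp16 gradients
                       bytes_optim: int = 8) -> float:  # two fp32 Adam moments
    per_param = bytes_weights + bytes_grads + bytes_optim
    return n_params * per_param / 1e9

# A hypothetical 3-billion-parameter model needs roughly 36 GB before
# activations, which fits comfortably on a single 80 GB A100.
print(f"{training_memory_gb(3e9):.0f} GB")  # 36 GB
```

Estimates like this are one way to decide whether a candidate model fits on one GPU or must be sharded across both A100s.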

Serving Node

This node hosts the trained models and provides an API for accessing predictions. It is optimized for low latency and high availability. See API Documentation for details on the API.

Component           Specification
------------------  -----------------------------------------------------------------------
CPU                 Intel Xeon Gold 6338 (32 cores, 2.0 GHz)
RAM                 128 GB DDR4 ECC Registered
Storage             2 x 2 TB NVMe SSDs (RAID 1)
Network Interface   10 Gbps Ethernet
Power Supply        1200 W Redundant
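As an illustration of the kind of prediction endpoint this node exposes, here is a minimal sketch using only the Python standard library. The /predict route, payload shape, and dummy model are assumptions made for the example; the real interface is specified in the API Documentation.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(features: list[float]) -> float:
    """Stand-in for a trained model; the real node would load a serialized model."""
    return sum(features) / len(features)  # dummy: mean of the inputs

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/predict":  # hypothetical route
            self.send_error(404)
            return
        body = self.rfile.read(int(self.headers["Content-Length"]))
        features = json.loads(body)["features"]
        payload = json.dumps({"prediction": predict(features)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # keep the sketch quiet
        pass

server = HTTPServer(("127.0.0.1", 0), PredictHandler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/predict",
    data=json.dumps({"features": [1.0, 2.0, 3.0]}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read())
print(result)  # {'prediction': 2.0}
server.shutdown()
```

A production deployment would sit behind a load balancer and run several replicas for high availability; this sketch only shows the request/response shape.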

Software Configuration

All nodes run their applications in Docker containers for isolation and reproducibility. The Docker Configuration Guide details the container setup.
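As a hedged sketch of what a per-node container definition might look like, the Docker Compose fragment below describes a hypothetical service for the serving node. The image name, port, and memory limit are illustrative assumptions; the authoritative setup is in the Docker Configuration Guide.

```yaml
# Illustrative docker-compose fragment for the serving node.
# Image name, port, and limits are assumptions, not the project's real values.
services:
  model-server:
    image: ai-gravesend/serving:latest   # hypothetical image
    ports:
      - "8080:8080"
    deploy:
      resources:
        limits:
          memory: 96g                    # leave headroom on the 128 GB node
    restart: unless-stopped
```

Pinning resource limits per container keeps a misbehaving service from starving the rest of the node.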
