# AI in Epsom: Server Configuration Documentation

This document details the server configuration for the "AI in Epsom" project, a local initiative leveraging artificial intelligence for environmental monitoring and resource management. This guide is intended for new system administrators and developers onboarding to the project. It outlines hardware, software, networking, and security considerations.

## Project Overview

The "AI in Epsom" project aims to analyze sensor data collected throughout the borough of Epsom to provide insights into air quality, traffic flow, and water usage. This data is processed using machine learning models hosted on a dedicated server cluster. The project relies on robust data pipelines and secure access controls. See Data Pipeline Architecture for a detailed overview of the data flow. This system integrates with existing Epsom Borough Council IT Infrastructure.

## Hardware Configuration

The core of the "AI in Epsom" project resides on three dedicated servers, named 'Athena', 'Hermes', and 'Apollo'. Each server is housed in a secure data center managed by the Epsom Data Center Team.

| Server Name | Role | CPU | RAM | Storage |
|---|---|---|---|---|
| Athena | Primary Model Training & Inference | Intel Xeon Gold 6248R (24 cores) | 256 GB DDR4 ECC | 4 x 2 TB NVMe SSD (RAID 10) |
| Hermes | Data Ingestion & Preprocessing | AMD EPYC 7443P (24 cores) | 128 GB DDR4 ECC | 2 x 4 TB SATA HDD (RAID 1) |
| Apollo | Database & API Server | Intel Xeon Silver 4210 (10 cores) | 64 GB DDR4 ECC | 1 x 1 TB NVMe SSD |

These servers are connected via a dedicated 10Gbps network. Power redundancy is provided by dual power supplies and an Uninterruptible Power Supply (UPS). Refer to Data Center Power Redundancy for details on the UPS configuration. Server monitoring is handled by the Server Monitoring System.
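As an illustrative check, the usable capacity of each storage array can be worked out from its RAID level. The figures below are taken from the table above; this is a back-of-the-envelope sketch that ignores filesystem overhead, not a provisioning tool:

```python
# Rough usable-capacity estimates for the RAID arrays listed above.
# RAID 10 stripes over mirrored pairs (usable = half of raw);
# RAID 1 mirrors the whole set (usable = one disk's capacity);
# a single disk has no redundancy.

def usable_tb(disks: int, size_tb: float, level: str) -> float:
    """Approximate usable capacity in TB, ignoring filesystem overhead."""
    if level == "raid10":
        return disks * size_tb / 2
    if level == "raid1":
        return size_tb  # one disk's worth, regardless of mirror count
    if level == "single":
        return disks * size_tb
    raise ValueError(f"unknown RAID level: {level}")

arrays = {
    "Athena": (4, 2, "raid10"),  # 4 x 2 TB NVMe, RAID 10
    "Hermes": (2, 4, "raid1"),   # 2 x 4 TB SATA, RAID 1
    "Apollo": (1, 1, "single"),  # 1 x 1 TB NVMe
}

for server, (disks, size, level) in arrays.items():
    print(f"{server}: ~{usable_tb(disks, size, level):g} TB usable")
```

Note that Hermes, despite having the largest raw capacity per disk, ends up with the same usable space as Athena because RAID 1 sacrifices a full disk to mirroring.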

## Software Stack

The software stack is built around a Linux operating system and leverages open-source tools wherever possible.

| Component | Version | Description |
|---|---|---|
| Operating System | Ubuntu Server 22.04 LTS | Provides the base operating environment. |
| Python | 3.10 | Primary programming language for AI models and data pipelines. |
| TensorFlow | 2.12 | Machine learning framework used for model training and inference. |
| PostgreSQL | 14 | Relational database for storing sensor data and model metadata. |
| Flask | 2.2.2 | Web framework for creating the API endpoints. |
| Nginx | 1.23 | Web server and reverse proxy for handling API requests. |

All software is managed using Ansible Configuration Management to ensure consistency across servers. The Software Licensing Policy details the licensing requirements for all software used.
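For orientation, a minimal sketch of what one of the Flask API endpoints might look like. The `/api/v1/air-quality` route, the field names, and the sample payload are all hypothetical; the actual endpoints are defined in the project repository:

```python
# Hypothetical Flask endpoint sketch; the route and payload shape are
# illustrative only. In production the app sits behind the Nginx
# reverse proxy described under Networking Configuration.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/v1/air-quality", methods=["GET"])
def air_quality():
    # The real service would query PostgreSQL on Apollo; this
    # returns a static sample reading for illustration.
    sample = {"sensor_id": "epsom-001", "pm2_5": 8.4, "unit": "ug/m3"}
    return jsonify(sample)

if __name__ == "__main__":
    # Bind to localhost only; Nginx proxies external traffic.
    app.run(host="127.0.0.1", port=5000)
```

Binding to `127.0.0.1` rather than `0.0.0.0` keeps the Flask process unreachable from outside the host, so all external access is forced through the TLS-terminating Nginx proxy.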

## Networking Configuration

The servers are configured with static IP addresses within a private network segment (192.168.10.0/24).

| Server Name | IP Address | Gateway | DNS Servers |
|---|---|---|---|
| Athena | 192.168.10.10 | 192.168.10.1 | 8.8.8.8, 8.8.4.4 |
| Hermes | 192.168.10.11 | 192.168.10.1 | 8.8.8.8, 8.8.4.4 |
| Apollo | 192.168.10.12 | 192.168.10.1 | 8.8.8.8, 8.8.4.4 |
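The static assignments above can be sanity-checked with Python's standard `ipaddress` module. This is a small sketch; the addresses are taken directly from the table:

```python
# Verify that each server's static IP and the gateway fall inside
# the private segment 192.168.10.0/24 used by the project.
import ipaddress

subnet = ipaddress.ip_network("192.168.10.0/24")
gateway = ipaddress.ip_address("192.168.10.1")

servers = {
    "Athena": ipaddress.ip_address("192.168.10.10"),
    "Hermes": ipaddress.ip_address("192.168.10.11"),
    "Apollo": ipaddress.ip_address("192.168.10.12"),
}

assert gateway in subnet
for name, addr in servers.items():
    assert addr in subnet, f"{name} ({addr}) is outside {subnet}"
    print(f"{name}: {addr} OK (gateway {gateway})")
```

A check like this is cheap to run from an Ansible task or a CI job whenever the address plan changes.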

Access to the servers is restricted via a firewall (UFW) and SSH keys. External access to the API is provided through a reverse proxy (Nginx) with SSL/TLS encryption. See the Network Security Policy for comprehensive details. The Firewall Configuration Guide details the specific UFW rules.

## Security Considerations

Security is paramount for the "AI in Epsom" project. The following security measures are in place:

- Host-level firewalls (UFW) restrict inbound traffic on each server, per the Firewall Configuration Guide.
- SSH access is limited to key-based authentication; password logins are not used.
- External API traffic terminates at the Nginx reverse proxy with SSL/TLS encryption, so application services are never exposed directly.
- All server configuration is applied through Ansible Configuration Management, keeping the three hosts consistent and auditable.
