AI in Southend-on-Sea: Server Configuration
This article details the server infrastructure supporting Artificial Intelligence (AI) initiatives within Southend-on-Sea. It is aimed at newcomers to our MediaWiki site and provides a technical overview for those assisting with system administration, development, and monitoring, covering the hardware, software, and network configuration needed to run our AI workloads. Please also refer to the System Security Guidelines and Data Backup Procedures for related information.
Overview
The AI infrastructure in Southend-on-Sea is designed for scalability and reliability, supporting a range of AI applications including the Traffic Management System, Predictive Policing, and the Citizen Service Chatbots. The system is built on a distributed architecture that combines on-premise servers with cloud resources (specifically Amazon Web Services); this hybrid approach allows for flexibility and cost optimization. The initial setup is detailed in the Project Chimera Documentation.
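The cloud half of the fleet can be enumerated with a short script such as the sketch below. It assumes boto3 is installed with working AWS credentials; the Project=AI tag and the eu-west-2 region are illustrative assumptions, not confirmed deployment details.

```python
"""List the AWS instances backing the cloud side of the hybrid deployment.

Assumes configured AWS credentials. The Project=AI tag convention and the
eu-west-2 region are hypothetical placeholders.
"""
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-2")  # assumed London region

# Page through all instances carrying the (hypothetical) project tag.
pages = ec2.get_paginator("describe_instances").paginate(
    Filters=[{"Name": "tag:Project", "Values": ["AI"]}]
)
for page in pages:
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            print(instance["InstanceId"],
                  instance["InstanceType"],
                  instance["State"]["Name"])
```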
Hardware Configuration
Our core on-premise servers are housed in the Southend Data Centre. These servers are dedicated to running AI models and processing data. The specifications are as follows:
Server Role | CPU | RAM | Storage | Network Interface
---|---|---|---|---
AI Processing (x4) | Intel Xeon Gold 6338 (32 cores) | 512 GB DDR4 ECC REG | 8 x 4 TB NVMe SSD (RAID 0) | 100 Gbps Ethernet
Data Storage (x2) | Intel Xeon Silver 4310 (12 cores) | 128 GB DDR4 ECC REG | 16 x 16 TB SAS HDD (RAID 6) | 25 Gbps Ethernet
Model Training (x2) | AMD EPYC 7763 (64 cores) | 1 TB DDR4 ECC REG | 4 x 8 TB NVMe SSD (RAID 0) + 4 x 16 TB SAS HDD | 100 Gbps Ethernet
These servers are managed via the Server Monitoring Dashboard and monitored by the Network Operations Centre. Regular hardware audits are conducted as outlined in the Asset Management Policy.
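As a quick worked example of what the RAID levels in the table above mean for usable space (RAID 0 stripes across all member disks with no redundancy; RAID 6 reserves two disks' worth of capacity for parity):

```python
"""Usable capacity of the arrays above, per RAID level."""

def raid0_usable(disks: int, size_tb: float) -> float:
    # RAID 0: every disk contributes capacity; no fault tolerance.
    return disks * size_tb

def raid6_usable(disks: int, size_tb: float) -> float:
    # RAID 6: two disks' worth of capacity is consumed by parity.
    return (disks - 2) * size_tb

print(f"AI Processing: {raid0_usable(8, 4):.0f} TB usable")    # 32 TB, no redundancy
print(f"Data Storage:  {raid6_usable(16, 16):.0f} TB usable")  # 224 TB, survives two disk failures
```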
Software Stack
The software stack is crucial for enabling AI functionality. We primarily use Linux-based operating systems and open-source AI frameworks.
Component | Version | Description
---|---|---
Operating System | Ubuntu Server 22.04 LTS | The base operating system for all servers.
AI Framework | TensorFlow 2.12.0 | Primary framework for model development and deployment.
Machine Learning Library | PyTorch 2.0.1 | Alternative framework for research and experimentation.
Data Science Language | Python 3.10 | The primary programming language used for data science and AI.
Containerization | Docker 24.0.5 | Used for packaging and deploying AI applications.
Orchestration | Kubernetes 1.27 | Manages and scales containerized applications.
All software is kept up to date via the Automated Patch Management process. Specific model deployment procedures are detailed in the Model Deployment Guide.
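After a node is provisioned, a short script along the following lines can sanity-check that the pinned versions above actually landed. This is a minimal sketch; the expected-version table simply mirrors the stack table above.

```python
"""Sanity-check the pinned AI stack versions on a freshly provisioned node."""
import sys

import tensorflow as tf
import torch

EXPECTED = {
    "python": "3.10",
    "tensorflow": "2.12.0",
    "torch": "2.0.1",
}

def check() -> int:
    actual = {
        "python": f"{sys.version_info.major}.{sys.version_info.minor}",
        "tensorflow": tf.__version__,
        "torch": torch.__version__.split("+")[0],  # strip local build tags like "+cu118"
    }
    # Collect any component whose installed version does not match the pin.
    failures = {k: (EXPECTED[k], v) for k, v in actual.items()
                if not v.startswith(EXPECTED[k])}
    for name, (want, got) in failures.items():
        print(f"MISMATCH {name}: expected {want}, found {got}")
    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(check())
```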
Network Configuration
The network infrastructure is designed to handle the high bandwidth requirements of AI workloads.
Network Segment | IP Range | Description
---|---|---
Management Network | 192.168.1.0/24 | Used for server management and monitoring.
AI Processing Network | 10.0.0.0/16 | Dedicated network for communication between AI processing servers.
Data Storage Network | 10.1.0.0/16 | Network for accessing data storage servers.
Public Network | (Dynamic via ISP) | Access to the internet for model updates and external APIs.
Firewall rules are configured according to the Network Security Policy, and network performance is monitored using the Network Performance Monitoring Tools. The full network diagram is available in the Network Topology Documentation. A Load Balancing System distributes traffic efficiently across the servers.
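A lightweight way to confirm that the segments above are reachable from a given host is a simple TCP probe, sketched below. The host addresses and ports are illustrative placeholders only; real endpoints live in the Network Topology Documentation.

```python
"""Minimal TCP reachability probe for the AI network segments.

All host addresses and ports below are hypothetical examples.
"""
import socket

PROBES = [
    ("management", "192.168.1.10", 22),    # SSH on an assumed management host
    ("ai-processing", "10.0.1.20", 6443),  # assumed Kubernetes API server
    ("data-storage", "10.1.1.30", 2049),   # assumed NFS export on a storage server
]

def reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for segment, host, port in PROBES:
    status = "ok" if reachable(host, port) else "UNREACHABLE"
    print(f"{segment:14s} {host}:{port} {status}")
```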
Data Flow
Data flows from various sources (e.g., Traffic Cameras, Police Databases, Citizen Service Portal) into the data storage servers. The AI processing servers then access this data, train models, and deploy them for inference. Results are then used by the respective applications. Detailed data lineage is tracked using Data Governance Tools.
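Reduced to a single illustrative hop, that flow looks roughly like the sketch below; all paths and model names here are hypothetical placeholders, not actual deployment values.

```python
"""One illustrative hop of the data flow: read landed sensor data from the
storage network and score it with a deployed model. Paths are hypothetical."""
import numpy as np
import tensorflow as tf

DATA_PATH = "/mnt/storage/traffic/frames.npy"   # assumed mount of the data storage network
MODEL_PATH = "/models/traffic-flow/latest"      # assumed Keras model export location

batch = np.load(DATA_PATH)                      # data landed by the ingest jobs
model = tf.keras.models.load_model(MODEL_PATH)  # model produced on the training servers
predictions = model.predict(batch)              # results consumed by the application
print(predictions.shape)
```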
Future Expansion
Planned upgrades include the addition of GPU servers for accelerated model training and increased storage capacity. We are also investigating the use of Federated Learning to improve data privacy. This expansion is documented in the Future Infrastructure Plan.
Intel-Based Server Configurations
Configuration | Specifications | CPU Benchmark
---|---|---
Core i7-6700K/7700 Server | 64 GB DDR4, 2 x 512 GB NVMe SSD | 8046
Core i7-8700 Server | 64 GB DDR4, 2 x 1 TB NVMe SSD | 13124
Core i9-9900K Server | 128 GB DDR4, 2 x 1 TB NVMe SSD | 49969
Core i9-13900 Server (64 GB) | 64 GB RAM, 2 x 2 TB NVMe SSD | n/a
Core i9-13900 Server (128 GB) | 128 GB RAM, 2 x 2 TB NVMe SSD | n/a
Core i5-13500 Server (64 GB) | 64 GB RAM, 2 x 500 GB NVMe SSD | n/a
Core i5-13500 Server (128 GB) | 128 GB RAM, 2 x 500 GB NVMe SSD | n/a
Core i5-13500 Workstation | 64 GB DDR5 RAM, 2 x NVMe SSD, NVIDIA RTX 4000 | n/a
AMD-Based Server Configurations
Configuration | Specifications | CPU Benchmark
---|---|---
Ryzen 5 3600 Server | 64 GB RAM, 2 x 480 GB NVMe | 17849
Ryzen 7 7700 Server | 64 GB DDR5 RAM, 2 x 1 TB NVMe | 35224
Ryzen 9 5950X Server | 128 GB RAM, 2 x 4 TB NVMe | 46045
Ryzen 9 7950X Server | 128 GB DDR5 ECC, 2 x 2 TB NVMe | 63561
EPYC 7502P Server (128 GB / 1 TB) | 128 GB RAM, 1 TB NVMe | 48021
EPYC 7502P Server (128 GB / 2 TB) | 128 GB RAM, 2 TB NVMe | 48021
EPYC 7502P Server (128 GB / 4 TB) | 128 GB RAM, 2 x 2 TB NVMe | 48021
EPYC 7502P Server (256 GB / 1 TB) | 256 GB RAM, 1 TB NVMe | 48021
EPYC 7502P Server (256 GB / 4 TB) | 256 GB RAM, 2 x 2 TB NVMe | 48021
EPYC 9454P Server | 256 GB RAM, 2 x 2 TB NVMe | n/a
Note: all benchmark scores are approximate and may vary based on configuration.