AI in Leicester
AI in Leicester: Server Configuration Documentation
This document details the server configuration supporting the "AI in Leicester" project and is intended as a technical resource for new system administrators and developers working with the platform. The system uses a distributed architecture to handle the significant computational demands of its machine learning models. Please review the System Architecture Overview before proceeding.
Overview
The "AI in Leicester" project focuses on applying artificial intelligence to urban challenges within the city of Leicester. This requires processing large datasets related to traffic flow, environmental monitoring, and public services. The server infrastructure is designed for scalability, reliability, and performance. It’s crucial to understand the Data Flow Diagram to appreciate how these components interact. The core components are detailed below and explained further in the Deployment Guide.
Hardware Specifications
The server infrastructure consists of three primary tiers: Data Ingestion, Processing, and Serving. Each tier has specific hardware requirements.
Tier | Server Role | CPU | RAM | Storage | Network Interface |
---|---|---|---|---|---|
Data Ingestion | Data Collectors | 2 x Intel Xeon Silver 4310 | 64 GB DDR4 ECC | 4 x 4TB SATA HDD (RAID 10) | 10 Gbps Ethernet |
Processing | Model Training Nodes | 2 x AMD EPYC 7763 | 256 GB DDR4 ECC | 2 x 2TB NVMe SSD (RAID 1) | 100 Gbps Infiniband |
Serving | Inference Servers | 2 x Intel Xeon Gold 6338 | 128 GB DDR4 ECC | 2 x 1TB NVMe SSD (RAID 1) | 25 Gbps Ethernet |
These specifications are subject to change based on performance monitoring and evolving project needs. Refer to the Hardware Revision History for the latest updates. Regular hardware audits are performed as detailed in the Maintenance Schedule.
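The RAID levels in the table above determine how much of the raw disk capacity is actually usable. As a minimal sketch of that arithmetic (the `TIERS` structure and `usable_tb` helper are illustrative, not an official capacity tool):

```python
# Hypothetical sketch: usable capacity implied by the RAID levels in the
# hardware table above. Figures are copied from the table; the helper itself
# is illustrative.

RAID_USABLE_FRACTION = {
    "RAID 1": 0.5,   # mirrored pair: half of raw capacity is usable
    "RAID 10": 0.5,  # striped mirrors: also half of raw capacity
}

TIERS = {
    "Data Ingestion": {"disks": 4, "disk_tb": 4, "raid": "RAID 10"},
    "Processing":     {"disks": 2, "disk_tb": 2, "raid": "RAID 1"},
    "Serving":        {"disks": 2, "disk_tb": 1, "raid": "RAID 1"},
}

def usable_tb(tier: dict) -> float:
    """Raw capacity times the usable fraction for the tier's RAID level."""
    raw = tier["disks"] * tier["disk_tb"]
    return raw * RAID_USABLE_FRACTION[tier["raid"]]
```

For example, the Data Ingestion tier's 4 x 4TB RAID 10 array yields roughly 8 TB of usable space, not 16 TB.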
Software Stack
The software stack is built around a Linux operating system, utilizing containerization for deployment and management. Specifically, we leverage Ubuntu Server 22.04 LTS as the base OS.
Component | Software | Version | Purpose |
---|---|---|---|
Operating System | Ubuntu Server | 22.04 LTS | Base OS for all servers |
Containerization | Docker | 20.10.14 | Application packaging and deployment |
Orchestration | Kubernetes | 1.25.4 | Container orchestration and management |
Machine Learning Framework | TensorFlow | 2.12.0 | Core ML framework |
Data Storage | PostgreSQL | 14.7 | Database for metadata and processed data |
Message Queue | RabbitMQ | 3.9.11 | Asynchronous communication between services |
All code is version controlled using Git and hosted on a private GitLab instance. Continuous Integration and Continuous Deployment (CI/CD) pipelines are implemented using Jenkins.
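Because the stack pins exact versions, a deployment check can compare what a server reports against the table above. A minimal sketch, assuming exact-match pinning (the `PINNED` mapping mirrors the table; the helper names are illustrative and not part of the project's CI):

```python
# Hypothetical sketch: encoding the pinned versions from the software stack
# table and checking a reported version against the pin.

PINNED = {
    "docker": "20.10.14",
    "kubernetes": "1.25.4",
    "tensorflow": "2.12.0",
    "postgresql": "14.7",
    "rabbitmq": "3.9.11",
}

def parse(version: str) -> tuple:
    """Split a dotted version string into a tuple of integers."""
    return tuple(int(part) for part in version.split("."))

def matches_pin(component: str, reported: str) -> bool:
    """True only if the reported version exactly matches the pinned one."""
    return parse(reported) == parse(PINNED[component])
```

Exact matching (rather than a minimum-version check) reflects the table's intent: the stack is validated against specific releases, so any drift should be flagged.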
Network Configuration
The network is segmented into three zones: Public, DMZ, and Private. The Data Ingestion servers reside in the DMZ, while the Processing and Serving tiers are located within the Private network for enhanced security. Firewalls are configured using iptables to restrict access based on the principle of least privilege.
Zone | Subnet | Access Control | Key Services |
---|---|---|---|
Public | 192.168.1.0/24 | Limited to HTTP/HTTPS | Load Balancers |
DMZ | 172.16.0.0/24 | Restricted to necessary ports | Data Collectors, API Gateway |
Private | 10.0.0.0/16 | Internal communication only | Model Training Nodes, Inference Servers, Database |
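The zone boundaries above can be expressed directly with the standard library's `ipaddress` module. A minimal sketch (the `zone_of` helper is illustrative, not part of the firewall tooling; the subnets are the documented ones):

```python
# Hypothetical sketch: classifying an address into the network zones from the
# table above, using only the Python standard library.
import ipaddress

ZONES = {
    "Public":  ipaddress.ip_network("192.168.1.0/24"),
    "DMZ":     ipaddress.ip_network("172.16.0.0/24"),
    "Private": ipaddress.ip_network("10.0.0.0/16"),
}

def zone_of(addr: str) -> str:
    """Return the zone whose subnet contains addr, or 'unknown'."""
    ip = ipaddress.ip_address(addr)
    for name, net in ZONES.items():
        if ip in net:
            return name
    return "unknown"
```

This kind of lookup is handy when auditing firewall rules: any flow between "unknown" and "Private" endpoints is immediately suspicious under the least-privilege policy.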
DNS resolution is handled by BIND9 servers. Regular network security audits are conducted as outlined in the Security Policy. Monitoring is handled by Prometheus and Grafana, which provide detailed insight into network traffic and performance.
Security Considerations
Security is paramount. All servers are hardened according to the Security Hardening Guide. Regular vulnerability scans are performed using Nessus. Access to the servers is controlled using SSH keys and multi-factor authentication. Data encryption is implemented both in transit and at rest. The Incident Response Plan details procedures for handling security breaches.
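One concrete hardening check implied by the key-plus-MFA policy is verifying that password logins are disabled in `sshd_config`. A minimal sketch, assuming a plain configuration file without `Match` blocks or includes (the parser is illustrative, not the project's actual audit tooling):

```python
# Hypothetical sketch: checking that an sshd_config snippet explicitly
# disables password authentication. OpenSSH honours the FIRST occurrence of
# a keyword, which the early return below mirrors.

def password_auth_disabled(config_text: str) -> bool:
    """Return True if PasswordAuthentication is explicitly set to 'no'."""
    for line in config_text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments and whitespace
        if not line:
            continue
        parts = line.split(None, 1)
        if len(parts) == 2 and parts[0].lower() == "passwordauthentication":
            return parts[1].strip().lower() == "no"
    # Keyword absent: do not assume the compiled-in default is safe.
    return False
```

Treating an absent keyword as a failure is deliberate: relying on compiled-in defaults makes the audit dependent on the OpenSSH build rather than on explicit policy.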
Future Enhancements
Planned enhancements include migrating to a GPU cluster for accelerated model training and exploring the use of Federated Learning to improve data privacy. We are also evaluating the use of Kafka as a replacement for RabbitMQ to handle higher message throughput.
Intel-Based Server Configurations
Configuration | Specifications | Benchmark |
---|---|---|
Core i7-6700K/7700 Server | 64 GB DDR4, 2x512 GB NVMe SSD | CPU Benchmark: 8046 |
Core i7-8700 Server | 64 GB DDR4, 2x1 TB NVMe SSD | CPU Benchmark: 13124 |
Core i9-9900K Server | 128 GB DDR4, 2x1 TB NVMe SSD | CPU Benchmark: 49969 |
Core i9-13900 Server (64GB) | 64 GB RAM, 2x2 TB NVMe SSD | |
Core i9-13900 Server (128GB) | 128 GB RAM, 2x2 TB NVMe SSD | |
Core i5-13500 Server (64GB) | 64 GB RAM, 2x500 GB NVMe SSD | |
Core i5-13500 Server (128GB) | 128 GB RAM, 2x500 GB NVMe SSD | |
Core i5-13500 Workstation | 64 GB DDR5 RAM, 2x NVMe SSD, NVIDIA RTX 4000 | |
AMD-Based Server Configurations
Configuration | Specifications | Benchmark |
---|---|---|
Ryzen 5 3600 Server | 64 GB RAM, 2x480 GB NVMe | CPU Benchmark: 17849 |
Ryzen 7 7700 Server | 64 GB DDR5 RAM, 2x1 TB NVMe | CPU Benchmark: 35224 |
Ryzen 9 5950X Server | 128 GB RAM, 2x4 TB NVMe | CPU Benchmark: 46045 |
Ryzen 9 7950X Server | 128 GB DDR5 ECC, 2x2 TB NVMe | CPU Benchmark: 63561 |
EPYC 7502P Server (128GB/1TB) | 128 GB RAM, 1 TB NVMe | CPU Benchmark: 48021 |
EPYC 7502P Server (128GB/2TB) | 128 GB RAM, 2 TB NVMe | CPU Benchmark: 48021 |
EPYC 7502P Server (128GB/4TB) | 128 GB RAM, 2x2 TB NVMe | CPU Benchmark: 48021 |
EPYC 7502P Server (256GB/1TB) | 256 GB RAM, 1 TB NVMe | CPU Benchmark: 48021 |
EPYC 7502P Server (256GB/4TB) | 256 GB RAM, 2x2 TB NVMe | CPU Benchmark: 48021 |
EPYC 9454P Server | 256 GB RAM, 2x2 TB NVMe | |
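When comparing the configurations above, it can help to filter by a RAM floor and rank by the published benchmark. A minimal sketch over the AMD table's data (the `best_config` helper is illustrative; rows without a published score, such as the EPYC 9454P, are omitted):

```python
# Hypothetical sketch: (name, RAM in GB, CPU benchmark) tuples copied from the
# AMD table above, with a helper that picks the highest-benchmark option
# meeting a minimum-RAM requirement.

AMD_CONFIGS = [
    ("Ryzen 5 3600 Server",   64, 17849),
    ("Ryzen 7 7700 Server",   64, 35224),
    ("Ryzen 9 5950X Server", 128, 46045),
    ("Ryzen 9 7950X Server", 128, 63561),
    ("EPYC 7502P Server",    256, 48021),
]

def best_config(min_ram_gb: int) -> str:
    """Name of the highest-benchmark configuration with at least min_ram_gb RAM."""
    eligible = [c for c in AMD_CONFIGS if c[1] >= min_ram_gb]
    return max(eligible, key=lambda c: c[2])[0]
```

For instance, with a 256 GB floor only the EPYC 7502P qualifies, while at 128 GB the Ryzen 9 7950X's higher score wins out.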
*Note: All benchmark scores are approximate and may vary based on configuration.*