AI in Hampshire: Server Configuration and Deployment
This article details the server configuration for the "AI in Hampshire" project, a local initiative using Artificial Intelligence for county-wide data analysis. It is aimed at new system administrators and developers joining the project and outlines the hardware, software, and network setup.
Project Overview
The "AI in Hampshire" project aims to improve public services through data-driven insights. This involves collecting, processing, and analyzing data related to transportation, healthcare, and environmental factors. The server infrastructure is designed for scalability, reliability, and security. The core technology is based on Python and utilises several machine learning libraries like TensorFlow and PyTorch. Data is initially stored in a PostgreSQL database before being processed by the AI models. We also utilize a Redis cache for frequently accessed data.
Hardware Configuration
The server infrastructure consists of three primary server types: Data Ingestion, Processing, and Serving. Each type has specific hardware requirements detailed below. All servers are located in a secure data center in Winchester.
| Server Type | CPU | RAM | Storage | Network Interface |
| --- | --- | --- | --- | --- |
| Data Ingestion | Intel Xeon Silver 4310 (12 cores) | 64 GB DDR4 ECC | 4 TB NVMe SSD (RAID 1) | 10 Gbps Ethernet |
| Processing | 2 x AMD EPYC 7763 (64 cores each) | 256 GB DDR4 ECC | 8 x 4 TB NVMe SSD (RAID 0) | 100 Gbps Ethernet |
| Serving | Intel Xeon Gold 6338 (32 cores) | 128 GB DDR4 ECC | 2 x 2 TB NVMe SSD (RAID 1) | 25 Gbps Ethernet |
All servers run on a VMware ESXi hypervisor, allowing for flexible resource allocation and simplified management. Virtual machines are allocated based on workload demands. The physical servers are monitored using Nagios for uptime and performance.
Software Configuration
The software stack is designed for efficient AI model training and deployment. The operating system used across all servers is Ubuntu Server 22.04 LTS.
| Server Type | Operating System | Core Software | Security Software |
| --- | --- | --- | --- |
| Data Ingestion | Ubuntu Server 22.04 LTS | Apache Kafka, Logstash, Fluentd | Fail2Ban, UFW |
| Processing | Ubuntu Server 22.04 LTS | Python 3.10, TensorFlow 2.12, PyTorch 1.13, CUDA Toolkit 11.8, cuDNN 8.6 | SELinux, ClamAV |
| Serving | Ubuntu Server 22.04 LTS | Flask, Gunicorn, Nginx | Snort, Suricata |
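On the serving tier, trained models are exposed over HTTP through Flask, run under Gunicorn, and fronted by Nginx. The following is a minimal sketch of such an endpoint; the /predict route, the payload layout, and the stand-in predict() function are illustrative assumptions, not the project's actual API.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def predict(features):
    """Stand-in for the real model inference call (hypothetical)."""
    return {"score": 0.0}

@app.route("/predict", methods=["POST"])
def predict_endpoint():
    # Expect a JSON body such as {"features": [...]}.
    payload = request.get_json(force=True)
    result = predict(payload.get("features", []))
    return jsonify(result)

if __name__ == "__main__":
    # For local testing only; in production the app would typically be served by
    # Gunicorn behind Nginx, e.g.: gunicorn --workers 4 --bind 127.0.0.1:8000 app:app
    app.run(port=8000)
```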
All source code is managed with Git and hosted on a private GitLab instance. Continuous Integration/Continuous Deployment (CI/CD) pipelines in Jenkins automate the build and deployment process, and applications are packaged in Docker containers for isolation and portability.
Network Configuration
The network infrastructure is segmented to enhance security and performance. Each server type resides on a separate VLAN.
| VLAN ID | Server Type | Subnet | Gateway |
| --- | --- | --- | --- |
| 10 | Data Ingestion | 192.168.10.0/24 | 192.168.10.1 |
| 20 | Processing | 192.168.20.0/24 | 192.168.20.1 |
| 30 | Serving | 192.168.30.0/24 | 192.168.30.1 |
A dedicated Cisco ASA 5516-X firewall protects the network perimeter. Internal communication between servers is secured with TLS/SSL encryption, and the servers use DNSSEC for enhanced DNS security. Network monitoring is performed with PRTG Network Monitor, and incoming requests to the serving tier are routed through a reverse proxy.
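Because each tier sits in its own VLAN, it is straightforward to sanity-check that a host's address belongs to the expected segment. Below is a minimal sketch using Python's standard ipaddress module; the example address is illustrative.

```python
import ipaddress

# VLAN subnets from the table above.
VLAN_SUBNETS = {
    10: ipaddress.ip_network("192.168.10.0/24"),  # Data Ingestion
    20: ipaddress.ip_network("192.168.20.0/24"),  # Processing
    30: ipaddress.ip_network("192.168.30.0/24"),  # Serving
}

def vlan_for_address(address: str):
    """Return the VLAN ID whose subnet contains the address, or None if unmatched."""
    ip = ipaddress.ip_address(address)
    for vlan_id, subnet in VLAN_SUBNETS.items():
        if ip in subnet:
            return vlan_id
    return None

print(vlan_for_address("192.168.20.17"))  # -> 20 (a hypothetical processing node)
```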
Future Considerations
Future plans include expanding the processing cluster with additional GPU servers and implementing a distributed database system like CockroachDB for improved scalability and fault tolerance. We are also investigating the use of Kubernetes for more robust container orchestration. The long-term goal is to create a fully automated and self-healing infrastructure.