AI in Farnham: Server Configuration Documentation
This document details the server configuration for the "AI in Farnham" project, a research initiative exploring the application of artificial intelligence within the local Farnham community. This guide is intended for new system administrators and developers contributing to the project. It covers hardware specifications, software stack, network configuration, and security considerations.
Overview
The "AI in Farnham" project relies on a distributed server infrastructure to handle the demands of data processing, model training, and real-time inference. The primary server, nicknamed "Ada," is the central hub, while several edge devices ("Turing Units") collect and pre-process data. This documentation focuses primarily on the configuration of Ada, the central server.
Hardware Specifications
Ada is a dedicated server hosted within the University of Surrey's Farnham campus data center. The following table details its key hardware components:
Component | Specification | Quantity | Notes |
---|---|---|---|
CPU | Intel Xeon Gold 6248R (24 cores) | 2 | High core count for parallel processing. |
RAM | 256GB DDR4 ECC Registered | 1 | Crucial for handling large datasets. |
Storage (OS) | 512GB NVMe SSD | 1 | Fast boot and system responsiveness. |
Storage (Data) | 16TB RAID 6 HDD | 1 | Redundant storage for data durability. |
GPU | NVIDIA RTX A6000 (48GB VRAM) | 2 | Accelerates machine learning workloads. |
Network Interface | 10 Gigabit Ethernet | 2 | High-bandwidth connectivity. |
Power Supply | 1600W Redundant | 2 | Ensures high availability. |
Software Stack
The software stack is designed for flexibility and ease of maintenance: a Linux base system, containerized services, and standard `apt` package management.
Operating System
- Distribution: Ubuntu Server 22.04 LTS
- Kernel: 5.15.0-76-generic
Containerization
- Platform: Docker 20.10.17
- Orchestration: Docker Compose 2.17.2 – Used for managing multi-container applications. See Docker Compose Documentation for details.
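A minimal Compose file for a stack like this might look as follows. This is an illustrative sketch only: the service names, the `ai-farnham/inference-api` image, and the exposed port are assumptions, not part of the actual deployment.

```yaml
# docker-compose.yml — illustrative sketch; service and image names are hypothetical
services:
  inference-api:
    image: ai-farnham/inference-api:latest   # hypothetical project image
    restart: unless-stopped
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres:14.7
    restart: unless-stopped
    environment:
      POSTGRES_DB: ai_farnham
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    volumes:
      - pgdata:/var/lib/postgresql/data
    secrets:
      - db_password

volumes:
  pgdata:

secrets:
  db_password:
    file: ./secrets/db_password.txt
```

Note that Compose v2 no longer requires a top-level `version:` key. The stack is started with `docker compose up -d` and inspected with `docker compose ps`.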
Programming Languages & Frameworks
- Python: 3.10.6 – Primary language for AI development. See Python Programming Guide.
- TensorFlow: 2.12.0 – Machine learning framework. See TensorFlow Tutorials.
- PyTorch: 2.0.1 – Alternative machine learning framework. See PyTorch Documentation.
- Scikit-learn: 1.2.2 – Machine learning library for data analysis. See Scikit-learn User Guide.
Database
- PostgreSQL: 14.7 – Relational database for storing metadata and results. See PostgreSQL Administration.
Web Server
- Nginx: 1.22.1 – Reverse proxy and web server. See Nginx Configuration.
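As a sketch of the reverse-proxy role described above, an Nginx site configuration might resemble the following. The backend port (8000) and certificate paths are assumptions for illustration; the real values depend on the deployed containers and the certificates issued for the host.

```nginx
# /etc/nginx/sites-available/ai-farnham — illustrative sketch
server {
    listen 443 ssl;
    server_name ada.ai-farnham.surrey.ac.uk;

    ssl_certificate     /etc/ssl/certs/ada.crt;      # path is an assumption
    ssl_certificate_key /etc/ssl/private/ada.key;    # path is an assumption

    location / {
        proxy_pass http://127.0.0.1:8000;            # hypothetical backend port
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

server {
    listen 80;
    server_name ada.ai-farnham.surrey.ac.uk;
    return 301 https://$host$request_uri;            # redirect HTTP to HTTPS
}
```

Changes should be checked with `sudo nginx -t` before reloading the service.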
Network Configuration
The server is configured with a static IP address and uses DNS for hostname resolution. Network segmentation is implemented to isolate the AI services from the general university network.
Parameter | Value |
---|---|
Hostname | ada.ai-farnham.surrey.ac.uk |
IP Address | 192.168.1.10 |
Subnet Mask | 255.255.255.0 |
Gateway | 192.168.1.1 |
DNS Servers | 8.8.8.8, 8.8.4.4 |
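On Ubuntu Server 22.04, the static addressing above is normally applied through Netplan. A sketch of the corresponding file follows; the interface name `eno1` is an assumption and should be replaced with the actual interface (see `ip link`).

```yaml
# /etc/netplan/01-static.yaml — sketch; interface name (eno1) is an assumption
network:
  version: 2
  ethernets:
    eno1:
      dhcp4: false
      addresses:
        - 192.168.1.10/24
      routes:
        - to: default
          via: 192.168.1.1
      nameservers:
        addresses: [8.8.8.8, 8.8.4.4]
```

The configuration is applied with `sudo netplan apply` (or tested reversibly with `sudo netplan try`).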
Firewall rules are managed using `ufw` (Uncomplicated Firewall). Only necessary ports are opened to the outside world, including SSH (port 22), HTTP (port 80), and HTTPS (port 443). See UFW Configuration Guide. Internal communication between containers is managed through Docker's internal networking.
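A rule set matching the ports listed above could be established as follows (run as root; this is an administrative sketch, not a transcript of the live configuration):

```shell
# Illustrative ufw rule set for the ports listed above
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp    # SSH
sudo ufw allow 80/tcp    # HTTP
sudo ufw allow 443/tcp   # HTTPS
sudo ufw enable
sudo ufw status verbose  # confirm the active rules
```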
Security Considerations
Security is paramount. The following measures are in place to protect the server and its data:
- SSH Access: Restricted to authorized keys only. Password authentication is disabled. See SSH Key Management.
- Firewall: `ufw` is enabled and configured with strict rules.
- Regular Updates: The operating system and software packages are updated regularly using `apt`. See System Updates.
- Data Encryption: Sensitive data is encrypted at rest and in transit.
- Intrusion Detection: Fail2ban is used to detect and block malicious login attempts. See Fail2ban Configuration.
- User Access Control: Principle of least privilege is enforced. Users are granted only the necessary permissions. See User Account Management.
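The SSH restrictions above can be expressed as a drop-in `sshd_config` fragment. This is a sketch of typical key-only hardening, not the server's actual file; the filename is an assumption.

```
# /etc/ssh/sshd_config.d/50-hardening.conf — illustrative hardening fragment
PasswordAuthentication no
PubkeyAuthentication yes
PermitRootLogin no
KbdInteractiveAuthentication no
MaxAuthTries 3
X11Forwarding no
```

Validate the configuration with `sudo sshd -t` before reloading (`sudo systemctl reload ssh`), and keep an existing session open until key-based login is confirmed to work.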
Data Storage & Backup
Data is stored on the 16TB RAID 6 array. Backups are performed with `rsync` to an offsite storage location on a schedule of weekly full backups and daily incremental backups. These are stored securely according to Data Backup Policies.
Backup Type | Frequency | Destination |
---|---|---|
Full Backup | Weekly | Offsite Secure Storage |
Incremental Backup | Daily | Offsite Secure Storage |
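The schedule above might be driven by cron entries along these lines. The source path, remote host, and flags are assumptions for illustration: `rsync` to a fixed destination transfers only changed files (the daily incremental), while a dated destination directory produces a standalone weekly copy.

```shell
# Illustrative root crontab entries; paths, host, and flags are assumptions
# Daily incremental at 02:00 — rsync copies only files changed since the last run
0 2 * * * rsync -az --delete /srv/data/ backup@offsite.example.ac.uk:/backups/ada/current/
# Weekly full copy on Sundays at 03:00, into a dated directory
0 3 * * 0 rsync -az /srv/data/ backup@offsite.example.ac.uk:/backups/ada/full-$(date +\%Y\%m\%d)/
```

Note that `%` must be escaped as `\%` inside crontab entries, as above.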
Monitoring and Logging
System performance and security events are monitored using Prometheus and Grafana. Logs are collected and analyzed using the ELK stack (Elasticsearch, Logstash, Kibana). See Prometheus Monitoring Guide and ELK Stack Setup. Alerts are configured to notify administrators of critical issues.
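A minimal Prometheus scrape configuration for a host like Ada might look as follows. The job names and targets are assumptions: they presuppose `node_exporter` (host metrics) and cAdvisor (container metrics) running on their default ports.

```yaml
# prometheus.yml — illustrative scrape config; targets are assumptions
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: node
    static_configs:
      - targets: ["localhost:9100"]   # node_exporter, if installed
  - job_name: cadvisor
    static_configs:
      - targets: ["localhost:8080"]   # cAdvisor, if installed
```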
Future Enhancements
Planned enhancements include:
- Implementing a more robust intrusion detection system.
- Automating the deployment process using Ansible. See Ansible Automation.
- Adding support for additional machine learning frameworks.
- Scaling the infrastructure to accommodate increasing data volumes.
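As a starting point for the planned Ansible automation, a playbook might take roughly this shape. Everything here is hypothetical: the host group, package list, and module choices would need to be confirmed against the actual inventory.

```yaml
# site.yml — illustrative sketch for the planned automation; all names are assumptions
- hosts: ada
  become: true
  tasks:
    - name: Ensure base packages are present
      ansible.builtin.apt:
        name: [docker.io, ufw, fail2ban]
        state: present
        update_cache: true

    - name: Ensure ufw is enabled with default-deny incoming
      community.general.ufw:
        state: enabled
        policy: deny
        direction: incoming
```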