AI in Tunbridge Wells - Server Configuration
This article details the server configuration supporting the "AI in Tunbridge Wells" project, a local initiative focused on applying artificial intelligence to community problems. It is aimed at new contributors and system administrators who maintain the project's infrastructure, and it assumes familiarity with basic Linux administration and networking. For general MediaWiki help, see Help:Contents.
Project Overview
The "AI in Tunbridge Wells" project utilizes a multi-server architecture to handle data ingestion, model training, and API serving. The system is designed for scalability and resilience. The core components are detailed below, with specific server configurations outlined in the following sections. We utilize a Git repository for all configuration files. See Manual:Configuration for general MediaWiki configuration information.
Server Roles & Architecture
The project employs three primary server roles:
- Data Ingestion Server: Responsible for collecting, cleaning, and preparing data for model training.
- Training Server: Hosts the resources for training and validating AI models. This includes powerful GPUs.
- API Server: Serves trained models via a RESTful API for use by local applications and services.
These servers are interconnected over a VPN for secure communication. Network configuration is managed with internal DNS and DHCP, and remote administration is performed over SSH; an example client configuration is sketched below.
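For illustration, a client-side SSH configuration for the three servers might look like the following. The hostnames, VPN addresses, username, and key path are hypothetical placeholders, not the project's actual values.

```
# ~/.ssh/config — hypothetical entries for the three project servers.
# Hostnames, addresses, user, and key path are placeholders.
Host ingest
    HostName 10.8.0.10        # VPN address of the Data Ingestion Server
    User admin
    IdentityFile ~/.ssh/tw-ai_ed25519
    IdentitiesOnly yes

Host training
    HostName 10.8.0.11        # VPN address of the Training Server
    User admin
    IdentityFile ~/.ssh/tw-ai_ed25519
    IdentitiesOnly yes

Host api
    HostName 10.8.0.12        # VPN address of the API Server
    User admin
    IdentityFile ~/.ssh/tw-ai_ed25519
    IdentitiesOnly yes
```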
Data Ingestion Server Configuration
This server handles data acquisition from various sources, including public datasets and local sensors. It runs a custom Python script for data wrangling and storage.
Component | Specification | Version |
---|---|---|
Operating System | Ubuntu Server 22.04 LTS | Kernel 5.15.0-76-generic |
CPU | Intel Xeon E3-1225 v6 | N/A |
RAM | 32 GB DDR4 | N/A |
Storage | 2 x 2 TB SATA III HDD (RAID 1) | N/A |
Network Interface | 1 Gbps Ethernet | N/A |
Database | PostgreSQL | 14.9 |
Key software includes Python 3.10, PostgreSQL 14, and common data-processing libraries (Pandas, NumPy, scikit-learn); a simplified sketch of the wrangling pattern follows.
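The production wrangling script is project-specific, but a minimal sketch of the general pattern (Pandas for cleaning, SQLAlchemy for loading into PostgreSQL) might look like this. The connection string, table name, and CSV path are hypothetical:

```python
# sketch_ingest.py - minimal data-wrangling sketch; all names are hypothetical.
import pandas as pd
from sqlalchemy import create_engine

# Placeholder connection string; real credentials live outside version control.
ENGINE = create_engine("postgresql+psycopg2://ingest:secret@localhost:5432/twai")

def ingest_csv(path: str, table: str) -> int:
    """Load a raw CSV, apply basic cleaning, and append it to PostgreSQL."""
    df = pd.read_csv(path)
    df = df.dropna(how="all")                             # drop completely empty rows
    df.columns = [c.strip().lower() for c in df.columns]  # normalise column names
    df = df.drop_duplicates()
    df.to_sql(table, ENGINE, if_exists="append", index=False)
    return len(df)

if __name__ == "__main__":
    rows = ingest_csv("sensor_readings.csv", "sensor_readings")
    print(f"Loaded {rows} rows.")
```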
Training Server Configuration
The Training Server is the most resource-intensive component, requiring high-performance GPUs for model training.
Component | Specification | Version |
---|---|---|
Operating System | Ubuntu Server 22.04 LTS | Kernel 5.15.0-76-generic |
CPU | AMD Ryzen Threadripper 3970X | N/A |
RAM | 64 GB DDR4 ECC | N/A |
Storage | 1 x 1 TB NVMe SSD (OS) + 4 x 8 TB SATA III HDD (RAID 0) | N/A |
GPU | 2 x NVIDIA GeForce RTX 3090 | Driver 528.47.85 |
CUDA Toolkit | N/A | 11.8 |
Deep Learning Framework | TensorFlow | 2.12.0 |
This server uses NVIDIA’s CUDA toolkit and TensorFlow for GPU-accelerated training. Regular backups are performed using rsync. A quick check that TensorFlow can see both GPUs is sketched below.
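Before starting a long training run it is worth confirming that TensorFlow actually detects both RTX 3090s. A minimal check, assuming the TensorFlow 2.12 / CUDA 11.8 stack listed above:

```python
# gpu_check.py - verify that TensorFlow detects both GPUs before training.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print(f"TensorFlow {tf.__version__} sees {len(gpus)} GPU(s):")
for gpu in gpus:
    print(" ", gpu.name)

# Let GPU memory grow on demand instead of reserving each card's full 24 GB up front.
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
```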
API Server Configuration
The API Server provides access to trained models via a RESTful interface. It’s built using Flask and deployed behind a reverse proxy (Nginx).
Component | Specification | Version |
---|---|---|
Operating System | Debian 11 | Kernel 5.10.0-23-amd64 |
CPU | Intel Core i5-10400 | N/A |
RAM | 16 GB DDR4 | N/A |
Storage | 500 GB SATA III SSD | N/A |
Web Server | Nginx | 1.22.1 |
Application Framework | Flask | 2.3.2 |
Python | N/A | 3.9 |
Nginx handles SSL termination and load balancing, and the API is documented with Swagger (OpenAPI). The application process is managed by systemd; a minimal sketch of the service and its unit file follows.
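The real API serves the trained models, but a minimal sketch of the Flask pattern, with a hypothetical /predict endpoint, might look like this:

```python
# api_sketch.py - minimal Flask API sketch; the endpoint and payload are hypothetical.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/health")
def health():
    """Simple liveness endpoint."""
    return jsonify(status="ok")

@app.route("/predict", methods=["POST"])
def predict():
    """Accept a JSON payload and return a (placeholder) model prediction."""
    payload = request.get_json(force=True)
    # In production this would invoke the trained model; stubbed out here.
    return jsonify(input=payload, prediction=None)

if __name__ == "__main__":
    # In production the app runs under a WSGI server behind Nginx, not the dev server.
    app.run(host="127.0.0.1", port=5000)
```

A corresponding systemd unit, assuming a Gunicorn WSGI server and hypothetical paths and user, might look like:

```ini
# /etc/systemd/system/twai-api.service - hypothetical unit; paths and user are placeholders.
[Unit]
Description=AI in Tunbridge Wells API
After=network.target

[Service]
User=apiuser
WorkingDirectory=/opt/twai-api
ExecStart=/opt/twai-api/venv/bin/gunicorn -w 4 -b 127.0.0.1:5000 api_sketch:app
Restart=on-failure

[Install]
WantedBy=multi-user.target
```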
Security Considerations
All servers are protected by a firewall (UFW), and regular security audits are conducted. Access is restricted to authorized personnel via SSH with key-based authentication, and data is encrypted both in transit and at rest. We follow the practices outlined in Manual:Security issues; a representative UFW baseline is sketched below.
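For illustration, a baseline UFW policy for the API Server might look like the following. The port list is an assumption, since this article does not enumerate the project's open ports:

```sh
# Hypothetical UFW baseline for the API Server; adjust ports to the actual deployment.
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow OpenSSH        # key-based SSH administration
sudo ufw allow 443/tcp        # HTTPS terminated by Nginx
sudo ufw enable
sudo ufw status verbose
```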
Future Enhancements
Planned enhancements include:
- Implementing a containerization strategy using Docker and Kubernetes (a possible Dockerfile is sketched after this list).
- Automating the deployment process using Ansible.
- Exploring additional machine learning frameworks such as PyTorch.
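As a sketch of the planned containerization work, a Dockerfile for the API Server might look like this. Nothing below is deployed; the file layout and Gunicorn entry point are assumptions carried over from the service sketch above:

```dockerfile
# Hypothetical Dockerfile for the API Server; paths and entry point are placeholders.
FROM python:3.9-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
EXPOSE 5000

# Run under Gunicorn, matching the systemd setup sketched above.
CMD ["gunicorn", "-w", "4", "-b", "0.0.0.0:5000", "api_sketch:app"]
```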