AI in Reigate

From Server rental store

This article details the server configuration powering the “AI in Reigate” project, a local initiative utilizing artificial intelligence for community benefit. This document is intended for system administrators and those contributing to the project’s infrastructure. Understanding this configuration is crucial for maintenance, scaling, and troubleshooting. Please refer to our Development Standards page before making any changes.

Overview

The “AI in Reigate” project relies on a cluster of servers located in a secure data center in Reigate, Surrey. These servers handle data ingestion, model training, inference, and API access. The chosen architecture prioritizes scalability, redundancy, and security. We use a hybrid cloud approach, with core processing handled on-premise and some data storage utilizing Cloud Storage Providers. This allows for cost optimization and data sovereignty compliance. Detailed information on our Data Privacy Policy is available.

Hardware Configuration

The core of the infrastructure consists of three primary server nodes, each with dedicated roles: Master Node, Worker Node 1, and Worker Node 2. Each node runs Ubuntu Server 22.04 LTS.

Server Node | Role | CPU | RAM | Storage
Master Node | Orchestration, API Gateway, Database Server | Intel Xeon Gold 6248R (24 cores) | 128 GB DDR4 ECC | 2 x 1 TB NVMe SSD (RAID 1)
Worker Node 1 | Model Training, Data Preprocessing | AMD EPYC 7763 (64 cores) | 256 GB DDR4 ECC | 4 x 4 TB SATA HDD (RAID 10) + 1 x 500 GB NVMe SSD
Worker Node 2 | Model Inference, Real-time Data Analysis | Intel Xeon Platinum 8280 (28 cores) | 128 GB DDR4 ECC | 2 x 1 TB NVMe SSD (RAID 1) + 1 x 2 TB SATA HDD

Networking is handled by a dedicated Gigabit Ethernet switch with link aggregation configured for increased bandwidth and redundancy. A separate Firewall Configuration document outlines security measures.
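On Ubuntu Server 22.04, link aggregation of this kind would typically be configured through netplan. The sketch below shows one plausible shape; the interface names, bonding mode, and address are illustrative assumptions, not the cluster's actual settings.

```yaml
# /etc/netplan/01-bond.yaml -- illustrative sketch; interface names assumed
network:
  version: 2
  ethernets:
    eno1: {}
    eno2: {}
  bonds:
    bond0:
      interfaces: [eno1, eno2]
      parameters:
        mode: 802.3ad              # LACP link aggregation for bandwidth + redundancy
        mii-monitor-interval: 100  # link health check interval (ms)
      addresses: [192.168.10.10/24]
```

With `mode: 802.3ad`, the switch-side ports must also be configured for LACP; a failure of either link leaves the bond operational on the surviving interface.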

Software Stack

The software stack is built around Python 3.9 and utilizes several key libraries and frameworks. We use Docker for containerization and Kubernetes for orchestration.

Software Component | Version | Purpose
Python | 3.9.18 | Core programming language
TensorFlow | 2.12.0 | Machine learning framework
PyTorch | 2.0.1 | Machine learning framework
Docker | 20.10.21 | Containerization
Kubernetes | 1.26.3 | Container orchestration
PostgreSQL | 14.8 | Database management system
Nginx | 1.23.3 | Reverse proxy and web server
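For context, a containerized inference service under Kubernetes 1.26 might be deployed with a manifest of roughly the following shape. The deployment name, image path, and replica count are assumptions for illustration, not the project's actual manifests.

```yaml
# Illustrative Deployment sketch -- names and image are assumptions
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inference-service
spec:
  replicas: 2                    # run two pods for redundancy
  selector:
    matchLabels:
      app: inference
  template:
    metadata:
      labels:
        app: inference
    spec:
      containers:
        - name: inference
          image: registry.local/ai-reigate/inference:latest  # hypothetical image
          ports:
            - containerPort: 5000
          resources:
            requests:
              cpu: "2"
              memory: 4Gi
```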

The Master Node also hosts a REST API built using Flask, providing access to the AI models. Further details on the API can be found in the API Documentation.
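As an illustration, a minimal Flask inference endpoint could look like the sketch below. The route name, payload shape, and `predict` helper are assumptions made for this example; the real API is described in the API Documentation.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def predict(features):
    # Placeholder for a real model call (e.g. a TensorFlow or PyTorch model);
    # summing the inputs keeps the sketch self-contained and runnable.
    return sum(features)

@app.route("/api/v1/predict", methods=["POST"])
def predict_endpoint():
    # Parse the JSON body and reject requests with no feature vector.
    payload = request.get_json(force=True)
    features = payload.get("features", [])
    if not features:
        return jsonify({"error": "no features supplied"}), 400
    return jsonify({"prediction": predict(features)})

# In production this app would run under a WSGI server (e.g. Gunicorn)
# behind the Nginx reverse proxy listed in the software stack.
```

A client would POST `{"features": [...]}` and receive a JSON object containing the prediction.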

Database Configuration

A PostgreSQL database stores metadata, model parameters, and logging information. The database is configured with replication for high availability.

Parameter | Value
Database Name | ai_reigate
User | ai_user
Replication Mode | Synchronous
Connection Pool Size | 50
Backup Schedule | Daily
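The synchronous replication listed above corresponds to settings of roughly the following shape in `postgresql.conf` on the primary. The standby name and exact values here are assumptions, shown only to indicate which knobs are involved.

```ini
# postgresql.conf (primary) -- illustrative values, not the live configuration
wal_level = replica
max_wal_senders = 5
synchronous_commit = on
synchronous_standby_names = 'standby1'   # hypothetical replica name
max_connections = 100                    # must comfortably exceed the pool size of 50
```

With `synchronous_commit = on` and a named synchronous standby, a transaction does not report success until the replica confirms the WAL write, which is what gives the high-availability guarantee at the cost of some write latency.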

Database backups are stored offsite; see the Backup Procedures page. Access to the database is strictly controlled and monitored. Refer to the Database Security Policy for more details.

Monitoring and Logging

Comprehensive monitoring and logging are essential for maintaining the stability and performance of the system. We utilize Prometheus for metric collection and Grafana for visualization. Logs are aggregated using the ELK Stack (Elasticsearch, Logstash, Kibana). Alerts are configured to notify administrators of critical issues. Details on the Alerting System are available.
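As a sketch, a Prometheus alerting rule for an unreachable node could look like the following. The group name, job label, and threshold are assumptions, not the project's configured alerts.

```yaml
# Illustrative Prometheus alerting rule -- labels and thresholds are assumptions
groups:
  - name: node-health
    rules:
      - alert: NodeDown
        expr: up{job="node-exporter"} == 0   # exporter target stopped responding
        for: 5m                              # require 5 minutes of downtime before firing
        labels:
          severity: critical
        annotations:
          summary: "Node {{ $labels.instance }} has been unreachable for 5 minutes"
```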

Future Considerations

Planned upgrades include migrating to a GPU-accelerated server for faster model training, and exploring the use of Serverless Computing for specific tasks. We are also investigating the integration of more advanced Security Protocols.




