AI in Dartford

From Server rental store
Revision as of 05:14, 16 April 2025 by Admin (talk | contribs) (Automated server configuration article)

This article details the server configuration powering the "AI in Dartford" initiative. It is aimed at new members of the system administration team and provides a comprehensive overview of the hardware, software, and network setup. Please read carefully and refer to related Internal Documentation for further details.

Overview

The "AI in Dartford" project utilizes a cluster of servers located in the Dartford data center. These servers are dedicated to machine learning tasks, specifically natural language processing (NLP) and computer vision. The primary goal is to analyze data related to Dartford Borough Council services and improve citizen engagement. The system employs a hybrid cloud approach, leveraging both on-premise hardware and cloud-based resources via Cloud Integration.

Hardware Configuration

The core of the system consists of five dedicated servers. Each server is built with high-performance components to handle the computationally intensive demands of AI workloads.

Server Name | Role | CPU | RAM | Storage
dartford-ai-01 | Master Node (Kubernetes Control Plane) | Intel Xeon Gold 6248R (24 cores) | 256 GB DDR4 ECC | 2 x 1 TB NVMe SSD (RAID 1)
dartford-ai-02 | Worker Node (Model Training) | AMD EPYC 7763 (64 cores) | 512 GB DDR4 ECC | 4 x 2 TB NVMe SSD (RAID 10)
dartford-ai-03 | Worker Node (Model Training) | AMD EPYC 7763 (64 cores) | 512 GB DDR4 ECC | 4 x 2 TB NVMe SSD (RAID 10)
dartford-ai-04 | Inference Server | Intel Xeon Silver 4210 (10 cores) | 128 GB DDR4 ECC | 1 x 1 TB NVMe SSD
dartford-ai-05 | Data Storage & Backup | Dual Intel Xeon Silver 4208 (8 cores each) | 64 GB DDR4 ECC | 8 x 8 TB SAS HDD (RAID 6)

All servers run on a dedicated 10Gbps network segment. Power redundancy is provided by dual power supplies and a UPS system, described in the Power Management document. Hardware monitoring is conducted via SNMP Monitoring.
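The RAID levels in the hardware table trade raw capacity for redundancy in different ways. As a quick sanity check, the usable capacity of each array can be estimated with the standard formulas (RAID 1: one drive's worth; RAID 10: half the drives; RAID 6: n - 2 drives). The sketch below applies these to the drive counts and sizes from the table; the host-to-layout mapping is copied straight from it.

```python
# Rough usable-capacity estimate for the storage layouts in the hardware table.
# Standard RAID overhead: RAID 1 keeps one drive's capacity, RAID 10 keeps half
# the drives, RAID 6 keeps (n - 2) drives.

def usable_tb(level: str, drives: int, size_tb: float) -> float:
    if level == "RAID1":
        return size_tb
    if level == "RAID10":
        return drives // 2 * size_tb
    if level == "RAID6":
        return (drives - 2) * size_tb
    raise ValueError(f"unsupported RAID level: {level}")

layouts = {
    "dartford-ai-01": ("RAID1", 2, 1.0),   # 2 x 1 TB NVMe, RAID 1
    "dartford-ai-02": ("RAID10", 4, 2.0),  # 4 x 2 TB NVMe, RAID 10
    "dartford-ai-03": ("RAID10", 4, 2.0),  # 4 x 2 TB NVMe, RAID 10
    "dartford-ai-05": ("RAID6", 8, 8.0),   # 8 x 8 TB SAS, RAID 6
}

for host, (level, n, size) in layouts.items():
    print(f"{host}: {usable_tb(level, n, size):g} TB usable")
```

Note these are raw figures before filesystem overhead; the backup server (dartford-ai-05) retains by far the most usable space, consistent with its storage role.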

Software Stack

The software stack is built around Kubernetes for container orchestration. This allows for efficient resource utilization and scalability.

Component | Version | Description
Operating System | Ubuntu Server 22.04 LTS | Base OS for all servers.
Kubernetes | v1.27.x | Container orchestration platform. See Kubernetes Documentation for details.
Docker | 20.10.x | Container image tooling. Note that Kubernetes removed the dockershim in v1.24, so v1.27 clusters run containers via containerd or cri-dockerd rather than the Docker daemon directly.
NVIDIA Drivers | 535.104.05 | Required for GPU acceleration.
TensorFlow | 2.12.x | Machine learning framework.
PyTorch | 2.0.x | Alternative machine learning framework.
Prometheus | 2.40.x | Monitoring system.
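To illustrate how workloads are declared on a Kubernetes stack like this, the sketch below builds a minimal Deployment manifest for an inference service. The image name, namespace, replica count, and resource limits are illustrative assumptions, not the project's actual manifest; pinning the pod to dartford-ai-04 via a node selector mirrors that host's inference role in the hardware table.

```python
import json

# Minimal sketch of a Kubernetes Deployment for an inference service.
# Image, namespace, and resource figures are assumptions for illustration.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "inference", "namespace": "dartford-ai"},
    "spec": {
        "replicas": 2,
        "selector": {"matchLabels": {"app": "inference"}},
        "template": {
            "metadata": {"labels": {"app": "inference"}},
            "spec": {
                # Pin to the inference host from the hardware table.
                "nodeSelector": {"kubernetes.io/hostname": "dartford-ai-04"},
                "containers": [{
                    "name": "inference",
                    "image": "registry.example.internal/inference:latest",
                    "resources": {"limits": {"cpu": "4", "memory": "16Gi"}},
                }],
            },
        },
    },
}

print(json.dumps(deployment, indent=2))
```

In practice this would be written as YAML and applied with kubectl; the structure (apiVersion, kind, metadata, spec) is the same either way.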

All code is version-controlled in the Git Repository, and the deployment pipeline is automated via the CI/CD Pipeline. Security updates are managed through Automated Patching.
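The software table pins versions with an ".x" patch wildcard (e.g. Kubernetes v1.27.x, TensorFlow 2.12.x). A deployment pipeline can verify an installed version against such a constraint with a small helper; the sketch below is an illustrative assumption about how such a check might look, with the constraint strings taken from the table and the "installed" values made up for the example.

```python
# Check an installed version string against a table-style constraint
# such as "v1.27.x" or "2.12.x", where "x" matches any patch level.

def matches(installed: str, constraint: str) -> bool:
    """True if `installed` satisfies a constraint like 'v1.27.x'."""
    want = constraint.lstrip("v").split(".")
    have = installed.lstrip("v").split(".")
    # Compare component by component, treating "x" as a wildcard.
    return all(w == h for w, h in zip(want, have) if w != "x")

assert matches("v1.27.4", "v1.27.x")       # patch level is free
assert not matches("v1.28.0", "v1.27.x")   # minor version must match
print(matches("2.12.1", "2.12.x"))  # True
```

A check like this can gate Automated Patching so that routine patch updates apply automatically while minor-version bumps require review.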

Network Configuration

The servers are connected to the internal network via a dedicated VLAN.

Interface | IP Address | Subnet Mask | Gateway
eth0 (dartford-ai-01) | 192.168.10.10 | 255.255.255.0 | 192.168.10.1
eth0 (dartford-ai-02) | 192.168.10.11 | 255.255.255.0 | 192.168.10.1
eth0 (dartford-ai-03) | 192.168.10.12 | 255.255.255.0 | 192.168.10.1
eth0 (dartford-ai-04) | 192.168.10.13 | 255.255.255.0 | 192.168.10.1
eth0 (dartford-ai-05) | 192.168.10.14 | 255.255.255.0 | 192.168.10.1

Firewall rules are managed using Firewall Configuration. Access to the servers is restricted to authorized personnel only, as outlined in the Access Control Policy. Network performance is monitored via Network Monitoring Tools. The DNS configuration is detailed in DNS Records.
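The addressing plan above can be sanity-checked with Python's standard-library ipaddress module: every host and the gateway should fall inside the 192.168.10.0/24 VLAN implied by the 255.255.255.0 mask. The host list below is copied from the table.

```python
import ipaddress

# Verify that all five hosts and the gateway sit inside the dedicated VLAN.
vlan = ipaddress.ip_network("192.168.10.0/24")   # mask 255.255.255.0
gateway = ipaddress.ip_address("192.168.10.1")

hosts = {
    "dartford-ai-01": "192.168.10.10",
    "dartford-ai-02": "192.168.10.11",
    "dartford-ai-03": "192.168.10.12",
    "dartford-ai-04": "192.168.10.13",
    "dartford-ai-05": "192.168.10.14",
}

assert gateway in vlan
for name, addr in hosts.items():
    ip = ipaddress.ip_address(addr)
    assert ip in vlan, f"{name} ({addr}) is outside {vlan}"
    print(f"{name}: {addr} OK")
```

A check like this is cheap to run from the CI/CD Pipeline whenever the addressing table changes, catching typos before they reach the Firewall Configuration.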

Future Considerations

Future plans include upgrading the GPUs on the worker nodes to NVIDIA A100s for increased performance. We are also exploring the integration of a dedicated model registry and versioning system using MLflow Integration. Further expansion of the storage capacity is anticipated based on data growth projections outlined in the Capacity Planning Report.


Related Documents

Server Room Access
Emergency Procedures
Data Backup Policy
Security Audit Logs


Intel-Based Server Configurations

Configuration | Specifications | Benchmark
Core i7-6700K/7700 Server | 64 GB DDR4, 2 x 512 GB NVMe SSD | CPU Benchmark: 8046
Core i7-8700 Server | 64 GB DDR4, 2 x 1 TB NVMe SSD | CPU Benchmark: 13124
Core i9-9900K Server | 128 GB DDR4, 2 x 1 TB NVMe SSD | CPU Benchmark: 49969
Core i9-13900 Server (64GB) | 64 GB RAM, 2 x 2 TB NVMe SSD | n/a
Core i9-13900 Server (128GB) | 128 GB RAM, 2 x 2 TB NVMe SSD | n/a
Core i5-13500 Server (64GB) | 64 GB RAM, 2 x 500 GB NVMe SSD | n/a
Core i5-13500 Server (128GB) | 128 GB RAM, 2 x 500 GB NVMe SSD | n/a
Core i5-13500 Workstation | 64 GB DDR5 RAM, 2 x NVMe SSD, NVIDIA RTX 4000 | n/a

AMD-Based Server Configurations

Configuration | Specifications | Benchmark
Ryzen 5 3600 Server | 64 GB RAM, 2 x 480 GB NVMe | CPU Benchmark: 17849
Ryzen 7 7700 Server | 64 GB DDR5 RAM, 2 x 1 TB NVMe | CPU Benchmark: 35224
Ryzen 9 5950X Server | 128 GB RAM, 2 x 4 TB NVMe | CPU Benchmark: 46045
Ryzen 9 7950X Server | 128 GB DDR5 ECC, 2 x 2 TB NVMe | CPU Benchmark: 63561
EPYC 7502P Server (128GB/1TB) | 128 GB RAM, 1 TB NVMe | CPU Benchmark: 48021
EPYC 7502P Server (128GB/2TB) | 128 GB RAM, 2 TB NVMe | CPU Benchmark: 48021
EPYC 7502P Server (128GB/4TB) | 128 GB RAM, 2 x 2 TB NVMe | CPU Benchmark: 48021
EPYC 7502P Server (256GB/1TB) | 256 GB RAM, 1 TB NVMe | CPU Benchmark: 48021
EPYC 7502P Server (256GB/4TB) | 256 GB RAM, 2 x 2 TB NVMe | CPU Benchmark: 48021
EPYC 9454P Server | 256 GB RAM, 2 x 2 TB NVMe | n/a

Order Your Dedicated Server

Configure and order your ideal server configuration


Note: All benchmark scores are approximate and may vary based on configuration. Server availability is subject to stock.