AI in Salisbury: Server Configuration
This document details the server configuration for the "AI in Salisbury" project, intended for new team members and system administrators. It outlines the hardware, software, and network setup necessary to support the project's computational demands. This project focuses on deploying and testing machine learning models for analyzing local data within the Salisbury region. Understanding these configurations is crucial for maintaining system stability and facilitating future expansion. Please refer to the System Administration Guidelines for general site policies.
Overview
The "AI in Salisbury" project runs on a cluster of servers located in the Salisbury data center. These servers handle data ingestion, model training, and real-time inference. The cluster is designed for scalability and redundancy, combining physical and virtualized resources. The system follows a distributed architecture, using Apache Kafka for message queuing and PostgreSQL for persistent data storage, with Python as the primary language for scripting and model deployment and Docker for containerization of services.
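As a rough sketch of how these services fit together, a Docker Compose file for a local development copy of the stack might look like the following. Service names, image tags, and credentials here are illustrative assumptions, not the production configuration (the actual deployment manifests are managed separately):

```yaml
# Illustrative only — image tags, names, and credentials are placeholders.
services:
  postgres:
    image: postgres:14                 # matches the PostgreSQL 14 baseline
    environment:
      POSTGRES_DB: salisbury_ai        # hypothetical database name
      POSTGRES_PASSWORD: changeme      # placeholder credential
    volumes:
      - pgdata:/var/lib/postgresql/data
  kafka:
    image: bitnami/kafka:3.3.1         # assumed image; broker settings omitted for brevity
volumes:
  pgdata:
```

In production these services run as containers under Kubernetes rather than Compose; the sketch only shows the component relationships.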
Hardware Configuration
The server cluster consists of three primary types of machines: Master Nodes, Worker Nodes, and Storage Nodes. Each type is configured with specific hardware to optimize performance for its designated task. Detailed specifications are provided below.
Server Type | CPU | Memory (RAM) | Storage (SSD) | Network Interface |
---|---|---|---|---|
Master Nodes (2) | 2 x Intel Xeon Gold 6248R (24 cores/48 threads) | 256 GB DDR4 ECC REG | 1 TB NVMe PCIe Gen4 | 10 Gbps Ethernet |
Worker Nodes (8) | 2 x AMD EPYC 7763 (64 cores/128 threads) | 512 GB DDR4 ECC REG | 4 TB NVMe PCIe Gen4 (RAID 0) | 25 Gbps Ethernet |
Storage Nodes (3) | 2 x Intel Xeon Silver 4210 (10 cores/20 threads) | 128 GB DDR4 ECC REG | 16 TB SATA HDD (RAID 6) | 10 Gbps Ethernet |
These specifications represent the current hardware baseline. See the Hardware Procurement Policy for details on future upgrades. Power consumption is monitored via Nagios and alerts are configured for excessive usage. The physical servers are managed through IPMI for remote access and control.
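For reference, typical `ipmitool` invocations for out-of-band management look like the following. The BMC hostname and credentials are placeholders, not real values:

```shell
# Query chassis power state over the IPMI LAN interface
ipmitool -I lanplus -H <bmc-host> -U admin -P '<password>' chassis power status

# List sensor readings (temperatures, fan speeds, voltages)
ipmitool -I lanplus -H <bmc-host> -U admin -P '<password>' sdr list

# Power-cycle an unresponsive node (use with care)
ipmitool -I lanplus -H <bmc-host> -U admin -P '<password>' chassis power cycle
```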
Software Stack
The software stack is built around a Linux-based operating system and a collection of open-source tools. We utilize Ubuntu Server 22.04 LTS as the standard operating system for all servers.
Component | Version | Purpose |
---|---|---|
Operating System | Ubuntu Server 22.04 LTS | Base operating system for all servers |
Kubernetes | v1.27 | Container orchestration platform |
Docker | 20.10.17 | Containerization platform |
Python | 3.10 | Primary scripting and model deployment language |
TensorFlow | 2.12 | Machine Learning Framework |
PyTorch | 2.0 | Machine Learning Framework |
PostgreSQL | 14 | Relational database for persistent data storage |
Apache Kafka | 3.3.1 | Distributed streaming platform |
Regular software updates are performed according to the Security Patching Schedule. All code is version controlled using Git and managed through GitLab.
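A small helper can compare reported component versions against the baseline in the table above. This is an illustrative sketch; the `BASELINE` mapping and `meets_baseline` function are assumptions for this example, not an existing project tool:

```python
# Sketch of a version-baseline check against the Software Stack table.
# BASELINE and meets_baseline() are illustrative, not part of the project.
BASELINE = {
    "kubernetes": "1.27",
    "docker": "20.10.17",
    "python": "3.10",
    "postgresql": "14",
    "kafka": "3.3.1",
}

def _as_tuple(version: str) -> tuple[int, ...]:
    """Parse a dotted version string, ignoring a leading 'v'."""
    return tuple(int(part) for part in version.lstrip("v").split("."))

def meets_baseline(component: str, installed: str) -> bool:
    """Return True if the installed version is at or above the baseline."""
    return _as_tuple(installed) >= _as_tuple(BASELINE[component])

print(meets_baseline("python", "3.10.12"))   # a patch release still passes
print(meets_baseline("docker", "20.10.5"))   # below the 20.10.17 baseline
```

Tuple comparison handles mixed-length versions correctly (e.g. `3.10.12` against the `3.10` baseline), which is why the versions are parsed rather than compared as strings.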
Network Configuration
The server cluster is connected to the Salisbury data center network via a dedicated VLAN. Network security is enforced through firewalls and access control lists.
Parameter | Value |
---|---|
VLAN ID | 1001 |
Subnet Mask | 255.255.255.0 |
Gateway | 192.168.1.1 |
DNS Servers | 8.8.8.8, 8.8.4.4 |
Firewall | pfSense 2.7 |
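The subnet parameters above can be sanity-checked with Python's standard `ipaddress` module. The network address `192.168.1.0` is inferred here from the gateway and mask shown in the table:

```python
import ipaddress

# VLAN 1001 parameters from the table above; the 192.168.1.0 network
# address is inferred from the gateway and the 255.255.255.0 mask.
network = ipaddress.ip_network("192.168.1.0/255.255.255.0")
gateway = ipaddress.ip_address("192.168.1.1")

print(network.prefixlen)           # 24
print(gateway in network)          # True
print(network.num_addresses - 2)   # usable hosts (excludes network/broadcast)
```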
Internal communication between servers is secured using TLS/SSL. External access is restricted to authorized personnel through a VPN. Detailed network diagrams are available on the Network Documentation Page. We utilize Prometheus for network monitoring and alerting.
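As an illustration of the client-side defaults involved in those TLS connections, a Python service connecting to an internal endpoint would typically build a verifying context like this. The CA bundle path is a hypothetical placeholder, not the project's actual path:

```python
import ssl

# Build a client-side TLS context with certificate verification enabled.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions
# context.load_verify_locations("/etc/ssl/internal-ca.pem")  # hypothetical internal CA bundle

print(context.check_hostname)                     # True by default
print(context.verify_mode == ssl.CERT_REQUIRED)   # True
```

`create_default_context()` already enables hostname checking and certificate verification, so the sketch mainly pins the minimum protocol version and points at an internal CA.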
Security Considerations
Security is paramount. All servers undergo regular vulnerability scans using Nessus. Access to the servers is strictly controlled via SSH keys and multi-factor authentication. Data at rest is encrypted using AES-256. Regular security audits are conducted by the Security Team. See the Data Security Policy for complete details.
Intel-Based Server Configurations
Configuration | Specifications | Benchmark |
---|---|---|
Core i7-6700K/7700 Server | 64 GB DDR4, 2 × 512 GB NVMe SSD | CPU Benchmark: 8046 |
Core i7-8700 Server | 64 GB DDR4, 2 × 1 TB NVMe SSD | CPU Benchmark: 13124 |
Core i9-9900K Server | 128 GB DDR4, 2 × 1 TB NVMe SSD | CPU Benchmark: 49969 |
Core i9-13900 Server (64GB) | 64 GB RAM, 2 × 2 TB NVMe SSD | |
Core i9-13900 Server (128GB) | 128 GB RAM, 2 × 2 TB NVMe SSD | |
Core i5-13500 Server (64GB) | 64 GB RAM, 2 × 500 GB NVMe SSD | |
Core i5-13500 Server (128GB) | 128 GB RAM, 2 × 500 GB NVMe SSD | |
Core i5-13500 Workstation | 64 GB DDR5 RAM, 2 × NVMe SSD, NVIDIA RTX 4000 | |
AMD-Based Server Configurations
Configuration | Specifications | Benchmark |
---|---|---|
Ryzen 5 3600 Server | 64 GB RAM, 2 × 480 GB NVMe | CPU Benchmark: 17849 |
Ryzen 7 7700 Server | 64 GB DDR5 RAM, 2 × 1 TB NVMe | CPU Benchmark: 35224 |
Ryzen 9 5950X Server | 128 GB RAM, 2 × 4 TB NVMe | CPU Benchmark: 46045 |
Ryzen 9 7950X Server | 128 GB DDR5 ECC, 2 × 2 TB NVMe | CPU Benchmark: 63561 |
EPYC 7502P Server (128GB/1TB) | 128 GB RAM, 1 TB NVMe | CPU Benchmark: 48021 |
EPYC 7502P Server (128GB/2TB) | 128 GB RAM, 2 TB NVMe | CPU Benchmark: 48021 |
EPYC 7502P Server (128GB/4TB) | 128 GB RAM, 2 × 2 TB NVMe | CPU Benchmark: 48021 |
EPYC 7502P Server (256GB/1TB) | 256 GB RAM, 1 TB NVMe | CPU Benchmark: 48021 |
EPYC 7502P Server (256GB/4TB) | 256 GB RAM, 2 × 2 TB NVMe | CPU Benchmark: 48021 |
EPYC 9454P Server | 256 GB RAM, 2 × 2 TB NVMe | |
*Note: All benchmark scores are approximate and may vary based on configuration.*