AI in the England Rainforest: Server Configuration
This article describes the server configuration supporting the “AI in the England Rainforest” project, which uses artificial intelligence to monitor and analyze data collected from a simulated rainforest environment in England, with a focus on biodiversity, climate patterns, and ecosystem health. It is intended for new system administrators and developers contributing to the project.
Project Overview
The “AI in the England Rainforest” project involves a network of sensors deployed throughout a large, climate-controlled facility that mimics a rainforest environment. These sensors collect temperature, humidity, light-level, and soil-moisture readings, along with audio recordings (for animal identification) and video feeds. The data is processed in real time by AI models to identify species, detect anomalies, and predict potential ecological shifts. The entire system is underpinned by a robust server infrastructure, described below. For information on the Data Collection Pipeline, see the dedicated documentation.
Server Infrastructure
The server infrastructure is divided into three tiers: Data Acquisition, Processing & AI, and Storage & Archiving. Each tier has specific hardware and software requirements. We use a hybrid cloud approach: latency-critical processing runs on-premises, while long-term archival takes place in a secure cloud environment. Details about Security Protocols are available on the security wiki.
Data Acquisition Servers
These servers are responsible for receiving data directly from the sensors. They perform initial data validation and preprocessing before forwarding the data to the Processing & AI tier. The servers are located close to the sensor network to minimize latency.
Server Name | Role | Operating System | CPU | RAM | Network Interface |
---|---|---|---|---|---|
aq-server-01 | Primary Data Receiver | Ubuntu Server 22.04 LTS | Intel Xeon Silver 4310 (12 cores) | 64 GB DDR4 ECC | 10 Gbps Ethernet |
aq-server-02 | Secondary Data Receiver (Failover) | Ubuntu Server 22.04 LTS | Intel Xeon Silver 4310 (12 cores) | 64 GB DDR4 ECC | 10 Gbps Ethernet |
Software running on these servers includes an MQTT broker, Node-RED, and custom Python scripts for data validation. See Data Acquisition Software for configuration details.
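As a rough illustration of the validation step, the sketch below subscribes to sensor readings over MQTT and applies simple range checks before forwarding. The topic name, field names, broker host, and acceptable ranges are placeholders, not the project's actual schema; see Data Acquisition Software for the real configuration.

```python
# Minimal data-validation consumer sketch, assuming sensor readings arrive as
# JSON over MQTT (paho-mqtt >= 2.0). Topic, field names, and ranges are illustrative.
import json
import paho.mqtt.client as mqtt

# Plausible physical ranges used only for basic sanity checks (assumed values).
VALID_RANGES = {
    "temperature_c": (5.0, 45.0),
    "humidity_pct": (0.0, 100.0),
    "soil_moisture_pct": (0.0, 100.0),
}

def validate_reading(payload: dict) -> bool:
    """Return True if every expected field is present and within range."""
    for field, (low, high) in VALID_RANGES.items():
        value = payload.get(field)
        if value is None or not (low <= float(value) <= high):
            return False
    return True

def on_message(client, userdata, message):
    try:
        reading = json.loads(message.payload)
    except json.JSONDecodeError:
        return  # drop malformed payloads
    if validate_reading(reading):
        # Forward to the Processing & AI tier (transport not shown here).
        print(f"accepted reading from {message.topic}")

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.on_message = on_message
client.connect("aq-server-01", 1883)          # broker host/port are assumptions
client.subscribe("sensors/+/environment")     # topic layout is an assumption
client.loop_forever()
```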
Processing & AI Servers
These servers are the heart of the project, running the AI models and performing real-time data analysis. They require significant computational resources, particularly GPUs.
Server Name | Role | Operating System | CPU | GPU | RAM | Storage |
---|---|---|---|---|---|---|
ai-server-01 | Primary AI Processing | Ubuntu Server 22.04 LTS | Intel Xeon Gold 6338 (32 cores) | NVIDIA A100 (80 GB) | 256 GB DDR4 ECC | 4 TB NVMe SSD |
ai-server-02 | Secondary AI Processing (Model Training) | Ubuntu Server 22.04 LTS | Intel Xeon Gold 6338 (32 cores) | NVIDIA A100 (80 GB) | 256 GB DDR4 ECC | 4 TB NVMe SSD |
ai-server-03 | Real-time Anomaly Detection | Ubuntu Server 22.04 LTS | Intel Xeon Silver 4310 (12 cores) | NVIDIA RTX 3090 (24 GB) | 128 GB DDR4 ECC | 2 TB NVMe SSD |
Key software components include TensorFlow, PyTorch, Kubernetes for container orchestration, and custom AI models developed in Python. Refer to the AI Model Documentation for details on the models themselves.
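For orientation, the sketch below shows the general pattern used on this tier: load a PyTorch model onto a GPU and run batched inference. The architecture, checkpoint name, class count, and input shape are illustrative assumptions; the deployed models are described in the AI Model Documentation.

```python
# Illustrative inference sketch only; not the project's actual model code.
import torch
import torchvision

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A ResNet-50 backbone stands in for whichever classifier is actually deployed;
# in production the trained weights would be loaded from a checkpoint, e.g.:
#   model.load_state_dict(torch.load("species_classifier.pt", map_location=device))
model = torchvision.models.resnet50(num_classes=50)
model.to(device)
model.eval()

# A dummy batch of preprocessed camera frames: (batch, channels, height, width).
frames = torch.randn(8, 3, 224, 224, device=device)

with torch.no_grad():
    logits = model(frames)            # shape: (8, 50)
    predictions = logits.argmax(dim=1)

print(predictions.cpu().tolist())
```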
Storage & Archiving Servers
These servers are responsible for storing the raw sensor data and the processed results. Long-term archiving is done in a cloud-based object storage service.
Server Name | Role | Operating System | Storage Capacity | RAID Level | Network Interface |
---|---|---|---|---|---|
st-server-01 | Primary Data Storage | CentOS 7 | 100 TB | RAID 6 | 40 Gbps InfiniBand |
st-server-02 | Backup & Replication | CentOS 7 | 100 TB | RAID 6 | 40 Gbps InfiniBand |
We use Ceph for distributed file storage and replication, and long-term cloud archival is managed through Amazon S3. Detailed information about the Data Retention Policy can be found on the policy wiki.
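A minimal sketch of the archival step is shown below, assuming the standard boto3 client and AWS credentials already configured on the storage servers. The bucket name, key layout, and local path are placeholders; what is archived, and for how long, is governed by the Data Retention Policy.

```python
# Archival sketch: push one day's consolidated sensor archive to object storage.
# Bucket, key prefix, and paths are assumptions, not the project's real values.
import boto3

s3 = boto3.client("s3")

def archive_day(local_path: str, day: str) -> None:
    """Upload a daily archive file to long-term cloud storage."""
    key = f"raw-sensor-data/{day}.tar.gz"
    s3.upload_file(local_path, "rainforest-archive", key)

# Example invocation with placeholder path and date.
archive_day("/ceph/archive/2024-05-01.tar.gz", "2024-05-01")
```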
Networking
The servers are connected via a high-speed network infrastructure. A dedicated VLAN is used for the sensor data traffic. The network topology is a star configuration with a central core switch. See the Network Diagram for a visual representation.
Monitoring & Alerting
The entire server infrastructure is monitored using Prometheus and Grafana. Alerts are configured for critical metrics such as CPU usage, memory usage, disk space, and network latency. Alerts are delivered via PagerDuty. A detailed guide to Server Monitoring is available for new administrators.
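Most host-level metrics (CPU, memory, disk, network) come from standard exporters, but project-specific metrics can also be exposed to Prometheus directly from Python. The sketch below is illustrative only; the metric name, port, and update logic are assumptions rather than the project's actual monitoring configuration.

```python
# Sketch of exposing a custom metric with the official prometheus_client package.
# Metric name, scrape port, and the simulated value are illustrative assumptions.
import random
import time

from prometheus_client import Gauge, start_http_server

sensor_lag = Gauge(
    "sensor_ingest_lag_seconds",
    "Seconds since the last reading was received from any sensor",
)

start_http_server(8000)  # scrape target port is an assumption

while True:
    # In the real pipeline this would be derived from broker timestamps;
    # a random value keeps the example self-contained.
    sensor_lag.set(random.uniform(0.0, 5.0))
    time.sleep(15)
```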