AI in the Malaysian Rainforest: Server Configuration
This article details the server configuration supporting the "AI in the Malaysian Rainforest" project, a research initiative utilizing machine learning to analyze biodiversity data gathered from remote sensors. This document is intended for new system administrators and engineers tasked with maintaining or scaling this infrastructure. It outlines the hardware, software, and network configuration. Please consult the System Security Policy before making any changes.
Project Overview
The "AI in the Malaysian Rainforest" project involves a network of sensor nodes collecting audio and visual data from the rainforest. This data is transmitted to a central server cluster for processing using deep learning models. The primary goals are species identification, anomaly detection (e.g., illegal logging), and long-term biodiversity monitoring. Data is initially processed on edge devices for pre-filtering, but the bulk of the analysis occurs on the central servers. Refer to the Data Acquisition Protocol for details on sensor data formats.
Hardware Configuration
The server cluster consists of three primary server types: ingest servers, processing servers, and storage servers. Each server type has specific hardware requirements, outlined in the following table.
Server Type | CPU | Memory (RAM) | Storage | Network Interface |
---|---|---|---|---|
Ingest Servers (x2) | Intel Xeon Silver 4310 (12 cores) | 64 GB DDR4 ECC | 2 x 1 TB NVMe SSD (RAID 1) | 10 Gbps Ethernet |
Processing Servers (x4) | AMD EPYC 7763 (64 cores) | 256 GB DDR4 ECC | 4 x 2 TB NVMe SSD (RAID 10) | 100 Gbps Ethernet |
Storage Servers (x3) | Intel Xeon Gold 6338 (32 cores) | 128 GB DDR4 ECC | 12 x 16 TB SAS HDD (RAID 6) | 40 Gbps Ethernet |
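As a quick sanity check when planning capacity, the usable storage per server type can be estimated from the RAID level and the drive counts in the table above. A minimal sketch (illustrative only; it ignores filesystem overhead and hot spares):

```python
def usable_capacity_tb(drives: int, size_tb: float, raid: str) -> float:
    """Estimate usable capacity in TB for the RAID levels used here."""
    if raid == "1":          # mirrored pair: capacity of one drive
        return size_tb
    if raid == "10":         # striped mirrors: half the raw total
        return drives * size_tb / 2
    if raid == "6":          # double parity: raw total minus two drives
        return (drives - 2) * size_tb
    raise ValueError(f"unsupported RAID level: {raid}")

# Figures from the hardware table:
print(usable_capacity_tb(2, 1, "1"))    # ingest servers:    1.0 TB
print(usable_capacity_tb(4, 2, "10"))   # processing servers: 4.0 TB
print(usable_capacity_tb(12, 16, "6"))  # storage servers:  160.0 TB
```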
All servers are housed in a physically secure data center with redundant power and cooling. The data center location is detailed in the Data Center Location Document. Regular hardware maintenance is scheduled as per the Hardware Maintenance Schedule.
Software Configuration
The operating system across all servers is Ubuntu Server 22.04 LTS. Containerization is used extensively to manage dependencies and ensure reproducibility. The project utilizes Docker and Kubernetes for orchestration. See the Software Stack Diagram for a visual representation.
Layer | Software | Version | Purpose |
---|---|---|---|
Operating System | Ubuntu Server | 22.04 LTS | Base OS for all servers |
Containerization | Docker | 20.10 | Container runtime |
Orchestration | Kubernetes | 1.24 | Container orchestration |
Database | PostgreSQL | 14 | Metadata storage and data catalog |
Machine Learning Framework | TensorFlow | 2.9 | Deep learning model training and inference |
Monitoring | Prometheus & Grafana | 2.x & 9.x | System and application monitoring
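Services that expose custom application metrics for Prometheus to scrape typically serve them in Prometheus' plain-text exposition format. A stdlib-only sketch of rendering gauge metrics in that format (the metric names are hypothetical examples, not part of the project's actual configuration):

```python
def render_metrics(metrics: dict) -> str:
    """Render a dict of gauge values in Prometheus' text exposition format."""
    lines = []
    for name, value in metrics.items():
        lines.append(f"# TYPE {name} gauge")
        lines.append(f"{name} {value}")
    return "\n".join(lines) + "\n"

sample = render_metrics({
    "ingest_queue_depth": 42,
    "inference_latency_seconds": 0.18,
})
print(sample)
```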
The machine learning models are developed using Python and TensorFlow. Model training is performed on the processing servers, and inference is deployed as microservices orchestrated by Kubernetes. Detailed instructions for deploying models are available in the Model Deployment Guide.
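If the inference microservices follow TensorFlow Serving's REST predict API (an assumption; consult the Model Deployment Guide for the actual interface), a client builds a JSON body with an `"instances"` key. A minimal sketch, with a hypothetical endpoint path in the comment:

```python
import json

def build_predict_request(samples: list) -> str:
    """Serialize feature vectors into a TensorFlow Serving-style
    REST predict payload: {"instances": [...]}."""
    return json.dumps({"instances": samples})

payload = build_predict_request([[0.1, 0.2], [0.3, 0.4]])
# POST this body to e.g. http://<service>/v1/models/<model>:predict
print(payload)
```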
Network Configuration
The server cluster is connected to the internet via a dedicated 1 Gbps fiber connection. Internal network communication utilizes a private 10 Gbps network. Firewall rules are configured to restrict access to only authorized personnel and services. The network topology is outlined in the Network Diagram.
Network Segment | IP Range | Purpose |
---|---|---|
Management Network | 192.168.1.0/24 | Server administration and monitoring |
Data Transfer Network | 10.0.0.0/16 | Data ingestion and processing |
Public Network | (Public IP Address) | External access (restricted) |
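When writing firewall rules or debugging routing, it helps to check which segment an address falls in. A stdlib sketch using the two private ranges above (the public range is omitted since it is site-specific):

```python
import ipaddress

SEGMENTS = {
    "management": ipaddress.ip_network("192.168.1.0/24"),
    "data_transfer": ipaddress.ip_network("10.0.0.0/16"),
}

def segment_for(addr: str) -> str:
    """Return the network segment an address belongs to, or 'unknown'."""
    ip = ipaddress.ip_address(addr)
    for name, net in SEGMENTS.items():
        if ip in net:
            return name
    return "unknown"

print(segment_for("192.168.1.17"))  # management
print(segment_for("10.0.4.9"))      # data_transfer
```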
Network security is a critical concern. Regular security audits are conducted as described in the Security Audit Report. Intrusion detection and prevention systems are in place to monitor for malicious activity. All network traffic is logged and analyzed. Review the Network Security Policy for more details.
Data Flow
Data flows from the sensor nodes to the ingest servers, where it is validated and pre-processed. The data is then stored on the storage servers. The processing servers retrieve data from storage, run the machine learning models, and store the results back on the storage servers. A detailed explanation of the data pipeline is available in the Data Pipeline Documentation.
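The validate-then-store step on the ingest servers can be sketched as follows. The checksum convention here is a hypothetical illustration, not the project's actual Data Acquisition Protocol, and the list stands in for the storage backend:

```python
import hashlib

def validate(payload: bytes, expected_sha256: str) -> bool:
    """Reject corrupted sensor uploads before they reach storage."""
    return hashlib.sha256(payload).hexdigest() == expected_sha256

def ingest(payload: bytes, expected_sha256: str, store: list) -> bool:
    """Append valid payloads to the (stand-in) storage backend."""
    if not validate(payload, expected_sha256):
        return False
    store.append(payload)
    return True

store = []
good = hashlib.sha256(b"audio-frame").hexdigest()
print(ingest(b"audio-frame", good, store))   # True: stored
print(ingest(b"audio-frame", "0" * 64, store))  # False: rejected
print(len(store))  # 1
```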
Future Scalability
The current infrastructure is designed to be scalable. Additional processing servers can be added to the Kubernetes cluster to increase processing capacity. Storage capacity can be increased by adding more storage servers. The network infrastructure can be upgraded to support higher bandwidth requirements. See the Scalability Plan for details.
Related Documentation
- System Security Policy
- Data Acquisition Protocol
- Software Stack Diagram
- Model Deployment Guide
- Data Center Location Document
- Hardware Maintenance Schedule
- Network Diagram
- Security Audit Report
- Network Security Policy
- Data Pipeline Documentation
- Scalability Plan
- Troubleshooting Guide
- Backup and Recovery Procedures
- Disaster Recovery Plan