AI in the Nauru Rainforest: Server Configuration
This article details the server infrastructure that supports the “AI in the Nauru Rainforest” project, which uses artificial intelligence for real-time biodiversity monitoring and environmental analysis within Nauru's unique rainforest ecosystem. It is intended for new system administrators and developers contributing to the project. Please consult the Project Overview for a complete description of the project goals.
Project Overview
The “AI in the Nauru Rainforest” project involves deploying a network of sensors throughout the rainforest to collect data on flora, fauna, and environmental conditions. This data is then processed in real-time by AI models hosted on a dedicated server cluster. The cluster is designed for high availability, scalability, and efficient data processing. See Data Acquisition Strategy for details on sensor data collection.
Server Architecture
The server infrastructure consists of three primary tiers: Data Ingestion, Processing, and Storage. Each tier is composed of multiple servers to ensure redundancy and handle the high data volume. The servers are virtualized using Proxmox VE for flexibility and resource management. Communication between tiers utilizes a dedicated 10 Gigabit Ethernet network. Power is provided by a redundant UPS system detailed in the Power Redundancy Documentation.
Data Ingestion Tier
This tier receives data streams from the sensors. It performs initial validation and buffering before passing data to the processing tier. This tier uses Nginx as a reverse proxy for load balancing and security. See Nginx Configuration Details for specific settings.
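To illustrate the validation-and-buffering step, the following is a minimal sketch of an ingestion endpoint of the kind that might run behind the Nginx proxy. It assumes a FastAPI service and hypothetical field names (sensor_id, timestamp, readings); the actual service and payload schema are described in the project's ingestion documentation.

```python
# Minimal sketch of an ingestion endpoint (illustrative only).
# Assumes FastAPI and pydantic are installed; all field names are hypothetical.
from datetime import datetime
from typing import Dict, List

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class SensorReading(BaseModel):
    sensor_id: str              # hypothetical field: unique sensor identifier
    timestamp: datetime         # hypothetical field: measurement time (UTC)
    readings: Dict[str, float]  # hypothetical field: e.g. {"temperature_c": 27.4}

# Simple in-memory buffer standing in for the real buffering layer.
BUFFER: List[SensorReading] = []

@app.post("/ingest")
def ingest(reading: SensorReading):
    # Initial validation: reject obviously malformed payloads.
    if not reading.readings:
        raise HTTPException(status_code=422, detail="empty readings payload")
    BUFFER.append(reading)  # a real deployment would hand off to the processing tier
    return {"status": "buffered", "buffered": len(BUFFER)}
```

In production the buffer would be the message queue described under Software Stack rather than process memory.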
Processing Tier
This tier houses the AI models responsible for analyzing the sensor data. The models are primarily implemented in Python using the TensorFlow and PyTorch frameworks. This tier utilizes GPU acceleration for faster processing. Details on model training and deployment are available in the AI Model Documentation.
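As a concrete illustration of GPU-accelerated inference in this tier, the sketch below runs a placeholder PyTorch model on a batch of sensor-derived features. The model architecture, feature shape, and class count are stand-ins; the project's actual models are described in the AI Model Documentation.

```python
# Illustrative GPU inference sketch (placeholder model, not the project's actual models).
import torch

# Use a GPU (e.g. one of the A100s) when available, otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Placeholder classifier: 16 input features -> 8 hypothetical species classes.
model = torch.nn.Sequential(
    torch.nn.Linear(16, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 8),
).to(device).eval()

def classify(batch: torch.Tensor) -> torch.Tensor:
    """Return the predicted class index for each row of an (N, 16) feature batch."""
    with torch.no_grad():
        logits = model(batch.to(device))
    return logits.argmax(dim=1).cpu()

# Example: a random batch standing in for preprocessed sensor features.
print(classify(torch.randn(4, 16)))
```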
Storage Tier
This tier provides persistent storage for raw sensor data, processed results, and model artifacts. We utilize a distributed file system based on Ceph for scalability and data redundancy. Refer to the Ceph Cluster Configuration for specific details.
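For reference, a minimal sketch of writing and reading an object through the Ceph cluster via the librados Python bindings is shown below. The pool name and object key are hypothetical placeholders; connection details and pool layout are documented in the Ceph Cluster Configuration.

```python
# Minimal librados example (requires the python3-rados package and cluster access).
# Pool and object names below are hypothetical placeholders.
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")  # standard Ceph config path
cluster.connect()
try:
    ioctx = cluster.open_ioctx("sensor-data")  # hypothetical pool name
    try:
        ioctx.write_full("reading-0001", b'{"sensor_id": "s-17", "temperature_c": 27.4}')
        print(ioctx.read("reading-0001"))
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```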
Hardware Specifications
The following tables detail the hardware specifications for each tier. All tiers are built on Dell PowerEdge R750 servers.
Data Ingestion Servers
| CPU | Memory | Storage | Network Interface |
|---|---|---|---|
| 2 x Intel Xeon Gold 6338 | 128 GB DDR4 ECC REG | 2 x 960 GB NVMe SSD (RAID 1) | 2 x 10 GbE |
Processing Servers
| CPU | Memory | Storage | GPU | Network Interface |
|---|---|---|---|---|
| 2 x Intel Xeon Gold 6342 | 256 GB DDR4 ECC REG | 1 x 1.92 TB NVMe SSD | 2 x NVIDIA A100 80GB | 2 x 10 GbE |
Storage Servers
| CPU | Memory | Storage | Network Interface |
|---|---|---|---|
| 2 x Intel Xeon Silver 4310 | 64 GB DDR4 ECC REG | 8 x 16 TB SAS HDD (RAID 6) | 2 x 10 GbE |
Software Stack
The following software components are essential to the operation of the server infrastructure:
- Operating System: Ubuntu Server 22.04 LTS
- Virtualization: Proxmox VE 7.4
- Web Server: Nginx 1.22
- Database: PostgreSQL 14 - Used for metadata storage. See Database Schema Documentation.
- Message Queue: RabbitMQ 3.9 - Facilitates asynchronous communication between tiers; a minimal publish sketch follows this list.
- Monitoring: Prometheus and Grafana - Used for real-time monitoring and alerting. See Monitoring Dashboard Configuration.
- Configuration Management: Ansible - Automates server provisioning and configuration. Refer to the Ansible Playbooks Repository for details.
- Containerization: Docker & Kubernetes - Used for deploying and managing AI models. See Kubernetes Deployment Guide.
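As referenced in the Message Queue item above, the following is a minimal sketch of publishing a processed result from one tier to another over RabbitMQ using the pika client. The broker hostname, queue name, and message shape are assumptions for illustration; the real exchange and queue layout are part of the project's messaging configuration.

```python
# Minimal RabbitMQ publish sketch using pika (hostname, queue name, and payload are illustrative).
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="mq.internal"))  # hypothetical hostname
channel = connection.channel()

# Durable queue so buffered messages survive a broker restart.
channel.queue_declare(queue="sensor-events", durable=True)

message = {"sensor_id": "s-17", "event": "reading_validated"}  # hypothetical payload
channel.basic_publish(
    exchange="",
    routing_key="sensor-events",
    body=json.dumps(message),
    properties=pika.BasicProperties(delivery_mode=2),  # persist message to disk
)
connection.close()
```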
Network Configuration
The server network is segmented into three VLANs:
- VLAN 10: Data Ingestion Tier
- VLAN 20: Processing Tier
- VLAN 30: Storage Tier
Firewall rules are configured using iptables to restrict communication between tiers and to protect against external threats. Detailed firewall rules are documented in the Firewall Configuration Document. DNS is managed by an internal BIND9 server.
Security Considerations
Security is paramount. All servers are hardened according to CIS Benchmarks. Regular security audits are conducted. Access to the server infrastructure is restricted via SSH key-based authentication and multi-factor authentication. Intrusion detection is provided by Suricata.