AI in the Nauru Rainforest: Server Configuration

This article documents the server infrastructure that supports the “AI in the Nauru Rainforest” project, which applies artificial intelligence to real-time biodiversity monitoring and environmental analysis in the unique ecosystem of the Nauru rainforest. It is written for new system administrators and developers contributing to the project; consult the Project Overview below for a complete description of the project goals.

Project Overview

The “AI in the Nauru Rainforest” project deploys a network of sensors throughout the rainforest to collect data on flora, fauna, and environmental conditions. The data is processed in real time by AI models hosted on a dedicated server cluster designed for high availability, scalability, and efficient data processing. See the Data Acquisition Strategy for details on sensor data collection.

Server Architecture

The server infrastructure consists of three primary tiers: Data Ingestion, Processing, and Storage. Each tier is composed of multiple servers to ensure redundancy and handle the high data volume. The servers are virtualized using Proxmox VE for flexibility and resource management. Communication between tiers utilizes a dedicated 10 Gigabit Ethernet network. Power is provided by a redundant UPS system detailed in the Power Redundancy Documentation.

Data Ingestion Tier

This tier receives data streams from the sensors, performing initial validation and buffering before passing data on to the Processing tier. Nginx serves as a reverse proxy for load balancing and security; see the Nginx Configuration Details for specific settings.
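For orientation, a load-balancing reverse-proxy block for this tier might look like the following sketch. The upstream name, backend addresses, hostname, and certificate paths are illustrative placeholders, not the project's actual settings; the authoritative values are in the Nginx Configuration Details.

```nginx
# Hypothetical ingestion load balancer (addresses are placeholders)
upstream ingestion_backend {
    least_conn;                    # route each stream to the least-busy node
    server 10.0.1.11:8080;
    server 10.0.1.12:8080;
    server 10.0.1.13:8080 backup;  # standby ingestion node
}

server {
    listen 443 ssl;
    server_name ingest.example.internal;

    ssl_certificate     /etc/nginx/certs/ingest.crt;
    ssl_certificate_key /etc/nginx/certs/ingest.key;

    location / {
        proxy_pass http://ingestion_backend;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```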

Processing Tier

This tier houses the AI models responsible for analyzing the sensor data. The models are primarily implemented in Python using the TensorFlow and PyTorch frameworks. This tier utilizes GPU acceleration for faster processing. Details on model training and deployment are available in the AI Model Documentation.
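GPU inference amortizes per-call overhead across many inputs, so the Processing tier groups incoming readings into batches before dispatching them to a model. A minimal sketch of that batching step in plain Python follows; the function name, reading shape, and batch size are illustrative assumptions, not project code.

```python
from typing import Iterable, Iterator


def batch_readings(readings: Iterable[dict], batch_size: int = 32) -> Iterator[list[dict]]:
    """Group sensor readings into fixed-size batches for model inference."""
    batch: list[dict] = []
    for reading in readings:
        batch.append(reading)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # flush the final partial batch
        yield batch


# Example: 70 readings with batch_size=32 yield batches of 32, 32, and 6
readings = [{"sensor_id": i, "value": 0.0} for i in range(70)]
sizes = [len(b) for b in batch_readings(readings)]
```

In production this grouping would typically also bound latency (flushing a partial batch after a timeout) so sparse sensor traffic is not delayed indefinitely.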

Storage Tier

This tier provides persistent storage for raw sensor data, processed results, and model artifacts. We utilize a distributed file system based on Ceph for scalability and data redundancy. Refer to the Ceph Cluster Configuration for specific details.
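As a skeletal illustration, a ceph.conf for a cluster of this shape might resemble the fragment below. The fsid, monitor addresses, and network range are placeholders, and the replication factor is an assumption; the authoritative values live in the Ceph Cluster Configuration.

```ini
[global]
fsid = 00000000-0000-0000-0000-000000000000   ; placeholder cluster UUID
mon_host = 10.0.2.21, 10.0.2.22, 10.0.2.23    ; illustrative monitor addresses
public_network = 10.0.2.0/24
osd_pool_default_size = 3                     ; assumed: three replicas per object
osd_pool_default_min_size = 2                 ; serve I/O with two replicas available
```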

Hardware Specifications

The following tables detail the hardware specifications for each tier. All machines are Dell PowerEdge R750 servers.

Data Ingestion Servers

| CPU | Memory | Storage | Network Interface |
| --- | --- | --- | --- |
| 2 x Intel Xeon Gold 6338 | 128 GB DDR4 ECC REG | 2 x 960 GB NVMe SSD (RAID 1) | 2 x 10 GbE |

Processing Servers

| CPU | Memory | Storage | GPU | Network Interface |
| --- | --- | --- | --- | --- |
| 2 x Intel Xeon Gold 6342 | 256 GB DDR4 ECC REG | 1 x 1.92 TB NVMe SSD | 2 x NVIDIA A100 80GB | 2 x 10 GbE |

Storage Servers

| CPU | Memory | Storage | Network Interface |
| --- | --- | --- | --- |
| 2 x Intel Xeon Silver 4310 | 64 GB DDR4 ECC REG | 8 x 16 TB SAS HDD (RAID 6) | 2 x 10 GbE |

Software Stack

The following software components are essential to the operation of the server infrastructure:

- Proxmox VE: virtualization and resource management across all tiers
- Nginx: reverse proxy and load balancing in the Data Ingestion tier
- Python with TensorFlow and PyTorch: AI model implementation in the Processing tier
- Ceph: distributed storage in the Storage tier
