AI in the Borneo Rainforest: Server Configuration

This document details the server configuration for the “AI in the Borneo Rainforest” project, designed to process data from remote sensor networks deployed throughout the region. This setup balances performance, reliability, and power efficiency, crucial for a remote, environmentally sensitive location. This guide is aimed at new system administrators joining the project.

Overview

The project utilizes a distributed server architecture. A central server cluster located in Kuching, Sarawak (Malaysia) receives, processes, and stores data transmitted from various edge devices within the rainforest. These edge devices include acoustic sensors, camera traps, and environmental sensors, all contributing to a real-time AI-powered monitoring system. The primary goal is to identify and track endangered species, monitor deforestation, and assess the overall health of the rainforest ecosystem. We leverage machine learning models for object detection, sound classification, and anomaly detection. This document focuses on the central cluster configuration.

Central Server Cluster Architecture

The central cluster comprises three primary server roles: data ingestion, processing, and storage. These roles are physically separated across dedicated server nodes for improved performance and fault tolerance. A load balancer distributes incoming data streams across the ingestion servers. A high-speed network interconnect (100GbE) links all nodes within the cluster.
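As a toy illustration of the load-balancing step described above, the following sketch round-robins incoming streams across the ingestion nodes. The hostnames are made up; the project's actual load balancer is likely a dedicated appliance or service, not application code.

```python
import itertools

# Hypothetical ingestion node names -- placeholders, not real hostnames.
INGESTION_NODES = ["ingest-01", "ingest-02", "ingest-03"]

# Cycle endlessly through the node list in order.
_rr = itertools.cycle(INGESTION_NODES)

def next_node() -> str:
    """Return the next ingestion node in round-robin order."""
    return next(_rr)
```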

Data Ingestion Servers

These servers are responsible for receiving data from the edge devices, performing initial validation, and queuing it for processing. They utilize a message queue (RabbitMQ) to decouple the ingestion process from the more computationally intensive processing stage.
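A minimal sketch of the "initial validation" step an ingestion server might perform before enqueuing a reading. The field names (`device_id`, `sensor_type`, `timestamp`, `payload`) and the 24-hour staleness cutoff are assumptions for illustration, not the project's actual schema.

```python
import json
import time

# Assumed message schema -- field names are illustrative only.
REQUIRED_FIELDS = {"device_id", "sensor_type", "timestamp", "payload"}

def validate_reading(reading: dict) -> bool:
    """Reject readings missing required fields or with a stale timestamp."""
    if not REQUIRED_FIELDS <= reading.keys():
        return False
    # Edge-device clocks drift; discard readings older than 24 hours.
    return reading["timestamp"] > time.time() - 86400

def to_message(reading: dict) -> bytes:
    """Serialize a validated reading for publishing to the RabbitMQ queue."""
    # sort_keys gives a stable byte representation for identical readings.
    return json.dumps(reading, sort_keys=True).encode("utf-8")
```

A validated message would then be published to RabbitMQ (for example with the `pika` client), decoupling ingestion from the processing stage as described above.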

Processing Servers

These servers execute the machine learning models. They are equipped with powerful GPUs to accelerate model inference. The processing servers pull data from the message queue, perform analysis, and store the results in the storage servers.
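The routing logic on a processing server might look like the sketch below, which maps each sensor modality named in the Overview to an inference model. The model identifiers and the dead-letter fallback are assumptions, not the project's actual code.

```python
# Assumed mapping of sensor modality to inference model; the names are
# illustrative placeholders, not real model identifiers.
MODEL_FOR_SENSOR = {
    "acoustic": "sound_classifier",      # sound classification
    "camera_trap": "object_detector",    # object detection
    "environmental": "anomaly_detector", # anomaly detection
}

def route(reading: dict) -> str:
    """Pick the inference model for a queued reading by sensor type."""
    try:
        return MODEL_FOR_SENSOR[reading["sensor_type"]]
    except KeyError:
        # Unknown or missing sensor types go to a dead-letter queue
        # for manual inspection rather than crashing a consumer.
        return "dead_letter"
```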

Storage Servers

These servers provide persistent storage for raw sensor data, processed data, and model outputs. They employ a distributed file system (Ceph) to ensure high availability and scalability.
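One way results could be laid out on the Ceph cluster (for instance via its S3-compatible RADOS Gateway) is a date-partitioned object-naming scheme, sketched below. The prefix layout is an assumption for illustration, not the project's actual convention.

```python
from datetime import datetime, timezone

def object_key(device_id: str, sensor_type: str, ts: float) -> str:
    """Build an object key partitioned by sensor type and UTC date.

    Partitioning by date keeps listing and retention operations cheap;
    the 'raw/' prefix and filename format are hypothetical.
    """
    day = datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%Y/%m/%d")
    return f"raw/{sensor_type}/{day}/{device_id}-{int(ts)}.bin"
```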

Server Hardware Specifications

The following table details the hardware specifications for each server role. All servers run Ubuntu Server 22.04 LTS.

Server Role | CPU | RAM | Storage | GPU | Network Interface
Data Ingestion | Intel Xeon Silver 4310 (12 cores) | 64 GB DDR4 ECC | 2 x 1 TB NVMe SSD (RAID 1) | None | 10 GbE
Processing | AMD EPYC 7763 (64 cores) | 128 GB DDR4 ECC | 1 x 2 TB NVMe SSD (OS) | 4 x NVIDIA A100 (40 GB) | 100 GbE
Storage | Intel Xeon Gold 6338 (32 cores) | 128 GB DDR4 ECC | 8 x 16 TB SAS HDD (RAID 6) | None | 100 GbE

Software Stack

The software stack (Ubuntu Server 22.04 LTS, RabbitMQ for message queuing, and Ceph for distributed storage) is chosen to balance performance, scalability, and maintainability.
