AI in the Solomon Islands Rainforest: Server Configuration
This article details the server configuration used to support the "AI in the Solomon Islands Rainforest" project. This project utilizes artificial intelligence for real-time analysis of audio and visual data collected from remote sensors deployed within the rainforest environment. This document is geared towards new contributors to our server infrastructure and assumes a basic understanding of Linux server administration.
Project Overview
The core goal of the project is to monitor biodiversity, detect illegal logging activities, and track animal populations using machine learning algorithms. Data is collected by a network of low-power sensors, transmitted via satellite link, and processed on our central server cluster. The processed data is then made available to researchers via a web interface and API. See Data Acquisition for information on sensor deployment. Understanding the Network Topology is also crucial.
Server Hardware
The project utilizes a cluster of four dedicated servers located in a secure, climate-controlled data center. These servers are responsible for data ingestion, model training, inference, and data storage. The primary server, designated "RainforestAI-01," handles the bulk of the AI processing.
| Server Component | Specification |
|---|---|
| CPU | 2 x Intel Xeon Gold 6248R (24 cores / 48 threads each) |
| RAM | 256 GB DDR4 ECC Registered |
| Storage (OS) | 1 TB NVMe SSD |
| Storage (Data) | 16 TB RAID 6 HDD Array |
| Network Interface | Dual 10 Gigabit Ethernet |
The remaining three servers ("RainforestAI-02", "RainforestAI-03", and "RainforestAI-04") are configured for redundancy and scaling. They primarily handle data storage, backup, and model serving. See Server Redundancy for details on failover procedures.
Software Stack
The server software stack is built on Ubuntu Server 22.04 LTS. Applications are deployed and managed as containers using Docker and Kubernetes. The project depends on Python 3.10 and several key machine learning libraries.
| Software Component | Version |
|---|---|
| Operating System | Ubuntu Server 22.04 LTS |
| Containerization | Docker 24.0.6, Kubernetes 1.28.3 |
| Programming Language | Python 3.10 |
| Machine Learning Frameworks | TensorFlow 2.13.0, PyTorch 2.0.1 |
| Database | PostgreSQL 15.3 |
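Because the stack is containerized, an image can drift from the versions pinned above. A lightweight startup check along the following lines can flag that drift early. This is an illustrative sketch (the script name and the exact checks are assumptions), not part of the current container entrypoints.

```python
# startup_check.py -- illustrative sketch: verify the runtime matches the pinned stack.
# Assumes TensorFlow and PyTorch are installed in the image; exits non-zero on mismatch.
import sys

EXPECTED = {
    "python": (3, 10),
    "tensorflow": "2.13.0",
    "torch": "2.0.1",
}

def main() -> int:
    problems = []

    if sys.version_info[:2] != EXPECTED["python"]:
        problems.append(f"Python {sys.version.split()[0]} found, expected 3.10.x")

    import tensorflow as tf
    if not tf.__version__.startswith(EXPECTED["tensorflow"]):
        problems.append(f"TensorFlow {tf.__version__} found, expected {EXPECTED['tensorflow']}")

    import torch
    if not torch.__version__.startswith(EXPECTED["torch"]):
        problems.append(f"PyTorch {torch.__version__} found, expected {EXPECTED['torch']}")

    for p in problems:
        print(f"version check failed: {p}", file=sys.stderr)
    return 1 if problems else 0

if __name__ == "__main__":
    sys.exit(main())
```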
We leverage PostgreSQL for storing metadata about the sensor data and model outputs. Our API is built using Flask, a Python web framework. Refer to API Documentation for more information. The Database Schema is also important to understand.
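As a concrete illustration of how Flask and PostgreSQL fit together here, the endpoint below serves sensor metadata as JSON. The table name `sensor_readings`, its columns, and the connection string are hypothetical placeholders; consult the Database Schema and API Documentation pages for the real definitions.

```python
# Minimal sketch of a Flask endpoint backed by PostgreSQL.
# The sensor_readings table, its columns, and the DSN are illustrative assumptions.
import os

import psycopg2
import psycopg2.extras
from flask import Flask, jsonify

app = Flask(__name__)
DSN = os.environ.get("METADATA_DSN", "dbname=rainforest user=api host=192.168.1.11")

@app.route("/sensors/<int:sensor_id>/readings")
def readings(sensor_id: int):
    """Return recent metadata rows for one sensor as JSON."""
    with psycopg2.connect(DSN) as conn:
        with conn.cursor(cursor_factory=psycopg2.extras.RealDictCursor) as cur:
            cur.execute(
                "SELECT id, recorded_at, kind, label, confidence "
                "FROM sensor_readings WHERE sensor_id = %s "
                "ORDER BY recorded_at DESC LIMIT 100",
                (sensor_id,),
            )
            return jsonify(cur.fetchall())

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```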
Network Configuration
Each server is assigned a static IP address within the 192.168.1.0/24 subnet. Firewall rules are configured using `iptables` to restrict access to essential ports only. The servers are protected by a hardware firewall and intrusion detection system. See the Firewall Ruleset for specific configurations.
| Server | IP Address | Role |
|---|---|---|
| RainforestAI-01 | 192.168.1.10 | AI Processing, Model Training |
| RainforestAI-02 | 192.168.1.11 | Data Storage, Backup |
| RainforestAI-03 | 192.168.1.12 | Model Serving, Redundancy |
| RainforestAI-04 | 192.168.1.13 | Data Storage, Redundancy |
DNS resolution is handled by an internal DNS server. All traffic to and from the servers is encrypted using TLS/SSL. Review the Security Protocols for more details.
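To make the allowlist idea concrete, the sketch below applies a default-deny `iptables` policy and opens a handful of ports to the internal subnet only. The port selection and the Python wrapper are illustrative assumptions; the authoritative rules are maintained in the Firewall Ruleset.

```python
# Illustrative iptables allowlist applied via subprocess; must run as root.
# The port choices below are assumptions for illustration -- the real rules
# live in the Firewall Ruleset.
import subprocess

SUBNET = "192.168.1.0/24"
ALLOWED_TCP_PORTS = [22, 443, 5432, 6443]  # SSH, HTTPS, PostgreSQL, Kubernetes API (assumed)

def iptables(*args: str) -> None:
    subprocess.run(["iptables", *args], check=True)

def apply_rules() -> None:
    # Default-deny inbound, allow loopback and established sessions.
    iptables("-P", "INPUT", "DROP")
    iptables("-A", "INPUT", "-i", "lo", "-j", "ACCEPT")
    iptables("-A", "INPUT", "-m", "conntrack", "--ctstate", "ESTABLISHED,RELATED", "-j", "ACCEPT")
    # Allow the listed service ports, but only from the internal subnet.
    for port in ALLOWED_TCP_PORTS:
        iptables("-A", "INPUT", "-p", "tcp", "-s", SUBNET, "--dport", str(port), "-j", "ACCEPT")

if __name__ == "__main__":
    apply_rules()
```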
Monitoring and Logging
We utilize Prometheus and Grafana for server monitoring. Metrics such as CPU usage, memory consumption, disk I/O, and network traffic are collected and visualized in Grafana dashboards. Logs are aggregated using the ELK stack (Elasticsearch, Logstash, Kibana) for centralized log management and analysis. See Monitoring Dashboard Access for instructions. Regular log analysis is critical for identifying potential issues and ensuring system stability.
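Beyond the node-level metrics, application metrics can be exported for Prometheus to scrape using the `prometheus_client` library. The snippet below is a minimal sketch; the metric names, labels, and port are assumptions rather than the metrics currently deployed.

```python
# Minimal sketch: expose application metrics for Prometheus to scrape.
# Metric names, labels, and the port are illustrative assumptions.
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

DETECTIONS = Counter("rainforest_detections_total", "Detections produced by the models", ["kind"])
QUEUE_DEPTH = Gauge("rainforest_inference_queue_depth", "Items waiting for inference")

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://<server>:8000/metrics
    while True:
        # Placeholder work loop; a real worker would update these from the pipeline.
        QUEUE_DEPTH.set(random.randint(0, 50))
        DETECTIONS.labels(kind="audio").inc()
        time.sleep(15)
```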
Future Considerations
We are currently evaluating the use of GPU acceleration to further improve the performance of our machine learning models. We are also exploring the integration of edge computing to reduce latency and bandwidth requirements. Please see Project Roadmap for planned enhancements. Contributions to Open Issues are highly valued.
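As a starting point for the GPU evaluation, the probe below reports whether TensorFlow and PyTorch can see a GPU on a given node. It is a minimal visibility check, not a benchmark.

```python
# Minimal probe: report whether TensorFlow and PyTorch can see a GPU.
import tensorflow as tf
import torch

print("TensorFlow GPUs:", tf.config.list_physical_devices("GPU"))
print("PyTorch CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("PyTorch device:", torch.cuda.get_device_name(0))
```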
See Also
- Data Acquisition
- Network Topology
- Server Redundancy
- API Documentation
- Database Schema
- Firewall Ruleset
- Security Protocols
- Monitoring Dashboard Access
- Project Roadmap
- Open Issues
- Python 3.10
- TensorFlow
- PyTorch
- PostgreSQL
- Kubernetes
- Docker
- ELK Stack
- Logging Procedures
- Ubuntu Server