AI in the Turks and Caicos Islands Rainforest: Server Configuration
This article details the server configuration supporting the "AI in the Turks and Caicos Islands Rainforest" project. This project utilizes artificial intelligence for real-time environmental monitoring and species identification within the unique ecosystem of the Turks and Caicos Islands rainforest. This document is intended for new system administrators and developers contributing to the project. Understanding this configuration is vital for maintaining system stability and scaling the project's capabilities. Refer to System Administration Basics for general wiki usage.
Project Overview
The project involves deploying a network of sensors throughout the rainforest, collecting data on temperature, humidity, soundscapes, and visual imagery. This data is transmitted to a central server cluster for processing using machine learning models. The goal is to automatically identify species, track their movements, and detect environmental changes. Initial project details can be found on the Project Homepage. The initial data collection phase is detailed in the Data Acquisition Protocol.
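To make the data model concrete, the sketch below shows one plausible shape for a single sensor reading in Python. The field names, units, and valid ranges are illustrative assumptions, not the project's actual schema; see the Data Acquisition Protocol for the authoritative definition.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative only: field names and valid ranges are assumptions,
# not the project's actual sensor schema.
@dataclass
class SensorReading:
    sensor_id: str
    timestamp: datetime
    temperature_c: float     # ambient temperature in degrees Celsius
    humidity_pct: float      # relative humidity, 0-100 %
    audio_clip_uri: str      # reference to a recorded soundscape sample
    image_uri: str           # reference to a captured camera frame

    def is_valid(self) -> bool:
        """Basic range checks before the reading is accepted for ingestion."""
        return -10.0 <= self.temperature_c <= 60.0 and 0.0 <= self.humidity_pct <= 100.0

# Example usage with placeholder values.
reading = SensorReading(
    sensor_id="sensor-042",
    timestamp=datetime.now(timezone.utc),
    temperature_c=28.4,
    humidity_pct=87.0,
    audio_clip_uri="storage://audio/sensor-042/latest.wav",
    image_uri="storage://images/sensor-042/latest.jpg",
)
print(reading.is_valid())
```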
Server Infrastructure
The server infrastructure is hosted in a secure data center in Providenciales, Turks and Caicos Islands. Redundancy and high availability are critical due to the remote location and the importance of continuous data collection. The system utilizes a hybrid cloud approach, leveraging both on-premise hardware and cloud-based services. See Network Diagram for a visual representation.
Hardware Specifications
The core server cluster consists of three primary servers, each with a dedicated role: data ingestion, model training, and inference.
| Server Role | CPU | RAM | Storage | Network Interface |
|---|---|---|---|---|
| Data Ingestion | Intel Xeon Gold 6248R (24 cores) | 128 GB DDR4 ECC | 4 x 4 TB NVMe SSD (RAID 10) | 10 Gbps Ethernet |
| Model Training | AMD EPYC 7763 (64 cores) | 256 GB DDR4 ECC | 8 x 8 TB NVMe SSD (RAID 10) | 10 Gbps Ethernet |
| Inference | Intel Xeon Silver 4210 (10 cores) | 64 GB DDR4 ECC | 2 x 2 TB NVMe SSD (RAID 1) | 1 Gbps Ethernet |
These servers are housed within a Rack Unit and are cooled via a dedicated HVAC System. Power redundancy is provided by dual UPS units with generator backup.
Software Stack
The software stack is built around Ubuntu Server 22.04 LTS. We utilize Docker for containerization and Kubernetes for orchestration.
| Component | Version | Purpose |
|---|---|---|
| Operating System | Ubuntu Server 22.04 LTS | Base operating system |
| Docker | 20.10.14 | Containerization platform |
| Kubernetes | 1.24.0 | Container orchestration |
| Python | 3.9 | Primary programming language |
| TensorFlow | 2.8.0 | Machine learning framework |
| PostgreSQL | 14 | Database for metadata and results |
All code is managed using Git Version Control and stored in a private GitLab repository. Continuous Integration and Continuous Deployment (CI/CD) pipelines are implemented using Jenkins Automation.
Data Flow and Processing
1. **Data Acquisition:** Sensors collect data and transmit it via LoRaWAN to a gateway.
2. **Data Ingestion:** The data ingestion server receives data from the gateway, validates it, and stores it in a time-series database (InfluxDB). See Data Validation Procedures.
3. **Model Training:** Periodically, the model training server retrieves data from InfluxDB, trains new machine learning models (using TensorFlow), and stores the trained models in a model registry. Refer to Model Training Scripts.
4. **Inference:** The inference server loads the latest model from the model registry and uses it to process incoming data in real time. Results are stored in PostgreSQL and visualized on a Dashboard Interface. A minimal inference sketch follows this list.
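The following is a minimal sketch of the inference step, assuming the model registry exposes the latest trained model as a Keras SavedModel on disk and that incoming audio has already been converted to a fixed-size spectrogram. The model path, input shape, and label set are placeholders; refer to Model Training Scripts for the actual interfaces.

```python
import numpy as np
import tensorflow as tf

# Assumption: the registry exposes the latest trained model as a local
# SavedModel directory; the real registry interface may differ.
MODEL_PATH = "/models/species-classifier/latest"
LABELS = ["unknown", "rock_iguana", "cuban_crow"]  # placeholder label set

model = tf.keras.models.load_model(MODEL_PATH)

def classify(spectrogram: np.ndarray) -> str:
    """Run one spectrogram (height x width x 1) through the model and return a label."""
    batch = np.expand_dims(spectrogram.astype("float32"), axis=0)  # add batch dimension
    probabilities = model.predict(batch, verbose=0)[0]
    return LABELS[int(np.argmax(probabilities))]

# Example usage with a dummy input matching an assumed 128x128 spectrogram.
dummy = np.random.rand(128, 128, 1)
print(classify(dummy))
```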
Database Configuration
The PostgreSQL database is configured for high availability using replication and connection pooling.
| Parameter | Value | Description |
|---|---|---|
| Replication Method | Streaming Replication | Asynchronous replication to a standby server |
| Connection Pooler | PgBouncer | Manages database connections efficiently |
| Max Connections | 200 | Maximum number of concurrent database connections |
| WAL Level | replica | Write-Ahead Logging level for replication |
Database backups are performed daily and stored offsite. See Database Backup Policy.
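As an illustration of how application services connect through the pooler, the sketch below inserts an inference result using psycopg2. The hostname, credentials, and the "detections" table are hypothetical; PgBouncer's default listening port of 6432 is assumed.

```python
import psycopg2

# Assumptions: PgBouncer listens on its default port 6432, and a
# hypothetical "detections" table stores per-frame inference results.
conn = psycopg2.connect(
    host="db.example.internal",  # placeholder hostname
    port=6432,                   # PgBouncer default port
    dbname="rainforest",
    user="inference_svc",
    password="***",
)

# The connection context manager commits on success and rolls back on error.
with conn, conn.cursor() as cur:
    cur.execute(
        """
        INSERT INTO detections (sensor_id, detected_at, species, confidence)
        VALUES (%s, now(), %s, %s)
        """,
        ("sensor-042", "rock_iguana", 0.93),
    )
conn.close()
```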
Security Considerations
Security is paramount. All network traffic is encrypted using TLS/SSL. Access to servers is restricted using SSH keys and firewalls. Regular security audits are conducted. The Security Protocol outlines all security measures.
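For illustration, the sketch below shows key-based SSH access to a server using paramiko, with host-key verification enabled and no password fallback. The hostname, username, and key path are placeholders; consult the Security Protocol for the actual access procedure.

```python
import paramiko

# Illustrative only: hostname, username, and key path are placeholders.
# Password authentication is assumed to be disabled on the servers.
client = paramiko.SSHClient()
client.load_system_host_keys()                      # verify the server's host key
client.set_missing_host_key_policy(paramiko.RejectPolicy())

client.connect(
    hostname="ingest01.example.internal",
    username="deploy",
    key_filename="/home/deploy/.ssh/id_ed25519",    # key-based authentication only
)
stdin, stdout, stderr = client.exec_command("uptime")
print(stdout.read().decode())
client.close()
```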
Future Scalability
The infrastructure is designed to be scalable. As the project grows, we can add more servers to the Kubernetes cluster and increase the capacity of the database. We are also exploring the use of cloud-based services for additional scalability and redundancy. See Scalability Roadmap for planned expansions.
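As one example of horizontal scaling, the sketch below uses the Kubernetes Python client to change the replica count of an inference workload. It assumes local kubectl credentials and a Deployment named "inference" in a "rainforest" namespace; both names are hypothetical.

```python
from kubernetes import client, config

# Assumptions: kubectl credentials are available locally, and the inference
# workload runs as a Deployment named "inference" in the "rainforest" namespace.
config.load_kube_config()
apps = client.AppsV1Api()

def scale_inference(replicas: int) -> None:
    """Scale the (hypothetical) inference Deployment to the requested replica count."""
    apps.patch_namespaced_deployment_scale(
        name="inference",
        namespace="rainforest",
        body={"spec": {"replicas": replicas}},
    )

scale_inference(3)
```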
See Also
Related pages: Troubleshooting Guide, Contact Information, Glossary of Terms, Monitoring System, Data Privacy Policy.