AI in the Curaçao Rainforest


AI in the Curaçao Rainforest: Server Configuration

This article details the server configuration deployed to support the "AI in the Curaçao Rainforest" project, a research initiative focused on biodiversity monitoring using artificial intelligence. This guide is intended for newcomers to our MediaWiki site and provides a technical overview of the hardware and software infrastructure. Understanding this setup is crucial for anyone contributing to the project's data analysis or system maintenance.

Project Overview

The "AI in the Curaçao Rainforest" project utilizes a network of camera traps and audio recorders deployed throughout the Shete Boka National Park in Curaçao. Data collected from these devices is streamed to our central server for processing, analysis, and long-term storage. The AI models employed are primarily focused on species identification from images and soundscapes. This requires significant computational resources and a robust, reliable server infrastructure. Our data pipeline is designed for scalability and resilience. We also use a version control system to manage all configurations.

Server Hardware

The core of our infrastructure consists of three servers: a data ingestion server, a processing server, and a storage server. Each server is located in a secure, climate-controlled data center in Willemstad. Detailed specifications are provided below:

Server Role | CPU | RAM | Storage | Network Interface
Data Ingestion Server | Intel Xeon Gold 6248R (24 cores) | 128 GB DDR4 ECC | 2 x 4TB NVMe SSD (RAID 1) | 10 Gbps Ethernet
Processing Server | 2 x AMD EPYC 7763 (64 cores total) | 256 GB DDR4 ECC | 4 x 8TB SAS HDD (RAID 5) + 2 x 1TB NVMe SSD (OS) | 25 Gbps Ethernet
Storage Server | Supermicro Ultra Storage Server | 128 GB DDR4 ECC | 32 x 16TB SAS HDD (RAID 6) | 40 Gbps InfiniBand

These servers are connected via a dedicated internal network. Power redundancy is provided by dual power supplies and an uninterruptible power supply (UPS). Regular system backups are performed to an offsite location. We monitor server health using Nagios.
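For illustration, custom health checks can be wired into Nagios as small plugins that follow the standard plugin convention of a one-line status message and an exit code (0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN). The sketch below is a hypothetical disk-usage check; the mount point and thresholds are assumptions, not our deployed configuration.

    #!/usr/bin/env python3
    # Minimal Nagios-style disk usage check (illustrative sketch, not the deployed plugin).
    # Exit codes follow the Nagios plugin convention: 0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN.
    import shutil
    import sys

    MOUNT_POINT = "/data"   # hypothetical data volume
    WARN_PERCENT = 80       # hypothetical warning threshold
    CRIT_PERCENT = 90       # hypothetical critical threshold

    def main() -> int:
        try:
            usage = shutil.disk_usage(MOUNT_POINT)
        except OSError as exc:
            print(f"DISK UNKNOWN - cannot stat {MOUNT_POINT}: {exc}")
            return 3
        used_percent = 100 * (usage.total - usage.free) / usage.total
        status = f"{MOUNT_POINT} is {used_percent:.1f}% full"
        if used_percent >= CRIT_PERCENT:
            print(f"DISK CRITICAL - {status}")
            return 2
        if used_percent >= WARN_PERCENT:
            print(f"DISK WARNING - {status}")
            return 1
        print(f"DISK OK - {status}")
        return 0

    if __name__ == "__main__":
        sys.exit(main())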

Software Configuration

All three servers run Ubuntu Server 22.04 LTS as their operating system. Key software components include:

  • Data Ingestion Server: Nginx web server, Rsync for data transfer, and a custom Python script for initial data validation (a sketch of such a validation step appears after this list). This server also hosts the API endpoint for camera trap uploads.
  • Processing Server: Python 3.10, TensorFlow 2.12, PyTorch 2.0, CUDA Toolkit 11.8 (for GPU acceleration), and Jupyter Notebooks for development and experimentation. We utilize a containerization strategy with Docker to ensure reproducibility.
  • Storage Server: Ceph distributed storage system for scalability and data redundancy. This provides a unified namespace for all project data. Data archiving is handled through Ceph's lifecycle management policies.
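As a hedged illustration of the Data Ingestion Server's validation step mentioned above, the sketch below accepts camera trap uploads only if they are non-empty, under a size limit, and begin with the JPEG SOI marker. The directory layout, size limit, and JPEG-only assumption are placeholders, not the project's actual script.

    # Illustrative sketch of an initial validation step for camera trap uploads.
    # Directory names, the size limit, and the JPEG-only assumption are hypothetical.
    from pathlib import Path
    import shutil

    INCOMING_DIR = Path("/srv/ingest/incoming")   # hypothetical upload landing area
    VALID_DIR = Path("/srv/ingest/validated")     # hypothetical staging area for Rsync
    REJECT_DIR = Path("/srv/ingest/rejected")     # hypothetical quarantine area
    MAX_BYTES = 50 * 1024 * 1024                  # hypothetical 50 MB per-image limit
    JPEG_MAGIC = b"\xff\xd8"                      # JPEG files begin with the SOI marker

    def is_valid_jpeg(path: Path) -> bool:
        """Accept files that are non-empty, under the size limit, and start like a JPEG."""
        size = path.stat().st_size
        if size == 0 or size > MAX_BYTES:
            return False
        with path.open("rb") as handle:
            return handle.read(2) == JPEG_MAGIC

    def validate_incoming() -> None:
        """Move each incoming file to the validated or rejected directory."""
        for path in INCOMING_DIR.glob("*.jpg"):
            target = VALID_DIR if is_valid_jpeg(path) else REJECT_DIR
            target.mkdir(parents=True, exist_ok=True)
            shutil.move(str(path), str(target / path.name))

    if __name__ == "__main__":
        validate_incoming()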

Network Topology

The network infrastructure is designed for high bandwidth and low latency. The following table outlines key network components:

Component | IP Address | Function
Gateway / Firewall | 192.168.1.1 | Gateway to the internet, firewall
Data Ingestion Server | 192.168.1.10 | Receives data from camera traps
Processing Server | 192.168.1.20 | Runs AI models
Storage Server | 192.168.1.30 | Stores all project data
Monitoring Server | 192.168.1.40 | Runs Nagios and other monitoring tools

All communication between servers is secured using SSH and TLS. We employ a firewall configuration to restrict access to only necessary ports. Network performance is regularly monitored using Iperf3.
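A throughput check of this kind can be scripted around iperf3's JSON output mode, as in the hypothetical sketch below. The target host, test duration, and the exact JSON fields read are assumptions rather than our monitoring code, and an iperf3 server must already be listening on the target host.

    # Illustrative throughput check between two internal hosts using iperf3's JSON output.
    # Assumes an iperf3 server is already listening on the target host (iperf3 -s).
    import json
    import subprocess

    TARGET_HOST = "192.168.1.30"   # storage server, per the topology table above

    def measure_throughput_gbps(host: str) -> float:
        """Run a short iperf3 test against `host` and return received throughput in Gbps."""
        result = subprocess.run(
            ["iperf3", "-c", host, "-t", "5", "-J"],  # 5-second test, JSON output
            capture_output=True, text=True, check=True,
        )
        report = json.loads(result.stdout)
        bits_per_second = report["end"]["sum_received"]["bits_per_second"]
        return bits_per_second / 1e9

    if __name__ == "__main__":
        print(f"{TARGET_HOST}: {measure_throughput_gbps(TARGET_HOST):.2f} Gbps")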

AI Model Deployment

AI models are deployed on the Processing Server using Docker containers. Each model is encapsulated in its own container, ensuring isolation and reproducibility. The models are served via a REST API using Flask. We use model versioning to track changes and ensure traceability. The AI models are regularly retrained using the latest data, following a continuous integration/continuous deployment (CI/CD) pipeline.
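To make the serving pattern concrete, the following is a minimal, hypothetical Flask endpoint of the kind that could run inside a model container. The model path, input size, and species labels are placeholders, not the project's actual inference code.

    # Minimal, illustrative Flask inference endpoint of the kind run inside a model container.
    # The model path, input size, and label names are hypothetical placeholders.
    import numpy as np
    import tensorflow as tf
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    MODEL = tf.keras.models.load_model("/models/species_classifier")  # hypothetical path
    LABELS = ["white-tailed_deer", "iguana", "barn_owl"]               # hypothetical labels

    @app.route("/predict", methods=["POST"])
    def predict():
        """Classify a single uploaded camera trap image."""
        upload = request.files["image"]
        image = tf.io.decode_jpeg(upload.read(), channels=3)
        image = tf.image.resize(image, (224, 224)) / 255.0             # assumed input size
        scores = MODEL.predict(tf.expand_dims(image, 0))[0]
        best = int(np.argmax(scores))
        return jsonify({"species": LABELS[best], "confidence": float(scores[best])})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=5000)

One design benefit of the one-model-per-container approach is that the API surface stays identical across model versions, so the container image tag alone can carry the version information tracked by the CI/CD pipeline.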

Security Considerations

Security is paramount. The following measures are in place:

  • Regular security audits.
  • Strong password policies.
  • Two-factor authentication for all administrative accounts.
  • Intrusion detection system (IDS).
  • Data encryption at rest and in transit.
  • Regular security patching of all software.

Future Enhancements

Planned future enhancements include:

  • Expanding the storage capacity of the Storage Server.
  • Adding a dedicated GPU server for faster model training.
  • Implementing a more sophisticated monitoring system.
  • Automating the deployment of new AI models.
  • Integrating with a cloud storage provider for disaster recovery.

Server Resource Allocation

The following table details the resource allocation for key processes:

Process | Server | CPU Allocation | Memory Allocation
 | Data Ingestion Server | 4 cores | 16 GB
 | Processing Server | 32 cores | 128 GB
 | Processing Server | 8 cores | 64 GB
 | Storage Server | 16 cores | 64 GB

Contact the administrators for any questions or concerns regarding the server infrastructure.

