AI in the Isle of Man Rainforest


AI in the Isle of Man Rainforest: Server Configuration

This document details the server configuration supporting the "AI in the Isle of Man Rainforest" project. This project utilizes artificial intelligence to monitor and analyze data collected from our unique rainforest environment. This guide is intended for new system administrators and developers joining the team. It assumes a basic understanding of Linux server administration and networking concepts. See Help:Contents for basic MediaWiki editing guidance.

Project Overview

The "AI in the Isle of Man Rainforest" project aims to leverage sensor data – temperature, humidity, sound, and visual feeds – to understand the rainforest's ecosystem. We employ machine learning models to identify species, detect anomalies (such as unusual sounds indicative of distress), and predict potential environmental changes. The entire system relies on a robust and scalable server infrastructure. More information on the project goals can be found on the Project Homepage.

Server Architecture

The infrastructure is composed of three primary server roles: Data Acquisition, Processing, and Presentation. These roles are physically separated for redundancy and security. Each role is detailed below. For details on our Security Protocols, please see the dedicated security documentation. Server access is managed via SSH Key Authentication.

Data Acquisition Servers

These servers are responsible for collecting data from the deployed sensors. They are located close to the sensor network to minimize latency. They perform initial data validation and buffering before transmitting the data to the processing servers.

Server Name       | Role                      | Operating System        | CPU                   | RAM        | Storage
rainforest-daq-01 | Data Acquisition          | Ubuntu Server 22.04 LTS | Intel Xeon E3-1220 v6 | 16 GB DDR4 | 1 TB SSD
rainforest-daq-02 | Data Acquisition (Backup) | Ubuntu Server 22.04 LTS | Intel Xeon E3-1220 v6 | 16 GB DDR4 | 1 TB SSD

These servers use a custom Python script, detailed in the Data Acquisition Script Documentation, to interface with the sensors via Modbus TCP. Data is transmitted over a dedicated VLAN to the processing servers.
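The sketch below is a minimal, illustrative version of that acquisition loop, assuming the pymodbus library (3.x API) and hypothetical register addresses and scaling; the production script and the actual register map are in the Data Acquisition Script Documentation.

  # Illustrative only: polls two hypothetical holding registers over Modbus TCP,
  # applies a basic range check, and buffers readings before transmission.
  # Assumes the pymodbus 3.x client API; addresses and scaling are placeholders.
  import time
  from collections import deque
  from pymodbus.client import ModbusTcpClient

  SENSOR_HOST = "10.0.10.21"   # hypothetical sensor gateway on the DAQ VLAN
  buffer = deque(maxlen=1000)  # simple in-memory buffer before forwarding

  client = ModbusTcpClient(SENSOR_HOST, port=502)
  client.connect()
  try:
      while True:
          rr = client.read_holding_registers(0, count=2)  # temperature, humidity (hypothetical)
          if not rr.isError():
              temperature = rr.registers[0] / 10.0  # raw value scaled to degrees C (assumed)
              humidity = rr.registers[1] / 10.0     # raw value scaled to %RH (assumed)
              # Initial validation: discard physically implausible readings
              if -10.0 <= temperature <= 50.0 and 0.0 <= humidity <= 100.0:
                  buffer.append({"ts": time.time(), "temp": temperature, "rh": humidity})
          time.sleep(5)  # poll interval; the real script batches and forwards the buffer
  finally:
      client.close()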

Processing Servers

These servers are the heart of the AI system. They receive data from the acquisition servers, run the machine learning models, and store the analyzed results. They require significant computational power and memory.

Server Name        | Role                   | Operating System | CPU                       | RAM         | GPU               | Storage
rainforest-proc-01 | AI Processing          | CentOS Stream 9  | 2 x Intel Xeon Gold 6248R | 128 GB DDR4 | NVIDIA Tesla V100 | 4 TB NVMe SSD
rainforest-proc-02 | AI Processing (Backup) | CentOS Stream 9  | 2 x Intel Xeon Gold 6248R | 128 GB DDR4 | NVIDIA Tesla V100 | 4 TB NVMe SSD

The machine learning models are built using TensorFlow and PyTorch, as described in the Machine Learning Model Repository. The processing servers run these workloads on Kubernetes for scalability and fault tolerance. Monitoring is handled by Prometheus and Grafana.
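As an illustration of the inference stage only (not the project's actual models), the sketch below loads a TorchScript classifier and scores a batch of audio spectrograms; the model file, label set, and input shape are hypothetical, and the real models are documented in the Machine Learning Model Repository.

  # Illustrative PyTorch inference sketch; model path, labels, and input shape
  # are placeholders, not the project's actual artifacts.
  import torch

  LABELS = ["background", "bird_call", "distress_call"]  # hypothetical classes

  device = "cuda" if torch.cuda.is_available() else "cpu"  # uses the Tesla V100 when present
  model = torch.jit.load("species_classifier.pt", map_location=device)
  model.eval()

  def classify(spectrograms: torch.Tensor) -> list[str]:
      """Return one predicted label per spectrogram in the batch (N, 1, 128, 128 assumed)."""
      with torch.no_grad():
          logits = model(spectrograms.to(device))
          predictions = torch.softmax(logits, dim=1).argmax(dim=1)
      return [LABELS[i] for i in predictions.tolist()]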

Presentation Servers

These servers host the web application that provides access to the analyzed data and visualizations. They are responsible for serving the user interface and handling user authentication.

Server Name       | Role                     | Operating System | CPU                  | RAM        | Storage  | Web Server
rainforest-web-01 | Web Application          | Debian 11        | Intel Core i7-10700K | 32 GB DDR4 | 2 TB SSD | Apache 2.4
rainforest-web-02 | Web Application (Backup) | Debian 11        | Intel Core i7-10700K | 32 GB DDR4 | 2 TB SSD | Apache 2.4

The web application is built using Python (Flask) and JavaScript. The data is retrieved from a PostgreSQL database, documented in Database Schema. User authentication is managed through LDAP Integration.
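A minimal sketch of how such a Flask endpoint might read from PostgreSQL is shown below; the table and column names are hypothetical stand-ins for the real Database Schema, and credentials would come from configuration rather than source code.

  # Illustrative Flask endpoint; connection details and the schema are placeholders.
  import psycopg2
  from flask import Flask, jsonify

  app = Flask(__name__)

  def get_connection():
      # In production, credentials come from configuration, not literals.
      return psycopg2.connect(host="rainforest-db", dbname="rainforest",
                              user="webapp", password="change-me")

  @app.route("/api/readings/latest")
  def latest_readings():
      conn = get_connection()
      try:
          with conn.cursor() as cur:
              cur.execute("SELECT sensor_id, recorded_at, temperature, humidity "
                          "FROM sensor_readings ORDER BY recorded_at DESC LIMIT 100")
              rows = cur.fetchall()
      finally:
          conn.close()
      return jsonify([
          {"sensor_id": r[0], "recorded_at": r[1].isoformat(),
           "temperature": r[2], "humidity": r[3]}
          for r in rows
      ])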

Networking

The servers are connected via a dedicated Gigabit Ethernet network. A firewall (pfSense) protects the network from external threats. Details of the network topology can be found in the Network Diagram. We utilize a VLAN structure to segregate the different server roles.

Software Stack

The following software components are essential to the operation of the system:

  • Operating Systems: Ubuntu Server 22.04 LTS, CentOS Stream 9, Debian 11
  • Programming Languages: Python, JavaScript
  • Machine Learning Frameworks: TensorFlow, PyTorch
  • Database: PostgreSQL
  • Web Server: Apache 2.4
  • Container Orchestration: Kubernetes
  • Monitoring: Prometheus, Grafana
  • Firewall: pfSense

Further information on software versions and configurations can be found in the Software Inventory.
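For the monitoring components listed above, the usual pattern (sketched here with the prometheus_client Python library and hypothetical metric names) is for each service to expose an HTTP metrics endpoint that Prometheus scrapes and Grafana visualizes.

  # Illustrative metrics exposition; metric names and the port are placeholders.
  import random
  import time
  from prometheus_client import Counter, Gauge, start_http_server

  READINGS_PROCESSED = Counter("rainforest_readings_processed_total",
                               "Sensor readings processed by this service")
  QUEUE_DEPTH = Gauge("rainforest_ingest_queue_depth",
                      "Readings currently waiting to be processed")

  if __name__ == "__main__":
      start_http_server(9100)  # Prometheus scrapes http://<host>:9100/metrics
      while True:
          READINGS_PROCESSED.inc()
          QUEUE_DEPTH.set(random.randint(0, 50))  # stand-in for a real queue depth
          time.sleep(5)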

Future Considerations

We are currently exploring edge computing to reduce latency and bandwidth requirements by deploying AI models directly on the sensor nodes; see the Edge Computing Proposal for further details. We are also evaluating more powerful GPUs to accelerate model training.




