# AI in the Coral Sea: Server Configuration

This document details the server configuration for the "AI in the Coral Sea" project, a research initiative that uses artificial intelligence to monitor and analyze the health of the Coral Sea ecosystem. It is written for new team members and anyone unfamiliar with the server infrastructure, and covers hardware specifications, the software stack, network configuration, and security considerations.

## Project Overview

The "AI in the Coral Sea" project involves the deployment of underwater sensors, drones, and satellite imagery analysis. The collected data is processed by a centralized server infrastructure to identify coral bleaching events, track marine life, and detect pollution. This data analysis is performed using machine learning models, requiring significant computational resources. See also: Data Acquisition and Machine Learning Models.

## Hardware Configuration

The server infrastructure consists of three primary server types: Database Servers, Application Servers, and Processing Nodes. Each server type is built with specific hardware to optimize its performance.

### Database Servers

These servers are responsible for storing and managing the vast amounts of data collected by the sensors and drones. They utilize a clustered database approach for redundancy and scalability.
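The cluster topology itself is covered in Database Administration, but the client-facing idea behind the redundancy can be sketched: a client tries a list of cluster hosts in order and uses the first one that accepts a connection. The host names and the `connect` callable below are hypothetical placeholders, not the project's actual configuration.

```python
# Sketch of client-side failover against a clustered database.
# Host names and the connect() callable are hypothetical placeholders;
# the real cluster topology is documented in Database Administration.

DB_HOSTS = ["db1.coralsea.internal", "db2.coralsea.internal", "db3.coralsea.internal"]

def first_reachable(hosts, connect):
    """Return a connection to the first host that accepts one.

    `connect` is any callable that takes a host name and either returns
    a connection object or raises ConnectionError.
    """
    last_error = None
    for host in hosts:
        try:
            return connect(host)
        except ConnectionError as exc:
            last_error = exc  # remember the failure and try the next host
    raise ConnectionError(f"no database host reachable: {last_error}")
```

In practice this role is usually delegated to the database driver (PostgreSQL's libpq, for example, accepts multi-host connection strings), so treat this as an illustration of the failover concept rather than production code.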

| Component | Specification |
|---|---|
| CPU | 2 x Intel Xeon Gold 6248R (24 cores/48 threads) |
| RAM | 512 GB DDR4 ECC Registered |
| Storage | 4 x 8TB SAS 12Gbps 7.2K RPM HDD (RAID 10) + 2 x 1TB NVMe SSD (Caching) |
| Network Interface | Dual 10 Gigabit Ethernet |
| Power Supply | Redundant 1600W Platinum |

These servers run PostgreSQL with pgBackRest for backups. See Database Administration for details.

### Application Servers

These servers host the web application interface for data visualization and analysis, as well as the API endpoints for data access.

| Component | Specification |
|---|---|
| CPU | 2 x Intel Xeon Silver 4210 (10 cores/20 threads) |
| RAM | 128 GB DDR4 ECC Registered |
| Storage | 2 x 1TB NVMe SSD (RAID 1) |
| Network Interface | Dual 1 Gigabit Ethernet |
| Power Supply | Redundant 850W Gold |

The application servers are built using Python, Flask, and nginx. See Web Application Deployment for more information.
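The shape of a data-access endpoint can be sketched without the full stack. The production app runs on Flask behind nginx (see Web Application Deployment); for a self-contained illustration, the sketch below uses only Python's standard WSGI interface, and the route and JSON payload are hypothetical.

```python
# Stdlib-only WSGI sketch of a read-only data-access endpoint.
# The production app uses Flask behind nginx (see Web Application
# Deployment); the route and payload here are hypothetical.
import json

def application(environ, start_response):
    if environ.get("PATH_INFO") == "/api/v1/status":
        body = json.dumps({"service": "coral-sea-api", "status": "ok"}).encode()
        start_response("200 OK", [("Content-Type", "application/json")])
        return [body]
    # Anything else falls through to a JSON 404.
    start_response("404 Not Found", [("Content-Type", "application/json")])
    return [b'{"error": "not found"}']
```

A Flask route body looks almost identical; nginx sits in front as a reverse proxy, terminating TLS and serving static assets.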

### Processing Nodes

These servers are dedicated to running the computationally intensive machine learning models. They utilize powerful GPUs to accelerate the training and inference processes.

| Component | Specification |
|---|---|
| CPU | 2 x AMD EPYC 7763 (64 cores/128 threads) |
| RAM | 256 GB DDR4 ECC Registered |
| GPU | 4 x NVIDIA A100 (80GB) |
| Storage | 2 x 2TB NVMe SSD (RAID 0) |
| Network Interface | Dual 100 Gigabit Ethernet |
| Power Supply | Redundant 2000W Titanium |

These nodes utilize TensorFlow and PyTorch to run the AI models. See GPU Cluster Management for configuration details.
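How a batch is spread across the four A100s is handled by the framework (both TensorFlow and PyTorch ship data-parallel wrappers that do this internally), but the split itself is simple arithmetic, sketched here with hypothetical device names:

```python
# Schematic of data-parallel batch sharding across the 4 GPUs.
# Device names are hypothetical; in practice TensorFlow's
# tf.distribute.MirroredStrategy or PyTorch's DistributedDataParallel
# performs this split internally.

DEVICES = ["gpu:0", "gpu:1", "gpu:2", "gpu:3"]

def shard_batch(batch, devices):
    """Split `batch` into near-equal contiguous shards, one per device."""
    n = len(devices)
    base, extra = divmod(len(batch), n)
    shards, start = {}, 0
    for i, dev in enumerate(devices):
        size = base + (1 if i < extra else 0)  # front devices absorb the remainder
        shards[dev] = batch[start:start + size]
        start += size
    return shards

shards = shard_batch(list(range(10)), DEVICES)
# gpu:0 and gpu:1 receive 3 items each; gpu:2 and gpu:3 receive 2 each
```

Each GPU then runs the same model on its shard, and gradients (for training) or outputs (for inference) are combined afterwards.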

## Software Stack

The software stack is built upon a Linux foundation, providing a stable and secure operating environment.
