AI in the Adriatic Sea


AI in the Adriatic Sea: Server Configuration Document

This document details the server configuration for the “AI in the Adriatic Sea” project, a research initiative focused on utilizing artificial intelligence to monitor and analyze marine life and environmental conditions within the Adriatic Sea. This guide is intended for new members of the technical team and provides a comprehensive overview of the hardware, software, and network infrastructure. Understanding these details is crucial for effective maintenance, troubleshooting, and future expansion of the project. Refer to System Administration for general MediaWiki administration guidelines.

Overview

The project utilizes a distributed server architecture comprising three primary server clusters: a central processing cluster (located in Trieste, Italy), a data ingestion cluster (split between Split, Croatia and Ancona, Italy), and a remote access/visualization server (located in Venice, Italy). Data is collected from a network of underwater sensors, autonomous underwater vehicles (AUVs), and satellite feeds. The central processing cluster performs the heavy lifting of AI model training and inference, while the data ingestion clusters handle the raw data stream. The remote access server provides a user interface for researchers and stakeholders. See Data Flow Diagram for a visual representation of the system.

Central Processing Cluster (Trieste)

This cluster is the core of the AI processing pipeline. It is responsible for running the complex machine learning models used for species identification, anomaly detection, and predictive modeling. The cluster is built around high-performance computing (HPC) principles. Consult the HPC Best Practices document for further details.
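The actual models and detection thresholds are project-specific and documented elsewhere; purely as a simplified illustration of the anomaly-detection stage, a z-score check over a window of sensor readings might look like the following sketch (function name and threshold are hypothetical):

```python
import statistics

def zscore_anomalies(readings, threshold=3.0):
    """Flag readings more than `threshold` standard deviations from the
    window mean. A stand-in for the project's real ML-based detectors."""
    if len(readings) < 2:
        return []
    mean = statistics.fmean(readings)
    stdev = statistics.stdev(readings)
    if stdev == 0:
        return []
    return [x for x in readings if abs(x - mean) / stdev > threshold]

# Water-temperature samples (degrees C) with one obvious outlier.
temps = [14.1, 14.3, 14.0, 14.2, 30.5, 14.1, 14.2]
print(zscore_anomalies(temps, threshold=2.0))  # -> [30.5]
```

In production this role is filled by the trained models running on the A100 GPUs; the sketch only shows the flag-and-report pattern.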

Hardware Specifications

Component            Specification
CPU                  8 x Intel Xeon Gold 6338 (32 cores/CPU, 2.0 GHz)
RAM                  512 GB DDR4 ECC Registered (16 x 32 GB DIMMs)
Storage (OS)         2 x 960 GB NVMe SSD (RAID 1)
Storage (Data)       16 x 16 TB SAS HDD (RAID 6)
GPU                  4 x NVIDIA A100 (80 GB HBM2e)
Network Interface    2 x 100 GbE Mellanox ConnectX-6
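As a quick sanity check on the data array above: RAID 6 dedicates two drives' worth of capacity to parity, so 16 x 16 TB yields (16 - 2) x 16 TB = 224 TB of raw usable space before filesystem overhead. A small helper (name is illustrative) makes the arithmetic explicit:

```python
def raid_usable_tb(drives: int, size_tb: float, parity_drives: int) -> float:
    """Usable capacity of a parity RAID array, ignoring filesystem overhead."""
    if drives <= parity_drives:
        raise ValueError("array needs more drives than parity devices")
    return (drives - parity_drives) * size_tb

print(raid_usable_tb(16, 16, parity_drives=2))  # Trieste data array (RAID 6): 224
print(raid_usable_tb(4, 8, parity_drives=1))    # ingestion incoming array (RAID 5): 24
```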

Software Stack

Data Ingestion Clusters (Split & Ancona)

These clusters act as the entry point for all incoming data. They are responsible for data validation, preprocessing, and initial storage. Each cluster operates independently, providing redundancy and geographical diversity. The clusters use a message queue system for reliable data transfer. Refer to the Data Pipeline Documentation for details on the data flow.
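The specific broker and schema are covered in the Data Pipeline Documentation, not here. As a minimal sketch of the validate-then-enqueue pattern these clusters follow (field names and the in-process queue are stand-ins for the real message queue system):

```python
import json
import queue

def validate(record: dict) -> bool:
    """Minimal schema check; the real validation rules live in the
    Data Pipeline Documentation. Field names here are hypothetical."""
    return {"sensor_id", "timestamp", "value"} <= record.keys()

def ingest(raw_messages, outbound: queue.Queue) -> int:
    """Parse, validate, and enqueue records; silently drop malformed input."""
    accepted = 0
    for raw in raw_messages:
        try:
            record = json.loads(raw)
        except json.JSONDecodeError:
            continue  # not valid JSON: discard
        if validate(record):
            outbound.put(record)
            accepted += 1
    return accepted

msgs = [
    '{"sensor_id": "adr-017", "timestamp": 1700000000, "value": 14.2}',
    'not json',
    '{"sensor_id": "adr-018"}',  # missing required fields
]
q = queue.Queue()
print(ingest(msgs, q))  # -> 1
```

In production the `outbound` side would be a durable broker queue rather than an in-process `queue.Queue`, so that a cluster restart does not lose buffered records.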

Hardware Specifications (Per Cluster)

Component             Specification
CPU                   2 x Intel Xeon Silver 4310 (12 cores/CPU, 2.1 GHz)
RAM                   128 GB DDR4 ECC Registered (8 x 16 GB DIMMs)
Storage (Incoming)    4 x 8 TB SAS HDD (RAID 5)
Storage (Processed)   2 x 4 TB NVMe SSD (RAID 1)
Network Interface     2 x 40 GbE Intel Ethernet

Software Stack

Remote Access & Visualization Server (Venice)

This server provides a secure, web-based interface through which researchers access data, visualize results, and interact with the AI models. It is optimized for user experience and security. See the User Interface Guide.

Hardware Specifications

Component            Specification
CPU                  2 x Intel Core i7-12700 (12 cores, 3.6 GHz)
RAM                  64 GB DDR5 (2 x 32 GB DIMMs)
Storage (OS)         1 x 512 GB NVMe SSD
Storage (Data)       2 x 4 TB SATA HDD (RAID 1)
Network Interface    1 x 10 GbE Intel Ethernet

Software Stack

  • Operating System: Debian 11
  • Web Server: Apache 2.4 with SSL/TLS encryption.
  • Visualization Tools: Plotly, Dash, and Bokeh.
  • Authentication: LDAP integration with university accounts.
  • Security: Fail2ban intrusion prevention system.
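For the SSL/TLS item above, a minimal Apache virtual host might look like the following sketch. The hostname, certificate paths, and backend port are all hypothetical (8050 is the Dash development default), and the proxy lines assume mod_ssl and mod_proxy are enabled; the production configuration is not reproduced in this document.

```
<VirtualHost *:443>
    ServerName adriatic-ai.example.org
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/adriatic-ai.pem
    SSLCertificateKeyFile /etc/ssl/private/adriatic-ai.key

    # Forward application traffic to the visualization app server.
    ProxyPass        / http://127.0.0.1:8050/
    ProxyPassReverse / http://127.0.0.1:8050/
</VirtualHost>
```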

Network Infrastructure

All three clusters are connected via a dedicated, high-bandwidth network. The network is segmented using VLANs to isolate traffic and enhance security. Detailed network diagrams are available at Network Topology. Firewall rules are managed using iptables. Regular security audits are performed.
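The actual VLAN IDs and address plan live in Network Topology and are not reproduced here. Purely as an illustration of how a private supernet could be carved into per-site segments, Python's ipaddress module can enumerate candidate subnets (all addresses and segment names below are hypothetical):

```python
import ipaddress

# Hypothetical private supernet for the project; the real address
# plan is documented in Network Topology.
supernet = ipaddress.ip_network("10.40.0.0/16")

# One /24 per site/traffic segment.
segments = ["trieste-hpc", "split-ingest", "ancona-ingest", "venice-access"]
subnets = dict(zip(segments, supernet.subnets(new_prefix=24)))

for name, net in subnets.items():
    print(f"{name:15s} {net}")  # e.g. trieste-hpc     10.40.0.0/24
```

Keeping the plan in code (or any machine-readable form) makes it easy to generate consistent firewall rules and DHCP scopes from a single source.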

Future Considerations

Future upgrades include expanding the storage capacity of the central processing cluster, implementing a more robust data backup system, and exploring the use of federated learning to further enhance the AI models. See Future Development Roadmap.


Intel-Based Server Configurations

Configuration                    Specifications                                    Benchmark
Core i7-6700K/7700 Server        64 GB DDR4, 2 x 512 GB NVMe SSD                   CPU Benchmark: 8046
Core i7-8700 Server              64 GB DDR4, 2 x 1 TB NVMe SSD                     CPU Benchmark: 13124
Core i9-9900K Server             128 GB DDR4, 2 x 1 TB NVMe SSD                    CPU Benchmark: 49969
Core i9-13900 Server (64GB)      64 GB RAM, 2 x 2 TB NVMe SSD                      n/a
Core i9-13900 Server (128GB)     128 GB RAM, 2 x 2 TB NVMe SSD                     n/a
Core i5-13500 Server (64GB)      64 GB RAM, 2 x 500 GB NVMe SSD                    n/a
Core i5-13500 Server (128GB)     128 GB RAM, 2 x 500 GB NVMe SSD                   n/a
Core i5-13500 Workstation        64 GB DDR5 RAM, 2 x NVMe SSD, NVIDIA RTX 4000     n/a

AMD-Based Server Configurations

Configuration                    Specifications                      Benchmark
Ryzen 5 3600 Server              64 GB RAM, 2 x 480 GB NVMe          CPU Benchmark: 17849
Ryzen 7 7700 Server              64 GB DDR5 RAM, 2 x 1 TB NVMe       CPU Benchmark: 35224
Ryzen 9 5950X Server             128 GB RAM, 2 x 4 TB NVMe           CPU Benchmark: 46045
Ryzen 9 7950X Server             128 GB DDR5 ECC, 2 x 2 TB NVMe      CPU Benchmark: 63561
EPYC 7502P Server (128GB/1TB)    128 GB RAM, 1 TB NVMe               CPU Benchmark: 48021
EPYC 7502P Server (128GB/2TB)    128 GB RAM, 2 TB NVMe               CPU Benchmark: 48021
EPYC 7502P Server (128GB/4TB)    128 GB RAM, 2 x 2 TB NVMe           CPU Benchmark: 48021
EPYC 7502P Server (256GB/1TB)    256 GB RAM, 1 TB NVMe               CPU Benchmark: 48021
EPYC 7502P Server (256GB/4TB)    256 GB RAM, 2 x 2 TB NVMe           CPU Benchmark: 48021
EPYC 9454P Server                256 GB RAM, 2 x 2 TB NVMe           n/a


Note: All benchmark scores are approximate and may vary based on configuration.