AI in the Red Sea


AI in the Red Sea: Server Configuration Documentation

This document details the server configuration for the "AI in the Red Sea" project, a research initiative focusing on underwater data analysis using artificial intelligence. This guide is intended for new team members and system administrators tasked with maintaining the project's infrastructure. It covers hardware, software, networking, and security considerations. Please review this document thoroughly before making any changes to the server environment. Refer to the System Administration Policy for overarching guidelines.

Overview

The "AI in the Red Sea" project relies on a distributed server architecture to process the large volumes of data collected from underwater sensors and remotely operated vehicles (ROVs). The system is designed for high throughput, low latency, and scalability. Data is initially ingested into a staging area, then processed by AI models, and finally archived for long-term storage. This process is outlined in the Data Flow Diagram. This documentation focuses on the core server components. For details on the data acquisition systems, see Sensor Network Configuration.

Hardware Specifications

The server infrastructure consists of three primary tiers: ingestion, processing, and storage. Each tier utilizes specialized hardware optimized for its respective tasks.

Ingestion Servers

These servers handle the initial intake of data from the ROVs and sensor networks. They require high network bandwidth and fast storage for temporary buffering.

| Component | Specification | Quantity |
|---|---|---|
| CPU | Intel Xeon Gold 6248R (24 cores) | 2 |
| RAM | 256 GB DDR4 ECC Reg. | 2 |
| Storage (Temporary) | 4 x 2 TB NVMe SSD (RAID 0) | 2 |
| Network Interface | Dual 100 Gbps Ethernet | 2 |
| Power Supply | 1600 W Redundant | 2 |

Processing Servers

These servers execute the AI models for data analysis. They are equipped with high-performance GPUs for accelerated computation.

| Component | Specification | Quantity |
|---|---|---|
| CPU | AMD EPYC 7763 (64 cores) | 4 |
| RAM | 512 GB DDR4 ECC Reg. | 4 |
| GPU | NVIDIA A100 (80 GB) | 8 |
| Storage (Local) | 2 x 4 TB NVMe SSD (RAID 1) | 4 |
| Network Interface | Dual 100 Gbps Ethernet | 4 |
| Power Supply | 2000 W Redundant | 4 |

Storage Servers

These servers provide long-term archival storage for processed data. Capacity and data integrity are the primary concerns.

| Component | Specification | Quantity |
|---|---|---|
| CPU | Intel Xeon Silver 4210 (10 cores) | 6 |
| RAM | 64 GB DDR4 ECC Reg. | 6 |
| Storage (Archival) | 12 x 16 TB SAS HDD (RAID 6) | 6 |
| Network Interface | Dual 25 Gbps Ethernet | 6 |
| Power Supply | 1200 W Redundant | 6 |
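
As a quick check of archival headroom: RAID 6 reserves two drives' worth of capacity for parity, so each storage server's usable space can be estimated as shown below. This is a back-of-the-envelope sketch; actual usable capacity will be somewhat lower after filesystem and formatting overhead.

```python
# Rough usable-capacity estimate for the archival tier described above.
# RAID 6 tolerates two simultaneous drive failures, so usable space is
# approximately (drives - 2) * drive_size, before filesystem overhead.
drives_per_server = 12
drive_size_tb = 16
servers = 6

usable_per_server_tb = (drives_per_server - 2) * drive_size_tb  # 160 TB
total_usable_tb = usable_per_server_tb * servers                # 960 TB

print(f"Usable per server: {usable_per_server_tb} TB")
print(f"Usable across the tier: {total_usable_tb} TB")
```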

Software Configuration

All servers run Ubuntu Server 22.04 LTS as the base operating system. Additional software components are installed and configured on top of this base image.
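
A quick way to confirm that a node matches this baseline is a small verification script. The sketch below assumes only the standard `lsb_release` and `nvidia-smi` command-line tools (the latter on GPU nodes); it is not part of the project's documented tooling.

```python
# Minimal host-baseline check: OS release and (optionally) GPU visibility.
# Assumes only the standard lsb_release and nvidia-smi CLI tools.
import shutil
import subprocess

def run(cmd: list[str]) -> str:
    """Run a command and return its trimmed stdout."""
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout.strip()

def check_os() -> None:
    release = run(["lsb_release", "-d"])
    print(release)
    if "22.04" not in release:
        print("WARNING: expected Ubuntu Server 22.04 LTS")

def check_gpus() -> None:
    if shutil.which("nvidia-smi") is None:
        print("No nvidia-smi found (expected on non-GPU tiers)")
        return
    gpus = run(["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"])
    print("GPUs detected:")
    print(gpus)

if __name__ == "__main__":
    check_os()
    check_gpus()
```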

Networking Configuration

The servers are connected via a dedicated 100 Gbps network. The network topology is a full mesh for redundancy and low latency. Each server has a static IP address assigned within the 10.0.0.0/8 network. A dedicated VLAN is used for inter-server communication. See Network Diagram for a visual representation. Firewall rules are configured using `iptables` to restrict access to essential services only. The Firewall Rule Set details the current configuration.
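For illustration, the addressing rule above can be sanity-checked with Python's standard `ipaddress` module. The host names and addresses below are hypothetical examples, not the project's actual assignments.

```python
# Illustrative check that static assignments fall inside the project network.
# All host names and addresses below are hypothetical examples.
import ipaddress

PROJECT_NET = ipaddress.ip_network("10.0.0.0/8")

example_hosts = {
    "ingest-01": "10.0.10.11",
    "proc-01": "10.0.20.11",
    "store-01": "10.0.30.11",
}

for name, addr in example_hosts.items():
    ip = ipaddress.ip_address(addr)
    status = "OK" if ip in PROJECT_NET else "OUTSIDE 10.0.0.0/8"
    print(f"{name}: {addr} -> {status}")
```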

Security Considerations

Security is paramount for the "AI in the Red Sea" project. The following measures are in place:

  • Access Control: SSH access is restricted to authorized personnel only, using key-based authentication (a minimal configuration check is sketched after this list).
  • Firewall: A strict firewall policy is enforced to prevent unauthorized access.
  • Intrusion Detection: An intrusion detection system (IDS) is deployed to monitor for malicious activity.
  • Data Encryption: Data is encrypted both in transit and at rest.
  • Regular Audits: Security audits are conducted regularly to identify and address vulnerabilities. Refer to Security Audit Schedule.
  • VPN Access: Remote access is only permitted via a secure VPN. See VPN Configuration.
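
To illustrate the access-control item above, the snippet below checks an OpenSSH server configuration for the directives most relevant to key-only access. The path and directive values are common OpenSSH settings, assumed here for the sketch rather than taken from the project's own hardening baseline.

```python
# Illustrative audit of sshd_config for key-only SSH access.
# The path and directive values are standard OpenSSH settings, assumed for this sketch.
from pathlib import Path

EXPECTED = {
    "PasswordAuthentication": "no",
    "PermitRootLogin": "no",
    "PubkeyAuthentication": "yes",
}

def audit_sshd(path: str = "/etc/ssh/sshd_config") -> None:
    found: dict[str, str] = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        parts = line.split(None, 1)
        if len(parts) == 2 and parts[0] in EXPECTED:
            found[parts[0]] = parts[1].lower()

    for key, expected in EXPECTED.items():
        actual = found.get(key, "<not set>")
        flag = "OK" if actual == expected else "CHECK"
        print(f"{flag}: {key} = {actual} (expected {expected})")

if __name__ == "__main__":
    audit_sshd()
```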

Future Expansion

The server infrastructure is designed to be scalable. Future expansion plans include adding more processing servers with next-generation GPUs and increasing the storage capacity of the storage servers. We are also evaluating the use of cloud-based storage solutions for archival data. See Scalability Plan for details.

See also:

  • Server Maintenance Schedule
  • Troubleshooting Guide
  • Contact Information


Intel-Based Server Configurations

| Configuration | Specifications | Benchmark |
|---|---|---|
| Core i7-6700K/7700 Server | 64 GB DDR4, 2 x 512 GB NVMe SSD | CPU Benchmark: 8046 |
| Core i7-8700 Server | 64 GB DDR4, 2 x 1 TB NVMe SSD | CPU Benchmark: 13124 |
| Core i9-9900K Server | 128 GB DDR4, 2 x 1 TB NVMe SSD | CPU Benchmark: 49969 |
| Core i9-13900 Server (64GB) | 64 GB RAM, 2 x 2 TB NVMe SSD | |
| Core i9-13900 Server (128GB) | 128 GB RAM, 2 x 2 TB NVMe SSD | |
| Core i5-13500 Server (64GB) | 64 GB RAM, 2 x 500 GB NVMe SSD | |
| Core i5-13500 Server (128GB) | 128 GB RAM, 2 x 500 GB NVMe SSD | |
| Core i5-13500 Workstation | 64 GB DDR5 RAM, 2 NVMe SSD, NVIDIA RTX 4000 | |

AMD-Based Server Configurations

| Configuration | Specifications | Benchmark |
|---|---|---|
| Ryzen 5 3600 Server | 64 GB RAM, 2 x 480 GB NVMe | CPU Benchmark: 17849 |
| Ryzen 7 7700 Server | 64 GB DDR5 RAM, 2 x 1 TB NVMe | CPU Benchmark: 35224 |
| Ryzen 9 5950X Server | 128 GB RAM, 2 x 4 TB NVMe | CPU Benchmark: 46045 |
| Ryzen 9 7950X Server | 128 GB DDR5 ECC, 2 x 2 TB NVMe | CPU Benchmark: 63561 |
| EPYC 7502P Server (128GB/1TB) | 128 GB RAM, 1 TB NVMe | CPU Benchmark: 48021 |
| EPYC 7502P Server (128GB/2TB) | 128 GB RAM, 2 TB NVMe | CPU Benchmark: 48021 |
| EPYC 7502P Server (128GB/4TB) | 128 GB RAM, 2 x 2 TB NVMe | CPU Benchmark: 48021 |
| EPYC 7502P Server (256GB/1TB) | 256 GB RAM, 1 TB NVMe | CPU Benchmark: 48021 |
| EPYC 7502P Server (256GB/4TB) | 256 GB RAM, 2 x 2 TB NVMe | CPU Benchmark: 48021 |
| EPYC 9454P Server | 256 GB RAM, 2 x 2 TB NVMe | |


Note: All benchmark scores are approximate and may vary based on configuration. Server availability is subject to stock.