AI in the Baltic Sea: Server Configuration
This article details the server configuration utilized for the "AI in the Baltic Sea" project, a research initiative focused on real-time data analysis and predictive modeling of environmental conditions within the Baltic Sea region. This guide is intended for newcomers to our server environment and provides a comprehensive overview of the hardware and software stack.
Project Overview
The "AI in the Baltic Sea" project ingests data from a network of underwater sensors, satellite imagery, and historical datasets. After acquisition and preprocessing, this data feeds machine learning models that predict algal blooms, monitor water quality, and track marine life migration patterns. Because of the data volumes involved, the pipeline runs on a distributed system designed for high throughput, low latency, and scalability. See also Project Goals for a high-level overview.
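The ingest → preprocess → predict flow above can be sketched in Python. This is illustrative only: the class, field names, and the chlorophyll threshold are assumptions standing in for the project's actual schema and trained models.

```python
# Minimal sketch of the ingest -> preprocess -> predict flow.
# Names and thresholds are illustrative, not the project's actual API.
from dataclasses import dataclass
from typing import List


@dataclass
class SensorReading:
    station_id: str
    chlorophyll_ug_l: float  # chlorophyll-a concentration, a common bloom proxy
    temperature_c: float


def preprocess(readings: List[SensorReading]) -> List[SensorReading]:
    """Drop physically implausible values before they reach the model."""
    return [r for r in readings if 0.0 <= r.chlorophyll_ug_l < 500.0]


def bloom_risk(reading: SensorReading) -> bool:
    """Toy stand-in for a trained model: flags high chlorophyll as risk."""
    return reading.chlorophyll_ug_l > 20.0


raw = [
    SensorReading("BS-01", 35.2, 14.1),
    SensorReading("BS-02", -1.0, 13.8),  # faulty sensor value, filtered out
]
clean = preprocess(raw)
risks = [bloom_risk(r) for r in clean]
```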
Hardware Infrastructure
The server infrastructure consists of three primary tiers: Data Ingestion, Processing, and Storage. Each tier is built with redundancy and scalability in mind.
Data Ingestion Tier
This tier handles the reception of data from various sources. It’s designed for high availability and rapid data transfer.
| Component | Specification | Quantity |
|---|---|---|
| Server Type | Dell PowerEdge R750 | 2 |
| CPU | Intel Xeon Gold 6338 (32 cores) | 2 per server |
| RAM | 256 GB DDR4 ECC REG | 2 per server |
| Network Interface | 100 Gbps Ethernet | 2 per server |
| Storage (Temporary) | 2 x 1 TB NVMe SSD (RAID 1) | 2 per server |
These servers receive data over protocols such as MQTT and HTTP/S. As the externally facing tier, it is subject to the strictest security controls.
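Whatever the transport, each incoming payload must be validated before entering the pipeline. A minimal stdlib sketch of validating one JSON sensor payload, assuming hypothetical field names (`station`, `ts`, `value`) rather than the project's real schema:

```python
# Hedged sketch: validating one JSON sensor payload received over HTTP/S or MQTT.
# The field names are assumptions, not the real ingestion schema.
import json

REQUIRED_FIELDS = {"station", "ts", "value"}


def parse_payload(raw: bytes):
    """Return the decoded payload dict, or None if malformed or incomplete."""
    try:
        payload = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not isinstance(payload, dict) or not REQUIRED_FIELDS <= payload.keys():
        return None
    return payload


ok = parse_payload(b'{"station": "BS-07", "ts": 1700000000, "value": 4.2}')
bad = parse_payload(b'{"station": "BS-07"}')  # missing fields -> rejected
```

Rejected payloads would typically be logged and counted rather than silently dropped, so sensor faults surface in monitoring.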
Processing Tier
This tier performs the computationally intensive tasks of data cleaning, transformation, and model training/inference.
| Component | Specification | Quantity |
|---|---|---|
| Server Type | Supermicro SYS-2029U-TR4 | 4 |
| CPU | AMD EPYC 7763 (64 cores) | 2 per server |
| GPU | NVIDIA A100 (80 GB) | 2 per server |
| RAM | 512 GB DDR4 ECC REG | 2 per server |
| Storage (Local) | 4 x 4 TB NVMe SSD (RAID 10) | 4 per server |
GPU acceleration is essential for our deep learning frameworks, TensorFlow and PyTorch. Workloads are containerized with Docker and orchestrated with Kubernetes for efficient resource management.
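Under Kubernetes, GPU allocation is expressed as a resource request on the container. The manifest below is illustrative only: the pod name, image, and GPU count are placeholders, not the project's actual deployment, though `nvidia.com/gpu` is the standard resource name exposed by the NVIDIA device plugin.

```yaml
# Illustrative only: a pod requesting one A100 GPU for a training job.
apiVersion: v1
kind: Pod
metadata:
  name: model-training
spec:
  containers:
    - name: trainer
      image: registry.example.com/baltic-ai/trainer:latest  # placeholder image
      resources:
        limits:
          nvidia.com/gpu: 1  # scheduled onto a node with a free A100
```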
Storage Tier
The Storage Tier provides persistent storage for raw data, processed data, and model artifacts.
| Component | Specification | Capacity |
|---|---|---|
| Storage System | Dell EMC PowerScale F600 | 1 PB (scalable to 5 PB) |
| File System | Lustre | N/A |
| Network Connectivity | 200 Gbps InfiniBand | N/A |
| Redundancy | Triple Parity RAID | N/A |
Backups are essential for data integrity and disaster recovery. We use a tiered storage approach: faster storage for frequently accessed data, and slower, cheaper storage for archival purposes.
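The tiering decision itself is simple to express. A minimal sketch, where the 30-day hot-tier threshold is an illustrative value rather than actual project policy:

```python
# Sketch of the tiering decision: recently accessed data stays on fast
# storage, cold data moves to the archive tier. The 30-day threshold is
# an assumed example value, not the project's real policy.
import time
from typing import Optional

HOT_TIER_MAX_AGE_S = 30 * 24 * 3600  # 30 days


def choose_tier(last_access_ts: float, now: Optional[float] = None) -> str:
    """Return which storage tier a file belongs in, based on access age."""
    now = time.time() if now is None else now
    return "hot" if now - last_access_ts < HOT_TIER_MAX_AGE_S else "archive"
```

A periodic job applying this predicate over file metadata is enough to implement the policy; the actual data movement is handled by the storage system.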
Software Stack
The software stack is designed to support the entire data pipeline, from ingestion to model deployment.
- Operating System: Ubuntu Server 22.04 LTS
- Containerization: Docker 20.10.7, Kubernetes 1.23
- Programming Languages: Python 3.9, R 4.2.1
- Databases: PostgreSQL 14, TimescaleDB 2.7
- Message Queue: Kafka 3.2.0
- Machine Learning Frameworks: TensorFlow 2.9, PyTorch 1.12
- Monitoring: Prometheus, Grafana
- Version Control: Git, GitLab
- CI/CD: Jenkins
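Within this stack, time-series writes to TimescaleDB are typically batched rather than inserted row by row. A minimal stdlib sketch of the batching pattern, where `flush_fn` stands in for the real database insert and the batch size of 500 is an illustrative value:

```python
# Sketch of batching sensor readings before a bulk database insert.
# flush_fn stands in for the real TimescaleDB insert; batch size 500
# is an assumed example value.
from typing import Callable, List, Tuple

Reading = Tuple[str, float]  # (station_id, value)


class BatchWriter:
    def __init__(self, flush_fn: Callable[[List[Reading]], None],
                 batch_size: int = 500):
        self.flush_fn = flush_fn
        self.batch_size = batch_size
        self.buffer: List[Reading] = []

    def add(self, reading: Reading) -> None:
        """Buffer one reading; flush automatically when the batch is full."""
        self.buffer.append(reading)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self) -> None:
        """Hand the buffered batch to flush_fn and start a new buffer."""
        if self.buffer:
            self.flush_fn(self.buffer)
            self.buffer = []


flushed: List[List[Reading]] = []
writer = BatchWriter(flushed.append, batch_size=2)
for r in [("BS-01", 4.2), ("BS-02", 3.9), ("BS-03", 5.1)]:
    writer.add(r)
writer.flush()  # final partial batch
```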
Cloud services are used for certain tasks, such as model deployment and remote access. Software updates are applied regularly to keep the system secure and stable. The API Documentation provides details on accessing the processed data.
Network Topology
The servers are interconnected via a high-speed spine-leaf network, which provides low latency and high bandwidth between all tiers. Network security is addressed through firewalls, intrusion detection systems, and regular security audits. See also Network Monitoring.
This configuration provides a robust and scalable platform for the "AI in the Baltic Sea" project. Future enhancements will focus on increasing storage capacity and incorporating new machine learning algorithms. For more information, please refer to the Internal Documentation.