AI in the North Sea: Server Configuration


This article details the server configuration for the "AI in the North Sea" project, a research initiative utilizing artificial intelligence to analyze data collected from sensors deployed in the North Sea. This guide is tailored for new contributors to our MediaWiki site and outlines the hardware and software stack employed.

Project Overview

The "AI in the North Sea" project focuses on real-time data analysis from underwater sensors, buoys, and satellite feeds. This data is used to predict environmental changes, optimize energy production from offshore wind farms, and monitor marine life. The project requires significant computational power for machine learning models, data storage, and network bandwidth. We leverage a hybrid cloud approach, utilizing both on-premise servers and cloud resources via [Amazon Web Services](https://en.wikipedia.org/wiki/Amazon_Web_Services). Understanding the server infrastructure is crucial for Data Management and Model Deployment.

Hardware Infrastructure

The core on-premise infrastructure consists of three primary server clusters: the Data Acquisition Cluster, the Processing Cluster, and the Storage Cluster. Each cluster is designed with redundancy and scalability in mind. Detailed specifications are provided below. System Administration personnel are responsible for maintaining this hardware.

Cluster | Server Role | Number of Servers | CPU | RAM | Storage
Data Acquisition Cluster | Sensor Data Ingest | 4 | | 128 GB | 4 x 4 TB SSD (RAID 10)
Processing Cluster | Machine Learning & Analysis | 8 | | 256 GB | 2 x 8 TB NVMe SSD (RAID 0) + 4 x 16 TB HDD (RAID 6)
Storage Cluster | Long-Term Data Archival | 12 | | 64 GB | 12 x 24 TB HDD (RAID 6)

All servers are interconnected via a 100Gbps Ethernet network utilizing Cisco Networking equipment. Power is provided by redundant UPS systems and a backup generator. Regular Hardware Monitoring is critical.
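As a concrete illustration of what such monitoring can look like, below is a minimal node health check written in Python using the psutil library. The choice of psutil and the thresholds are assumptions for illustration only; they do not describe the project's actual monitoring tooling.

```python
# Minimal hardware health check using psutil.
# Assumption: psutil is installed on the node; thresholds are illustrative, not project policy.
import psutil

def check_node_health(cpu_limit=90.0, mem_limit=90.0, disk_limit=85.0):
    """Return a list of warning strings for the local node."""
    warnings = []

    cpu = psutil.cpu_percent(interval=1)      # average CPU utilization over 1 second
    if cpu > cpu_limit:
        warnings.append(f"CPU usage high: {cpu:.1f}%")

    mem = psutil.virtual_memory().percent     # RAM usage as a percentage
    if mem > mem_limit:
        warnings.append(f"Memory usage high: {mem:.1f}%")

    for part in psutil.disk_partitions(all=False):
        usage = psutil.disk_usage(part.mountpoint).percent
        if usage > disk_limit:
            warnings.append(f"Disk {part.mountpoint} at {usage:.1f}%")

    return warnings

if __name__ == "__main__":
    for w in check_node_health():
        print("WARNING:", w)
```

A script like this would typically be run by the monitoring system on each node and its output forwarded to an alerting channel.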

Software Stack

The software stack is built around a Linux foundation, with specific distributions and versions detailed below. We prioritize open-source solutions wherever possible. Software Deployment is automated using Ansible.
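For orientation, the sketch below shows one way a deployment run might be triggered from Python by wrapping the ansible-playbook CLI. The playbook, inventory, and host-group names are placeholders, not the project's actual repository layout.

```python
# Thin wrapper around the ansible-playbook CLI for a deployment run.
# Assumption: playbook/inventory paths and the host group are illustrative placeholders.
import subprocess
import sys

def deploy(playbook="deploy.yml", inventory="inventory/production.ini", limit=None):
    """Run an Ansible playbook and return its exit code."""
    cmd = ["ansible-playbook", "-i", inventory, playbook]
    if limit:
        cmd += ["--limit", limit]          # restrict the run to a single host group
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    sys.exit(deploy(limit="processing_cluster"))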

Component | Version | Role | Details
Ubuntu Server | 22.04 LTS | Base OS | Provides a stable and secure operating environment.
PostgreSQL | 14 | Data Storage | Stores all sensor data, metadata, and analysis results. Database Administration is vital.
RabbitMQ | 3.9 | Asynchronous Communication | Facilitates communication between different components of the system.
TensorFlow | 2.10 | Model Training & Inference | Used for developing and deploying machine learning models.
Grafana | 8.5 | Dashboarding | Provides real-time visualization of sensor data and analysis results.
Docker | 20.10 | Application Packaging | Enables consistent application deployment across different environments.
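To make the data path concrete, the following sketch consumes a sensor reading from RabbitMQ (via pika) and writes it to PostgreSQL (via psycopg2). The queue name, table schema, hostnames, and credentials are illustrative assumptions rather than the project's real configuration.

```python
# Sketch of the sensor-data path: a reading arrives on a RabbitMQ queue
# and is persisted to PostgreSQL. All names below are illustrative assumptions.
import json

import pika           # RabbitMQ client
import psycopg2       # PostgreSQL client

PG_DSN = "dbname=northsea user=ingest password=secret host=db.internal"
QUEUE = "sensor_readings"

def handle_message(channel, method, properties, body):
    reading = json.loads(body)   # e.g. {"sensor_id": ..., "ts": ..., "value": ...}
    # Opening a connection per message keeps the sketch simple; a real consumer
    # would reuse a connection or a pool.
    with psycopg2.connect(PG_DSN) as conn, conn.cursor() as cur:
        cur.execute(
            "INSERT INTO readings (sensor_id, ts, value) VALUES (%s, %s, %s)",
            (reading["sensor_id"], reading["ts"], reading["value"]),
        )
    channel.basic_ack(delivery_tag=method.delivery_tag)

def main():
    connection = pika.BlockingConnection(pika.ConnectionParameters("mq.internal"))
    channel = connection.channel()
    channel.queue_declare(queue=QUEUE, durable=True)
    channel.basic_consume(queue=QUEUE, on_message_callback=handle_message)
    channel.start_consuming()

if __name__ == "__main__":
    main()
```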

We also utilize Kubernetes for orchestrating Docker containers. Containerization Best Practices are followed to ensure efficient resource utilization.
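As an example of day-to-day interaction with the cluster, the snippet below lists pod status using the official Kubernetes Python client; the namespace name is an assumption for illustration.

```python
# Quick liveness report for the project's pods using the Kubernetes Python client.
# Assumption: the "north-sea" namespace is a placeholder.
from kubernetes import client, config

def report_pods(namespace="north-sea"):
    config.load_kube_config()            # reads credentials from ~/.kube/config
    v1 = client.CoreV1Api()
    for pod in v1.list_namespaced_pod(namespace).items:
        print(f"{pod.metadata.name:40s} {pod.status.phase}")

if __name__ == "__main__":
    report_pods()
```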

Network Configuration

The network is segmented into three zones: a public zone for external access, a DMZ for web servers and API endpoints, and a private zone for the core infrastructure. Firewalls and intrusion detection systems are implemented to protect against unauthorized access. See the Network Diagram for a visual representation.

Zone | Purpose | Access Control | Key Components
Public Zone | External Access | Strict Firewall Rules | Web Servers, API Gateways
DMZ | Buffer Zone | Limited Access to Private Zone | Load Balancers, Reverse Proxies
Private Zone | Core Infrastructure | Highly Restricted Access | Database Servers, Processing Servers, Storage Servers

All network traffic is encrypted using TLS/SSL. Security Protocols are regularly reviewed and updated. We also employ a VPN for remote access.
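The following sketch shows how an internal client might enforce TLS 1.2 or newer using Python's standard ssl module; the endpoint URL is a placeholder.

```python
# Enforce a minimum TLS version for an internal API call using the standard library.
# Assumption: the endpoint URL is a placeholder, not a real project host.
import ssl
import urllib.request

context = ssl.create_default_context()             # verifies server certificates by default
context.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse anything older than TLS 1.2

with urllib.request.urlopen("https://api.northsea.example/health", context=context) as resp:
    print(resp.status, resp.read().decode())
```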


Future Considerations

We are currently evaluating the integration of GPU acceleration for faster machine learning model training. We are also exploring the use of serverless computing for certain tasks. Scalability Planning is an ongoing process. Further optimization of the Data Pipeline is also planned. The move to MediaWiki 1.41 is under consideration.
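A small check like the one below, using TensorFlow's device API, is typically the first step when evaluating GPU acceleration: it simply reports whether any GPUs are visible to the TensorFlow runtime.

```python
# Report GPU visibility before committing to GPU-accelerated training runs.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    print(f"{len(gpus)} GPU(s) available:", [g.name for g in gpus])
else:
    print("No GPU detected; training will fall back to CPU.")
```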



