AI in the Sea of Japan: Server Configuration

This document details the server configuration for the "AI in the Sea of Japan" project, a research initiative focused on real-time analysis of marine data gathered from sensor networks deployed in the Sea of Japan. It is intended for new team members and system administrators responsible for maintaining the server infrastructure, and covers hardware specifications, the software stack, network topology, and security considerations.

Overview

The project relies on a distributed server architecture to handle the high volume and velocity of data generated by the sensor network. The system comprises three primary tiers: data ingestion, processing, and storage. Each tier is designed for scalability and redundancy; the core principle guiding the design is maximizing uptime and data integrity.
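The three-tier flow described above can be sketched as a simple pipeline. The function names and record fields below are purely illustrative stand-ins, not the project's actual API:

```python
# Illustrative sketch of the three tiers: ingestion -> processing -> storage.
# Record fields (sensor_id, salinity_psu) and function names are hypothetical.

def ingest(raw_reading: dict) -> dict:
    """Ingestion tier: tag an incoming sensor reading with its origin."""
    return {**raw_reading, "tier": "ingestion"}

def process(record: dict) -> dict:
    """Processing tier: derive a simple validity flag from the reading."""
    record["valid"] = record.get("salinity_psu", 0) > 0
    return record

def store(record: dict, archive: list) -> None:
    """Storage tier: append the processed record to a stand-in archive."""
    archive.append(record)

archive: list = []
store(process(ingest({"sensor_id": "buoy-01", "salinity_psu": 34.1})), archive)
```

In the real system each stage runs on its own node tier, with the archive backed by PostgreSQL rather than an in-memory list.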

Hardware Specifications

The server infrastructure consists of a cluster of machines, each with dedicated roles. Details are provided in the following tables:

Server Role           | CPU                               | RAM             | Storage                 | Network Interface
----------------------|-----------------------------------|-----------------|-------------------------|------------------
Ingestion Nodes (x3)  | Intel Xeon Silver 4310 (12 cores) | 64 GB DDR4 ECC  | 4 TB NVMe SSD (RAID 0)  | 10 Gbps Ethernet
Processing Nodes (x5) | AMD EPYC 7763 (64 cores)          | 256 GB DDR4 ECC | 8 TB NVMe SSD (RAID 1)  | 25 Gbps Ethernet
Storage Nodes (x2)    | Intel Xeon Gold 6338 (32 cores)   | 128 GB DDR4 ECC | 64 TB SAS HDD (RAID 6)  | 10 Gbps Ethernet
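The usable capacity behind each RAID level in the table can be estimated with a short helper. Drive counts and per-drive sizes below are illustrative guesses, since the table lists total capacity rather than the exact drive layout:

```python
# Back-of-the-envelope usable capacity for the RAID levels used above.
# Assumes equal-sized drives; drive counts are hypothetical examples.

def usable_tb(level: int, drives: int, size_tb: float) -> float:
    if level == 0:          # striping: all raw capacity usable, no redundancy
        return drives * size_tb
    if level == 1:          # mirroring: half the raw capacity is usable
        return drives * size_tb / 2
    if level == 6:          # double parity: two drives' worth of capacity lost
        return (drives - 2) * size_tb
    raise ValueError(f"unsupported RAID level: {level}")

print(usable_tb(0, 2, 2.0))    # e.g. two 2 TB drives striped  -> 4.0 TB
print(usable_tb(1, 2, 8.0))    # e.g. two 8 TB drives mirrored -> 8.0 TB
print(usable_tb(6, 10, 8.0))   # e.g. ten 8 TB drives, RAID 6  -> 64.0 TB
```

Note the trade-off the table implies: RAID 0 on the ingestion nodes maximizes throughput but offers no redundancy, while RAID 6 on the storage nodes tolerates two simultaneous drive failures.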

The servers are housed in a dedicated, climate-controlled data center with redundant power supplies and network connectivity.

Software Stack

The software stack is built around a Linux distribution (Ubuntu Server 22.04 LTS) and leverages containerization technologies for ease of deployment and management. The following table outlines the key software components:

Component        | Version                 | Purpose                                   | Notes
-----------------|-------------------------|-------------------------------------------|--------------------------------------------------
Operating System | Ubuntu Server 22.04 LTS | Base OS for all servers                   | Kernel version 5.15
Docker           | 24.0.5                  | Containerization platform                 | Used for deploying applications
Docker Compose   | v2.20.3                 | Orchestration of Docker containers        | Simplifies multi-container application management
PostgreSQL       | 15.3                    | Database for metadata and processed data  | Configured with replication for high availability
Python           | 3.10                    | Primary language for data processing      | Utilizes NumPy, Pandas, and TensorFlow
Nginx            | 1.25                    | Reverse proxy and load balancer           | Distributes traffic across ingestion nodes

All software is managed using a combination of Ansible for configuration management and Prometheus for monitoring.
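Nginx's default upstream behavior spreads requests across the ingestion nodes in round-robin order. The effect can be illustrated with a few lines of Python; the hostnames are hypothetical:

```python
# Round-robin selection, mirroring the default upstream behavior Nginx
# uses to spread requests across the three ingestion nodes.
from itertools import cycle

ingestion_nodes = ["ingest-01", "ingest-02", "ingest-03"]  # hypothetical names
next_node = cycle(ingestion_nodes)

# With three nodes, each one receives every third request.
first_six = [next(next_node) for _ in range(6)]
print(first_six)
```

In production this selection happens inside Nginx's upstream block, not in application code; the sketch only shows the distribution pattern.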

Network Topology

The server infrastructure is connected via a dedicated VLAN. The network topology is a star configuration, with a central core switch providing connectivity to all servers. Network security is enforced using firewalls and intrusion detection systems.

Network Segment    | IP Range       | Subnet Mask   | Gateway
-------------------|----------------|---------------|------------
Ingestion Network  | 192.168.1.0/24 | 255.255.255.0 | 192.168.1.1
Processing Network | 192.168.2.0/24 | 255.255.255.0 | 192.168.2.1
Storage Network    | 192.168.3.0/24 | 255.255.255.0 | 192.168.3.1
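The segment an address belongs to follows directly from the CIDR ranges in the table and can be checked with the standard-library `ipaddress` module:

```python
# Map an address to its network segment using the CIDR ranges listed above.
import ipaddress

SEGMENTS = {
    "Ingestion": ipaddress.ip_network("192.168.1.0/24"),
    "Processing": ipaddress.ip_network("192.168.2.0/24"),
    "Storage": ipaddress.ip_network("192.168.3.0/24"),
}

def segment_for(addr: str) -> str:
    """Return the name of the segment containing addr, or 'unknown'."""
    ip = ipaddress.ip_address(addr)
    for name, net in SEGMENTS.items():
        if ip in net:
            return name
    return "unknown"

print(segment_for("192.168.2.17"))  # -> Processing
```

A check like this is handy when validating firewall rules or diagnosing why a host cannot reach its gateway.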

Internal communication between servers utilizes secure protocols (HTTPS, SSH). External access is restricted to authorized personnel only.

Security Considerations

Security is paramount. As described above, traffic is segmented by dedicated VLANs, firewalls and intrusion detection systems guard the network perimeter, internal communication uses HTTPS and SSH, and external access is restricted to authorized personnel.
