AI in the Equator: Server Configuration
This document details the server configuration for "AI in the Equator," a project focused on real-time climate data analysis using artificial intelligence. This guide is intended for new system administrators and engineers contributing to the project. It covers hardware specifications, software stack, network configuration, and security considerations. Understanding these details is crucial for maintaining a stable and performant system. Refer to our System Administration Guide for general MediaWiki administration.
Overview
"AI in the Equator" relies on a distributed server architecture to process the massive data streams received from sensor networks positioned along the equator. The core infrastructure is housed in a secure data center in Quito, Ecuador, with redundant systems in Singapore for disaster recovery. This setup necessitates robust hardware and a streamlined software stack. We utilize Debian Linux as our base operating system due to its stability and extensive package repository. The project leverages Kubernetes for container orchestration, ensuring scalability and efficient resource utilization.
Hardware Specifications
The server cluster consists of three primary node types: Master Nodes, Worker Nodes, and Data Storage Nodes. Each node type has specific hardware requirements.
| Node Type | CPU | Memory (RAM) | Storage | Network Interface |
|---|---|---|---|---|
| Master Nodes (3) | 2 x Intel Xeon Gold 6338 | 256 GB DDR4 ECC | 1 TB NVMe SSD (RAID 1) | 10 Gbps Ethernet |
| Worker Nodes (12) | 2 x AMD EPYC 7763 | 512 GB DDR4 ECC | 2 TB NVMe SSD (RAID 0) | 25 Gbps Ethernet |
| Data Storage Nodes (6) | 2 x Intel Xeon Silver 4310 | 128 GB DDR4 ECC | 16 TB HDD (RAID 6) | 10 Gbps Ethernet |
All servers are equipped with redundant power supplies and are monitored by a dedicated Infrastructure Monitoring System. Detailed hardware inventory is maintained in the Asset Management Database.
Software Stack
The software stack is designed for efficiency and scalability. Key components include:
- Operating System: Debian 11 (Bullseye)
- Containerization: Docker 20.10
- Orchestration: Kubernetes 1.23
- Programming Languages: Python 3.9, R 4.2.0
- AI Frameworks: TensorFlow 2.8, PyTorch 1.10
- Database: PostgreSQL 14 with the TimescaleDB extension for time-series data (a minimal usage sketch follows this list). Refer to our Database Schema documentation.
- Message Queue: RabbitMQ 3.9 for asynchronous task processing (a publishing sketch appears below). See the Message Queue Architecture for details.
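For the PostgreSQL/TimescaleDB item above, the snippet below sketches the time-series write path: creating a hypertable and inserting one sensor reading with psycopg2. The connection string, table name, and columns are illustrative placeholders, not the actual Database Schema.

```python
# Illustrative only: hypertable creation and a single insert.
# DSN and table layout are placeholders; the real layout is defined
# in the Database Schema documentation.
import datetime
import psycopg2

conn = psycopg2.connect("dbname=climate user=aieq host=10.0.0.10")  # placeholder DSN
with conn, conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS sensor_readings (
            time        TIMESTAMPTZ NOT NULL,
            sensor_id   TEXT        NOT NULL,
            temperature DOUBLE PRECISION,
            humidity    DOUBLE PRECISION
        );
    """)
    # Turn the plain table into a TimescaleDB hypertable partitioned on time.
    cur.execute(
        "SELECT create_hypertable('sensor_readings', 'time', if_not_exists => TRUE);"
    )
    cur.execute(
        "INSERT INTO sensor_readings VALUES (%s, %s, %s, %s);",
        (datetime.datetime.now(datetime.timezone.utc), "eq-station-042", 27.3, 88.1),
    )
conn.close()
```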
We use Ansible for automated configuration management. The entire software stack is documented in the Software Repository.
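For the RabbitMQ item above, a minimal task-publishing sketch with the pika client could look like the following. The broker address, queue name, and payload are assumptions for illustration; the authoritative design is the Message Queue Architecture page.

```python
# Illustrative publisher: enqueue an analysis task for asynchronous processing.
# Broker host, queue name, and payload are placeholders.
import json
import pika

params = pika.ConnectionParameters(host="10.0.0.20")  # placeholder broker address
with pika.BlockingConnection(params) as connection:
    channel = connection.channel()
    channel.queue_declare(queue="analysis_tasks", durable=True)
    channel.basic_publish(
        exchange="",
        routing_key="analysis_tasks",
        body=json.dumps({"sensor_id": "eq-station-042", "window": "5m"}),
        properties=pika.BasicProperties(delivery_mode=2),  # persist the message
    )
```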
Network Configuration
The network is segmented into three zones: Public, DMZ, and Private. Master Nodes and Data Storage Nodes reside in the Private network, while Worker Nodes are accessible through the DMZ. This segmentation limits the exposure of internal systems: traffic must pass the firewall to move between zones.
| Network Zone | IP Range | Access Control | Purpose |
|---|---|---|---|
| Public | 203.0.113.0/24 | Restricted to web servers and API endpoints. | External access. |
| DMZ | 192.168.1.0/24 | Limited access to Private network via firewall rules. | Worker Nodes and monitoring systems. |
| Private | 10.0.0.0/16 | Strictly controlled access; internal communication only. | Master Nodes and Data Storage Nodes. |
All network traffic is monitored by the Suricata intrusion detection system. Detailed network diagrams are available on the Network Topology page. We employ VPN access for remote administration.
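The zone boundaries in the table above can also be checked programmatically. The helper below is a small illustrative sketch using Python's standard ipaddress module and the CIDR ranges from the table; it is not part of the production firewall tooling.

```python
# Map an address to its network zone using the CIDR ranges from the table above.
import ipaddress

ZONES = {
    "Public":  ipaddress.ip_network("203.0.113.0/24"),
    "DMZ":     ipaddress.ip_network("192.168.1.0/24"),
    "Private": ipaddress.ip_network("10.0.0.0/16"),
}

def zone_of(address: str) -> str:
    ip = ipaddress.ip_address(address)
    for name, network in ZONES.items():
        if ip in network:
            return name
    return "Unknown"

print(zone_of("10.0.3.17"))     # Private
print(zone_of("192.168.1.44"))  # DMZ
print(zone_of("8.8.8.8"))       # Unknown
```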
Security Considerations
Security is paramount. The following measures are in place:
- Firewall: A stateful firewall (iptables) protects all nodes.
- Intrusion Detection: Suricata monitors network traffic for malicious activity.
- Access Control: Role-Based Access Control (RBAC) is enforced within Kubernetes and on all servers.
- Data Encryption: Data in transit is encrypted with TLS 1.3; data at rest is encrypted as described in our Encryption Policy (a quick TLS verification sketch follows this list).
- Regular Security Audits: We conduct regular vulnerability scans and penetration tests. Results are documented in the Security Audit Reports.
- Multi-Factor Authentication (MFA): Enabled for all administrative accounts. Refer to the MFA Implementation Guide.
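As a quick way to confirm the in-transit encryption requirement, the sketch below opens a connection that refuses anything older than TLS 1.3 and prints the negotiated version. The hostname and port are placeholders for an actual service endpoint.

```python
# Verify that an endpoint negotiates TLS 1.3.
# HOST and PORT are placeholders, not real project endpoints.
import socket
import ssl

HOST, PORT = "api.example.internal", 443  # placeholder endpoint

context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_3  # reject anything older

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        print(f"Negotiated {tls.version()} with cipher {tls.cipher()[0]}")
```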
Disaster Recovery
The redundant system in Singapore serves as a disaster recovery site. Data is replicated asynchronously to Singapore using DRBD. In the event of a primary site failure, the Singapore system can be brought online with minimal downtime. The Disaster Recovery Plan outlines the complete procedure.
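As a rough illustration of how replication health might be spot-checked between failover drills, the sketch below shells out to drbdadm and looks for the UpToDate disk state. The resource name is a placeholder, the check assumes a DRBD version that provides `drbdadm status`, and the authoritative procedure remains the Disaster Recovery Plan.

```python
# Rough health check: ask drbdadm for the status of a replicated resource
# and flag it if the local disk is not UpToDate. Resource name is a
# placeholder; real checks live in the Infrastructure Monitoring System.
import subprocess
import sys

RESOURCE = "climate_data"  # placeholder DRBD resource name

result = subprocess.run(
    ["drbdadm", "status", RESOURCE],
    capture_output=True,
    text=True,
)
if result.returncode != 0 or "UpToDate" not in result.stdout:
    print(f"WARNING: replication for '{RESOURCE}' looks degraded:\n"
          f"{result.stdout or result.stderr}")
    sys.exit(1)
print(f"Replication for '{RESOURCE}' reports UpToDate.")
```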
Future Enhancements
Planned future enhancements include upgrading to Kubernetes 1.25, implementing a service mesh (Istio), and exploring the use of GPU acceleration for AI model training. Details can be found in the Roadmap document. We are also investigating the integration of Prometheus for more granular monitoring.
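To give a sense of the more granular monitoring under investigation, the sketch below exposes a custom gauge with the prometheus_client library for a Prometheus server to scrape. The metric name, port, and lag measurement are purely illustrative; nothing like this is deployed yet.

```python
# Illustrative custom exporter: expose an ingest-lag gauge for Prometheus to scrape.
# Metric name, port, and the measurement itself are placeholders.
import random
import time

from prometheus_client import Gauge, start_http_server

ingest_lag = Gauge(
    "aieq_sensor_ingest_lag_seconds",
    "Seconds between a sensor reading's timestamp and its ingestion",
)

if __name__ == "__main__":
    start_http_server(8000)  # placeholder scrape port
    while True:
        ingest_lag.set(random.uniform(0.1, 2.0))  # stand-in for a real measurement
        time.sleep(15)
```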
Related Pages
- Main Page
- System Administration Guide
- Database Schema
- Message Queue Architecture
- Infrastructure Monitoring System
- Asset Management Database
- Software Repository
- Ansible Configuration
- Network Topology
- VPN Access
- Encryption Policy
- Security Audit Reports
- MFA Implementation Guide
- Disaster Recovery Plan
- Roadmap
- Debian Linux
- Kubernetes
- PostgreSQL
- RabbitMQ