AI in Cornwall

From Server rental store
Revision as of 05:07, 16 April 2025 by Admin (talk | contribs) (Automated server configuration article)


AI in Cornwall: Server Configuration Overview

This article details the server configuration supporting the "AI in Cornwall" initiative. It aims to provide a comprehensive overview for new administrators and those seeking to understand the infrastructure. The project leverages a hybrid cloud approach, combining on-premise hardware with cloud-based services for scalability and resilience. This document will cover the core server components, network topology, and software stack. See also System Administration Guide for general MediaWiki site administration.

Core Infrastructure Components

The "AI in Cornwall" project relies on a tiered architecture. The core infrastructure consists of three primary tiers: Data Acquisition, Processing, and Delivery. Each tier has specific server requirements, outlined below. For information on Security Protocols, please see the dedicated security documentation.

| Tier | Purpose | Server Role | Key Technologies |
|------|---------|-------------|------------------|
| Data Acquisition | Gathering data from sensors and external APIs | Edge Servers, Data Ingestion Servers | Python, MQTT, REST APIs, Database Schema |
| Processing | Performing AI/ML tasks, model training, and data analysis | GPU Servers, CPU Servers, Data Storage Servers | TensorFlow, PyTorch, CUDA, Data Analytics Tools |
| Delivery | Providing AI-driven insights and applications to end-users | Web Servers, Application Servers | Node.js, Flask, React, API Documentation |
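To make the Data Acquisition tier concrete, the sketch below shows the kind of payload validation an ingestion server might perform before handing readings to the Processing tier. The field names (`sensor_id`, `value`, `ts`) are illustrative assumptions, not the project's documented schema.

```python
import json
import time

# Hypothetical reading schema: {"sensor_id": str, "value": number, "ts": epoch seconds}
REQUIRED_FIELDS = {"sensor_id", "value", "ts"}


def validate_reading(raw):
    """Parse one JSON sensor reading and reject malformed payloads."""
    reading = json.loads(raw)
    missing = REQUIRED_FIELDS - reading.keys()
    if missing:
        raise ValueError("missing fields: %s" % sorted(missing))
    # Sensors may send numeric strings; normalise to float here.
    reading["value"] = float(reading["value"])
    # Reject timestamps more than 5 minutes in the future (clock skew guard).
    if reading["ts"] > time.time() + 300:
        raise ValueError("timestamp is in the future")
    return reading


print(validate_reading('{"sensor_id": "truro-01", "value": "21.5", "ts": 0}'))
```

In a real deployment this check would sit behind the MQTT subscriber or REST endpoint, so bad payloads are dropped at the edge rather than polluting downstream storage.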

On-Premise Hardware Specifications

The on-premise infrastructure is housed in a dedicated data center in Truro. The following tables detail the specifications of the key server types. Refer to Hardware Inventory for a complete listing of all equipment.

GPU Servers

These servers are critical for the computationally intensive tasks of model training and inference.

| Component | Specification |
|-----------|---------------|
| CPU | Dual Intel Xeon Gold 6248R (24 cores / 48 threads per CPU) |
| GPU | 4 x NVIDIA A100 (80 GB HBM2e) |
| RAM | 512 GB DDR4 ECC Registered (3200 MHz) |
| Storage | 2 x 4 TB NVMe SSD (RAID 0) + 2 x 16 TB SAS HDD (RAID 1) |
| Network | Dual 100 GbE NIC |
| Power Supply | 2 x 2000 W redundant power supplies |

Data Storage Servers

These servers provide high-capacity, reliable storage for datasets and model artifacts.

| Component | Specification |
|-----------|---------------|
| CPU | Intel Xeon Silver 4210 (10 cores / 20 threads) |
| RAM | 256 GB DDR4 ECC Registered (2666 MHz) |
| Storage | 8 x 20 TB SAS HDD (RAID 6), total usable capacity 120 TB |
| Network | Quad 10 GbE NIC |
| Power Supply | 2 x 850 W redundant power supplies |
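The 120 TB usable figure follows directly from how the RAID levels in these tables trade capacity for redundancy; a minimal sketch:

```python
def raid6_usable_tb(disks, disk_tb):
    """RAID 6 dedicates two disks' worth of capacity to parity,
    surviving any two simultaneous disk failures."""
    if disks < 4:
        raise ValueError("RAID 6 needs at least 4 disks")
    return (disks - 2) * disk_tb


def raid1_usable_tb(disk_tb):
    """RAID 1 mirrors the pair, so usable capacity equals one disk."""
    return disk_tb


# Data Storage Server: 8 x 20 TB in RAID 6 -> the 120 TB quoted above.
print(raid6_usable_tb(8, 20))  # 120
# GPU server bulk tier: 2 x 16 TB in RAID 1 -> 16 TB usable.
print(raid1_usable_tb(16))  # 16
```

Note that the GPU servers' NVMe pair runs RAID 0 (striping, no redundancy) for scratch speed, so only the mirrored SAS tier should hold data that cannot be regenerated.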

Edge Servers

These servers are deployed closer to data sources (e.g., sensor networks) to perform initial data filtering and pre-processing.

| Component | Specification |
|-----------|---------------|
| CPU | Intel Core i7-8700K (6 cores / 12 threads) |
| RAM | 32 GB DDR4 (2666 MHz) |
| Storage | 1 x 1 TB NVMe SSD |
| Network | Dual 1 GbE NIC |
| Power Supply | 1 x 650 W power supply |
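The "initial data filtering and pre-processing" these edge boxes perform could be as simple as a rolling mean that denoises sensor streams before forwarding them upstream. A hedged sketch (the window size and data shape are assumptions):

```python
from collections import deque


def smooth(readings, window=4):
    """Yield a rolling mean once the window fills -- a cheap edge-side
    filter that cuts both noise and upstream bandwidth."""
    buf = deque(maxlen=window)
    for r in readings:
        buf.append(r)
        if len(buf) == window:
            yield sum(buf) / window


print(list(smooth([20.0, 20.0, 40.0, 40.0], window=2)))  # [20.0, 30.0, 40.0]
```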

Software Stack

The servers run Ubuntu Server 20.04 LTS, with applications packaged as Docker containers and orchestrated by Kubernetes. Key software components include:

  • Operating System: Ubuntu Server 20.04 LTS
  • Containerization: Docker, Kubernetes
  • Programming Languages: Python 3.8, Node.js 16
  • AI/ML Frameworks: TensorFlow 2.x, PyTorch 1.x
  • Databases: PostgreSQL 13, Redis 6
  • Message Queue: RabbitMQ 3.9
  • Monitoring: Prometheus, Grafana. See Monitoring Dashboard Guide for details.
  • Version Control: Git Repository Access
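Prometheus scrapes targets in its plain-text exposition format; the helper below sketches how a custom exporter on one of these servers might render a sample (the metric and label names are illustrative, not the project's actual metrics):

```python
def prom_line(name, value, labels=None):
    """Render one sample in the Prometheus text exposition format:
    metric_name{label="value",...} value
    Labels are sorted so the output is deterministic."""
    label_str = ""
    if labels:
        pairs = ",".join('%s="%s"' % (k, v) for k, v in sorted(labels.items()))
        label_str = "{" + pairs + "}"
    return "%s%s %s" % (name, label_str, value)


print(prom_line("gpu_utilization_ratio", 0.87, {"node": "gpu-01", "gpu": "0"}))
# gpu_utilization_ratio{gpu="0",node="gpu-01"} 0.87
```

In practice one would expose such lines on an HTTP `/metrics` endpoint (or use the official `prometheus_client` library) and let Grafana chart the scraped series.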

Network Topology

The on-premise network is a fully meshed topology with redundant connections. The servers are segmented into VLANs based on their roles. A firewall (Palo Alto Networks PA-Series) protects the network perimeter. The network is connected to the cloud provider (AWS) via a dedicated VPN connection. Refer to Network Diagrams for a visual representation of the network topology.
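Role-based VLAN segmentation like the above usually maps each role onto its own subnet. The sketch below uses the standard-library `ipaddress` module to carve per-role subnets out of a supernet; the address range and role names are illustrative assumptions, not the Truro site's actual plan.

```python
import ipaddress


def allocate_vlans(supernet, roles, prefix=24):
    """Assign one /prefix subnet per server role, in order, from a supernet."""
    subnets = ipaddress.ip_network(supernet).subnets(new_prefix=prefix)
    return {role: str(net) for role, net in zip(roles, subnets)}


print(allocate_vlans("10.40.0.0/16", ["gpu", "storage", "edge", "mgmt"]))
# {'gpu': '10.40.0.0/24', 'storage': '10.40.1.0/24', 'edge': '10.40.2.0/24', 'mgmt': '10.40.3.0/24'}
```

The firewall rules between VLANs would then be written against these subnet boundaries rather than individual hosts.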

Cloud Integration

The "AI in Cornwall" project utilizes AWS for scalability and disaster recovery. Specifically:

  • AWS S3: For storing large datasets and model backups.
  • AWS EC2: For bursting capacity during peak demand.
  • AWS SageMaker: For managed machine learning services.
  • AWS RDS: For relational database instances. See Database Backup Procedures.
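For the S3 backup role above, a date-partitioned key scheme keeps large backup sets browsable and makes lifecycle rules easy to scope. The layout below (`backups/<dataset>/YYYY/MM/DD/...`) is an assumed convention for illustration, not the project's documented bucket structure:

```python
from datetime import datetime, timezone


def backup_key(dataset, when):
    """Build a date-partitioned S3 object key for a dataset/model backup."""
    return ("backups/%s/%s/%s-%s.tar.gz" % (
        dataset,
        when.strftime("%Y/%m/%d"),
        dataset,
        when.strftime("%Y%m%dT%H%M%SZ"),
    ))


print(backup_key("sensor-archive", datetime(2025, 4, 16, 5, 7, tzinfo=timezone.utc)))
# backups/sensor-archive/2025/04/16/sensor-archive-20250416T050700Z.tar.gz
```

The actual upload would go through `boto3`'s S3 client with such a key; prefix-based lifecycle policies can then expire old partitions automatically.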

Future Considerations

Future plans include upgrading the GPU servers with newer NVIDIA H100 GPUs and exploring the use of serverless computing for certain tasks. We are also investigating the integration of federated learning to enable collaborative model training without sharing raw data. See the Roadmap for planned updates.
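The federated-learning idea can be illustrated in miniature: each site trains locally and shares only model weights, which a coordinator combines as a weighted average by local dataset size (the FedAvg scheme). This toy sketch uses flat weight vectors purely for illustration:

```python
def fedavg(client_weights, client_sizes):
    """Federated averaging: weighted mean of per-client model weights,
    weighted by each client's local dataset size. Raw data never moves."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]


# Two sites, one holding twice the data: its weights count twice as much.
print(fedavg([[1.0, 0.0], [4.0, 3.0]], [2, 1]))  # [2.0, 1.0]
```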


Related Articles

  • Data Center Management
  • Kubernetes Deployment
  • Security Best Practices
  • Database Administration
  • Network Troubleshooting
  • AWS Integration Guide
  • Monitoring and Alerting
  • Incident Response Plan
  • Backup and Recovery
  • System Performance Tuning
  • Software Updates
  • User Account Management
  • Firewall Configuration
  • Virtualization Technologies
  • Disaster Recovery Planning
  • Change Management Process


Intel-Based Server Configurations

| Configuration | Specifications | Benchmark |
|---------------|----------------|-----------|
| Core i7-6700K/7700 Server | 64 GB DDR4, 2 x 512 GB NVMe SSD | CPU Benchmark: 8046 |
| Core i7-8700 Server | 64 GB DDR4, 2 x 1 TB NVMe SSD | CPU Benchmark: 13124 |
| Core i9-9900K Server | 128 GB DDR4, 2 x 1 TB NVMe SSD | CPU Benchmark: 49969 |
| Core i9-13900 Server (64 GB) | 64 GB RAM, 2 x 2 TB NVMe SSD | |
| Core i9-13900 Server (128 GB) | 128 GB RAM, 2 x 2 TB NVMe SSD | |
| Core i5-13500 Server (64 GB) | 64 GB RAM, 2 x 500 GB NVMe SSD | |
| Core i5-13500 Server (128 GB) | 128 GB RAM, 2 x 500 GB NVMe SSD | |
| Core i5-13500 Workstation | 64 GB DDR5 RAM, 2 NVMe SSD, NVIDIA RTX 4000 | |

AMD-Based Server Configurations

| Configuration | Specifications | Benchmark |
|---------------|----------------|-----------|
| Ryzen 5 3600 Server | 64 GB RAM, 2 x 480 GB NVMe | CPU Benchmark: 17849 |
| Ryzen 7 7700 Server | 64 GB DDR5 RAM, 2 x 1 TB NVMe | CPU Benchmark: 35224 |
| Ryzen 9 5950X Server | 128 GB RAM, 2 x 4 TB NVMe | CPU Benchmark: 46045 |
| Ryzen 9 7950X Server | 128 GB DDR5 ECC, 2 x 2 TB NVMe | CPU Benchmark: 63561 |
| EPYC 7502P Server (128 GB / 1 TB) | 128 GB RAM, 1 TB NVMe | CPU Benchmark: 48021 |
| EPYC 7502P Server (128 GB / 2 TB) | 128 GB RAM, 2 TB NVMe | CPU Benchmark: 48021 |
| EPYC 7502P Server (128 GB / 4 TB) | 128 GB RAM, 2 x 2 TB NVMe | CPU Benchmark: 48021 |
| EPYC 7502P Server (256 GB / 1 TB) | 256 GB RAM, 1 TB NVMe | CPU Benchmark: 48021 |
| EPYC 7502P Server (256 GB / 4 TB) | 256 GB RAM, 2 x 2 TB NVMe | CPU Benchmark: 48021 |
| EPYC 9454P Server | 256 GB RAM, 2 x 2 TB NVMe | |


Note: All benchmark scores are approximate and may vary based on configuration. Server availability is subject to stock.