AI in Greater Manchester


AI in Greater Manchester: Server Configuration Overview

This article details the server configuration supporting the “AI in Greater Manchester” initiative. It is designed for newcomers to our MediaWiki site and provides a technical overview of the infrastructure. The goal of this project is to provide a centralized resource for information regarding Artificial Intelligence research, development, and deployment across the Greater Manchester region. This document outlines the server specifications, software stack, and network topology. It assumes a basic understanding of server administration and networking concepts. See Server Administration Basics for a refresher.

Overview

The "AI in Greater Manchester" project utilizes a hybrid server infrastructure consisting of on-premise hardware at the University of Manchester and cloud-based resources from Amazon Web Services (AWS). This hybrid approach allows for flexibility, scalability, and cost optimization. Data sensitivity and regulatory compliance requirements dictate that certain datasets remain on-premise, while others are processed in the cloud. The entire system is managed using Configuration Management Systems like Ansible. Regular Server Backups are performed to ensure data integrity.
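
As a concrete illustration of the backup step mentioned above, the sketch below shows how a nightly rsync of a data directory to a backup host could be scripted in Python. The host name, paths, and dated directory layout are hypothetical placeholders; the article does not specify the project's actual backup tooling.

  #!/usr/bin/env python3
  """Minimal backup sketch: rsync a local data directory to a backup host.
  Host, paths, and layout are illustrative placeholders only."""
  import datetime
  import subprocess
  
  SOURCE_DIR = "/srv/mediawiki/data/"                    # hypothetical source path
  DEST = "backup@gm-backup-01:/backups/mediawiki/"       # hypothetical backup host
  
  def run_backup() -> int:
      """Run rsync in archive mode into a dated directory and return its exit code."""
      stamp = datetime.date.today().isoformat()
      result = subprocess.run(
          ["rsync", "-az", "--delete", SOURCE_DIR, f"{DEST}{stamp}/"],
          capture_output=True, text=True,
      )
      if result.returncode != 0:
          print(f"rsync failed: {result.stderr.strip()}")
      return result.returncode
  
  if __name__ == "__main__":
      raise SystemExit(run_backup())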

On-Premise Server Specifications

The core on-premise infrastructure resides within a dedicated, secure server room at the University of Manchester. These servers are responsible for hosting the primary MediaWiki instance, data storage for sensitive datasets, and initial data processing pipelines.

Server Name | Role | CPU | RAM | Storage | Network Interface
gm-mediawiki-01 | MediaWiki Frontend | Intel Xeon Gold 6248R (24 cores) | 128 GB DDR4 ECC | 2 x 1 TB NVMe SSD (RAID 1) | 10 Gbps Ethernet
gm-data-01 | Data Storage (Sensitive Data) | Intel Xeon Silver 4210 (10 cores) | 64 GB DDR4 ECC | 8 x 4 TB SAS HDD (RAID 6) | 1 Gbps Ethernet
gm-compute-01 | Initial Data Processing | AMD EPYC 7763 (64 cores) | 256 GB DDR4 ECC | 2 x 2 TB NVMe SSD (RAID 0) | 10 Gbps Ethernet

These servers run Ubuntu Server 22.04 LTS and are interconnected via a dedicated VLAN. Access is strictly controlled via Firewall Configuration and multi-factor authentication. Monitoring is performed using Nagios Monitoring.
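
To show what one of these monitoring checks might look like, here is a minimal Nagios-style plugin in Python that reports disk usage using the standard plugin exit codes (0 OK, 1 WARNING, 2 CRITICAL, 3 UNKNOWN). The mount point and thresholds are illustrative assumptions, not the checks actually deployed on these hosts.

  #!/usr/bin/env python3
  """Sketch of a Nagios-style disk usage check (illustrative thresholds)."""
  import shutil
  import sys
  
  MOUNT_POINT = "/"      # assumed mount point to check
  WARN_PCT = 80          # warning threshold, percent used
  CRIT_PCT = 90          # critical threshold, percent used
  
  def main() -> int:
      try:
          usage = shutil.disk_usage(MOUNT_POINT)
      except OSError as exc:
          print(f"DISK UNKNOWN - {exc}")
          return 3                      # UNKNOWN
      pct_used = usage.used / usage.total * 100
      message = f"{pct_used:.1f}% of {MOUNT_POINT} used"
      if pct_used >= CRIT_PCT:
          print(f"DISK CRITICAL - {message}")
          return 2                      # CRITICAL
      if pct_used >= WARN_PCT:
          print(f"DISK WARNING - {message}")
          return 1                      # WARNING
      print(f"DISK OK - {message}")
      return 0                          # OK
  
  if __name__ == "__main__":
      sys.exit(main())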

AWS Cloud Infrastructure

The AWS component of the infrastructure provides scalable compute resources for large-scale data processing, machine learning model training, and hosting of web applications. We primarily utilize the following AWS services:

Service | Instance Type / Tier (Example) | Purpose | Region
EC2 | p3.8xlarge | Machine Learning Model Training | eu-west-2 (London)
S3 | Standard storage class | Data Storage (Non-Sensitive) | eu-west-2 (London)
Lambda | Python 3.9 runtime | Serverless Functions (API endpoints) | eu-west-2 (London)
RDS (PostgreSQL) | db.r5.large | Metadata Database | eu-west-2 (London)

The AWS infrastructure is managed using Infrastructure as Code with Terraform. Security is enforced through AWS Identity and Access Management (IAM) policies and VPC configurations. Data transfer between on-premise and AWS is secured using VPN Connections.
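
As a sketch of how a non-sensitive dataset might be pushed from the on-premise pipeline into S3, the snippet below uses boto3 with credentials supplied through the usual IAM mechanisms. The bucket name, local path, and object key are hypothetical, and the actual transfer path (via the VPN or otherwise) is not specified in this article.

  #!/usr/bin/env python3
  """Sketch: upload a non-sensitive dataset to S3 (bucket and paths are placeholders)."""
  import boto3
  from botocore.exceptions import BotoCoreError, ClientError
  
  REGION = "eu-west-2"                                   # London region, as in the table above
  BUCKET = "gm-ai-nonsensitive-data"                     # hypothetical bucket name
  LOCAL_FILE = "/srv/pipeline/output/dataset.parquet"    # hypothetical local path
  OBJECT_KEY = "datasets/dataset.parquet"
  
  def upload() -> bool:
      """Upload one file; credentials come from IAM roles or local AWS configuration."""
      s3 = boto3.client("s3", region_name=REGION)
      try:
          s3.upload_file(LOCAL_FILE, BUCKET, OBJECT_KEY)
      except (BotoCoreError, ClientError) as exc:
          print(f"Upload failed: {exc}")
          return False
      return True
  
  if __name__ == "__main__":
      upload()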

Software Stack

The following software components are crucial to the operation of the "AI in Greater Manchester" platform:

  • MediaWiki 1.40: The core content management system. See MediaWiki Installation Guide for details.
  • PHP 8.1: Used for MediaWiki and custom web applications.
  • PostgreSQL 14: The primary database for MediaWiki and metadata storage. See PostgreSQL Database Administration.
  • Python 3.9: Used for data processing, machine learning, and serverless functions.
  • TensorFlow 2.9: Machine learning framework for model training (a minimal training sketch follows this list).
  • Pandas & NumPy: Data analysis libraries for Python.
  • Nginx: Web server and reverse proxy. See Nginx Configuration.
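
The following toy example ties several of these components together: NumPy generates a synthetic dataset, Pandas provides a quick feature summary, and TensorFlow/Keras trains a small classifier. It is a minimal sketch on made-up data, not the project's actual training pipeline.

  #!/usr/bin/env python3
  """Toy training sketch combining NumPy, Pandas, and TensorFlow (synthetic data)."""
  import numpy as np
  import pandas as pd
  import tensorflow as tf
  
  # Synthetic binary-classification data: 1,000 samples, 8 features.
  rng = np.random.default_rng(seed=42)
  X = rng.normal(size=(1000, 8)).astype("float32")
  y = (X[:, 0] + X[:, 1] > 0).astype("float32")
  
  # Quick look at the features with Pandas.
  print(pd.DataFrame(X).describe().loc[["mean", "std"]])
  
  # Small feed-forward network in Keras.
  model = tf.keras.Sequential([
      tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
      tf.keras.layers.Dense(1, activation="sigmoid"),
  ])
  model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
  model.fit(X, y, epochs=5, batch_size=32, verbose=0)
  
  loss, accuracy = model.evaluate(X, y, verbose=0)
  print(f"training-set accuracy: {accuracy:.2f}")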

Network Topology

The on-premise network is a segmented VLAN environment. The AWS infrastructure is connected via a VPN tunnel. The following table summarizes the key network components:

Component | IP Address Range | Purpose
On-Premise VLAN | 192.168.10.0/24 | Internal network for on-premise servers
AWS VPC | 10.0.0.0/16 | Virtual Private Cloud in AWS
VPN Tunnel | N/A | Secure connection between on-premise and AWS
Public Load Balancer | N/A | Distributes traffic to the MediaWiki frontend

Network monitoring is performed using Network Monitoring Tools like Prometheus and Grafana. Regular Security Audits are conducted to identify and mitigate vulnerabilities. See DNS Configuration for details about the domain name setup.
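
For a sense of how a custom metric could feed into this Prometheus and Grafana setup, the sketch below uses the prometheus_client Python package to expose a simple load-average gauge over HTTP for Prometheus to scrape. The metric name and port are illustrative assumptions rather than exporters actually running on these hosts, which would more typically rely on node_exporter.

  #!/usr/bin/env python3
  """Sketch: expose a load-average gauge for Prometheus to scrape (illustrative)."""
  import os
  import time
  from prometheus_client import Gauge, start_http_server
  
  # Hypothetical metric name for this example.
  LOAD_GAUGE = Gauge("gm_ai_load_average_1m", "1-minute load average of this host")
  
  def main() -> None:
      start_http_server(9105)                   # assumed scrape port
      while True:
          LOAD_GAUGE.set(os.getloadavg()[0])    # refresh the gauge every 15 seconds
          time.sleep(15)
  
  if __name__ == "__main__":
      main()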


Future Considerations

Future improvements to the infrastructure include containerization with Docker and Kubernetes and a more robust CI/CD pipeline. We are also evaluating broader use of GPU-accelerated instances for machine learning workloads. Scalability Planning is an ongoing process.
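
As part of evaluating GPU-accelerated instances, a quick check like the one below can confirm whether TensorFlow actually sees a GPU on a given machine. This is a generic TensorFlow 2.x snippet, not a statement about which instance types the project will adopt.

  #!/usr/bin/env python3
  """Quick check: does TensorFlow see any GPUs on this host?"""
  import tensorflow as tf
  
  gpus = tf.config.list_physical_devices("GPU")
  if gpus:
      print(f"{len(gpus)} GPU(s) visible to TensorFlow:")
      for gpu in gpus:
          print(f"  {gpu.name}")
  else:
      print("No GPUs visible; training will fall back to CPU.")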


Intel-Based Server Configurations

Configuration | Specifications | Benchmark
Core i7-6700K/7700 Server | 64 GB DDR4, 2 x 512 GB NVMe SSD | CPU Benchmark: 8046
Core i7-8700 Server | 64 GB DDR4, 2 x 1 TB NVMe SSD | CPU Benchmark: 13124
Core i9-9900K Server | 128 GB DDR4, 2 x 1 TB NVMe SSD | CPU Benchmark: 49969
Core i9-13900 Server (64 GB) | 64 GB RAM, 2 x 2 TB NVMe SSD | N/A
Core i9-13900 Server (128 GB) | 128 GB RAM, 2 x 2 TB NVMe SSD | N/A
Core i5-13500 Server (64 GB) | 64 GB RAM, 2 x 500 GB NVMe SSD | N/A
Core i5-13500 Server (128 GB) | 128 GB RAM, 2 x 500 GB NVMe SSD | N/A
Core i5-13500 Workstation | 64 GB DDR5 RAM, 2 x NVMe SSD, NVIDIA RTX 4000 | N/A

AMD-Based Server Configurations

Configuration | Specifications | Benchmark
Ryzen 5 3600 Server | 64 GB RAM, 2 x 480 GB NVMe SSD | CPU Benchmark: 17849
Ryzen 7 7700 Server | 64 GB DDR5 RAM, 2 x 1 TB NVMe SSD | CPU Benchmark: 35224
Ryzen 9 5950X Server | 128 GB RAM, 2 x 4 TB NVMe SSD | CPU Benchmark: 46045
Ryzen 9 7950X Server | 128 GB DDR5 ECC, 2 x 2 TB NVMe SSD | CPU Benchmark: 63561
EPYC 7502P Server (128 GB / 1 TB) | 128 GB RAM, 1 TB NVMe SSD | CPU Benchmark: 48021
EPYC 7502P Server (128 GB / 2 TB) | 128 GB RAM, 2 TB NVMe SSD | CPU Benchmark: 48021
EPYC 7502P Server (128 GB / 4 TB) | 128 GB RAM, 2 x 2 TB NVMe SSD | CPU Benchmark: 48021
EPYC 7502P Server (256 GB / 1 TB) | 256 GB RAM, 1 TB NVMe SSD | CPU Benchmark: 48021
EPYC 7502P Server (256 GB / 4 TB) | 256 GB RAM, 2 x 2 TB NVMe SSD | CPU Benchmark: 48021
EPYC 9454P Server | 256 GB RAM, 2 x 2 TB NVMe SSD | N/A


Note: All benchmark scores are approximate and may vary based on configuration. Server availability is subject to stock.