AI in Sussex

AI in Sussex: Server Configuration Documentation

This document details the server configuration supporting the "AI in Sussex" project, a research initiative leveraging artificial intelligence for local data analysis. This guide is intended for new system administrators and developers contributing to the project. It covers hardware, software, networking, and security aspects of the server infrastructure. Please familiarize yourself with our System Administration Guidelines before making any changes.

Overview

The "AI in Sussex" project utilizes a cluster of servers located within the University of Sussex data center. These servers are responsible for data ingestion, model training, model deployment, and API access for researchers. The architecture emphasizes scalability, reliability, and data security. See Data Security Policy for more information. We utilize a hybrid cloud approach, supplementing on-premise resources with cloud-based services for peak workloads. Refer to Cloud Resource Allocation for details.

Hardware Configuration

The core server infrastructure consists of four primary nodes: three dedicated to computation and one acting as a central data repository. Each node is based on a similar hardware configuration, detailed below.

Node Type | CPU | RAM | Storage | Network Interface
Compute node 1 | Intel Xeon Gold 6248R (24 cores) | 256 GB DDR4 ECC | 2 x 4TB NVMe SSD (RAID 1) | 10 Gbps Ethernet
Compute node 2 | Intel Xeon Gold 6248R (24 cores) | 256 GB DDR4 ECC | 2 x 4TB NVMe SSD (RAID 1) | 10 Gbps Ethernet
Compute node 3 | Intel Xeon Gold 6248R (24 cores) | 256 GB DDR4 ECC | 2 x 4TB NVMe SSD (RAID 1) | 10 Gbps Ethernet
Data repository node | Intel Xeon Silver 4210 (10 cores) | 128 GB DDR4 ECC | 8 x 8TB SATA HDD (RAID 6) | 10 Gbps Ethernet
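
If the mirrored NVMe volumes above are managed with Linux software RAID rather than a hardware controller, array health can be checked by reading /proc/mdstat. The following is a minimal sketch under that assumption; a hardware RAID controller would need its vendor CLI instead.

```python
# raid_check.py - minimal sketch: verify Linux software RAID (mdadm) arrays are healthy.
# Assumes the arrays in the table above are mdadm-managed and expose /proc/mdstat.
from pathlib import Path
import re
import sys

def check_mdstat(path: str = "/proc/mdstat") -> bool:
    """Return True if every md array reports all members active, e.g. [2/2] [UU]."""
    text = Path(path).read_text()
    healthy = True
    for line in text.splitlines():
        # Status lines look like: "8382464 blocks super 1.2 [2/2] [UU]"
        match = re.search(r"\[(\d+)/(\d+)\]\s+\[([U_]+)\]", line)
        if match:
            expected, active, flags = match.groups()
            if expected != active or "_" in flags:
                healthy = False
                print(f"DEGRADED array detected: {line.strip()}")
    return healthy

if __name__ == "__main__":
    sys.exit(0 if check_mdstat() else 1)
```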

All servers run on a dedicated power circuit with UPS backup. See Power Management Procedures for details on emergency shutdown procedures.

Software Stack

The software stack is designed for efficient AI development and deployment. It includes the operating system, programming languages, deep learning frameworks, and containerization tools.

Component | Version | Description
Ubuntu Server | 22.04 LTS | Provides a stable and secure base for the entire stack.
Python | 3.9 | Primary programming language for data science and machine learning.
TensorFlow | 2.12 | Deep learning framework for model development and training.
PyTorch | 2.0 | Alternative deep learning framework.
Docker | 20.10 | Containerization platform for consistent deployment.
Kubernetes | 1.26 | Container orchestration system for managing the cluster.
PostgreSQL | 15 | Database for metadata and experiment tracking.

All software is managed using Ansible for automated configuration and deployment. Refer to Ansible Playbook Repository for details. Regular security updates are applied using unattended upgrades. See Security Patching Schedule.
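
To confirm that a node roughly matches the stack described above without running the full Ansible playbooks, a small script can report locally installed versions. This is a minimal sketch: the expected versions simply mirror the table, and the Docker check assumes the docker CLI is on the PATH.

```python
# stack_check.py - minimal sketch: report installed versions against the software table above.
import platform
import shutil
import subprocess

EXPECTED = {"python": "3.9", "docker": "20.10"}

def command_version(cmd: list[str]) -> str:
    """Run a version command and return its output, or '' if the tool is absent."""
    if shutil.which(cmd[0]) is None:
        return ""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return (result.stdout + result.stderr).strip()

def main() -> None:
    print(f"OS          : {platform.platform()}")
    print(f"Python      : {platform.python_version()} (expected {EXPECTED['python']}.x)")
    print(f"Docker      : {command_version(['docker', '--version'])} (expected {EXPECTED['docker']}.x)")
    # Deep learning frameworks are imported lazily so the script still runs on nodes without them.
    for module in ("tensorflow", "torch"):
        try:
            mod = __import__(module)
            print(f"{module:<12}: {mod.__version__}")
        except ImportError:
            print(f"{module:<12}: not installed on this node")

if __name__ == "__main__":
    main()
```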

Networking Configuration

The server cluster is connected to the University of Sussex network via a dedicated VLAN. Each server has a static IP address assigned within this VLAN.

Server Name | IP Address | Subnet Mask | Gateway
Compute node 1 | 192.168.10.10 | 255.255.255.0 | 192.168.10.1
Compute node 2 | 192.168.10.11 | 255.255.255.0 | 192.168.10.1
Compute node 3 | 192.168.10.12 | 255.255.255.0 | 192.168.10.1
Data repository node | 192.168.10.13 | 255.255.255.0 | 192.168.10.1
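
The addressing plan can be sanity-checked with Python's standard ipaddress module before changes are rolled out. The sketch below takes the 192.168.10.0/24 VLAN and gateway from the table; the node labels are placeholders, not the actual hostnames.

```python
# network_plan_check.py - minimal sketch: validate the static addressing plan above.
import ipaddress

VLAN = ipaddress.ip_network("192.168.10.0/24")
GATEWAY = ipaddress.ip_address("192.168.10.1")

NODES = {  # placeholder labels for the four nodes; substitute the real hostnames
    "compute-1": "192.168.10.10",
    "compute-2": "192.168.10.11",
    "compute-3": "192.168.10.12",
    "data-repo": "192.168.10.13",
}

def main() -> None:
    assert GATEWAY in VLAN, "gateway must live inside the VLAN"
    seen = set()
    for name, addr in NODES.items():
        ip = ipaddress.ip_address(addr)
        assert ip in VLAN, f"{name} ({ip}) is outside {VLAN}"
        assert ip not in seen, f"duplicate address {ip}"
        seen.add(ip)
        print(f"{name:<10} {ip}  netmask {VLAN.netmask}  gateway {GATEWAY}")

if __name__ == "__main__":
    main()
```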

A firewall is configured to restrict access to the server cluster, allowing only authorized traffic from specific IP addresses. See Firewall Ruleset for details. We implement network segmentation to isolate the AI environment from other university systems. Refer to the Network Diagram.
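
From a host that is already on the allow list, a simple TCP probe can confirm that only the intended services answer through the firewall. The ports below (SSH and the Kubernetes API) are assumptions for illustration; the authoritative list is the Firewall Ruleset.

```python
# reachability_check.py - minimal sketch: probe the cluster for the expected open ports.
import socket

HOSTS = ["192.168.10.10", "192.168.10.11", "192.168.10.12", "192.168.10.13"]
EXPECTED_OPEN = {22, 6443}  # assumed: SSH and the Kubernetes API server

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Attempt a TCP connection; True means a listener answered and the firewall allowed it."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in HOSTS:
    for port in sorted(EXPECTED_OPEN):
        state = "open" if port_open(host, port) else "filtered/closed"
        print(f"{host}:{port} -> {state}")
```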

Security Considerations

Security is paramount for the "AI in Sussex" project. We employ several security measures to protect data and prevent unauthorized access. These include:

  • Regular security audits. See Audit Log Review Procedure.
  • Strong password policies. Refer to Password Policy.
  • Two-factor authentication for all administrative accounts.
  • Data encryption at rest and in transit.
  • Intrusion detection and prevention systems.
  • Vulnerability scanning.

All access to the server cluster is logged and monitored. See Security Incident Response Plan in case of a security breach.
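
As an example of the kind of access monitoring this implies, the sketch below counts failed SSH password attempts per source address. It assumes the stock Ubuntu sshd/syslog message format and the default /var/log/auth.log path.

```python
# failed_login_report.py - minimal sketch: summarise failed SSH logins from the auth log.
import re
from collections import Counter
from pathlib import Path

AUTH_LOG = Path("/var/log/auth.log")
PATTERN = re.compile(r"Failed password for (?:invalid user )?\S+ from (\S+)")

def failed_logins(log: Path = AUTH_LOG) -> Counter:
    """Return a Counter mapping source IP -> number of failed password attempts."""
    counts = Counter()
    for line in log.read_text(errors="replace").splitlines():
        match = PATTERN.search(line)
        if match:
            counts[match.group(1)] += 1
    return counts

if __name__ == "__main__":
    for ip, hits in failed_logins().most_common(10):
        print(f"{ip:<15} {hits} failed attempts")
```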

Further Documentation

  • System Administration Guidelines
  • Data Security Policy
  • Cloud Resource Allocation
  • Power Management Procedures
  • Ansible Playbook Repository
  • Security Patching Schedule
  • Firewall Ruleset
  • Network Diagram
  • Audit Log Review Procedure
  • Password Policy
  • Security Incident Response Plan

Intel-Based Server Configurations

Configuration | Specifications | Benchmark
Core i7-6700K/7700 Server | 64 GB DDR4, NVMe SSD 2 x 512 GB | CPU Benchmark: 8046
Core i7-8700 Server | 64 GB DDR4, NVMe SSD 2x1 TB | CPU Benchmark: 13124
Core i9-9900K Server | 128 GB DDR4, NVMe SSD 2 x 1 TB | CPU Benchmark: 49969
Core i9-13900 Server (64GB) | 64 GB RAM, 2x2 TB NVMe SSD | N/A
Core i9-13900 Server (128GB) | 128 GB RAM, 2x2 TB NVMe SSD | N/A
Core i5-13500 Server (64GB) | 64 GB RAM, 2x500 GB NVMe SSD | N/A
Core i5-13500 Server (128GB) | 128 GB RAM, 2x500 GB NVMe SSD | N/A
Core i5-13500 Workstation | 64 GB DDR5 RAM, 2 NVMe SSD, NVIDIA RTX 4000 | N/A

AMD-Based Server Configurations

Configuration | Specifications | Benchmark
Ryzen 5 3600 Server | 64 GB RAM, 2x480 GB NVMe | CPU Benchmark: 17849
Ryzen 7 7700 Server | 64 GB DDR5 RAM, 2x1 TB NVMe | CPU Benchmark: 35224
Ryzen 9 5950X Server | 128 GB RAM, 2x4 TB NVMe | CPU Benchmark: 46045
Ryzen 9 7950X Server | 128 GB DDR5 ECC, 2x2 TB NVMe | CPU Benchmark: 63561
EPYC 7502P Server (128GB/1TB) | 128 GB RAM, 1 TB NVMe | CPU Benchmark: 48021
EPYC 7502P Server (128GB/2TB) | 128 GB RAM, 2 TB NVMe | CPU Benchmark: 48021
EPYC 7502P Server (128GB/4TB) | 128 GB RAM, 2x2 TB NVMe | CPU Benchmark: 48021
EPYC 7502P Server (256GB/1TB) | 256 GB RAM, 1 TB NVMe | CPU Benchmark: 48021
EPYC 7502P Server (256GB/4TB) | 256 GB RAM, 2x2 TB NVMe | CPU Benchmark: 48021
EPYC 9454P Server | 256 GB RAM, 2x2 TB NVMe | N/A

⚠️ Note: All benchmark scores are approximate and may vary based on configuration. Server availability subject to stock.