AI in Preston: Server Configuration Documentation

Welcome to the documentation for the "AI in Preston" server cluster. This document details the hardware and software configuration of the servers supporting our Artificial Intelligence initiatives within the Preston data centre. It is aimed at newcomers to both the wiki and server administration; please read it carefully before attempting any modifications.

Overview

The "AI in Preston" project uses a distributed server architecture to handle the intensive computational demands of machine learning model training and inference. The cluster is designed for scalability and redundancy, combining high-performance compute nodes with dedicated storage servers. This documentation covers the core components and their configurations: the network topology, compute nodes, storage infrastructure, and software stack. Before connecting to any of these servers, read the Server Access Policy and familiarize yourself with the Data Backup Procedures.

Network Topology

The server cluster is deployed within a dedicated VLAN at the Preston data centre. The network is segmented to isolate AI traffic from other services. Key network components include:

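As a rough illustration of the VLAN segmentation described above, the following sketch shows how a tagged VLAN interface might be brought up on a compute node using standard iproute2 commands. The interface name (`eno1`), VLAN ID (100), and address range (10.0.100.0/24) are placeholder assumptions for illustration, not the cluster's documented values; consult the network team for the actual assignments.

```shell
# Hypothetical example only: interface name, VLAN ID, and subnet are assumed.
# Create a tagged sub-interface for the isolated AI VLAN on physical NIC eno1.
ip link add link eno1 name eno1.100 type vlan id 100

# Assign this node's address within the AI subnet.
ip addr add 10.0.100.10/24 dev eno1.100

# Bring the VLAN interface up.
ip link set eno1.100 up
```

Commands of this form require root privileges and are not persistent across reboots; on a production node the equivalent settings would normally live in the distribution's network configuration (e.g. netplan or systemd-networkd) rather than being applied by hand.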