# AI in Bristol: Server Configuration

This article details the server configuration supporting the "AI in Bristol" project, a collaborative initiative focused on advancing artificial intelligence research in the Bristol area. It is intended for newcomers to our MediaWiki site and provides a technical overview of the infrastructure, covering hardware specifications, the software stack, networking considerations, and ongoing maintenance procedures. Understanding these components is essential for contributing to the project, troubleshooting issues, and proposing improvements.

## Overview

The "AI in Bristol" project relies on a distributed server infrastructure to handle the computational demands of machine learning model training, data analysis, and deployment of AI services. The system is designed for scalability, reliability, and efficient resource utilization. We use a hybrid cloud approach, combining on-premise hardware with cloud-based resources from AWS, which lets us balance cost, performance, and control. See System Architecture for a high-level diagram.

## Hardware Specifications

Our core on-premise infrastructure consists of several dedicated servers. Details are provided in the table below:

| Server Name | CPU | RAM | Storage | GPU | Network Interface |
|---|---|---|---|---|---|
| ai-bristol-01 | 2 x Intel Xeon Gold 6248R (48 cores total) | 256 GB DDR4 ECC | 4 x 4 TB NVMe SSD (RAID 10) | 2 x NVIDIA A100 (80 GB) | 100 Gbps Ethernet |
| ai-bristol-02 | 2 x AMD EPYC 7763 (128 cores total) | 512 GB DDR4 ECC | 8 x 8 TB SATA SSD (RAID 6) | 4 x NVIDIA RTX 3090 (24 GB) | 100 Gbps Ethernet |
| ai-bristol-03 | 1 x Intel Xeon Platinum 8280 (28 cores) | 128 GB DDR4 ECC | 2 x 2 TB NVMe SSD (RAID 1) | 1 x NVIDIA Tesla V100 (32 GB) | 10 Gbps Ethernet |

These servers are housed in a secure data center with redundant power and cooling. See Data Center Details for more information. We also utilize cloud instances for burst capacity and specialized tasks. Cloud Resource Management details our AWS configuration.
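For rough capacity planning, the on-premise inventory above can be captured in a small Python structure. The figures below are taken directly from the hardware table; the aggregation helpers are hypothetical examples for illustration, not part of the project's actual tooling.

```python
# Illustrative sketch: the on-premise inventory from the table above,
# expressed as plain Python data for ad-hoc capacity-planning scripts.
# Per-node figures come from the hardware table; the helper functions
# are hypothetical, not part of the project's tooling.

SERVERS = {
    "ai-bristol-01": {"ram_gb": 256, "gpus": 2, "gpu_mem_gb": 80},
    "ai-bristol-02": {"ram_gb": 512, "gpus": 4, "gpu_mem_gb": 24},
    "ai-bristol-03": {"ram_gb": 128, "gpus": 1, "gpu_mem_gb": 32},
}

def total_gpu_memory_gb(servers: dict) -> int:
    """Sum GPU memory across all nodes (GPU count x per-GPU memory)."""
    return sum(s["gpus"] * s["gpu_mem_gb"] for s in servers.values())

def total_ram_gb(servers: dict) -> int:
    """Sum system RAM across all nodes."""
    return sum(s["ram_gb"] for s in servers.values())
```

With the table's figures, this gives 288 GB of aggregate GPU memory and 896 GB of system RAM across the three on-premise nodes.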

## Software Stack

The software environment is built on a Linux base, specifically Ubuntu Server 22.04. We use Docker for containerization and Kubernetes for orchestration to ensure portability and scalability. Key software components include:

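As a sketch of how the Docker/Kubernetes stack is typically exercised on the GPU nodes, the fragment below shows a minimal Deployment that requests a single GPU via the standard `nvidia.com/gpu` extended resource. The image name, labels, and resource figures are placeholders, not the project's actual manifests.

```yaml
# Minimal illustrative Deployment requesting one GPU.
# Image name, labels, and resource figures are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ai-training-example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ai-training-example
  template:
    metadata:
      labels:
        app: ai-training-example
    spec:
      containers:
        - name: trainer
          image: registry.example.com/trainer:latest  # placeholder image
          resources:
            limits:
              nvidia.com/gpu: 1   # requires the NVIDIA device plugin on the node
              memory: "32Gi"
```

Requesting GPUs as resource limits lets the Kubernetes scheduler place training workloads only on nodes that actually expose GPUs, which matters here because ai-bristol-01 through -03 carry different GPU models.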