AI in Derby: Server Configuration

This article details the server configuration for the "AI in Derby" project, a research initiative focused on applying Artificial Intelligence to historical data analysis of the Derby Museum and Art Gallery collections. This document is intended for new system administrators and developers joining the project. It covers hardware, software, and network considerations. Please consult the Project Documentation for a broader overview of the project goals.

Hardware Overview

The "AI in Derby" project utilizes a clustered server environment to handle the computational demands of machine learning tasks. Each node in the cluster is a dedicated server. The cluster currently comprises three nodes, with expansion planned as the project progresses. The following table details the specifications of each node:

Server Node          | Processor                        | RAM             | Storage                    | Network Interface
Node 1 (ai-derby-01) | Intel Xeon Gold 6248R (24 cores) | 256 GB DDR4 ECC | 4 x 4TB NVMe SSD (RAID 10) | 10 Gigabit Ethernet
Node 2 (ai-derby-02) | Intel Xeon Gold 6248R (24 cores) | 256 GB DDR4 ECC | 4 x 4TB NVMe SSD (RAID 10) | 10 Gigabit Ethernet
Node 3 (ai-derby-03) | Intel Xeon Gold 6248R (24 cores) | 256 GB DDR4 ECC | 4 x 4TB NVMe SSD (RAID 10) | 10 Gigabit Ethernet

All servers are housed in a dedicated rack within the Data Center. Power and cooling are redundant, and access is strictly controlled as per the Security Policy.
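A quick post-provisioning check can confirm that a new node matches the specifications in the table above. The following is a sketch using standard Linux utilities, not a project-specific tool; the expected values are taken from the table.

```shell
# Sanity-check a freshly provisioned node against the spec table
# (24 cores, 256 GB RAM). A sketch, not an official provisioning script.
expected_cores=24

cores=$(nproc)
if [ "$cores" -ne "$expected_cores" ]; then
    echo "WARN: expected ${expected_cores} cores, found ${cores}"
else
    echo "OK: core count matches spec"
fi

# Report total memory in GiB for manual comparison against 256 GB.
awk '/^MemTotal:/ {printf "Total RAM (GiB): %d\n", $2/1048576}' /proc/meminfo
```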

Software Stack

The software stack is designed for flexibility and scalability. We utilize a Linux-based operating system and a containerized environment for deploying and managing applications.

Operating System

We use Ubuntu Server 22.04 LTS as our base operating system. This provides a stable and well-supported platform. Regular security updates are applied via Unattended Upgrades. Detailed OS configuration instructions can be found in the OS Configuration Guide.
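For reference, automatic updates on Ubuntu are typically enabled through /etc/apt/apt.conf.d/20auto-upgrades. The fragment below is the stock two-line form, shown as an illustration rather than the project's exact configuration; consult the OS Configuration Guide for the authoritative settings.

```
// /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

These lines can also be generated interactively with `sudo dpkg-reconfigure -plow unattended-upgrades`.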

Containerization

Docker is used for containerization and Kubernetes for orchestration, allowing applications to be deployed, scaled, and managed consistently. All AI models and related services are packaged as Docker containers. The Kubernetes cluster is managed using kubectl, and access to it is limited to authorized personnel. Refer to the Kubernetes Access Guide for details.
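To illustrate how a containerized model service might be deployed to the cluster, here is a minimal Deployment manifest. All names in it (the ai-derby namespace, the model-server name, the registry and image tag) are hypothetical placeholders, not the project's actual manifests.

```yaml
# Minimal Deployment sketch for a containerized model service.
# Names and the image reference are hypothetical placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: model-server
  namespace: ai-derby
spec:
  replicas: 2
  selector:
    matchLabels:
      app: model-server
  template:
    metadata:
      labels:
        app: model-server
    spec:
      containers:
        - name: model-server
          image: registry.example.org/ai-derby/model-server:latest
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: "2"
              memory: 8Gi
```

A manifest like this would be applied with `kubectl apply -f model-server.yaml` and verified with `kubectl -n ai-derby get pods`.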

Data Storage

Data is stored on a shared network file system provided by a dedicated NAS Device. The NAS device utilizes a RAID 6 configuration for data redundancy. Access to the NAS is controlled via NFS Permissions.
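Each node mounts the shared NAS export via an /etc/fstab entry along the lines of the sketch below. The hostname "nas-derby", the export path, and the mount point are hypothetical placeholders, and the mount options are a common NFSv4 starting point rather than project policy.

```
# /etc/fstab entry mounting the shared NAS export on each node.
# Hostname, paths, and options are illustrative placeholders.
nas-derby:/export/ai-data  /mnt/ai-data  nfs4  rw,hard,_netdev  0  0
```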

AI Frameworks

The following AI frameworks are used:
