# AI in Ashford: Server Configuration

This document details the server configuration for the "AI in Ashford" project, a new initiative utilizing Artificial Intelligence for enhanced civic services. This guide is aimed at newcomers to the server administration team and provides a comprehensive overview of the hardware and software deployed. Please refer to the Server Administration Policy before making any changes.

## Overview

The "AI in Ashford" project requires significant computational resources for model training, inference, and data storage. The system is built on a distributed architecture to ensure scalability and redundancy. We utilize a hybrid cloud approach, leveraging both on-premise hardware and cloud services from Cloud Provider X. This setup allows for flexibility and cost optimization. The core AI workloads are handled by GPU servers located in the Ashford Data Center, while less intensive tasks and long-term data storage are managed in the cloud. See the Network Topology Diagram for a visual representation of the system. Regular backups are performed according to the Backup and Recovery Procedures.
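The split described above can be sketched as a simple placement rule. This is an illustrative sketch only: the workload names and tier identifiers are assumptions, not taken from the real deployment.

```python
# Hypothetical placement rule for the hybrid architecture:
# GPU-intensive jobs run on-premise in the Ashford Data Center,
# lighter tasks and archival data go to Cloud Provider X.
# All identifiers here are illustrative.

ON_PREMISE_GPU = "ashford-gpu-cluster"
CLOUD = "cloud-provider-x"

def place_workload(kind: str) -> str:
    """Return the tier a workload of the given kind should run on."""
    gpu_bound = {"training", "inference"}
    cloud_bound = {"archival", "batch-etl", "reporting"}
    if kind in gpu_bound:
        return ON_PREMISE_GPU
    if kind in cloud_bound:
        return CLOUD
    raise ValueError(f"unknown workload kind: {kind}")
```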

## Hardware Specifications

The following tables detail the hardware specifications for the key server components. All servers are monitored using Monitoring System Y, and alerts are configured for critical metrics like CPU usage, memory consumption, and disk space. Physical security is managed by Security Team A.
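The alerting on CPU, memory, and disk can be pictured as a threshold check of the following shape. This is a minimal sketch, not Monitoring System Y's actual logic; the metric names and limit values are assumptions.

```python
# Sketch of a threshold check like the one Monitoring System Y
# performs for CPU usage, memory consumption, and disk space.
# Metric names and limits below are illustrative assumptions.

DEFAULT_LIMITS = {"cpu_pct": 90.0, "mem_pct": 85.0, "disk_pct": 80.0}

def check_thresholds(metrics: dict, limits: dict = DEFAULT_LIMITS) -> list:
    """Return (metric, value, limit) tuples for every breached limit."""
    alerts = []
    for name, limit in limits.items():
        value = metrics.get(name)
        if value is not None and value >= limit:
            alerts.append((name, value, limit))
    return alerts
```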

### GPU Servers (Ashford Data Center)

These servers are responsible for the bulk of the AI processing.

| Server Name | CPU | GPU | Memory (RAM) | Storage | Network Interface |
| --- | --- | --- | --- | --- | --- |
| ai-gpu-01 | Intel Xeon Gold 6248R @ 3.0 GHz | 2× NVIDIA A100 (80 GB) | 512 GB DDR4 ECC | 8 TB NVMe SSD (RAID 0) | 100 GbE |
| ai-gpu-02 | Intel Xeon Gold 6248R @ 3.0 GHz | 2× NVIDIA A100 (80 GB) | 512 GB DDR4 ECC | 8 TB NVMe SSD (RAID 0) | 100 GbE |
| ai-gpu-03 | AMD EPYC 7763 (64-core) @ 2.45 GHz | 2× NVIDIA RTX 3090 (24 GB) | 256 GB DDR4 ECC | 4 TB NVMe SSD (RAID 1) | 25 GbE |
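For capacity planning it can help to hold the inventory above as plain data. The figures are copied from the table; the data structure and helper are just an illustrative sketch.

```python
# GPU inventory from the table above, with a helper that totals
# usable GPU memory across the fleet. Figures match the table;
# the structure itself is an illustrative sketch.

GPU_SERVERS = [
    {"name": "ai-gpu-01", "gpu": "NVIDIA A100", "gpu_mem_gb": 80, "gpu_count": 2},
    {"name": "ai-gpu-02", "gpu": "NVIDIA A100", "gpu_mem_gb": 80, "gpu_count": 2},
    {"name": "ai-gpu-03", "gpu": "NVIDIA RTX 3090", "gpu_mem_gb": 24, "gpu_count": 2},
]

def total_gpu_memory_gb(servers=GPU_SERVERS) -> int:
    """Sum GPU memory (GB) over all GPUs in all servers."""
    return sum(s["gpu_mem_gb"] * s["gpu_count"] for s in servers)
```

Across the three servers this gives 368 GB of GPU memory in total.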

### Database Servers (On-Premise)

These servers store the training data, model parameters, and processed results. They are subject to strict access control as detailed in the Access Control Policy.

| Server Name | CPU | Memory (RAM) | Storage | Database Software | Network Interface |
| --- | --- | --- | --- | --- | --- |
| db-primary | Intel Xeon Silver 4210 @ 2.1 GHz | 128 GB DDR4 ECC | 20 TB SAS HDD (RAID 6) | PostgreSQL 13 | 10 GbE |
| db-replica | Intel Xeon Silver 4210 @ 2.1 GHz | 128 GB DDR4 ECC | 20 TB SAS HDD (RAID 6) | PostgreSQL 13 (replica) | 10 GbE |
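A useful routine check for the primary/replica pair is replication lag. The SQL below uses standard PostgreSQL functions; the lag threshold and the surrounding helper are assumptions for illustration, not documented project policy.

```python
# Sketch of a replication health check for db-primary / db-replica.
# The query uses PostgreSQL's pg_last_xact_replay_timestamp(); the
# 60-second threshold is an illustrative assumption, not policy.

# Run on db-replica to measure replay lag behind db-primary, in seconds:
LAG_QUERY = (
    "SELECT EXTRACT(EPOCH FROM now() - pg_last_xact_replay_timestamp());"
)

MAX_LAG_SECONDS = 60.0  # assumed alert threshold

def replication_healthy(lag_seconds: float,
                        max_lag: float = MAX_LAG_SECONDS) -> bool:
    """True if the measured replay lag is within the allowed window."""
    return 0.0 <= lag_seconds <= max_lag
```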

### Cloud Storage (Cloud Provider X)

This provides scalable storage for archiving and less frequently accessed data.

| Service | Storage Type | Capacity | Redundancy | Access Protocol |
| --- | --- | --- | --- | --- |
| Cloud Archive | Glacier Deep Archive | 100 TB | Geo-redundant | S3 |
| Cloud Backup | Standard S3 | 50 TB | Geo-redundant | S3 |
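The two tiers above imply a tiering decision: recently used data stays in standard storage, cold data moves to deep archive. The rule below is a sketch; the 90-day cutoff is an assumption, not documented policy.

```python
# Illustrative tiering rule for Cloud Provider X: objects unread for
# a long period go to deep archive, recent data stays in standard
# storage. The 90-day cutoff is an assumption, not project policy.

DEEP_ARCHIVE_AFTER_DAYS = 90

def storage_class_for(days_since_access: int) -> str:
    """Pick an S3 storage class from how recently an object was read."""
    if days_since_access >= DEEP_ARCHIVE_AFTER_DAYS:
        return "DEEP_ARCHIVE"
    return "STANDARD"
```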

## Software Configuration

The software stack is based on Linux (Ubuntu Server 20.04 LTS) and utilizes containerization with Docker and orchestration with Kubernetes. All code is managed using Version Control System Z.
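As a sketch of how a GPU workload might be scheduled under this stack, the Kubernetes manifest below requests two GPUs per pod via the standard `nvidia.com/gpu` resource. It is illustrative only: the image name, labels, and replica count are placeholders, not the project's real manifests.

```yaml
# Illustrative Deployment for a GPU inference workload.
# Image, names, and replica count are placeholder assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ai-inference
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ai-inference
  template:
    metadata:
      labels:
        app: ai-inference
    spec:
      containers:
        - name: inference
          image: registry.example.com/ai-ashford/inference:latest
          resources:
            limits:
              nvidia.com/gpu: 2  # matches the A100 pair on ai-gpu-01/02
```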
