AI in Derby


AI in Derby: Server Configuration

This article details the server configuration for the "AI in Derby" project, a research initiative focused on applying Artificial Intelligence to historical data analysis of the Derby Museum and Art Gallery collections. This document is intended for new system administrators and developers joining the project. It covers hardware, software, and network considerations. Please consult the Project Documentation for a broader overview of the project goals.

Hardware Overview

The "AI in Derby" project utilizes a clustered server environment to handle the computational demands of machine learning tasks. Each node in the cluster is a dedicated server. The cluster currently comprises three nodes, with plans for expansion as the project progresses. The following table details the specifications of each server node:

Server Node          | Processor                        | RAM             | Storage                     | Network Interface
Node 1 (ai-derby-01) | Intel Xeon Gold 6248R (24 cores) | 256 GB DDR4 ECC | 4 x 4 TB NVMe SSD (RAID 10) | 10 Gigabit Ethernet
Node 2 (ai-derby-02) | Intel Xeon Gold 6248R (24 cores) | 256 GB DDR4 ECC | 4 x 4 TB NVMe SSD (RAID 10) | 10 Gigabit Ethernet
Node 3 (ai-derby-03) | Intel Xeon Gold 6248R (24 cores) | 256 GB DDR4 ECC | 4 x 4 TB NVMe SSD (RAID 10) | 10 Gigabit Ethernet
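As a quick sanity check on the storage figures, RAID 10 stripes data across mirrored pairs, so usable capacity is half the raw total:

```python
# RAID 10: half of raw capacity is usable (the other half mirrors it).
drives = 4
drive_size_tb = 4

raw_tb = drives * drive_size_tb
usable_tb = raw_tb // 2

print(f"raw: {raw_tb} TB, usable: {usable_tb} TB")  # raw: 16 TB, usable: 8 TB
```

Each node therefore offers roughly 8 TB of usable local NVMe storage.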

All servers are housed in a dedicated rack within the Data Center. Power and cooling are redundant, and access is strictly controlled as per the Security Policy.

Software Stack

The software stack is designed for flexibility and scalability. We utilize a Linux-based operating system and a containerized environment for deploying and managing applications.

Operating System

We use Ubuntu Server 22.04 LTS as our base operating system. This provides a stable and well-supported platform. Regular security updates are applied via Unattended Upgrades. Detailed OS configuration instructions can be found in the OS Configuration Guide.
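On Ubuntu, unattended upgrades are typically enabled via a small apt configuration file; a minimal sketch of /etc/apt/apt.conf.d/20auto-upgrades looks like this (the exact policy on our nodes is defined in the OS Configuration Guide):

```
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

The first line refreshes package lists daily; the second applies security updates automatically.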

Containerization

Docker and Kubernetes are used for containerization and orchestration. This allows for easy deployment, scaling, and management of applications. All AI models and related services are packaged as Docker containers. The Kubernetes cluster is managed using kubectl. Access to the Kubernetes cluster is limited to authorized personnel. Refer to the Kubernetes Access Guide for details.
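To illustrate how a containerized AI service is deployed, here is a hedged sketch of a Kubernetes Deployment manifest — the name, image path, and resource limits below are placeholders, not the project's actual service definitions:

```yaml
# Sketch only: name, image, and resource figures are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ai-derby-inference
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ai-derby-inference
  template:
    metadata:
      labels:
        app: ai-derby-inference
    spec:
      containers:
        - name: inference
          image: registry.example/ai-derby/inference:latest
          resources:
            limits:
              memory: "8Gi"
              cpu: "4"
```

A manifest like this would be applied with `kubectl apply -f deployment.yaml`, after which Kubernetes keeps the requested number of replicas running across the cluster.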

Data Storage

Data is stored on a shared network file system provided by a dedicated NAS Device. The NAS device utilizes a RAID 6 configuration for data redundancy. Access to the NAS is controlled via NFS Permissions.
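A typical NFS mount from the NAS (static IP 192.168.1.20, see the network configuration below) can be sketched as an /etc/fstab entry; the export path and mount point shown here are assumptions for illustration:

```
# /etc/fstab entry (sketch; export path and mount point are placeholders)
192.168.1.20:/export/ai-derby  /mnt/ai-derby  nfs  rw,hard,vers=4.2  0  0
```

The `hard` option makes clients retry indefinitely on NAS outages rather than returning I/O errors, which is usually what you want for shared training data.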

AI Frameworks

The AI frameworks used by the project are installed within the Docker containers rather than on the host operating system, so each workload carries its own pinned framework versions.

Network Configuration

The server cluster is connected to the internal network via a 10 Gigabit Ethernet switch. The following table details the network configuration:

Server Node | IP Address   | Subnet Mask   | Gateway
ai-derby-01 | 192.168.1.10 | 255.255.255.0 | 192.168.1.1
ai-derby-02 | 192.168.1.11 | 255.255.255.0 | 192.168.1.1
ai-derby-03 | 192.168.1.12 | 255.255.255.0 | 192.168.1.1

The NAS device has a static IP address of 192.168.1.20. DNS resolution is handled by the internal DNS Server. Firewall rules are configured using iptables to restrict access to the servers.
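The addressing above can be verified programmatically; this short Python sketch uses the standard `ipaddress` module to confirm that every host from the tables sits inside the 192.168.1.0/24 subnet implied by the 255.255.255.0 mask:

```python
import ipaddress

# Subnet and addresses taken from the network configuration above.
subnet = ipaddress.ip_network("192.168.1.0/24")  # mask 255.255.255.0
hosts = {
    "ai-derby-01": "192.168.1.10",
    "ai-derby-02": "192.168.1.11",
    "ai-derby-03": "192.168.1.12",
    "nas":         "192.168.1.20",
    "gateway":     "192.168.1.1",
}

for name, addr in hosts.items():
    assert ipaddress.ip_address(addr) in subnet, f"{name} outside subnet"

print("all hosts share", subnet)  # all hosts share 192.168.1.0/24
```

Any new node added to the cluster should pass the same check before being brought online.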

Monitoring and Logging

The server cluster is monitored using Prometheus and Grafana. These tools provide real-time metrics on server performance and resource utilization. Logs are collected using Fluentd and stored in Elasticsearch. Alerts are configured to notify administrators of any critical issues. See the Monitoring Guide for more information.
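For orientation, a Prometheus scrape configuration for the three nodes might look like the fragment below; the job name is an assumption, and port 9100 is the node_exporter default rather than a confirmed project setting:

```yaml
# prometheus.yml fragment (sketch; job name and ports are assumptions)
scrape_configs:
  - job_name: "ai-derby-nodes"
    static_configs:
      - targets:
          - "192.168.1.10:9100"
          - "192.168.1.11:9100"
          - "192.168.1.12:9100"
```

Grafana then queries Prometheus as a data source to render the cluster dashboards.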

Logging Levels

The following table outlines the standard logging levels used across the system:

Level    | Description
DEBUG    | Detailed information, typically used for development.
INFO     | General operational events.
WARNING  | Potential problems or unusual situations.
ERROR    | A serious problem that may require intervention.
CRITICAL | A critical error that requires immediate attention.
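These levels map directly onto Python's standard `logging` module, which most of our containerized services use; a minimal sketch of a logger set to WARNING (so DEBUG and INFO records are suppressed) looks like this:

```python
import logging

# Emit WARNING and above; drop DEBUG and INFO -- matching the table above.
logging.basicConfig(format="%(levelname)s %(message)s")
log = logging.getLogger("ai-derby")
log.setLevel(logging.WARNING)

log.debug("feature extraction details")  # suppressed
log.info("batch complete")               # suppressed
log.warning("disk usage above 80%")      # emitted: WARNING disk usage above 80%
log.error("NAS mount lost")              # emitted: ERROR NAS mount lost
```

Production services generally run at INFO or WARNING; DEBUG is reserved for development and troubleshooting sessions.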

Security Considerations

Security is a top priority. All servers are hardened according to the Server Hardening Guide. Regular security audits are performed. Access to the servers is restricted to authorized personnel only. All data is encrypted at rest and in transit. Please review the Security Policy for detailed information. The Incident Response Plan details procedures for handling security incidents.


