AI in Herne Bay: Server Configuration

This article details the server configuration powering the "AI in Herne Bay" project, providing a technical overview for administrators and anyone interested in the underlying infrastructure. The project deploys AI models locally for community benefit, focusing on image recognition and natural language processing tasks relating to the town of Herne Bay, Kent. The article is written for newcomers to server administration and assumes only a basic understanding of server terminology.

Overview

The "AI in Herne Bay" project utilizes a cluster of servers hosted in a dedicated rack within the Herne Bay Community Centre data cabinet. The cluster is designed for high availability and scalability, utilizing a combination of commodity hardware and open-source software. The primary goal is to provide a platform for experimentation with AI models without reliance on external cloud services, fostering local expertise and data privacy. We utilize Debian Linux as our base operating system.

Hardware Specification

The server cluster consists of three primary nodes: a master node, a compute node, and a storage node. Each node is independently powered and networked.

Node Type    | CPU                   | RAM           | Storage                 | Network Interface
Master Node  | Intel Xeon E3-1220 v3 | 32GB DDR3 ECC | 2 x 500GB SSD (RAID 1)  | 1Gbps Ethernet
Compute Node | AMD Ryzen 7 5700X     | 64GB DDR4 ECC | 1 x 1TB NVMe SSD        | 10Gbps Ethernet
Storage Node | Intel Xeon E5-2620 v4 | 64GB DDR4 ECC | 8 x 4TB HDD (RAID 6)    | 1Gbps Ethernet

The master node handles cluster management, job scheduling using Slurm Workload Manager, and API endpoint routing. The compute node is dedicated to running AI training and inference workloads, leveraging its powerful CPU and fast storage. The storage node provides persistent storage for datasets, model checkpoints, and logs. The network is configured with a dedicated VLAN for inter-node communication. We also utilize a UPS (Uninterruptible Power Supply) to maintain operations during brief power outages.
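To illustrate how a workload reaches the compute node, the following is a minimal sketch of a Slurm batch script. The partition name, container image, and script name are hypothetical placeholders, not the project's actual values; the sketch only shows the general shape of a submission.

    #!/bin/bash
    #SBATCH --job-name=hb-inference      # job name shown in the queue
    #SBATCH --partition=compute          # hypothetical partition for the compute node
    #SBATCH --cpus-per-task=8            # request 8 CPU cores
    #SBATCH --mem=16G                    # request 16 GB of RAM
    #SBATCH --time=01:00:00              # wall-clock limit of one hour

    # Run a containerized inference job; image and script names are placeholders.
    docker run --rm hernebay/inference:latest python3 run_inference.py

A script like this would be submitted from the master node with sbatch and monitored with squeue.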

Software Stack

The software stack is built around open-source components, chosen for their flexibility and community support.

Component              | Version              | Purpose
Operating System       | Debian 11 (Bullseye) | Base operating system for all nodes
Containerization       | Docker 20.10         | Packaging and running AI models in isolated environments
Orchestration          | Docker Compose       | Defining and managing multi-container applications
AI Framework           | TensorFlow 2.9       | Machine learning framework for model development and deployment
Python                 | 3.9                  | Primary programming language for AI development
Slurm Workload Manager | 22.05                | Resource management and job scheduling

All applications are containerized using Docker, ensuring consistency across deployments. Docker Compose simplifies the management of multi-container applications. We also employ Prometheus for server monitoring and Grafana for data visualization.
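As a sketch of how these pieces might fit together, the following Docker Compose file pairs a TensorFlow-based inference service with Prometheus and Grafana. The image names, ports, and volume paths are illustrative assumptions rather than the project's actual configuration.

    version: "3.8"
    services:
      inference:
        image: hernebay/inference:latest     # hypothetical TensorFlow 2.9 service image
        ports:
          - "8501:8501"                      # assumed inference API port
        volumes:
          - /srv/models:/models:ro           # model checkpoints served read-only
      prometheus:
        image: prom/prometheus:latest        # metrics collection
        ports:
          - "9090:9090"
        volumes:
          - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
      grafana:
        image: grafana/grafana:latest        # dashboards over Prometheus data
        ports:
          - "3000:3000"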

Networking Configuration

The server cluster uses a private network with static IP addresses. A firewall, configured with iptables, restricts access to essential services only; a minimal rule sketch follows the table below. The following table outlines the key networking parameters:

Node Type    | IP Address    | Subnet Mask   | Gateway
Master Node  | 192.168.10.10 | 255.255.255.0 | 192.168.10.1
Compute Node | 192.168.10.11 | 255.255.255.0 | 192.168.10.1
Storage Node | 192.168.10.12 | 255.255.255.0 | 192.168.10.1
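The following iptables rules are a minimal sketch of the kind of default-deny policy described above; the exact set of permitted ports on the real cluster is an assumption.

    # Default-deny inbound policy; allow return traffic and loopback.
    iptables -P INPUT DROP
    iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    iptables -A INPUT -i lo -j ACCEPT

    # Allow SSH and DNS from the private VLAN only, and HTTPS for the
    # reverse proxy (port choices are assumptions).
    iptables -A INPUT -s 192.168.10.0/24 -p tcp --dport 22 -j ACCEPT
    iptables -A INPUT -s 192.168.10.0/24 -p udp --dport 53 -j ACCEPT
    iptables -A INPUT -p tcp --dport 443 -j ACCEPT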

DNS resolution is handled by a local BIND9 server running on the master node. Access to the cluster from outside the local network is provided through a reverse proxy running on the master node, secured with Let's Encrypt certificates. We utilize SSH for remote administration.
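The article does not specify which reverse proxy software is used; assuming nginx for illustration, a server block on the master node might look like the sketch below. The domain name and upstream address are hypothetical.

    # Hypothetical nginx reverse-proxy block; domain and upstream are assumptions.
    server {
        listen 443 ssl;
        server_name ai.hernebay.example;

        # Let's Encrypt certificate paths for the assumed domain.
        ssl_certificate     /etc/letsencrypt/live/ai.hernebay.example/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/ai.hernebay.example/privkey.pem;

        location / {
            proxy_pass http://192.168.10.11:8501;   # inference API on the compute node
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }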


Future Expansion

Planned future expansion includes adding a dedicated GPU server for accelerated AI training. We are also investigating Kubernetes for more sophisticated container orchestration, and we plan to integrate a dedicated backup solution using rsync (a sketch of a possible backup command appears below). The current setup is a proof of concept; future iterations will focus on improving scalability and resilience.
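As an indication of what the rsync-based backup might look like, the following one-liner is a sketch only; the source paths and destination host are hypothetical.

    # Incremental mirror of datasets and checkpoints to an off-site host
    # (archive mode, compression, deletions propagated; paths assumed).
    rsync -az --delete /srv/datasets/ backup@offsite.example:/backups/hernebay/datasets/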




