AI in Lincolnshire: Server Configuration Overview
This article details the server configuration powering the "AI in Lincolnshire" project, a regional initiative focused on applying Artificial Intelligence to improve local services. This documentation is intended for new system administrators and developers joining the project. It covers hardware, software, and networking details. Understanding these components is crucial for maintaining and expanding the system. Refer to the System Administration Guide for general MediaWiki administration information.
Project Goals
The "AI in Lincolnshire" project aims to:
- Improve traffic flow using predictive modelling (see Traffic Modelling Details).
- Optimize agricultural yields through data analysis (see Agricultural Data Pipeline).
- Enhance healthcare resource allocation using machine learning (see Healthcare Application Specifications).
- Provide a platform for local AI research and development (see Research Access Policy).
Hardware Infrastructure
The core infrastructure consists of three primary server clusters: a data ingestion cluster, a processing cluster, and a serving cluster. Each cluster is located within a secure data centre in Lincoln. Detailed specifications are presented below. Consult the Data Centre Access Procedures before visiting the facility.
Server Role | Server Model | CPU | RAM | Storage | Network Interface
---|---|---|---|---|---
Data Ingestion | Dell PowerEdge R750 | 2x Intel Xeon Gold 6338 | 256 GB DDR4 | 24x 4TB SAS HDD (RAID 6) | 10GbE
Processing | HPE ProLiant DL380 Gen10 Plus | 2x AMD EPYC 7763 | 512 GB DDR4 | 16x 8TB NVMe SSD (RAID 10) | 100GbE
Serving | Supermicro SuperServer 1U 847BE1C-R1K28LPB | 2x Intel Xeon Silver 4310 | 128 GB DDR4 | 8x 2TB NVMe SSD (RAID 1) | 10GbE
The Data Ingestion servers handle the continuous stream of data from various sources, including sensors, public APIs, and local databases. The Processing cluster performs the computationally intensive AI model training and inference. The Serving cluster hosts the APIs and user interfaces that deliver AI-powered services to end-users. A Hardware Inventory List is maintained separately.
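As a minimal sketch of this data flow, the snippet below shows how an ingestion node might publish a sensor reading to the RabbitMQ broker listed in the Software Stack below. The broker host, queue name, and message fields are illustrative assumptions, not the project's actual schema.

```python
import json
import time

import pika  # client for the RabbitMQ broker listed in the software stack

# Hypothetical broker host and queue name, for illustration only.
BROKER_HOST = "10.0.0.10"
QUEUE = "sensor_readings"

connection = pika.BlockingConnection(pika.ConnectionParameters(host=BROKER_HOST))
channel = connection.channel()
channel.queue_declare(queue=QUEUE, durable=True)  # queue survives broker restarts

reading = {
    "sensor_id": "traffic-a46-001",   # illustrative sensor identifier
    "timestamp": time.time(),
    "vehicles_per_minute": 42,
}
channel.basic_publish(
    exchange="",                      # default exchange routes by queue name
    routing_key=QUEUE,
    body=json.dumps(reading),
    properties=pika.BasicProperties(delivery_mode=2),  # mark message persistent
)
connection.close()
```

Durable queues and persistent messages mean readings queued by the ingestion cluster are not lost if the broker restarts before the Processing cluster consumes them.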
Software Stack
The software stack is built on a Linux foundation (Ubuntu Server 22.04 LTS) and uses containerization for portability and scalability. See the Software Licensing Documentation for details on licensing.
Software | Version | Purpose
---|---|---
Ubuntu Server | 22.04 LTS | Base OS
Docker | 24.0.5 | Application Packaging & Deployment
Kubernetes | 1.27 | Container Management
Python | 3.10 | Primary AI Development Language
TensorFlow | 2.12 | Deep Learning Framework
PostgreSQL | 15 | Relational Database
RabbitMQ | 3.9 | Asynchronous Communication
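To illustrate how the stack fits together, here is a minimal sketch of training a small TensorFlow model of the kind the Processing cluster runs. The feature count, layer sizes, and random data are placeholders, not the project's actual models or datasets.

```python
import numpy as np
import tensorflow as tf  # version 2.12 per the table above

# Illustrative regression model; all shapes and sizes are placeholders.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),            # e.g. 8 sensor-derived features
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),                     # single predicted value
])
model.compile(optimizer="adam", loss="mse")

# Random data stands in for the real ingestion pipeline.
x, y = np.random.rand(256, 8), np.random.rand(256, 1)
model.fit(x, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(x[:1], verbose=0))
```

In practice such a script would run inside a Docker container scheduled by Kubernetes, which is what makes the stack portable across the Processing cluster's nodes.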
All code is version controlled using Git and hosted on a private GitLab instance. Continuous Integration/Continuous Deployment (CI/CD) pipelines are managed using GitLab CI/CD. Regular security audits are conducted by the Security Team.
Networking Configuration
The server clusters are interconnected via a dedicated 100GbE network fabric. External access is provided through a load-balanced firewall. Refer to the Network Diagram for a visual representation of the network topology.
Component | IP Address Range | Subnet Mask | Gateway
---|---|---|---
Data Ingestion Cluster | 192.168.1.0/24 | 255.255.255.0 | 192.168.1.1
Processing Cluster | 10.0.0.0/16 | 255.255.0.0 | 10.0.0.1
Serving Cluster | 172.16.0.0/16 | 255.255.0.0 | 172.16.0.1
External Interface | 203.0.113.10 | 255.255.255.0 | N/A (external facing)
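The subnet layout above can be sanity-checked with Python's standard-library ipaddress module; a short sketch:

```python
import ipaddress

# The three internal cluster subnets from the table above.
subnets = {
    "ingestion":  ipaddress.ip_network("192.168.1.0/24"),
    "processing": ipaddress.ip_network("10.0.0.0/16"),
    "serving":    ipaddress.ip_network("172.16.0.0/16"),
}
for name, net in subnets.items():
    print(f"{name}: netmask {net.netmask}, {net.num_addresses - 2} usable hosts")

# Confirm a gateway address belongs to its cluster's network.
assert ipaddress.ip_address("10.0.0.1") in subnets["processing"]
```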
DNS is managed internally using Bind9. All communication between clusters is encrypted using TLS. See the Security Policy for detailed security configuration guidelines. Access to the network is governed by the Access Control List. For troubleshooting, consult the Network Monitoring Tools.
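As a sketch of the TLS requirement, the following opens a verified connection from one cluster to another using Python's standard-library ssl module. The internal hostname and CA bundle path are hypothetical placeholders; the actual certificate setup is covered in the Security Policy.

```python
import socket
import ssl

# Hypothetical internal CA bundle and service hostname, for illustration only.
CA_BUNDLE = "/etc/ssl/certs/lincs-internal-ca.pem"
HOST, PORT = "processing.ai-lincs.internal", 8443

context = ssl.create_default_context(cafile=CA_BUNDLE)
context.minimum_version = ssl.TLSVersion.TLSv1_2  # reject legacy protocol versions

with socket.create_connection((HOST, PORT)) as sock:
    # wrap_socket verifies the peer certificate against the internal CA
    # and checks the hostname, enforcing encrypted inter-cluster traffic.
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        print("Negotiated:", tls.version(), tls.cipher())
```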
Future Expansion
Planned expansions include adding a GPU cluster for accelerated model training and increasing storage capacity to accommodate growing datasets. The Expansion Roadmap details these plans. Consider the Capacity Planning Guide when proposing new infrastructure changes.
Server Monitoring is critical for maintaining system health, and Backup Procedures are in place to ensure data integrity. The Disaster Recovery Plan outlines the steps for restoring service in the event of a failure.
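As a minimal illustration of the kind of check Server Monitoring performs, the sketch below probes a serving-cluster health endpoint. The URL and response format are assumptions for illustration; the real checks live in the Network Monitoring Tools referenced above.

```python
import json
import urllib.request

# Hypothetical health endpoint on the Serving cluster.
HEALTH_URL = "https://serving.ai-lincs.internal/api/v1/health"

def check_health(url: str = HEALTH_URL, timeout: float = 5.0) -> bool:
    """Return True if the endpoint responds 200 with a healthy status payload."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            payload = json.load(resp)
            return resp.status == 200 and payload.get("status") == "ok"
    except OSError:
        return False  # network error or timeout counts as unhealthy

if __name__ == "__main__":
    print("serving cluster healthy:", check_health())
```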