<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://serverrental.store/index.php?action=history&amp;feed=atom&amp;title=AI_in_Herne_Bay</id>
	<title>AI in Herne Bay - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://serverrental.store/index.php?action=history&amp;feed=atom&amp;title=AI_in_Herne_Bay"/>
	<link rel="alternate" type="text/html" href="https://serverrental.store/index.php?title=AI_in_Herne_Bay&amp;action=history"/>
	<updated>2026-04-15T11:33:50Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.36.1</generator>
	<entry>
		<id>https://serverrental.store/index.php?title=AI_in_Herne_Bay&amp;diff=2320&amp;oldid=prev</id>
		<title>Admin: Automated server configuration article</title>
		<link rel="alternate" type="text/html" href="https://serverrental.store/index.php?title=AI_in_Herne_Bay&amp;diff=2320&amp;oldid=prev"/>
		<updated>2025-04-16T06:09:01Z</updated>

		<summary type="html">&lt;p&gt;Automated server configuration article&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;# AI in Herne Bay: Server Configuration&lt;br /&gt;
&lt;br /&gt;
This article details the server configuration powering the &amp;quot;AI in Herne Bay&amp;quot; project, providing a technical overview for administrators and anyone interested in the underlying infrastructure. The project focuses on local AI model deployment for community benefit, specifically image recognition and natural language processing tasks relating to the town of Herne Bay, Kent. It is intended as a guide for newcomers to the project and assumes a basic understanding of server terminology.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;AI in Herne Bay&amp;quot; project runs on a cluster of servers hosted in a dedicated rack within the Herne Bay Community Centre data cabinet. The cluster is designed for high availability and scalability, combining commodity hardware with open-source software. The primary goal is to provide a platform for experimenting with AI models without reliance on external cloud services, fostering local expertise and data privacy. We use [[Debian Linux]] as our base operating system.&lt;br /&gt;
&lt;br /&gt;
== Hardware Specification ==&lt;br /&gt;
&lt;br /&gt;
The server cluster consists of three primary nodes: a master node, a compute node, and a storage node. Each node is independently powered and networked.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Node Type&lt;br /&gt;
! CPU&lt;br /&gt;
! RAM&lt;br /&gt;
! Storage&lt;br /&gt;
! Network Interface&lt;br /&gt;
|-&lt;br /&gt;
| Master Node&lt;br /&gt;
| Intel Xeon E3-1220 v3&lt;br /&gt;
| 32GB DDR3 ECC&lt;br /&gt;
| 2 x 500GB SSD (RAID 1)&lt;br /&gt;
| 1Gbps Ethernet&lt;br /&gt;
|-&lt;br /&gt;
| Compute Node&lt;br /&gt;
| AMD Ryzen 7 5700X&lt;br /&gt;
| 64GB DDR4 ECC&lt;br /&gt;
| 1 x 1TB NVMe SSD&lt;br /&gt;
| 10Gbps Ethernet&lt;br /&gt;
|-&lt;br /&gt;
| Storage Node&lt;br /&gt;
| Intel Xeon E5-2620 v4&lt;br /&gt;
| 64GB DDR4 ECC&lt;br /&gt;
| 8 x 4TB HDD (RAID 6)&lt;br /&gt;
| 1Gbps Ethernet&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The master node handles cluster management, job scheduling using [[Slurm Workload Manager]], and API endpoint routing. The compute node is dedicated to running AI training and inference workloads, leveraging its powerful CPU and fast storage. The storage node provides persistent storage for datasets, model checkpoints, and logs.  The network is configured with a dedicated VLAN for inter-node communication. We also utilize a [[UPS (Uninterruptible Power Supply)]] to maintain operations during brief power outages.&lt;br /&gt;
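&lt;br /&gt;
As an illustration, a job submitted through Slurm might look like the following batch script. This is a minimal sketch only: the partition name, log path, and container image are hypothetical, not taken from the cluster's actual configuration.&lt;br /&gt;

```shell
#!/bin/bash
# Hypothetical Slurm batch script for an inference job on the compute node.
# The partition name "compute", the log path, and the image are illustrative.
#SBATCH --job-name=hb-inference
#SBATCH --partition=compute
#SBATCH --cpus-per-task=8
#SBATCH --mem=16G
#SBATCH --time=01:00:00
#SBATCH --output=/srv/logs/%x-%j.out

# Run the containerized TensorFlow workload via Docker (image name assumed).
docker run --rm -v /srv/datasets:/data herne-bay-ai/inference:latest \
    python3 /app/run_inference.py --input /data/images
```

Such a script would be submitted with &lt;code&gt;sbatch inference.sh&lt;/code&gt; and monitored with &lt;code&gt;squeue&lt;/code&gt;.&lt;br /&gt;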
&lt;br /&gt;
== Software Stack ==&lt;br /&gt;
&lt;br /&gt;
The software stack is built around open-source components, chosen for their flexibility and community support.  &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Component&lt;br /&gt;
! Version&lt;br /&gt;
! Purpose&lt;br /&gt;
|-&lt;br /&gt;
| Operating System&lt;br /&gt;
| Debian 11 (Bullseye)&lt;br /&gt;
| Base operating system for all nodes&lt;br /&gt;
|-&lt;br /&gt;
| Containerization&lt;br /&gt;
| Docker 20.10&lt;br /&gt;
| Packaging and running AI models in isolated environments&lt;br /&gt;
|-&lt;br /&gt;
| Orchestration&lt;br /&gt;
| Docker Compose&lt;br /&gt;
| Defining and managing multi-container applications&lt;br /&gt;
|-&lt;br /&gt;
| AI Framework&lt;br /&gt;
| TensorFlow 2.9&lt;br /&gt;
| Machine learning framework for model development and deployment&lt;br /&gt;
|-&lt;br /&gt;
| Python&lt;br /&gt;
| 3.9&lt;br /&gt;
| Primary programming language for AI development&lt;br /&gt;
|-&lt;br /&gt;
| Slurm Workload Manager&lt;br /&gt;
| 22.05&lt;br /&gt;
| Resource management and job scheduling&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
All applications are containerized using [[Docker]], ensuring consistency across deployments. [[Docker Compose]] simplifies the management of multi-container applications. We also employ [[Prometheus]] for server monitoring and [[Grafana]] for data visualization.&lt;br /&gt;
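&lt;br /&gt;
To sketch what this containerized stack might look like, the following commands start an inference service alongside the monitoring components. The application image name, ports, and volume paths are assumptions for illustration; only &lt;code&gt;prom/prometheus&lt;/code&gt; and &lt;code&gt;grafana/grafana&lt;/code&gt; are the real official images.&lt;br /&gt;

```shell
# Illustrative sketch of the containerized stack using plain docker run
# commands; in practice these services would be declared in a Docker
# Compose file. Image name, ports, and paths below are assumptions.

# TensorFlow-based inference API on the compute node
docker run -d --name hb-inference -p 8501:8501 \
    -v /srv/models:/models herne-bay-ai/inference:latest

# Prometheus for metrics collection, Grafana for dashboards
docker run -d --name prometheus -p 9090:9090 \
    -v /srv/prometheus:/etc/prometheus prom/prometheus
docker run -d --name grafana -p 3000:3000 grafana/grafana
```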
&lt;br /&gt;
== Networking Configuration ==&lt;br /&gt;
&lt;br /&gt;
The server cluster utilizes a private network with static IP addresses. A firewall, configured using [[iptables]], restricts access to essential services only.  The following table outlines the key networking parameters:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Node Type&lt;br /&gt;
! IP Address&lt;br /&gt;
! Subnet Mask&lt;br /&gt;
! Gateway&lt;br /&gt;
|-&lt;br /&gt;
| Master Node&lt;br /&gt;
| 192.168.10.10&lt;br /&gt;
| 255.255.255.0&lt;br /&gt;
| 192.168.10.1&lt;br /&gt;
|-&lt;br /&gt;
| Compute Node&lt;br /&gt;
| 192.168.10.11&lt;br /&gt;
| 255.255.255.0&lt;br /&gt;
| 192.168.10.1&lt;br /&gt;
|-&lt;br /&gt;
| Storage Node&lt;br /&gt;
| 192.168.10.12&lt;br /&gt;
| 255.255.255.0&lt;br /&gt;
| 192.168.10.1&lt;br /&gt;
|}&lt;br /&gt;
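&lt;br /&gt;
Applied to the addresses above, the iptables policy on the master node might look like the following minimal sketch. The Slurm ports are the upstream defaults; everything else (interface assumptions, which services face the wider network) is illustrative, and the script must run as root.&lt;br /&gt;

```shell
#!/bin/bash
# Minimal firewall sketch for the master node (192.168.10.10).
# Default-deny inbound; allow only essential services.
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# SSH for remote administration and HTTPS for the reverse proxy
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
# DNS (BIND9) and Slurm (default ports 6817-6818), cluster VLAN only
iptables -A INPUT -s 192.168.10.0/24 -p udp --dport 53 -j ACCEPT
iptables -A INPUT -s 192.168.10.0/24 -p tcp --dport 6817:6818 -j ACCEPT
```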
&lt;br /&gt;
DNS resolution is handled by a local [[BIND9]] server running on the master node. Access to the cluster from outside the local network is provided through a reverse proxy running on the master node, secured with [[Let's Encrypt]] certificates. We utilize [[SSH]] for remote administration.&lt;br /&gt;

&lt;br /&gt;
== Future Expansion ==&lt;br /&gt;
&lt;br /&gt;
Planned future expansion includes adding a dedicated GPU server for accelerated AI training. We are also investigating [[Kubernetes]] for more sophisticated container orchestration, and plan to integrate a dedicated backup solution using [[rsync]]. The current setup is a proof of concept; future iterations will focus on improving scalability and resilience.&lt;br /&gt;
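&lt;br /&gt;
The planned rsync backup could be as simple as a nightly script along these lines. All paths and the backup hostname are hypothetical placeholders, not part of the current deployment.&lt;br /&gt;

```shell
#!/bin/bash
# Hypothetical nightly backup of datasets and model checkpoints from the
# storage node to an off-site host; paths and hostname are assumptions.
set -euo pipefail
SRC="/srv/storage/"
DEST="backup@backup.example.org:/backups/herne-bay-ai/"
# -a preserves permissions and timestamps, -z compresses in transit,
# --delete mirrors deletions so the target matches the source exactly.
rsync -az --delete --exclude 'tmp/' "$SRC" "$DEST"
```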
&lt;br /&gt;
[[Category:Server Hardware]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Intel-Based Server Configurations ==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Configuration&lt;br /&gt;
! Specifications&lt;br /&gt;
! Benchmark&lt;br /&gt;
|-&lt;br /&gt;
| [[Core i7-6700K/7700 Server]]&lt;br /&gt;
| 64 GB DDR4, NVMe SSD 2 x 512 GB&lt;br /&gt;
| CPU Benchmark: 8046&lt;br /&gt;
|-&lt;br /&gt;
| [[Core i7-8700 Server]]&lt;br /&gt;
| 64 GB DDR4, NVMe SSD 2x1 TB&lt;br /&gt;
| CPU Benchmark: 13124&lt;br /&gt;
|-&lt;br /&gt;
| [[Core i9-9900K Server]]&lt;br /&gt;
| 128 GB DDR4, NVMe SSD 2 x 1 TB&lt;br /&gt;
| CPU Benchmark: 49969&lt;br /&gt;
|-&lt;br /&gt;
| [[Core i9-13900 Server (64GB)]]&lt;br /&gt;
| 64 GB RAM, 2x2 TB NVMe SSD&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| [[Core i9-13900 Server (128GB)]]&lt;br /&gt;
| 128 GB RAM, 2x2 TB NVMe SSD&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| [[Core i5-13500 Server (64GB)]]&lt;br /&gt;
| 64 GB RAM, 2x500 GB NVMe SSD&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| [[Core i5-13500 Server (128GB)]]&lt;br /&gt;
| 128 GB RAM, 2x500 GB NVMe SSD&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| [[Core i5-13500 Workstation]]&lt;br /&gt;
| 64 GB DDR5 RAM, 2 NVMe SSD, NVIDIA RTX 4000&lt;br /&gt;
| &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== AMD-Based Server Configurations ==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Configuration&lt;br /&gt;
! Specifications&lt;br /&gt;
! Benchmark&lt;br /&gt;
|-&lt;br /&gt;
| [[Ryzen 5 3600 Server]]&lt;br /&gt;
| 64 GB RAM, 2x480 GB NVMe&lt;br /&gt;
| CPU Benchmark: 17849&lt;br /&gt;
|-&lt;br /&gt;
| [[Ryzen 7 7700 Server]]&lt;br /&gt;
| 64 GB DDR5 RAM, 2x1 TB NVMe&lt;br /&gt;
| CPU Benchmark: 35224&lt;br /&gt;
|-&lt;br /&gt;
| [[Ryzen 9 5950X Server]]&lt;br /&gt;
| 128 GB RAM, 2x4 TB NVMe&lt;br /&gt;
| CPU Benchmark: 46045&lt;br /&gt;
|-&lt;br /&gt;
| [[Ryzen 9 7950X Server]]&lt;br /&gt;
| 128 GB DDR5 ECC, 2x2 TB NVMe&lt;br /&gt;
| CPU Benchmark: 63561&lt;br /&gt;
|-&lt;br /&gt;
| [[EPYC 7502P Server (128GB/1TB)]]&lt;br /&gt;
| 128 GB RAM, 1 TB NVMe&lt;br /&gt;
| CPU Benchmark: 48021&lt;br /&gt;
|-&lt;br /&gt;
| [[EPYC 7502P Server (128GB/2TB)]]&lt;br /&gt;
| 128 GB RAM, 2 TB NVMe&lt;br /&gt;
| CPU Benchmark: 48021&lt;br /&gt;
|-&lt;br /&gt;
| [[EPYC 7502P Server (128GB/4TB)]]&lt;br /&gt;
| 128 GB RAM, 2x2 TB NVMe&lt;br /&gt;
| CPU Benchmark: 48021&lt;br /&gt;
|-&lt;br /&gt;
| [[EPYC 7502P Server (256GB/1TB)]]&lt;br /&gt;
| 256 GB RAM, 1 TB NVMe&lt;br /&gt;
| CPU Benchmark: 48021&lt;br /&gt;
|-&lt;br /&gt;
| [[EPYC 7502P Server (256GB/4TB)]]&lt;br /&gt;
| 256 GB RAM, 2x2 TB NVMe&lt;br /&gt;
| CPU Benchmark: 48021&lt;br /&gt;
|-&lt;br /&gt;
| [[EPYC 9454P Server]]&lt;br /&gt;
| 256 GB RAM, 2x2 TB NVMe&lt;br /&gt;
| &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Order Your Dedicated Server ==&lt;br /&gt;
[https://powervps.net/?from=32 Configure and order] your ideal server configuration&lt;br /&gt;
&lt;br /&gt;
=== Need Assistance? ===&lt;br /&gt;
* Telegram: [https://t.me/powervps @powervps], servers at a discounted price&lt;br /&gt;
&lt;br /&gt;
⚠️ ''Note: All benchmark scores are approximate and may vary by configuration. Server availability is subject to stock.''&lt;br /&gt;
&lt;br /&gt;
{{Exchange Box}}&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
</feed>