AI in the Hong Kong Rainforest: Server Configuration
This article details the server configuration powering the "AI in the Hong Kong Rainforest" project, a research initiative that uses artificial intelligence to monitor and analyze biodiversity within Hong Kong's rainforest ecosystem. It is intended for new system administrators joining the team and covers the hardware specifications, software stack, and network topology. Understanding this configuration is crucial for maintaining system stability, performance, and data integrity. Please consult the Data Security Protocol before making any changes.
Hardware Overview
The project utilizes a clustered server environment to handle the computationally intensive tasks of image recognition, audio analysis, and data storage. The core infrastructure consists of four primary servers: three dedicated to AI processing and one for data storage and management. Each server is housed in a climate-controlled rack within the Hong Kong University data center. Access is restricted to authorized personnel only, following the Access Control Policy.
The following table details the specifications of the AI processing servers:
Server Role | Processor | RAM | GPU | Storage |
---|---|---|---|---|
AI Processing Server 1 | Intel Xeon Gold 6248R @ 3.0 GHz | 256 GB DDR4 ECC | NVIDIA Tesla V100 | 2 x 4TB NVMe SSD (RAID 1) |
AI Processing Server 2 | Intel Xeon Gold 6248R @ 3.0 GHz | 256 GB DDR4 ECC | NVIDIA Tesla V100 | 2 x 4TB NVMe SSD (RAID 1) |
AI Processing Server 3 | Intel Xeon Gold 6248R @ 3.0 GHz | 256 GB DDR4 ECC | NVIDIA Tesla V100 | 2 x 4TB NVMe SSD (RAID 1) |
The data storage server has the following specifications:
Server Role | Processor | RAM | Storage |
---|---|---|---|
Data Storage Server | Intel Xeon Silver 4210 @ 2.1 GHz | 128 GB DDR4 ECC | 8 x 16TB SAS HDD (RAID 6) |
Power redundancy is provided by dual power supplies in each server and a UPS system capable of sustaining operations for at least 30 minutes during a power outage. See the Disaster Recovery Plan for more details.
Software Stack
The servers run a customized Linux distribution based on Ubuntu Server 20.04 LTS. The core software components are detailed below. All software is regularly updated following the Software Update Schedule.
- Operating System: Ubuntu Server 20.04 LTS
- Containerization: Docker and Kubernetes are used for deploying and managing AI models. Specifically, we use Kubernetes Deployments for scalability; a minimal inspection example is sketched after this list.
- AI Frameworks: TensorFlow and PyTorch are the primary AI frameworks used for model development and training. Refer to the Model Training Documentation for specific model details.
- Programming Languages: Python is the primary programming language.
- Database: PostgreSQL is used for storing metadata and analysis results; a minimal write example is sketched after this list. Database backups are performed daily, as outlined in the Backup Procedures.
- Monitoring: Prometheus and Grafana are used for system monitoring and alerting; a minimal exporter example is sketched after this list. See the Monitoring Dashboard Guide.
- Version Control: Git is used for version control of all code and configuration files. We follow the Git Workflow.
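The AI models run as Kubernetes Deployments (see the Containerization item above). The following is a minimal sketch of how an administrator might check their status with the official `kubernetes` Python client; it assumes a valid kubeconfig is present, and the `ai-models` namespace name is an illustrative assumption, not a confirmed project value.

```python
# Minimal sketch: list the AI model Deployments and their replica status.
# Assumes the official `kubernetes` Python client and a local kubeconfig;
# the "ai-models" namespace name is illustrative, not confirmed.
from kubernetes import client, config

config.load_kube_config()              # use the administrator's kubeconfig
apps = client.AppsV1Api()

for dep in apps.list_namespaced_deployment(namespace="ai-models").items:
    ready = dep.status.ready_replicas or 0
    print(f"{dep.metadata.name}: {ready}/{dep.spec.replicas} replicas ready")
```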
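Metadata and analysis results are written to PostgreSQL (see the Database item above). Below is a minimal write sketch using the `psycopg2` driver; the host, database, credentials, table, and column names are illustrative assumptions, not the production schema.

```python
# Minimal sketch: record one detection in PostgreSQL with psycopg2.
# Host, database, credentials, table, and columns are illustrative assumptions.
import psycopg2

conn = psycopg2.connect(host="192.168.10.10", dbname="rainforest",
                        user="ai_pipeline", password="<redacted>")
with conn, conn.cursor() as cur:
    cur.execute(
        "INSERT INTO detections (camera_id, species, confidence, observed_at) "
        "VALUES (%s, %s, %s, now())",
        ("CAM-07", "Macaca mulatta", 0.93),
    )
conn.close()
```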
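Prometheus scrapes metrics from exporters on each server, and Grafana visualizes them (see the Monitoring item above). Custom pipeline metrics can be exposed with the official `prometheus_client` library; in the sketch below, the metric name, port, and the simulated reading are illustrative assumptions.

```python
# Minimal sketch: expose a custom gauge for Prometheus to scrape.
# The metric name, port 9101, and the simulated reading are illustrative.
import random
import time

from prometheus_client import Gauge, start_http_server

queue_depth = Gauge("rainforest_inference_queue_depth",
                    "Number of images waiting for AI inference")

start_http_server(9101)        # metrics available at http://<host>:9101/metrics
while True:
    queue_depth.set(random.randint(0, 50))   # placeholder for a real measurement
    time.sleep(15)
```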
The following table outlines the key software versions:
Software | Version |
---|---|
Ubuntu Server | 20.04 LTS |
Docker | 20.10 |
Kubernetes | 1.23 |
TensorFlow | 2.8.0 |
PyTorch | 1.10.0 |
PostgreSQL | 13.7 |
Prometheus | 2.30.1 |
Grafana | 8.3.3 |
Network Topology
The servers are connected to the Hong Kong University network via a dedicated 10 Gigabit Ethernet connection. A private subnet (192.168.10.0/24) is used for internal communication between the servers. Firewall rules are configured to restrict access to the servers from the public internet, following the Network Security Policy. The data storage server is accessible to the AI processing servers via NFS.
The network configuration is as follows:
- AI Processing Servers: 192.168.10.11, 192.168.10.12, 192.168.10.13
- Data Storage Server: 192.168.10.10
- Gateway: 192.168.10.1
- DNS Servers: 8.8.8.8, 8.8.4.4
Regular network performance testing is conducted, as described in the Performance Testing Procedures; a simple illustrative connectivity check is sketched below.
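The formal procedure lives in the Performance Testing Procedures; as a quick illustrative check only, the sketch below measures TCP connect latency from an AI processing server to the data storage server. The NFS port (2049) and the three-sample average are assumptions chosen for illustration.

```python
# Minimal sketch: measure TCP connect latency to the data storage server.
# Port 2049 (standard NFS) and the three-sample average are illustrative choices.
import socket
import time

STORAGE_HOST = "192.168.10.10"
NFS_PORT = 2049

samples = []
for _ in range(3):
    start = time.perf_counter()
    with socket.create_connection((STORAGE_HOST, NFS_PORT), timeout=2):
        pass
    samples.append((time.perf_counter() - start) * 1000)

print(f"Average connect latency: {sum(samples) / len(samples):.2f} ms")
```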
Future Considerations
Planned upgrades include migrating to a more scalable database solution (e.g., a distributed database) and exploring the use of specialized AI accelerators (e.g., Google TPUs). Please refer to the Future Development Roadmap for more information. We are also investigating the integration of Federated Learning techniques to improve model accuracy without compromising data privacy. Finally, we aim to enhance the Automated Deployment Pipeline for faster and more reliable software releases.