# AI in the Australian Outback: Server Configuration

This article details the server configuration for our “AI in the Australian Outback” project, focusing on the infrastructure supporting remote data analysis and predictive modeling. This project utilizes machine learning to analyze environmental data collected from sensors deployed across vast, sparsely populated regions of Australia. This guide is intended for new team members responsible for server maintenance and scaling.

## Project Overview

The "AI in the Australian Outback" project aims to predict bushfire risk, monitor wildlife populations, and optimize resource allocation using data gathered from a network of sensor nodes. A key challenge is the remote location of these sensors and the limited bandwidth available for data transmission. The server infrastructure is designed to handle intermittent connectivity, large data volumes, and the computational demands of complex AI models. We leverage a hybrid cloud approach, utilizing on-premise servers for initial data processing and cloud services for model training and long-term storage. See also Data Acquisition Strategy for information on data sources.
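The intermittent connectivity described above is usually handled with a store-and-forward pattern at the ingestion boundary: readings are queued locally and flushed in order when the uplink returns. Below is a minimal sketch of that pattern; the class, the `send` callable, and its `ConnectionError` contract are illustrative assumptions, not part of the project codebase.

```python
import json
from collections import deque


class StoreAndForwardBuffer:
    """Buffers sensor readings while the uplink is down, then flushes
    them in arrival order once connectivity returns.

    `send` is any callable that transmits one serialized reading and
    raises ConnectionError on failure (a hypothetical interface).
    """

    def __init__(self, send):
        self.send = send
        self.pending = deque()

    def submit(self, reading: dict) -> None:
        # Enqueue first so nothing is lost if transmission fails mid-call.
        self.pending.append(json.dumps(reading))
        self.flush()

    def flush(self) -> int:
        """Attempt to drain the queue; returns how many readings went out."""
        sent = 0
        while self.pending:
            try:
                self.send(self.pending[0])
            except ConnectionError:
                break  # link still down; keep readings for the next attempt
            self.pending.popleft()
            sent += 1
        return sent
```

In practice the on-node buffer is backed by persistent storage rather than memory, so readings survive a power cycle; the in-memory deque here just keeps the sketch short.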

## Server Hardware Specifications

Our primary on-premise server, affectionately nicknamed “Dingo”, is responsible for initial data ingestion, pre-processing, and real-time analysis. A secondary server, “Wallaby”, acts as a hot standby for redundancy.

| Component | Specification (Dingo) | Specification (Wallaby) |
| --- | --- | --- |
| CPU | 2 × Intel Xeon Gold 6248R (24 cores / 48 threads) | 2 × Intel Xeon Gold 6248R (24 cores / 48 threads) |
| RAM | 256 GB DDR4 ECC Registered | 256 GB DDR4 ECC Registered |
| Storage (OS) | 1 TB NVMe SSD | 1 TB NVMe SSD |
| Storage (Data) | 16 TB RAID 6 (SAS 7.2k RPM) | 16 TB RAID 6 (SAS 7.2k RPM) |
| Network Interface | 2 × 10 Gbps Ethernet | 2 × 10 Gbps Ethernet |
| Power Supply | 2 × 1200 W Redundant | 2 × 1200 W Redundant |

These servers are housed in a climate-controlled rack at our regional data center. See Data Center Access Procedures for details on physical access.
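Wallaby's hot-standby role depends on a heartbeat check against Dingo: the standby is promoted only after the primary stops answering. The sketch below illustrates the idea with a plain TCP probe; hostnames, the port, and the single-failure promotion are simplifications (real HA tooling uses a dedicated health endpoint and requires several consecutive misses).

```python
import socket


def primary_alive(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the primary succeeds.

    Illustrative only: production checks hit an application-level
    health endpoint, not a raw port.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def choose_active(primary_ok: bool) -> str:
    # Promote the standby only when the primary's heartbeat fails.
    return "dingo" if primary_ok else "wallaby"
```

A real deployment would also need fencing (making sure Dingo is truly down before Wallaby takes over) to avoid a split-brain where both servers write to the data array.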

## Software Stack

The software stack is built around a Linux foundation, optimized for data science workloads.

| Software | Version | Purpose |
| --- | --- | --- |
| Operating System | Ubuntu Server 22.04 LTS | Base OS, system management |
| Programming Language | Python 3.10 | Primary language for data analysis and AI models |
| Machine Learning Framework | TensorFlow 2.12 | Deep learning framework |
| Data Storage | PostgreSQL 15 | Relational database for metadata and configuration |
| Message Queue | RabbitMQ 3.9 | Asynchronous message handling for sensor data |
| Web Server | Nginx 1.23 | Serving API endpoints and monitoring dashboards |
| Monitoring | Prometheus & Grafana | System and application monitoring |

Detailed installation guides for each component can be found in the Software Installation Manual. We utilize Docker for containerization to ensure consistent environments across development and production.
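On the ingestion path, messages arriving from RabbitMQ are validated before any metadata is written to PostgreSQL, so a malformed payload can be dead-lettered instead of crashing a consumer. Here is a hedged sketch of that validation step; the field names and types are invented for the example and are not the project's actual schema.

```python
import json

# Fields every sensor message must carry before it enters the pipeline.
# (Illustrative schema, not the project's real one.)
REQUIRED_FIELDS = {
    "node_id": str,
    "timestamp": (int, float),
    "metric": str,
    "value": (int, float),
}


def validate_reading(raw: bytes) -> dict:
    """Parse and validate one sensor message off the queue.

    Raises ValueError on malformed input so the consumer can route the
    message to a dead-letter queue rather than crash.
    """
    try:
        reading = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"not valid JSON: {exc}") from exc
    for field, ftype in REQUIRED_FIELDS.items():
        if field not in reading:
            raise ValueError(f"missing field: {field}")
        if not isinstance(reading[field], ftype):
            raise ValueError(f"bad type for field: {field}")
    return reading
```

In the running system this function would sit inside the RabbitMQ consumer callback (e.g. via the `pika` client), with the validated dict then persisted to PostgreSQL.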

## Cloud Integration

We use Amazon Web Services (AWS) for model training and long-term data archiving.
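For the archiving side, batches are typically keyed by date and sensor node before being shipped to object storage, which keeps lifecycle rules and later queries simple. The sketch below shows one plausible layout; the bucket structure and function names are assumptions for illustration, not the project's actual scheme.

```python
from datetime import datetime


def archive_key(node_id: str, captured_at: datetime) -> str:
    """Build the S3 object key for one compressed batch of readings.

    Date-partitioned prefixes (illustrative layout) make lifecycle
    rules and per-day queries straightforward.
    """
    return (f"raw/{captured_at:%Y/%m/%d}/"
            f"{node_id}/{captured_at:%H%M%S}.json.gz")


def upload_archive(path: str, bucket: str, key: str) -> None:
    """Ship one batch file to S3; requires boto3 and valid credentials."""
    import boto3  # imported lazily so the key logic is testable offline
    boto3.client("s3").upload_file(path, bucket, key)
```

Uploads would normally run from Dingo over the 10 Gbps links during low-traffic windows, since sensor-side bandwidth is the scarce resource, not data-center bandwidth.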
