Chromium Server Configuration
This article details the configuration of our "Chromium" servers, designed for high-throughput image and video transcoding. These servers are critical to the functionality of Video Processing Services and Image Repository. This guide is intended for newcomers to the server infrastructure team.
Overview
The Chromium servers provide dedicated resources for media handling and are tuned specifically for the demands of Transcoding Pipeline and Content Delivery Network operations. Each server uses a standardized hardware configuration and a tailored software stack to maximize performance and reliability. Understanding these details is essential for effective Server Maintenance and Troubleshooting.
Hardware Specifications
The Chromium servers are built around a consistent hardware base to simplify management and ensure predictable performance. The following table details the core components:
| Component | Specification |
|---|---|
| CPU | Dual Intel Xeon Gold 6248R (24 cores / 48 threads per CPU) |
| RAM | 256 GB DDR4 ECC Registered, 2933 MHz |
| Storage (OS) | 500 GB NVMe SSD (read/write optimized) |
| Storage (Media) | 16 TB SAS HDD (7200 RPM, RAID 6) |
| Network Interface | Dual 100 Gigabit Ethernet |
| GPU | 2x NVIDIA Quadro RTX 8000 (48 GB GDDR6 each) |
This configuration pairs substantial processing power (96 CPU threads and two GPUs per server) with the storage capacity needed for large media files. The dual 100GbE interfaces provide high-bandwidth connectivity for efficient data transfer to and from the Storage Clusters.
Software Configuration
The Chromium servers run a customized instance of Ubuntu Server 22.04 LTS. Several key software packages are installed and configured to support the transcoding workflow. We employ a containerized approach using Docker and Kubernetes for application deployment and orchestration.
Operating System & Core Utilities
The base operating system is hardened according to our Security Policy. Essential utilities include:
- `systemd`: For system and service management.
- `rsync`: For efficient file synchronization and backups.
- `ntp`: For accurate time synchronization.
- `fail2ban`: For intrusion prevention.
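A quick health check of the utilities above can be scripted against systemd. This is a minimal sketch; the unit names below are assumptions and should be adjusted to match the actual units on the servers (e.g. some deployments use `chrony` or `systemd-timesyncd` rather than an `ntp` unit).

```python
# Sketch: verify the core utilities listed above are active under systemd.
# Unit names are assumptions; adjust to the actual deployment.
import subprocess

CORE_SERVICES = ["rsync", "ntp", "fail2ban"]  # hypothetical unit names

def is_active_cmd(service: str) -> list[str]:
    """Build the `systemctl is-active` invocation for one service."""
    return ["systemctl", "is-active", "--quiet", service]

def check_services(runner=subprocess.run) -> dict[str, bool]:
    """Map each service name to whether its unit reports active.

    The runner is injectable so the logic can be exercised without systemd.
    """
    return {
        svc: runner(is_active_cmd(svc)).returncode == 0
        for svc in CORE_SERVICES
    }
```

Injecting the runner keeps the check testable on machines without systemd; on a Chromium server, calling `check_services()` with the default runner queries the real units.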
Transcoding Stack
The core transcoding functionality is provided by a combination of tools:
| Software | Version | Purpose |
|---|---|---|
| FFmpeg | 5.1.2 | Primary transcoding engine |
| HandBrake CLI | 1.6.1 | Specific codec optimizations |
| NVIDIA CUDA Toolkit | 11.8 | GPU-accelerated transcoding |
| Docker | 20.10.17 | Containerization platform |
| Kubernetes | 1.26.3 | Container orchestration |
These tools are deployed within Docker containers and managed by Kubernetes, providing scalability and fault tolerance. The full workflow is documented separately in the Transcoding Workflow article.
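To illustrate how FFmpeg and the CUDA Toolkit combine in practice, here is a hedged sketch of a GPU-accelerated transcode invocation using FFmpeg's NVENC H.264 encoder (`h264_nvenc`). The file paths and bitrate are hypothetical; the exact flags used in production live in the transcoding containers.

```python
# Sketch of a GPU-accelerated FFmpeg transcode command.
# Paths and bitrate are illustrative, not production values.
def nvenc_transcode_cmd(src: str, dst: str, bitrate: str = "8M") -> list[str]:
    """Build an ffmpeg command that decodes and encodes on the GPU."""
    return [
        "ffmpeg",
        "-hwaccel", "cuda",      # decode on the GPU
        "-i", src,
        "-c:v", "h264_nvenc",    # NVENC H.264 encoder
        "-b:v", bitrate,
        "-c:a", "copy",          # pass audio through untouched
        dst,
    ]

print(" ".join(nvenc_transcode_cmd("in.mov", "out.mp4")))
```

Building the argument list in code (rather than shelling out to a format string) avoids quoting bugs when file names contain spaces, and the same list can be handed directly to `subprocess.run`.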
Network Configuration
Each Chromium server is assigned a static IP address within the `10.0.0.0/16` network. The dual 100GbE interfaces are configured as a link aggregation group (LAG) for increased bandwidth and redundancy. Firewall rules, managed by iptables, restrict access to essential ports only. DNS resolution is handled by our internal DNS Servers.
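When auditing host inventories, it is easy to validate that a static address actually falls inside the `10.0.0.0/16` network using the standard library. The sample addresses below are hypothetical.

```python
# Minimal sketch: validate that a server's static address sits inside the
# 10.0.0.0/16 network described above. Sample addresses are hypothetical.
import ipaddress

CHROMIUM_NET = ipaddress.ip_network("10.0.0.0/16")

def in_chromium_net(addr: str) -> bool:
    """True if the address belongs to the Chromium server network."""
    return ipaddress.ip_address(addr) in CHROMIUM_NET

print(in_chromium_net("10.0.12.34"))    # → True (inside the /16)
print(in_chromium_net("192.168.1.10"))  # → False (outside)
```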
Monitoring and Alerting
Continuous monitoring is critical for ensuring the health and performance of the Chromium servers. We utilize the following tools:
| Tool | Metric Monitored | Alerting Threshold |
|---|---|---|
| Prometheus | CPU utilization, memory usage, disk I/O, network traffic | CPU > 90%, memory > 80%, disk I/O > 95%, network errors |
| Grafana | Visualization of Prometheus metrics | N/A |
| Nagios | Service availability (FFmpeg, Docker, Kubernetes) | Service down |
| ELK Stack (Elasticsearch, Logstash, Kibana) | Log aggregation and analysis | Error rates exceeding predefined limits |
Alerts are routed to the On-Call Schedule and escalated according to severity. Detailed logs are stored and analyzed using the ELK stack for troubleshooting and performance optimization. Refer to the Monitoring Dashboard Documentation for more information.
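The threshold logic from the table above can be sketched as a simple check over a metrics snapshot. The snapshot shape and metric names here are assumptions for illustration; the authoritative rules live in the Prometheus alerting configuration.

```python
# Sketch of the alerting thresholds from the monitoring table.
# Metric names and the snapshot dict shape are illustrative assumptions.
THRESHOLDS = {
    "cpu_pct": 90.0,      # CPU utilization > 90%
    "mem_pct": 80.0,      # memory usage > 80%
    "disk_io_pct": 95.0,  # disk I/O > 95%
}

def breached(metrics: dict) -> list[str]:
    """Return the names of metrics exceeding their alerting threshold."""
    return [name for name, limit in THRESHOLDS.items()
            if metrics.get(name, 0.0) > limit]

print(breached({"cpu_pct": 95.5, "mem_pct": 40.0, "disk_io_pct": 96.0}))
# → ['cpu_pct', 'disk_io_pct']
```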
Common Issues and Troubleshooting
Common issues include GPU memory exhaustion, network connectivity problems, and container failures. Consult the Knowledge Base for detailed troubleshooting guides. Always check the Server Logs first when investigating issues. Regular Performance Tuning is essential to maintain optimal server performance.
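For the GPU memory exhaustion case, one practical first check is the per-GPU memory reported by `nvidia-smi --query-gpu=memory.used,memory.total --format=csv,noheader,nounits`. The sketch below parses that CSV output; the sample data is illustrative, not captured from a real server.

```python
# Hedged example: flag GPUs nearing memory exhaustion from the CSV output
# of `nvidia-smi --query-gpu=memory.used,memory.total
# --format=csv,noheader,nounits`. SAMPLE is illustrative data (MiB).
SAMPLE = "45000, 49152\n12000, 49152"

def gpus_near_limit(csv_text: str, threshold: float = 0.9) -> list[int]:
    """Return indices of GPUs whose memory use exceeds the threshold."""
    hot = []
    for idx, line in enumerate(csv_text.strip().splitlines()):
        used, total = (float(x) for x in line.split(","))
        if used / total > threshold:
            hot.append(idx)
    return hot

print(gpus_near_limit(SAMPLE))  # → [0]: GPU 0 is above 90% of its 48 GB
```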