- Technical Deep Dive: Optimal Server Configuration for MediaWiki 1.40 Deployment
This document provides a comprehensive technical specification and operational guide for the server hardware configuration specifically tailored for hosting and scaling modern deployments of MediaWiki version 1.40. This configuration prioritizes I/O throughput, memory bandwidth, and predictable low-latency response times crucial for dynamic content delivery and database operations inherent to wiki platforms.
---
- 1. Hardware Specifications
The recommended architecture is based on a dual-socket, high-core-count platform utilizing modern Intel Xeon Scalable (4th Generation - Sapphire Rapids) or AMD EPYC (Genoa) processors, optimized for virtualization density and large memory footprints. This specification targets a mid-to-large enterprise wiki supporting 500,000 to 5 million active articles and moderate to high edit rates (up to 50 edits per minute sustained).
- 1.1. Core System Architecture
The foundation relies on a modern server motherboard supporting PCIe Gen 5.0 for maximum peripheral bandwidth, particularly for NVMe storage and high-speed networking.
| Component | Specification (Minimum Viable) | Specification (Recommended Enterprise) |
|---|---|---|
| Motherboard Platform | Dual Socket LGA 4677 (Intel) / SP5 (AMD) | Dual Socket SP5 (AMD EPYC Genoa/Bergamo) |
| Chipset | C741 or equivalent PCH supporting full PCIe 5.0 lanes | AMD SP5 Native I/O |
| Chassis Form Factor | 2U Rackmount (High Airflow) | 2U/4U Rackmount with redundant hot-swap fans |
| Power Supply Units (PSUs) | 2 x 1600W 80+ Platinum, Redundant (1+1) | 2 x 2000W 80+ Titanium, Redundant (1+1) |
- 1.2. Central Processing Units (CPUs)
MediaWiki, particularly during complex searches, parser operations, and background job processing (e.g., cache rebuilding, image thumbnail generation), benefits significantly from high core counts combined with high clock speeds. The configuration favors a balance, leveraging the increased core count for parallel database queries and background tasks, while maintaining strong single-thread performance for immediate page rendering.
- **Recommended CPU Selection:** AMD EPYC 9354P (32 Cores, 64 Threads) or Intel Xeon Gold 6438Y (32 Cores, 64 Threads).
| Metric | Specification | Rationale for MediaWiki 1.40 |
|---|---|---|
| Total Cores (Minimum) | 48 Cores (2x 24C) | Sufficient for concurrent web serving (PHP-FPM workers) and database hosting (if co-located). |
| Total Threads (Minimum) | 96 Threads | Enables high concurrency for read operations and efficient background job queues. |
| Base Clock Speed | $\ge 2.5$ GHz | Crucial for PHP execution speed (opcode processing). |
| L3 Cache Size | $\ge 128$ MB per socket | Minimizes latency when accessing frequently used configuration data and session tables. |
| Memory Channels | 12 Channels per socket (DDR5) | Maximizes memory bandwidth for database caching and session management. |
| PCIe Generation | Gen 5.0 | Required for high-throughput NVMe storage and 100GbE NICs. |
- *Related Topic: Server Core Utilization Best Practices*
- 1.3. System Memory (RAM)
MediaWiki performance is overwhelmingly dependent on efficient caching. The primary bottleneck is usually either the database buffer pool (if MySQL/MariaDB is co-located) or the opcode cache (OPcache) and object cache (APCu/Memcached/Redis). Therefore, high-capacity, high-speed DDR5 ECC Registered memory is mandatory.
We target a minimum of 512 GB, scaling up to 1 TB for high-traffic instances.
- **Memory Speed:** DDR5-4800 ECC RDIMM, utilizing all available memory channels at the maximum supported speed (e.g., 4800 MT/s or 5200 MT/s depending on CPU loading).
| Parameter | Value | Configuration Detail |
|---|---|---|
| Total Capacity (Minimum) | 512 GB | Allocated 256 GB for Database Buffer Pool (Shared), 128 GB for PHP/OPCache, 128 GB for OS/Kernel. |
| Total Capacity (Recommended) | 1024 GB (1 TB) | Allows for significant in-memory caching of page rendering results and database indexing. |
| Memory Type | DDR5 ECC RDIMM | Error correction is non-negotiable for persistent data integrity. |
| DIMM Density | 64 GB or 128 GB DIMMs | Maximizes channel utilization while maintaining optimal population density for maximum clock speed stability. |
- *Related Topic: Optimizing PHP OPcache for MediaWiki*
- *Related Topic: Database Buffer Pool Sizing Guide*
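The 512 GB allocation above can be translated into a starting-point configuration. The values below are an illustrative sketch derived from the table, not tuned settings: `innodb_buffer_pool_size` matches the 256 GB database share, while OPcache itself needs only a fraction of the 128 GB earmarked for PHP (the remainder serves PHP-FPM worker processes).

```ini
# my.cnf (MariaDB/MySQL) -- illustrative sizing for the 512 GB minimum build
[mysqld]
innodb_buffer_pool_size = 256G
innodb_buffer_pool_instances = 16
innodb_flush_method = O_DIRECT

# php.ini -- OPcache share (memory_consumption is in MB)
[opcache]
opcache.memory_consumption = 1024
opcache.interned_strings_buffer = 64
opcache.max_accelerated_files = 100000
```

Measure actual buffer pool hit rates and OPcache usage before committing to these splits; co-locating the database changes the arithmetic substantially.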
- 1.4. Storage Subsystem (I/O Criticality)
The storage subsystem is the second most critical component after RAM, impacting both database write latency (edits) and file serving (attachments). We mandate a tiered NVMe architecture utilizing PCIe Gen 5.0 where possible.
- 1.4.1. Database Storage (Primary)
The database (MariaDB/MySQL) requires extremely low latency and high IOPS for transaction logging (WAL/Redo Logs) and index lookups.
- **Drive Type:** U.2 or M.2 NVMe SSDs, Enterprise Grade (High Endurance - 3 DWPD minimum).
- **Interface:** PCIe Gen 5.0 x4 or x8 where available.
- **Configuration:** RAID 10 for redundancy and parallel read/write performance.
| Metric | Specification | Notes |
|---|---|---|
| Capacity | 4 TB Usable (Minimum) | Sufficient for core database, indexes, and system tables for 5M articles. |
| IOPS (Random 4K Read) | $\ge 1,500,000$ IOPS (Aggregate) | Essential for rapid lookup of revision data. |
| Latency (P99) | $\le 100$ microseconds | Target latency for transaction commits. |
| Configuration | 4 x 3.84 TB NVMe Drives in RAID 10 | Provides 15.36 TB raw capacity and 7.68 TB usable; each mirrored pair tolerates one drive failure. |
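RAID 10 capacity and redundancy follow from simple mirror-pair arithmetic, which makes the array sizing easy to sanity-check. A minimal sketch, assuming four identical drives and ignoring filesystem overhead:

```python
def raid10_usable_tb(drive_count: int, drive_tb: float) -> float:
    """RAID 10 stripes across mirrored pairs, so usable capacity is half of raw."""
    assert drive_count % 2 == 0, "RAID 10 requires an even number of drives"
    return drive_count * drive_tb / 2

raw_tb = 4 * 3.84                      # 15.36 TB raw across four drives
usable_tb = raid10_usable_tb(4, 3.84)  # 7.68 TB usable after mirroring
print(raw_tb, usable_tb)
```

Each mirrored pair tolerates one drive failure, so a four-drive array survives up to two failures provided they land in different pairs.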
- 1.4.2. Filesystem Storage (Attachments and Cache)
MediaWiki stores uploaded files (LocalFile storage) and various internal caches here. This benefits from high sequential read performance.
- **Drive Type:** Enterprise NVMe SSDs (Slightly lower endurance acceptable compared to DB).
- **Configuration:** RAID 1 or JBOD depending on external storage solution integration.
| Metric | Specification | Notes |
|---|---|---|
| Capacity | 8 TB Usable (Minimum) | Accommodates several years of high-resolution image uploads and rapid cache turnover. |
| Sequential Read Speed | $\ge 12$ GB/s (Aggregate) | Crucial for serving large image thumbnails quickly. |
| Configuration | 2 x 8 TB NVMe Drives in RAID 1 Mirror | Simple redundancy for user-uploaded content. |
- *Related Topic: Storage Tiering for MediaWiki Assets*
- *Related Topic: NVMe RAID Configuration Best Practices*
- 1.5. Networking Interface Cards (NICs)
Given the expected traffic load, 1GbE is insufficient. We mandate dual 25GbE or 100GbE interfaces for front-end load balancing and backend database replication traffic.
- **Interface Type:** PCIe Gen 4.0/5.0 x8 or x16 adapter.
- **Configuration:** Dual Port 100GbE (QSFP28/QSFP-DD) with support for RDMA (RoCE) if the database is offloaded to a separate, high-speed cluster.
- *Related Topic: High-Speed Network Configuration for Database Clusters*
---
- 2. Performance Characteristics
The performance of a MediaWiki deployment is best measured by two primary metrics: **Read Latency (Page Load Time)** and **Write Throughput (Edit Commit Time)**. Benchmarks below assume a standardized load generated by tools simulating typical user behavior (80% reads, 20% writes/interacts).
- 2.1. Baseline Benchmarks (Synthesized Load)
These figures are derived from testing a configuration matching the "Recommended Enterprise" specifications (1TB RAM, Dual EPYC Genoa, PCIe 5.0 NVMe).
- 2.1.1. Page Load Latency (Read Performance)
This measures the time from HTTP request receipt to the final byte sent for a standard, cached page render.
| Metric | Target Value (Under Load) | Description |
|---|---|---|
| Average P50 Latency | $< 150$ ms | Median response time for standard users. |
| P95 Latency | $< 300$ ms | 95% of requests must complete within this threshold. |
| P99 Latency (Database Heavy) | $< 550$ ms | Critical for complex special pages or template-heavy articles not fully cached. |
- *Related Topic: Tuning PHP-FPM Concurrency for Low Latency*
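Percentile targets like those above are straightforward to verify against measured samples. A minimal sketch using the nearest-rank method (one common convention; monitoring stacks often interpolate instead), with hypothetical latency samples:

```python
def percentile(samples, p):
    """Nearest-rank percentile: smallest value with at least p% of samples at or below it."""
    ordered = sorted(samples)
    rank = max(1, -(-len(ordered) * p // 100))  # ceil(n * p / 100)
    return ordered[rank - 1]

latencies_ms = [120, 135, 140, 150, 180, 210, 260, 290, 300, 410]  # hypothetical
print(percentile(latencies_ms, 50), percentile(latencies_ms, 95))  # → 180 410
```

In practice, feed this from access-log timings over a sliding window and alert when P95 or P99 crosses the thresholds in the table.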
- 2.1.2. Write Throughput (Edit Performance)
This measures the time required for a user to successfully save an edit, involving PHP processing, database transaction COMMIT, and cache invalidation.
| Metric | Target Value | Description |
|---|---|---|
| Single Edit Commit Time | $< 400$ ms | Time from 'Save Page' button press to confirmation. |
| Sustained Edit Rate | $\ge 60$ Edits/Minute | Achievable concurrent rate before queuing becomes noticeable. |
| Background Job Processing Rate | $\ge 15$ Jobs/Second | Rate at which deferred updates (e.g., cache rebuilds) are processed by the job queue worker. |
- 2.2. Impact of Memory Bandwidth
The high memory bandwidth provided by the 12-channel DDR5 configuration (theoretical peak $\approx 460$ GB/s per socket, $\approx 920$ GB/s aggregate for the dual-socket system) directly impacts the efficiency of the MariaDB InnoDB buffer pool. A 1 TB memory configuration allows the entire active dataset (schema, indexes, and hot pages) to reside in RAM, effectively pushing database response times from disk-bound (microseconds to milliseconds) to memory-bound (nanoseconds).
- **Key Performance Driver:** Absence of disk I/O for hot data.
- *Related Topic: DDR5 vs DDR4 Performance Impact on Database Servers*
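Theoretical peak bandwidth follows from channel count and transfer rate: each DDR5 channel moves 8 bytes per transfer, so a 12-channel socket at 4800 MT/s tops out near 460 GB/s, roughly 920 GB/s for a dual-socket system (sustained throughput is typically 70-80% of this ceiling). A quick sketch of the arithmetic:

```python
def peak_bandwidth_gbs(channels: int, mega_transfers: int, bytes_per_transfer: int = 8) -> float:
    """Theoretical peak memory bandwidth in GB/s (decimal units)."""
    return channels * mega_transfers * bytes_per_transfer / 1000

per_socket = peak_bandwidth_gbs(12, 4800)  # 460.8 GB/s per socket
aggregate = 2 * per_socket                 # 921.6 GB/s for two sockets
print(per_socket, aggregate)
```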
- 2.3. Scalability Projections
This hardware configuration is designed for horizontal scalability via a standard Web/App/DB tier separation.
- **Web/App Tier (PHP-FPM):** The 64 effective threads per socket allow for efficient handling of 150-200 concurrent PHP processes when deployed behind a load balancer (e.g., Nginx or HAProxy).
- **Database Tier (If separated):** If the database is moved to a dedicated cluster, this server can comfortably handle the application layer for up to 200,000 page views per hour, relying on the external database cluster for persistence.
- *Related Topic: Horizontal Scaling Strategies for MediaWiki Frontends*
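A common rule of thumb for sizing PHP-FPM pools is to divide the memory budget by the observed per-worker footprint. The figures below are illustrative assumptions (64 GB reserved for PHP-FPM, roughly 300 MB resident per MediaWiki worker), not measured values:

```python
def fpm_max_children(pool_ram_gb: float, worker_mb: float) -> int:
    """Upper bound for PHP-FPM's pm.max_children given a memory budget."""
    return int(pool_ram_gb * 1024 // worker_mb)

# Hypothetical budget: 64 GB for the pool, ~300 MB per worker
print(fpm_max_children(64, 300))  # → 218
```

The result is in line with the 150-200 concurrent-process range cited above; measure real worker RSS under load before setting `pm.max_children`.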
---
- 3. Recommended Use Cases
This high-specification server configuration is not intended for small departmental wikis. It is engineered for mission-critical, high-read, and moderately high-write environments where downtime or latency directly impacts organizational productivity or public information dissemination.
- 3.1. Enterprise Knowledge Management (EKM)
- **Scenario:** A large corporation (5,000+ internal users) using MediaWiki as the central repository for technical documentation, SOPs, and compliance records.
- **Requirement Focus:** High availability, rapid search response, and strict version control integrity. The massive RAM ensures that the entire index structure remains resident, even with complex inter-wiki links and templates.
- **Feature Utilization:** Heavy use of extensions requiring complex parser operations, such as Semantic MediaWiki or advanced graphing tools.
- *Related Topic: Semantic MediaWiki Performance Tuning*
- 3.2. High-Traffic Public Reference Wiki
- **Scenario:** A public-facing wiki expecting traffic spikes (e.g., product launch documentation, specialized community resource) that must sustain a peak load of 5,000 requests per minute.
- **Requirement Focus:** Sustained high read throughput and efficient content delivery (serving cached HTML and image thumbnails). The 100GbE networking is crucial here to prevent NIC saturation under heavy load.
- **Feature Utilization:** Extensive use of caching layers (Varnish/WAF) integrated tightly with the server's high I/O capabilities.
- *Related Topic: Integrating Varnish Cache with MediaWiki*
- 3.3. Development and Staging Environments (High Fidelity)
- **Scenario:** A staging environment that must perfectly mirror the production database size and complexity for accurate performance testing before deployment.
- **Requirement Focus:** The ability to restore multi-terabyte database backups quickly and process large-scale data migrations (e.g., importing external XML dumps) without impacting production systems on a separate partition. The high IOPS rating of the NVMe array facilitates rapid import/export operations.
- *Related Topic: Disaster Recovery Procedures for Large MediaWiki Instances*
---
- 4. Comparison with Similar Configurations
To contextualize the recommended specification, we compare it against a standard entry-level configuration (suitable for small organizational wikis, $\le 100$ users) and a highly specialized, memory-optimized configuration (for extreme caching needs).
- 4.1. Configuration Matrix
| Feature | Entry-Level Configuration (Small Wiki) | **Recommended Enterprise (Our Target)** | Extreme Memory Configuration (e.g., 4TB RAM) |
| :--- | :--- | :--- | :--- |
| **CPU** | 1x 16-Core EPYC 7003 Series | 2x 32-Core EPYC 9354P (Gen 5) | 2x 64-Core EPYC Bergamo (Focus on density) |
| **RAM Capacity** | 128 GB DDR4 | **1024 GB (1 TB) DDR5** | 4096 GB (4 TB) DDR5 |
| **Storage** | 2x 1.92 TB SATA SSD (RAID 1) | **8x 3.84 TB PCIe Gen 5 NVMe (Tiered RAID)** | 8x 7.68 TB PCIe Gen 5 NVMe (All NVMe) |
| **DB Latency** | $1.5$ ms (P99) | **$< 0.55$ ms (P99)** | $< 0.1$ ms (Near-zero) |
| **Max Users (Sustained)** | $\approx 500$ concurrent | $\approx 3,000$ concurrent | $\approx 5,000+$ concurrent |
| **Cost Factor (Relative)** | 1.0x | **4.5x** | 7.0x |
- *Related Topic: Hardware Sizing Guidelines Based on User Load*
- 4.2. Analysis of Configuration Trade-offs
The primary trade-off in selecting the **Recommended Enterprise** configuration is the significant investment in DDR5 memory channels and PCIe Gen 5.0 storage controllers.
1. **DDR4 vs. DDR5:** The jump from DDR4 (Entry-Level) to DDR5 (Recommended) is critical. While DDR4 may suffice for smaller datasets, the increased memory bandwidth of DDR5 translates directly into faster cache replenishment and significantly reduced wait times in the PHP execution environment, especially with extensions that rely heavily on in-memory structures (e.g., parser caches).
2. **NVMe Gen 4 vs. Gen 5:** Moving from Gen 4 to Gen 5 NVMe (in the recommended build) provides a theoretical 2x increase in sequential throughput. For MediaWiki this primarily benefits large file serving and database snapshotting/restoration; the improvement in random 4K reads (the metric more critical for databases) is smaller, but still meaningful at the highest I/O levels.
- *Related Topic: Impact of Memory Channel Configuration on Database Performance*
- 4.3. Comparison with Virtualized Instances
This specification assumes bare-metal or dedicated virtual machine usage where hardware resources are guaranteed. Deploying this configuration as a heavily oversubscribed VM (e.g., sharing physical NVMe controllers or limited memory bandwidth) will result in performance degradation closer to the Entry-Level configuration due to resource contention.
- *Related Topic: Best Practices for Virtualizing MediaWiki Workloads*
---
- 5. Maintenance Considerations
Deploying high-performance hardware introduces specific requirements for power, cooling, and operational management to ensure the longevity and stability of the system, especially given the high thermal output of modern multi-socket CPUs and numerous NVMe devices.
- 5.1. Thermal Management and Airflow
The combination of high-TDP CPUs (often >250W TDP each) and multiple high-speed NVMe drives necessitates robust cooling infrastructure.
- **Chassis Requirement:** Must be a high-airflow 2U or 4U chassis specifically rated for dual-socket server operation (e.g., optimized for front-to-back cooling paths).
- **Ambient Temperature:** The data center environment must maintain an ambient temperature of $T_{a} \le 22^{\circ} \text{C}$ (71.6$^{\circ} \text{F}$) to allow the CPUs to maintain maximum boost clocks without throttling.
- **CPU Cooling:** Liquid cooling solutions (Direct-to-Chip) are highly recommended over standard air coolers if running sustained high loads to manage the thermal density of EPYC/Xeon processors.
- *Related Topic: Server Thermal Throttling Mitigation*
- 5.2. Power Requirements and Redundancy
The system's peak power draw, with all components at full utilization (heavy PHP processing plus high database I/O), can approach 2,000 Watts at the wall.
- **PSU Capacity:** The recommended 2000W Titanium PSUs in a 1+1 redundant pair ensure that a single PSU can carry the full peak load, so one unit can fail without triggering a shutdown. Note that this guarantee holds only while sustained draw stays below the capacity of a single PSU.
- **UPS Sizing:** The Uninterruptible Power Supply (UPS) system must be sized to handle the total rack load, providing sufficient runtime (minimum 15 minutes at full load) to allow for an orderly shutdown or transfer to generator power, particularly crucial for preventing database corruption during unexpected power loss.
- *Related Topic: Power Budgeting for High-Density Servers*
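PSU headroom can be sanity-checked with a simple component budget. The wattages below are rough, illustrative assumptions for this class of hardware, not vendor specifications:

```python
# Rough peak-draw budget in Watts (illustrative figures, not vendor specs)
budget_w = {
    "cpus (2 x 320 W boost)": 640,
    "dimms (16 x 12 W)": 192,
    "nvme (6 x 25 W)": 150,
    "nics, board, fans": 350,
}
peak_w = sum(budget_w.values())  # 1332 W component draw
headroom_w = 2000 - peak_w       # vs. one 2000 W PSU (1+1: a single unit must carry all)
print(peak_w, headroom_w)
```

Add PSU conversion losses (a few percent for Titanium units) and a safety margin on top of this figure when sizing the UPS for the rack.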
- 5.3. Firmware and Driver Management
Maintaining optimal performance requires keeping firmware synchronized across all components, as performance bugs are often resolved in microcode updates.
- **BIOS/UEFI:** Must be kept current to ensure the latest memory speed profiles (e.g., maximizing DDR5 stability at rated speeds) and PCIe lane allocation optimizations are active.
- **Storage Controller Firmware:** Crucial for NVMe drives. Outdated firmware can lead to unexpected latency spikes or premature drive wear due to inefficient garbage collection routines. Regular checks against OEM vendor release notes are mandatory.
- *Related Topic: Server Hardware Lifecycle Management*
- 5.4. Monitoring Targets
Effective maintenance relies on proactive monitoring of key hardware health indicators:
1. **Memory ECC Errors:** Monitor the BMC/IPMI logs for correctable (and especially uncorrectable) ECC errors. Even a low rate of correctable errors warrants investigation into memory voltage or cooling stability. *Related Topic: Interpreting BMC/IPMI Logs*
2. **NVMe Drive Telemetry:** Track SMART data, specifically the **Media Wearout Indicator** (for endurance validation) and **Temperature Threshold Exceeded Count**. *Related Topic: NVMe SMART Data Analysis*
3. **CPU Utilization and Throttling:** Monitor package C-states and thermal throttling events via hardware monitoring tools. Sustained high utilization (above 85% average) may indicate the need for further horizontal scaling rather than vertical scaling. *Related Topic: Advanced CPU Performance Monitoring*
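Checks like item 2 are easy to automate. A minimal sketch that pulls the normalized Media_Wearout_Indicator value out of `smartctl -A`-style attribute lines (attribute names and field positions vary by vendor and drive, so the parsing layout is an assumption):

```python
def parse_wearout(smart_output: str):
    """Return the normalized Media_Wearout_Indicator value (100 = new, counts down), or None."""
    for line in smart_output.splitlines():
        fields = line.split()
        if len(fields) >= 4 and fields[1] == "Media_Wearout_Indicator":
            return int(fields[3])
    return None

# Hypothetical smartctl attribute line for a drive at 97% remaining endurance
sample = "233 Media_Wearout_Indicator 0x0032 097 097 000 Old_age Always - 0"
print(parse_wearout(sample))  # → 97
```

In production, feed this from `smartctl` output on a schedule and alert well before the indicator approaches the vendor's replacement threshold.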
- 5.5. Backup Strategy Integration
Given the high write speed capability of the storage subsystem, the backup solution must be able to keep pace. A slow backup process can bottleneck live database commits.
- **Recommendation:** Utilize the high-speed 100GbE NICs to stream database backups directly to a high-speed NAS/SAN appliance, bypassing slower network paths. The snapshotting capability of the underlying filesystem (e.g., ZFS or LVM) should be leveraged for near-instantaneous pre-backup consistency points. *Related Topic: Zero-Downtime Backup Methodologies*
---
- Conclusion
This server configuration provides a robust, high-throughput platform engineered specifically to meet the demanding I/O and memory requirements of MediaWiki 1.40 deployments handling significant traffic volumes and complex data structures. Adherence to the specified DDR5 memory speeds, PCIe Gen 5.0 storage architecture, and rigorous thermal management protocols will ensure sustained peak performance.