Building the Modular Foundation for AI Factories with NVIDIA MGX

Table of Contents
- Why modular architecture matters now
- Inside the MGX rack system
- Mechanical components
- Electrical components
- Plumbing (cooling) infrastructure
- Get started
Why modular architecture matters now
- NVIDIA MGX offers a modular reference architecture for accelerated computing, allowing partners to design multiple systems with a building-block approach.
- It supports multiple product generations and hundreds of GPU, DPU, CPU, storage, and networking combinations for AI, HPC, and digital twin workloads (a minimal configuration sketch follows this list).
- Enables high-density, rack-scale deployments without compromising performance or reliability.
- Streamlines the build process, cutting deployment timelines from 12 months to less than 90 days.
- Empowers organizations to optimize data center builds for cost, performance, and supply chain resilience.
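
To make the building-block idea concrete, the sketch below models a rack as a list of interchangeable modules with simple space and power budgets. All class names, module names, and numeric values here are illustrative assumptions for this example, not an NVIDIA API or official MGX limits.

```python
from dataclasses import dataclass, field

# Hypothetical building blocks; names and values are illustrative,
# not an official NVIDIA MGX specification.

@dataclass
class Module:
    name: str
    rack_units: int   # vertical space consumed (U)
    power_kw: float   # worst-case power draw

@dataclass
class RackConfig:
    modules: list[Module] = field(default_factory=list)
    max_rack_units: int = 48       # assumed rack height budget
    max_power_kw: float = 132.0    # assumed budget, e.g. four 33 kW shelves

    def add(self, module: Module) -> None:
        self.modules.append(module)

    def validate(self) -> bool:
        used_u = sum(m.rack_units for m in self.modules)
        used_kw = sum(m.power_kw for m in self.modules)
        return used_u <= self.max_rack_units and used_kw <= self.max_power_kw

# Example: compose compute and switch modules like building blocks.
rack = RackConfig()
for _ in range(18):
    rack.add(Module("compute-tray", rack_units=1, power_kw=5.4))
for _ in range(9):
    rack.add(Module("switch-tray", rack_units=1, power_kw=1.5))
print("Configuration fits budget:", rack.validate())
```

In a scheme like this, swapping one module for another only requires that the new part fit the same space and power envelope, which is the essence of the modular, building-block approach.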
Inside the MGX rack system
Mechanical components
- The modular MGX rack provides structural integrity and serviceability for high-density data center deployments.
- The 33 kW Power Shelf delivers rack-level power, and the MGX Power Whip provides flexible connections for power distribution (a rough power-budget sketch follows this list).
- MGX components in NVIDIA GB200 NVL72 and GB300 NVL72 systems manage power density and thermal loads for liquid-cooled, rack-scale platforms.
- The advanced liquid-cooled MGX architecture addresses the energy and thermal demands of high-performance components such as the Blackwell compute nodes.
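
As a rough illustration of how 33 kW power shelves relate to rack-level loads, the following arithmetic estimates a shelf count for an assumed rack budget. The ~120 kW figure is a commonly cited ballpark for a GB200 NVL72-class rack, not an official specification, and the N+1 redundancy policy is likewise an assumption for this sketch.

```python
import math

# Rough power-budget arithmetic, not an official sizing guide.
rack_load_kw = 120.0       # assumed rack load (ballpark for an NVL72-class rack)
shelf_capacity_kw = 33.0   # per the 33 kW Power Shelf named above

shelves_needed = math.ceil(rack_load_kw / shelf_capacity_kw)
shelves_with_redundancy = shelves_needed + 1  # simple N+1 assumption

print(f"Minimum shelves: {shelves_needed}")               # 4
print(f"With N+1 redundancy: {shelves_with_redundancy}")  # 5
```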
Electrical components
- NVLink switches create a coherent 72-GPU domain, enabling training of models with up to 1.8 trillion parameters (a back-of-the-envelope memory sketch follows this list).
- Deploy inference clusters with low latency variance across 72-GPU racks for AI workloads.
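
The 1.8-trillion-parameter figure can be sanity-checked with back-of-the-envelope memory math. The sketch below assumes 2-byte (FP16/BF16) weights split evenly across the 72-GPU NVLink domain and ignores optimizer state, activations, and KV cache, so it is a lower bound rather than a sizing rule.

```python
# Back-of-the-envelope memory math for the 1.8-trillion-parameter figure.
params = 1.8e12
bytes_per_param = 2      # FP16/BF16 weights (assumption)
gpus_in_domain = 72

total_weight_tb = params * bytes_per_param / 1e12
per_gpu_weight_gb = params * bytes_per_param / gpus_in_domain / 1e9

print(f"Total weight memory: {total_weight_tb:.1f} TB")     # ~3.6 TB
print(f"Per-GPU weight share: {per_gpu_weight_gb:.0f} GB")  # ~50 GB
```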
Plumbing (cooling) infrastructure
- Liquid-cooling plumbing ensures efficient thermal management for high-power systems such as the GB200 NVL72 and GB300 NVL72.
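
To see why the plumbing matters at this power density, the sketch below applies the standard heat-balance relation Q = m_dot * c_p * dT to estimate coolant flow. The heat load, coolant properties, and 10 K loop temperature rise are all assumed values for illustration, not MGX or NVL72 specifications.

```python
# Rough coolant-flow estimate from the heat balance Q = m_dot * c_p * dT.
heat_load_w = 120_000.0  # assumed heat rejected to liquid (W)
c_p = 4186.0             # specific heat of water-like coolant, J/(kg*K)
delta_t = 10.0           # assumed loop temperature rise, K
density = 1000.0         # coolant density, kg/m^3

mass_flow = heat_load_w / (c_p * delta_t)          # kg/s
volume_flow_lpm = mass_flow / density * 1000 * 60  # liters per minute

print(f"Mass flow: {mass_flow:.2f} kg/s")              # ~2.87 kg/s
print(f"Volumetric flow: {volume_flow_lpm:.0f} L/min") # ~172 L/min
```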
Get started
- NVIDIA MGX is the foundation for the AI factory era, letting data centers evolve alongside silicon innovations through modular upgrade paths that protect existing investments.