Common Configs
This page captures common configurations we have seen GPU Clouds implement for Bare Metal infrastructure.
8× NVIDIA H100 SXM5
- 2× Intel Xeon Platinum 8480+ @ 2.0 GHz (224 Threads)
- 2 TB DDR5 @ 4800 MT/s
- 8× H100 SXM5 GPUs (640 GB GPU Memory)
- NVLink + NVSwitch Interconnect
- 2× 1.92 TB NVMe U.2 Gen5 (System Drives)
- 8× 7.68 TB NVMe U.2 Gen5 (Data Drives)
- NVIDIA BlueField-3 DPU
- 8× ConnectX-7 Single Port NDR InfiniBand HCA
- Unlimited Free Ingress & Egress
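For teams validating a freshly provisioned 8-GPU node, a quick sanity check of GPU count, per-GPU memory, and peer-to-peer (NVLink) reachability can be run with PyTorch. The snippet below is a minimal sketch, assuming PyTorch with CUDA support is installed on the node; the expected values in the comments mirror the specification above.

```python
import torch

# Minimal sanity check for an 8x H100 SXM5 node (sketch; assumes PyTorch + CUDA drivers).
assert torch.cuda.is_available(), "CUDA devices not visible -- check the driver install"

count = torch.cuda.device_count()
print(f"GPUs visible: {count}")  # expect 8 on this configuration

total_mem_gib = 0.0
for i in range(count):
    props = torch.cuda.get_device_properties(i)
    mem_gib = props.total_memory / 1024**3
    total_mem_gib += mem_gib
    print(f"  GPU {i}: {props.name}, {mem_gib:.0f} GiB")  # expect ~80 GiB per H100 SXM5

print(f"Total GPU memory: {total_mem_gib:.0f} GiB")  # expect ~640 GiB across the node

# With NVLink + NVSwitch, every GPU pair should report peer-to-peer access.
for i in range(count):
    for j in range(count):
        if i != j and not torch.cuda.can_device_access_peer(i, j):
            print(f"  WARNING: no P2P access between GPU {i} and GPU {j}")
```

On a healthy node every pair should report peer access; a missing link usually points to a fabric or driver issue worth raising with the provider.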
4× NVIDIA L40S
- 2× Intel Xeon Gold 6448Y @ 2.1 GHz (128 Threads)
- 1 TB DDR5 @ 4800 MT/s
- 4× L40S GPUs (192 GB GPU Memory)
- 1× 960 GB NVMe U.2 Gen4 (System Drive)
- 2× 3.84 TB NVMe U.2 Gen4 (Data Drives)
- NVIDIA BlueField-3 DPU
- Unlimited Free Ingress & Egress
CPU Node (No GPU)
- 2× Intel Xeon Gold 6448Y @ 2.1 GHz (128 Threads)
- 1 TB DDR5 @ 4800 MT/s (16× 64 GB RDIMM)
- 1× 960 GB NVMe (System Drive)
- 2× 3.84 TB NVMe (Data Drives)
- NVIDIA BlueField-3 B3220 DPU
- Dual-Port NDR200 / 200 GbE Networking
By Use Case
End users can select the Bare Metal configuration best suited for their compute needs.
- 8-GPU Node (8× H100 SXM5): Ideal for high-end distributed training, LLM fine-tuning, and dense AI model workloads.
- 4-GPU Node (4× L40S): Balanced option for training, inference, and high-throughput tasks.
- CPU Node (No GPU): Recommended for orchestration layers, storage controllers, and data preparation workloads.
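If node requests are scripted, the guidance above can be encoded as a small lookup. The helper below is an illustrative sketch only; the workload labels and the function name are assumptions for this example, not part of any provider API.

```python
# Illustrative mapping of workload type to Bare Metal configuration.
# Labels and config names are assumptions for this sketch, not a provider API.
CONFIG_BY_WORKLOAD = {
    "distributed_training": "8x NVIDIA H100 SXM5",
    "llm_finetuning":       "8x NVIDIA H100 SXM5",
    "inference":            "4x NVIDIA L40S",
    "medium_training":      "4x NVIDIA L40S",
    "orchestration":        "CPU Node (No GPU)",
    "storage_controller":   "CPU Node (No GPU)",
    "data_prep":            "CPU Node (No GPU)",
}

def pick_config(workload: str) -> str:
    """Return the suggested Bare Metal configuration for a workload label."""
    try:
        return CONFIG_BY_WORKLOAD[workload]
    except KeyError:
        raise ValueError(f"Unknown workload type: {workload!r}") from None

print(pick_config("inference"))  # -> 4x NVIDIA L40S
```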