
2-Tier / 2-Plane Compute Fabric

GPU Compute Fabric BOM Calculator

2-Tier Backend Fabric · Nokia 7220 IXR-H5-64O · 8 GPUs/Server · 32 Servers/SU

NIC & GPU Configuration
Per-GPU NIC / Fabric Plane Mode
NIC Vendor (within mode)
GPU Platform
Vera Rubin NVL72 · Compute Fabric Spec

Octal-plane (8-plane) backend over Nokia 7220 IXR-H5-64O. Each Rubin compute tray has 4 GPU packages and 8 ConnectX-9 SuperNICs (2 per GPU). 18 trays per NVL72 rack · 14 racks per Scalable Unit.

Per Compute Tray
4 GPUs
8x CX-9 SuperNIC · 4 OSFP cages
Per NVL72 Rack
72 GPUs
18 trays · 144 SuperNICs
Per Scalable Unit
1,008 GPUs
14 racks · 32 leaves + 16 spines
Per-GPU Bandwidth
1.6 Tb/s
2x 800G CX-9 SuperNIC · 8 lanes x 200G
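The tray, rack, and Scalable Unit figures above follow directly from the per-tray spec. A minimal sketch of that arithmetic (constants taken from the spec; names are illustrative, not part of the calculator):

```python
# Derive rack- and SU-level totals from the per-tray NVL72 figures above.
GPUS_PER_TRAY = 4
NICS_PER_TRAY = 8          # 2x ConnectX-9 SuperNICs per GPU
TRAYS_PER_RACK = 18
RACKS_PER_SU = 14
NIC_SPEED_GBPS = 800       # per CX-9 port

gpus_per_rack = GPUS_PER_TRAY * TRAYS_PER_RACK        # 72 GPUs per NVL72 rack
nics_per_rack = NICS_PER_TRAY * TRAYS_PER_RACK        # 144 SuperNICs per rack
gpus_per_su = gpus_per_rack * RACKS_PER_SU            # 1,008 GPUs per SU
per_gpu_bw_tbps = 2 * NIC_SPEED_GBPS / 1000           # 2x 800G = 1.6 Tb/s

print(gpus_per_rack, nics_per_rack, gpus_per_su, per_gpu_bw_tbps)
```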
Cluster Size
Number of GPUs: 1,024
Standard presets: 256 (1 SU) · 1,024 · 2,048 · 4,096 · 8,192 (32 SU)
NVL72 presets: 1,008 (1 SU) · 16,128 (16 SU) · 32,256 (32 SU) · 64,512 (64 SU) · 129,024 (128 SU)
Custom (any ×256)
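Custom sizes round up to whole Scalable Units. A sketch of that rounding, assuming 256 GPUs/SU for the standard 8-GPU-server build and 1,008 GPUs/SU for the NVL72 build (the function name is illustrative, not the calculator's own API):

```python
import math

def size_to_sus(requested_gpus: int, gpus_per_su: int) -> tuple[int, int]:
    """Round a requested GPU count up to whole Scalable Units.

    Returns (scalable_units, deployed_gpus).
    """
    sus = math.ceil(requested_gpus / gpus_per_su)
    return sus, sus * gpus_per_su

print(size_to_sus(1500, 256))     # standard build: (6, 1536)
print(size_to_sus(20000, 1008))   # NVL72 build: (20, 20160)
```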
Architecture Summary
Need a Custom Fabric Design? Get in touch to discuss GPU cluster requirements, Nokia DC switching options, validated designs and deployment guidance.

Discuss a Custom Fabric Design

Send an email to get in touch. Copy the details below and paste them into your preferred email client.

To: alperen.akpinar@nokia.com
Subject: [Nokia GPU Fabrics]
Bill of Materials
Component · Role / Side · Qty
Per-Plane Breakdown (Single Plane View)
GPU × NIC Compatibility Matrix (Nokia 7220 IXR-H5-64O fabric · Tomahawk 5 ASIC)
GPU Platform · BCM Thor2 (400G, PCIe Gen5) · NVIDIA CX7 (400G, PCIe Gen5) · NVIDIA CX8 (800G, PCIe Gen6) · BCM 800G (future / TBD)
Legend: Vendor-validated pairing · Feasible but suboptimal (over- or under-provisioned) · Future / not generally shipping
All combinations are SerDes-compatible with the H5-64O (Tomahawk 5): 800G OSFP cages run 4×200G PAM4 (CX8) or break out to 2×400G as 4×100G PAM4 (CX7 / Thor2). Compatibility is about vendor validation, host PCIe generation, and bandwidth balance — not signaling.
Nokia 7220 IXR-H5-64O · 64 × 800G OSFP cages / 128 × 400GE logical ports · 64 server-facing + 64 spine-facing per leaf · 1:1 ratio
Spine count is rounded up to the next ECMP-friendly tier (power-of-2) so leaf-spine hashing distributes uniformly across all paths.
2-Tier (Leaf-Spine) · Max 8192 GPUs for this switch architecture (32 SU, 1 link/leaf-spine pair)
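The power-of-two rounding rule for spines can be sketched in a few lines (the example spine counts are illustrative, not taken from a specific configuration above):

```python
def next_pow2(n: int) -> int:
    """Smallest power of two >= n, so leaf-spine ECMP hashing
    distributes uniformly across all paths."""
    return 1 if n <= 1 else 1 << (n - 1).bit_length()

# e.g. a plane whose uplink count calls for 12 spines is
# rounded up to 16 for an ECMP-friendly fan-out
print(next_pow2(12))  # 16
print(next_pow2(16))  # 16 (already a power of two)
```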
Created by Alperen Akpinar
