
Quantum-2 InfiniBand
Fabric Switch

Ultra-low-latency networking platform for hyperscale AI fabrics, high-density clusters, and unified GPU communication.

Architecture

Quantum-2

Port Speed

400G NDR

Switching Capacity

51.2Tb/s

Fabric Ports

64

Quantum-2 InfiniBand

Technical Infrastructure

Comprehensive performance metrics and architectural specifications.

Parameter            Value
Brand                NVIDIA Networking
Application          AI Cluster Fabric
Ports                64 x 400Gb/s
Switching Capacity   51.2Tb/s
Latency              130ns
Rack Unit            1U
Cooling              Redundant hot-swap fans
Power                Dual redundant PSU
Management           Web / CLI / Telemetry
Fabric               NDR InfiniBand
Deployment           GPU Pods and Clusters
Use Case             Scale-out AI networking

*Switching capacity and latency vary by deployment topology.
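The headline figures are internally consistent: a minimal sketch of the arithmetic, assuming the common convention that aggregate switching capacity counts each full-duplex port in both directions (the x2 factor below is that assumption, not a figure stated above).

```python
# Sanity-check the quoted 51.2Tb/s aggregate from the per-port specs.
ports = 64
port_rate_gbps = 400  # NDR InfiniBand per port

# x2 assumes bidirectional (full-duplex) accounting per port.
aggregate_tbps = ports * port_rate_gbps * 2 / 1000

print(f"{aggregate_tbps} Tb/s")  # → 51.2 Tb/s
```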

View Detailed Architecture Whitepaper

Precision Engineering

Explore every detail of the hardware that powers the modern AI era.

Fabric Ports

Dense 400G connectivity for large training clusters.

Cluster Cabling

Low-latency topology for synchronized AI workloads.

Rack Integration

Purpose-built for modern pod-level deployment standards.

Ready to Transform Your Infrastructure?

Our engineering team is ready to assist you in configuring the ideal deployment for your specific workload.

Explore Infrastructure Family