NVIDIA Quantum-2 QM9700-NS2F Managed InfiniBand Switch 64-Port 400G NDR 51.2 Tb/s Throughput P2C Airflow
Product Details:
| Brand Name: | Mellanox |
|---|---|
| Model Number: | MQM9700-NS2F (920-9B210-00FN-0M0) |
| Document: | MQM9700 series.pdf |
Payment & Shipping Terms:
| Minimum Order Quantity: | 1 pc |
|---|---|
| Price: | Negotiable |
| Packaging Details: | Outer carton |
| Delivery Time: | Based on inventory |
| Payment Terms: | T/T |
| Supply Ability: | Per project/batch |
Detailed Information:
| Model Number: | MQM9700-NS2F (920-9B210-00FN-0M0) | Brand: | Mellanox |
|---|---|---|---|
| Uplink Connectivity: | 400 Gb/s | Name: | Mellanox MQM9700-NS2F NDR 400Gb/s 1U InfiniBand network switch for servers |
| Keyword: | Mellanox networking | Port Configuration: | 64x 400G, 32x OSFP |
| Highlights: | NVIDIA Quantum-2 InfiniBand switch 64-port, Mellanox network switch 400G NDR, InfiniBand switch 51.2 Tb/s throughput | | |
Product Features
Industry-leading NDR 400Gb/s per port | 51.2 Tb/s aggregate throughput | SHARPv3 in-network computing | P2C forward airflow for optimized thermal design | Ultra-low latency for AI & HPC fabrics
The NVIDIA Quantum-2 QM9700-NS2F is a fully managed 1U InfiniBand switch with power-to-connector (P2C) forward airflow, delivering an unprecedented 64 ports of 400Gb/s (NDR) non-blocking bandwidth. Designed for extreme-scale AI, scientific research, and high-performance computing (HPC) clusters, it enables massive scalability with 51.2 Tb/s aggregate bidirectional throughput and over 66.5 billion packets per second. Leveraging SHARPv3, adaptive routing, and RDMA, the QM9700-NS2F accelerates data movement and in-network computing for the most demanding workloads.
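The headline figures are internally consistent; as a quick sanity check, the aggregate number follows directly from the port count and line rate (plain arithmetic, nothing vendor-specific assumed):

```python
# Aggregate bidirectional throughput from per-port line rate.
ports = 64           # non-blocking NDR ports
gbps_per_port = 400  # NDR line rate per port, Gb/s
directions = 2       # "aggregate bidirectional" counts both directions

aggregate_tbps = ports * gbps_per_port * directions / 1000
print(aggregate_tbps)  # 51.2 -> matches the quoted 51.2 Tb/s
```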
- Switch Radix: 64 non-blocking 400G ports (32 OSFP connectors)
- Throughput: 51.2 Tb/s aggregate bidirectional
- Packet Rate: >66.5 billion packets per second (BPPS)
- SHARPv3: up to 32x acceleration of AI collective operations vs. the prior generation
- Airflow: Power-to-connector (P2C) forward airflow — ideal for cold-aisle intake to hot-aisle exhaust
- Management: On-board subnet manager for up to 2,000 nodes, MLNX-OS, CLI, WebUI, SNMP, JSON API
- Power & Cooling: 1+1 redundant hot-swap PSU, hot-swappable fan units, front-to-rear cooling (P2C)
Built on the NVIDIA Quantum-2 platform, the QM9700 series redefines data center switching density and efficiency. The QM9700-NS2F (managed, P2C forward airflow) integrates 64 ports of 400Gb/s InfiniBand in a compact 1U chassis. It supports port-split technology to deliver up to 128 ports of 200Gb/s, offering flexible topologies like Fat Tree, DragonFly+, SlimFly, and multi-dimensional Torus. Backward compatibility with previous InfiniBand generations ensures smooth integration into existing infrastructure. With advanced telemetry, congestion control, and self-healing network capabilities, this switch maximizes application throughput while simplifying operations.
- 32 OSFP cages supporting 64x 400G NDR or 128x 200G via splitter cables, delivering the densest 1U top-of-rack InfiniBand switch.
- Third-generation NVIDIA Scalable Hierarchical Aggregation and Reduction Protocol (SHARPv3) accelerates AI collective operations by up to 32x, reducing data movement and latency.
- Remote Direct Memory Access (RDMA) and adaptive routing eliminate bottlenecks, while enhanced virtual lane mapping and congestion control maintain consistent performance.
- Automatic failover, link-level retransmission, and advanced monitoring ensure non-disruptive operation for mission-critical workloads.
- The on-board subnet manager supports up to 2,000 nodes out of the box; full chassis management is available via CLI, WebUI, SNMP, and JSON/REST API.
- Optimized for scalable, cost-effective cluster designs; the double-density radix reduces network layers and lowers total cost of ownership.
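To make the double-density point concrete, here is a minimal sketch of two-tier non-blocking fat-tree sizing; the `two_tier_nodes` helper is illustrative only, and real designs also weigh cabling, resiliency, and rail-optimized layouts:

```python
def two_tier_nodes(radix: int) -> int:
    """Max hosts in a non-blocking two-tier (leaf/spine) fat tree.

    Each leaf dedicates half its radix to hosts and half to spines;
    a full fabric of radix-k switches then supports k * (k // 2) hosts.
    """
    return radix * (radix // 2)

print(two_tier_nodes(64))   # 2048 hosts at 400G NDR
print(two_tier_nodes(128))  # 8192 hosts at 200G via port splitting
```

Splitting ports to 200G doubles the effective radix, which is why the same 1U box can anchor a roughly four times larger two-tier fabric.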
The Quantum-2 platform integrates NVIDIA’s latest 400G SerDes technology, delivering 51.2 Tb/s of switching capacity. Key innovations include SHARPv3 in-network computing, which offloads collective operations from compute nodes, drastically improving AI training efficiency. The switch leverages Remote Direct Memory Access (RDMA) to bypass kernel overhead, achieving microsecond-scale latency. Adaptive routing dynamically distributes traffic across multiple paths to avoid hotspots, while enhanced quality of service (QoS) and virtual lane (VL) mapping guarantee bandwidth for critical applications. The QM9700-NS2F also incorporates advanced telemetry for real-time fabric monitoring and self-healing capabilities that automatically re-route traffic around link failures.
- AI & Machine Learning Clusters: High-radix NDR switches enable massive GPU supercomputing pods, with SHARPv3 accelerating NCCL collectives (see the sketch after this list).
- HPC Research Centers: SlimFly or Fat Tree topologies connect thousands of nodes with ultra-low latency for weather simulation, genomics, and physics.
- Enterprise Data Centers: Consolidate East-West traffic with 400G spine-and-leaf architecture, reducing tier count and operational costs.
- Cloud & Hyperscale: DragonFly+ and multi-dimensional torus for maximum scalability with high bandwidth density per rack unit.
- Storage & IO Expansion: Connect high-performance storage systems using NVMe over Fabrics (NVMe-oF) via InfiniBand.
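As a hedged illustration of the NCCL point above, the following sketch runs an allreduce that SHARP can offload. It assumes a cluster whose NCCL build includes the CollNet/SHARP plugin and a standard launcher (e.g., torchrun) that sets RANK, WORLD_SIZE, and MASTER_ADDR; the environment variable and fallback behavior may vary by NCCL version:

```python
import os
import torch
import torch.distributed as dist

# NCCL_COLLNET_ENABLE=1 asks NCCL to offload eligible collectives
# (such as allreduce) to the fabric via SHARP; if the plugin or switch
# support is absent, NCCL falls back to host-based algorithms.
os.environ.setdefault("NCCL_COLLNET_ENABLE", "1")

dist.init_process_group(backend="nccl")
rank = dist.get_rank()
torch.cuda.set_device(rank % torch.cuda.device_count())

t = torch.ones(1 << 20, device="cuda")    # 1M elements per rank
dist.all_reduce(t, op=dist.ReduceOp.SUM)  # candidate for in-network reduction
print(rank, t[0].item())                  # equals world_size on every rank
dist.destroy_process_group()
```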
The QM9700-NS2F is fully interoperable with NVIDIA InfiniBand adapters (ConnectX-6, ConnectX-7, ConnectX-8), cables (active/passive copper, active fiber, optical modules), and previous FDR/EDR/HDR generations. It runs MLNX-OS with extensive API support for automation frameworks. Compatible with NVIDIA Unified Fabric Manager (UFM) for advanced monitoring, predictive analytics, and telemetry. The switch integrates seamlessly with major HPC schedulers and open network automation tools.
| Component | Supported Models/Standards |
|---|---|
| Host Channel Adapters | NVIDIA ConnectX-6 / ConnectX-7 / ConnectX-8 InfiniBand, NDR 400G HCAs |
| Cables & Transceivers | OSFP passive copper (up to 2.5m), active copper, active fiber (up to 500m), optical modules (QSFP-DD to OSFP adapters for 200G split) |
| Previous InfiniBand Speeds | HDR (200Gb/s), EDR (100Gb/s), FDR (56Gb/s) – backward compatible |
| Management & Automation | MLNX-OS, UFM, Prometheus/Grafana via SNMP, JSON-RPC, Ansible modules |
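As a minimal sketch of the SNMP monitoring path above, the following polls a standard MIB object using pysnmp. It assumes SNMP has been enabled on the switch's management interface, a v2c community of "public", and a placeholder management address; adapt all three to your environment:

```python
from pysnmp.hlapi import (
    CommunityData, ContextData, ObjectIdentity, ObjectType,
    SnmpEngine, UdpTransportTarget, getCmd,
)

SWITCH_MGMT_IP = "192.0.2.10"  # placeholder management address

# Query sysDescr.0, a standard SNMPv2-MIB object every agent exposes.
error_indication, error_status, _, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData("public", mpModel=1),        # SNMPv2c
    UdpTransportTarget((SWITCH_MGMT_IP, 161)),
    ContextData(),
    ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
))

if error_indication or error_status:
    print("SNMP query failed:", error_indication or error_status.prettyPrint())
else:
    for name, value in var_binds:
        print(f"{name.prettyPrint()} = {value.prettyPrint()}")
```

The same pattern extends to interface counters for Prometheus/Grafana scraping.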
| Parameter | Specification |
|---|---|
| Ports & Speed | 64 non-blocking ports of 400Gb/s (NDR) InfiniBand; 32 OSFP connectors; supports 128 ports @200Gb/s via splitter cables |
| Switching Capacity | 51.2 Tb/s aggregate bidirectional throughput; >66.5 billion packets per second (BPPS) |
| Latency | Sub-130ns port-to-port with dynamic routing (typical) |
| Processor & Memory | x86 Coffee Lake i3, 8GB DDR4 SO-DIMM (2666 MT/s), 16GB M.2 SSD |
| Management Interfaces | 1x USB 3.0, 1x USB (I2C), 1x RJ45 (Ethernet), 1x RJ45 (UART) |
| Power Supply | 1+1 redundant, hot-swappable, 200–240V AC input, 80 PLUS Gold+ certified |
| Cooling & Airflow | Power-to-connector (P2C) forward airflow (NS2F model); hot-swappable fan units, front-to-rear cooling |
| Dimensions (HxWxD) | 1.7 in (43.6 mm) x 17.0 in (438 mm) x 26.0 in (660.4 mm) |
| Weight | 14.5 kg (31.97 lbs) |
| Operating Conditions | Temperature: 0°C to 40°C; Humidity: 10% to 85% non-condensing; Altitude up to 3050m |
| Regulatory & Safety | RoHS, CE, FCC, VCCI, cTUVus, CB, RCM, ENERGY STAR |
| Warranty | 1-year manufacturer warranty (extendable) |
| Orderable Part Number | Description | Management | Airflow Direction |
|---|---|---|---|
| MQM9700-NS2F | 64 ports 400Gb/s InfiniBand, managed switch, 32 OSFP ports | Full on-board subnet manager, MLNX-OS | Power-to-Connector (P2C) – forward airflow |
| MQM9700-NS2R | 64 ports 400Gb/s InfiniBand, managed switch | Managed (same features) | Connector-to-Power (C2P) – reverse airflow |
| MQM9790-NS2F | 64 ports 400Gb/s, unmanaged switch | Unmanaged, external UFM management | P2C forward airflow |
| MQM9790-NS2R | 64 ports 400Gb/s, unmanaged switch | Unmanaged | C2P reverse airflow |
The QM9700-NS2F is ideal for customers requiring advanced on-box management (subnet manager) and power-to-connector (P2C) forward airflow. This airflow pattern draws cool air from the cold aisle through the power side and exhausts through the connector side, matching standard front-to-rear cooling designs. Verify your rack airflow strategy before ordering.
- Warehouses and fulfillment partners enable rapid worldwide shipping with secure packaging.
- Our in-house engineers help validate network topologies, cabling plans, and firmware requirements.
- Authorized partner pricing with extended warranty options and advanced replacement programs.
- Dedicated support in English, Mandarin, Cantonese, and regional languages for seamless procurement.
Starsurge provides end-to-end lifecycle support, from design consultation to deployment and post-sales maintenance. Our services include on-site installation guidance, RMA processing, firmware upgrade assistance, and customized cabling solutions. For high-volume projects, we offer dedicated account management and 24/7 technical escalation. The QM9700-NS2F ships with a 1-year hardware warranty; extended support packages are available upon request.
Q: What is the difference between the MQM9700-NS2F and the MQM9700-NS2R?
A: Airflow direction. The NS2F uses power-to-connector (P2C) forward airflow (cold-air intake on the power side, exhaust through the connector side), while the NS2R uses connector-to-power (C2P) reverse airflow. Choose based on your data center's hot/cold aisle layout.
Q: Is the QM9700 backward compatible with earlier InfiniBand generations?
A: Yes. With the appropriate adapter cables or breakout options it supports HDR (200G), EDR (100G), and FDR (56G) speeds. Please consult the compatibility matrix or contact Starsurge for validated cable SKUs.
Q: Does the QM9700-NS2F require an external subnet manager?
A: No. It features an integrated on-board subnet manager capable of managing up to 2,000 nodes, simplifying small to medium deployments. For larger fabrics, an external SM or UFM can be used.
Q: Can I connect servers from any vendor?
A: Yes. The InfiniBand fabric is agnostic to server vendor; any server with a supported InfiniBand HCA (e.g., a ConnectX-series adapter) can connect seamlessly.
Q: What is the switch's typical power consumption?
A: Typical power varies with port configuration and cabling. The PSUs are 1+1 redundant, 200–240V AC, 80 PLUS Gold+ rated. Contact us for detailed power planning based on your deployment.
- Ensure airflow direction (P2C forward for NS2F) matches your rack ventilation strategy — this model intakes from power side and exhausts through OSFP connector side.
- Use only approved NVIDIA OSFP modules or qualified passive/active copper/fiber cables for 400G performance.
- Installation should follow ESD precautions and be performed by qualified network personnel.
- Firmware updates should follow MLNX-OS guidelines to avoid service interruption; schedule maintenance windows accordingly.
- Confirm that the input power is within 200–240V AC range with redundant feeds to maintain high availability.
Founded in 2008, Starsurge is a technology-driven provider of network hardware, IT services, and system integration solutions. We serve government, healthcare, manufacturing, education, finance, and enterprise sectors worldwide. With an experienced sales and technical team, we deliver reliable networking equipment including switches, NICs, wireless controllers, cables, and IoT solutions. Our customer-first approach ensures scalable, efficient, and future-ready infrastructure. Multilingual support and global delivery capabilities make Starsurge your trusted partner for NVIDIA and data center solutions.
- ☑ Confirm airflow direction (P2C forward) matches your data center cold-aisle intake design
- ☑ Verify power input: 200–240V AC with redundant feeds and proper circuit capacity
- ☑ Select appropriate OSFP cables/transceivers: active/passive copper or fiber, based on distance requirements
- ☑ Plan network topology (Fat Tree, DragonFly+, etc.) and scaling nodes
- ☑ Ensure host adapters are NDR-capable (ConnectX-7 or newer for 400G) or compatible with lower speeds
- ☑ Allocate management IP and review MLNX-OS configuration (no additional licensing for core switching)
- ☑ Prepare rack space: 1U height, depth up to 660mm including cable management and power cabling