NVIDIA Quantum 2 MQM9790-NS2R 64-Port 400Gb/s Unmanaged InfiniBand Switch with Reverse Airflow (C2P)
Product Details:
| Item | Detail |
|---|---|
| Brand name | Mellanox |
| Model number | MQM9790-NS2R (920-9B210-00RN-0M2) |
| Document | MQM9700 series.pdf |
Payment & Shipping Terms:
| Item | Detail |
|---|---|
| Minimum order quantity | 1 pc |
| Price | Negotiable |
| Packaging details | Outer carton |
| Delivery time | Based on stock availability |
| Payment terms | T/T |
| Supply ability | Supplied per project/batch |

Detailed Information
| Model No. | MQM9790-NS2R (920-9B210-00RN-0M2) | Transmission rate | 400Gb/s |
|---|---|---|---|
| Ports | 64 | Technology | InfiniBand |
| Maximum speed | NDR | Transport package | Carton box |
Product Description
Engineered for extreme-scale AI and HPC environments where external fabric management is preferred. The MQM9790-NS2R delivers 64 non-blocking ports of 400Gb/s InfiniBand in a compact 1U chassis with connector-to-power (C2P) reverse airflow, enabling flexible topologies and SHARPv3 in-network computing while relying on external Subnet Managers such as NVIDIA UFM or OpenSM.
Hong Kong Starsurge Group presents the NVIDIA Quantum-2 MQM9790-NS2R — a high-performance, unmanaged 400Gb/s InfiniBand switch with reverse airflow (connector-to-power). As part of the QM9700 series, this switch provides 32 OSFP ports supporting 64× 400Gb/s connections (or up to 128 ports at 200Gb/s via port-split technology). With a landmark 51.2 terabits per second bidirectional throughput and over 66.5 billion packets per second capacity, the MQM9790-NS2R is designed for customers who require external Subnet Manager control (e.g., NVIDIA UFM) for advanced telemetry, monitoring, and large-scale fabric orchestration.
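The headline figures above follow directly from the port arithmetic. The sketch below is a back-of-envelope check in Python; the constant names are my own, and the counts are taken from this page rather than from NVIDIA documentation:

```python
# Sanity-check the headline figures quoted above.
OSFP_CONNECTORS = 32        # physical OSFP cages on the 1U faceplate
PORTS_PER_OSFP_NDR = 2      # each OSFP cage carries 2x 400Gb/s NDR ports
NDR_GBPS = 400              # NDR line rate per port

ndr_ports = OSFP_CONNECTORS * PORTS_PER_OSFP_NDR   # 64 ports
unidir_tbps = ndr_ports * NDR_GBPS / 1000          # 25.6 Tb/s one direction
bidir_tbps = 2 * unidir_tbps                       # 51.2 Tb/s bidirectional

# Port-split mode: each 400G NDR port splits into 2x 200G (NDR200).
split_ports = ndr_ports * 2                        # 128 ports

print(ndr_ports, bidir_tbps, split_ports)          # 64 51.2 128
```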
- Unprecedented Port Density: 64 ports of 400Gb/s NDR InfiniBand in 1U, non-blocking architecture.
- Double-Density 200Gb/s Mode: Supports up to 128 ports of 200Gb/s using NVIDIA port-split technology, reducing TCO for leaf-spine topologies.
- External Subnet Manager Ready: Designed for use with NVIDIA UFM (Unified Fabric Manager), OpenSM, or other external Subnet Managers — ideal for large-scale, centrally managed fabrics.
- In-Network Computing: SHARPv3 (Scalable Hierarchical Aggregation and Reduction Protocol) delivers 32x higher AI acceleration compared to prior generation.
- Advanced Fabric Services: RDMA, adaptive routing, congestion control, enhanced VL mapping, and self-healing network capabilities.
- Reverse Airflow (C2P): Connector-to-Power airflow direction ideal for data center layouts with cold aisle on the port side.
- Redundancy & Reliability: 1+1 redundant hot-swappable power supplies, hot-swappable fan units.
- Backward Compatible: Supports previous InfiniBand generations (HDR, EDR, FDR).
Built on the NVIDIA Quantum-2 ASIC, the MQM9790-NS2R leverages state-of-the-art 400Gb/s SerDes technology. It incorporates Remote Direct Memory Access (RDMA) for low CPU overhead and high throughput, adaptive routing to avoid fabric hotspots, and NVIDIA SHARPv3 for in-network reductions, dramatically accelerating collective operations in MPI and AI frameworks. Without an onboard Subnet Manager, this model offloads fabric management to external controllers, enabling centralized policy enforcement and deeper telemetry when paired with NVIDIA UFM.
With support for multiple topologies — including Fat Tree, SlimFly, DragonFly+, and multi-dimensional Torus — the MQM9790-NS2R enables architects to build cost-effective, highly resilient networks for next-generation supercomputing while maintaining centralized management.
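As a rough illustration of how switch radix drives fabric scale, the hypothetical helper below sizes a non-blocking two-tier leaf/spine fat tree; the function and variable names are my own, and real topology planning should use NVIDIA's design tools:

```python
import math

def two_tier_fat_tree(radix: int, endpoints: int):
    """Rough sizing for a non-blocking two-tier (leaf/spine) fat tree.

    Illustrative sketch only: half of each leaf's ports face endpoints,
    half face the spine, giving a maximum of radix^2 / 2 endpoints.
    """
    down = radix // 2                          # leaf ports toward endpoints
    max_endpoints = radix * down               # radix^2 / 2 for two tiers
    if endpoints > max_endpoints:
        raise ValueError("needs a third tier or a larger radix")
    leaves = math.ceil(endpoints / down)
    spines = math.ceil(leaves * down / radix)  # uplinks spread over spines
    return leaves, spines, max_endpoints

# With 64-port NDR switches: up to 64 * 32 = 2048 endpoints in two tiers.
print(two_tier_fat_tree(64, 1024))             # (32, 16, 2048)
```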
- AI & Machine Learning Clusters: High-speed GPU interconnects for large language model training with centralized UFM management.
- High-Performance Computing (HPC): Research simulations, weather modeling, genomics requiring external fabric orchestration.
- Hyperscale Data Centers: Large-scale InfiniBand fabrics where external Subnet Managers provide unified control across hundreds of switches.
- Enterprise Cloud & Finance: Low-latency trading systems with centralized monitoring and policy management.
- Government & Education: National labs and university supercomputing centers that require dedicated fabric management appliances.
Fully interoperable with NVIDIA InfiniBand ecosystem: ConnectX-6/7 adapters, LinkX cables, and Quantum-2 series switches. Backwards compatible with HDR, EDR, and FDR devices. Requires an external Subnet Manager such as NVIDIA UFM (recommended for large fabrics), OpenSM, or other IB-compliant SM running on a dedicated server or VM. Supports major Linux distributions (RHEL, Ubuntu, Rocky Linux) and Windows Server with appropriate drivers.
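For fabrics managed with OpenSM rather than UFM, the subnet manager is typically configured through its `opensm.conf` file. The fragment below is a minimal sketch; the option names (`guid`, `sm_priority`, `routing_engine`) should be verified against the OpenSM version shipped with your distribution:

```
# /etc/opensm/opensm.conf -- minimal sketch, verify against your opensm version
guid 0x0000000000000000        # 0 = bind to the first local HCA port
sm_priority 15                 # highest priority wins the SM election
routing_engine ftree           # fat-tree routing for leaf/spine fabrics
```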
| Parameter | Specification (MQM9790-NS2R) |
|---|---|
| Ports | 32 OSFP connectors supporting 64 ports of 400Gb/s InfiniBand (NDR) or 128 ports of 200Gb/s |
| Aggregate Throughput | 51.2 Tb/s bidirectional, non-blocking |
| Packet Forwarding Capacity | Over 66.5 billion packets per second (BPPS) |
| Management Type | Unmanaged — requires external Subnet Manager (UFM, OpenSM, or other) |
| CPU & Memory | x86 Coffee Lake i3, 8GB DDR4 SO-DIMM (2666 MT/s), 16GB M.2 SSD |
| Power Supply | 1+1 redundant, hot-swappable, 200-240Vac, 80 Plus Gold, ENERGY STAR certified |
| Cooling / Airflow | Reverse airflow: Connector-to-Power (C2P), hot-swappable fan units |
| Dimensions (HxWxD) | 1.7 in (43.6mm) x 17.0 in (438mm) x 26.0 in (660.4mm) |
| Weight | Approx. 14.5 kg |
| Operating Temperature | 0°C to 40°C |
| Humidity (Operating) | 10% to 85% non-condensing |
| Altitude | Up to 3050m |
| Regulatory Compliance | CE, FCC, VCCI, ICES, RCM, RoHS, CB, cTUVus, CU |
| Warranty | 1 year standard (extended options available) |
Ordering Information
| Orderable Part Number (OPN) | Description | Airflow Direction | Management |
|---|---|---|---|
| MQM9700-NS2F | 64 ports 400Gb/s, managed | P2C (power-to-connector / forward) | On-board Subnet Manager |
| MQM9700-NS2R | 64 ports 400Gb/s, managed | C2P (connector-to-power / reverse) | On-board Subnet Manager |
| MQM9790-NS2F | 64 ports 400Gb/s, unmanaged | P2C (forward) | External Subnet Manager required |
| MQM9790-NS2R | 64 ports 400Gb/s, unmanaged | C2P (reverse) | External Subnet Manager required |
For C2P airflow (connector-to-power), cold air enters from the OSFP connector side and exhausts through the power supply side. Verify your data center cooling layout before ordering. MQM9790-NS2R is the ideal unmanaged choice for racks requiring reverse airflow.
- Centralized Fabric Management: Perfect for deployments using NVIDIA UFM, enabling single-pane-of-glass management across hundreds of switches.
- Reverse Airflow Flexibility: C2P cooling matches specific data center hot/cold aisle configurations, improving thermal efficiency.
- Highest Radix in 1U: 64x 400G ports minimize switch tiers, lower latency, and simplify cabling.
- Cost-Effective for Large Fabrics: Eliminates redundant onboard SM processors when external management is already deployed.
- Energy Efficient: 80 Plus Gold power supplies and smart fan control reduce OPEX.
- Future-Ready Scalability: Support for SHARPv3 and adaptive routing ensures investment protection for next-gen AI frameworks.
Hong Kong Starsurge Group provides end-to-end support including pre-sales consulting, integration services, and global logistics. Our technical team offers deployment assistance, RMA services, and extended warranty options. For volume orders, we deliver customized cabling and configuration validation. Multilingual support available for APAC, EMEA, and Americas regions.
• Verify C2P airflow orientation matches your rack: cold aisle must be on the connector (OSFP) side.
• An external Subnet Manager must be present on the fabric for the switch to operate.
• Use only NVIDIA-qualified optical modules and cables to maintain signal integrity and compliance.
• Operating altitude up to 3050m; temperature not to exceed 40°C.
• Some advanced UFM features may require separate licensing; consult Starsurge sales team.
• Specifications not publicly confirmed by NVIDIA are marked as provided — please confirm before ordering.
Founded in 2008, Starsurge is a technology-driven provider of network hardware, IT services, and system integration solutions. We serve government, healthcare, manufacturing, education, finance, and enterprise clients worldwide. With an experienced technical and sales team, Starsurge delivers tailored networking infrastructure — including IoT solutions, custom software development, and global logistics. Our customer-first approach ensures reliable quality, responsive service, and scalable network designs. As an authorized channel partner for leading brands, we bridge cutting-edge technology with real-world deployment success.
• NVIDIA ConnectX-6 / ConnectX-7 InfiniBand adapters
• NVIDIA Quantum-2 QM9700/9790 series switches
• LinkX OSFP cables (passive copper up to 3m, active optical up to 500m)
• External Subnet Managers: NVIDIA UFM, OpenSM, other IB-compliant SM
• Operating systems: RHEL 8/9, Ubuntu 20.04/22.04, Windows Server 2022 with Mellanox WinOF-2
✓ Verify required airflow direction: C2P (connector-to-power) matches your rack's cold aisle placement.
✓ Ensure external Subnet Manager (UFM or OpenSM) is deployed and accessible on the fabric.
✓ Plan cabling: 400G OSFP to OSFP or split to 2x200G.
✓ Confirm power input: 200-240Vac with redundant feeds.
✓ For large-scale deployments, request a topology design review and UFM integration support from Starsurge engineers.
- Whitepaper: "Scaling AI Fabrics with NVIDIA Quantum-2 and UFM" (available on request)
- Deployment Guide: Fat Tree vs. DragonFly+ with QM9700 Series
- Airflow Best Practices: Choosing P2C vs C2P for Data Center Cooling
- Compatibility List: NVIDIA Certified OSFP Cables for NDR