NVIDIA Quantum-2 MQM9700-NS2R 64-Port 400Gb/s Managed InfiniBand Switch with Reverse Airflow (C2P)
Product Details:
| Brand Name: | Mellanox |
|---|---|
| Model Number: | MQM9700-NS2F (920-9B210-00FN-0M0) |
| Document: | MQM9700 series.pdf |
Payment & Shipping Terms:
| Minimum Order Quantity: | 1 pc |
|---|---|
| Price: | Negotiable |
| Packaging Details: | Outer carton |
| Delivery Time: | Based on stock availability |
| Payment Terms: | T/T |
| Supply Ability: | Supplied per project/batch |
Detailed Information
| Model Number: | MQM9700-NS2F (920-9B210-00FN-0M0) | Brand: | Mellanox |
|---|---|---|---|
| Uplink Speed: | 400 Gb/s | Name: | Mellanox MQM9700-NS2F NDR 400Gb/s 1U InfiniBand network switch for servers |
| Keyword: | Mellanox networking | Port Configuration: | 64x 400G, 32x OSFP |
| Highlights: | NVIDIA Quantum-2 InfiniBand switch, 400Gb/s managed network switch, 64-port InfiniBand switch with reverse airflow | | |
Product Description
NVIDIA Quantum-2 MQM9700-NS2R 64-Port 400Gb/s Managed InfiniBand Switch
Fully managed NDR switch with reverse airflow (connector-to-power) and integrated subnet management. Designed for extreme-scale AI and HPC environments, delivering 64 non-blocking ports of 400Gb/s InfiniBand in a compact 1U chassis.
Key Features
- 64 ports of 400Gb/s NDR InfiniBand in 1U non-blocking architecture
- Double-density 200Gb/s mode supporting up to 128 ports via port splitting (see the sketch after this list)
- Integrated subnet manager for up to 2,000 nodes out-of-the-box
- SHARPv3 in-network computing with up to 32x higher AI acceleration than the previous generation
- Reverse airflow (C2P) for data centers requiring rear-to-front cooling
- 1+1 redundant hot-swappable power supplies and fan units
- Backward compatible with HDR, EDR, and FDR InfiniBand generations
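To make the headline figures concrete, here is a minimal Python sketch of the arithmetic behind the port counts and the 51.2 Tb/s number. It assumes twin-port OSFP cages (two 400Gb/s ports per cage) and counts both traffic directions, matching the "bidirectional" figure in the specification table below; it is an illustration, not vendor tooling.

```python
# Back-of-the-envelope check of the port and bandwidth figures quoted above.
# Assumptions: each OSFP cage carries two NDR (400Gb/s) ports, each NDR port
# can be split into two 200Gb/s ports, and the 51.2 Tb/s figure counts both
# traffic directions.

OSFP_CAGES = 32
PORTS_PER_CAGE = 2            # twin-port OSFP: 2x 400Gb/s per cage
NDR_GBPS = 400
SPLIT_FACTOR = 2              # one 400Gb/s port -> two 200Gb/s ports

ndr_ports = OSFP_CAGES * PORTS_PER_CAGE       # 64 ports @ 400Gb/s
ndr200_ports = ndr_ports * SPLIT_FACTOR       # 128 ports @ 200Gb/s

one_way_tbps = ndr_ports * NDR_GBPS / 1000    # 25.6 Tb/s per direction
bidirectional_tbps = one_way_tbps * 2         # 51.2 Tb/s aggregate

print(f"NDR ports:         {ndr_ports}")
print(f"NDR200 ports:      {ndr200_ports}")
print(f"Aggregate (bidi):  {bidirectional_tbps:.1f} Tb/s")
```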
At a Glance
- 64 ports at 400Gb/s
- 51.2 Tb/s aggregate throughput
- 1U rack height
- SHARPv3 AI acceleration
- C2P reverse airflow
- Up to 2,000 nodes managed
Technical Specifications
| Ports | 32 OSFP connectors supporting 64 ports of 400Gb/s InfiniBand (NDR) or 128 ports of 200Gb/s |
|---|---|
| Aggregate Throughput | 51.2 Tb/s bidirectional, non-blocking |
| Packet Forwarding Capacity | Over 66.5 billion packets per second (BPPS) |
| Management | Fully managed with on-board Subnet Manager (supports up to 2,000 nodes) |
| Power Supply | 1+1 redundant, hot-swappable, 200-240Vac, 80 Plus Gold |
| Cooling / Airflow | Reverse airflow: Connector-to-Power (C2P), hot-swappable fan units |
| Dimensions | 1.7 in (43.6 mm) H x 17.0 in (438 mm) W x 26.0 in (660.4 mm) D |
| Weight | Approx. 14.5 kg |
| Operating Temperature | 0°C to 40°C |
Typical Deployments
- AI & Machine Learning Clusters for large language model training and inference
- High-Performance Computing (HPC) research simulations and modeling
- Hyperscale Data Centers with spine-leaf architectures
- Enterprise Cloud & Finance for low-latency trading systems
- Government & Education supercomputing centers
Compatibility
Fully interoperable with NVIDIA InfiniBand ecosystem including ConnectX-6/7 adapters, LinkX cables, and Quantum-2 series switches. Backwards compatible with HDR, EDR, and FDR devices. Supports major Linux distributions and Windows Server.
Buyer Checklist
- Verify required airflow direction: C2P matches your rack's cold aisle placement
- On-board subnet manager ready for up to 2,000 nodes
- Plan cabling: 400G OSFP to OSFP or split to 2x200G
- Confirm power input: 200-240Vac with redundant feeds
- For large deployments, request a topology design review (a rough fat-tree sizing sketch follows below)
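For rough topology planning, the sketch below estimates two-level (spine-leaf) fat-tree sizes built from radix-64 switches such as this one. The `two_level_fat_tree` helper is hypothetical and ignores rails, port splitting, and cable reach, so treat the output as a planning estimate rather than a validated design.

```python
# Rough two-level (spine-leaf) fat-tree sizing with radix-64 NDR switches.
# Assumptions: one NDR port per host, each leaf connects once to each spine,
# and the oversubscription ratio splits each leaf's ports between host-facing
# and spine-facing sides.

SWITCH_RADIX = 64             # 400Gb/s ports per MQM9700-class switch

def two_level_fat_tree(radix: int, oversubscription: float = 1.0) -> dict:
    """Estimate switch and host counts for a two-level fat tree."""
    down_per_leaf = int(radix * oversubscription / (1 + oversubscription))
    up_per_leaf = radix - down_per_leaf       # uplinks = number of spines
    spines = up_per_leaf
    leaves = radix                            # each spine port feeds one leaf
    hosts = leaves * down_per_leaf
    return {"leaves": leaves, "spines": spines, "hosts": hosts}

# Non-blocking: 64 leaves x 32 host ports = 2048 hosts on 32 spines.
print(two_level_fat_tree(SWITCH_RADIX))
# 2:1 oversubscribed: more hosts per leaf at the cost of uplink bandwidth.
print(two_level_fat_tree(SWITCH_RADIX, oversubscription=2.0))
```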
Important Notices: Verify C2P airflow orientation matches your rack configuration. Use only NVIDIA-qualified optical modules and cables. Operating altitude up to 3050m; temperature not to exceed 40°C.