NVIDIA Mellanox ConnectX-6 MCX653106A-ECAT 100Gb/s Dual-Port InfiniBand Adapter, Ethernet Capable

Product details:

Brand name: Mellanox
Model number: MCX653106A-ECAT
Document: connectx-6-infiniband.pdf

Payment & Shipping Terms:

Minimum order quantity: 1 pc
Price: Negotiable
Packaging details: Outer carton
Delivery time: Based on stock availability
Payment terms: T/T
Supply ability: Supplied per project/batch
Contact us for the best price

Detailed Information

Product status: In stock | Application: Server
Interface type: InfiniBand | Ports: Dual
Maximum speed: 100GbE | Type: Wired
Condition: New and original | Warranty period: 1 year
Model: MCX653106A-ECAT | Name: Mellanox ConnectX-6 VPI HDR100 EDR IB 100Gb dual-port NIC MCX653106A-ECAT
Keyword: Mellanox network card
Highlights:

  • Mellanox ConnectX-6 network adapter
  • 100Gb/s InfiniBand Ethernet card
  • dual-port Mellanox network card

Product Description

NVIDIA ConnectX-6 MCX653106A-ECAT 100Gb/s Dual-Port InfiniBand Smart Adapter

Versatile dual-port 100Gb/s InfiniBand and Ethernet adapter card with PCIe 3.0/4.0 x16 interface—delivering RDMA, NVMe-oF offloads, block-level encryption, and in-network computing for cost-optimized HPC, enterprise, and cloud deployments.

  • Dual-port 100Gb/s InfiniBand (EDR/HDR100) and 100/50/40/25/10GbE connectivity
  • PCIe Gen 3.0/4.0 x16 (backward compatible) | Up to 215 million messages per second
  • Hardware offloads: NVMe-oF target/initiator, XTS-AES 256/512-bit encryption, MPI tag matching
  • NVIDIA In-Network Computing and GPUDirect RDMA support
  • Low-profile PCIe stand-up form factor, RoHS compliant
Characteristics
  • 100Gb/s Throughput: Dual ports operating at up to 100Gb/s InfiniBand (EDR/HDR100) or Ethernet with full bidirectional bandwidth.
  • In-Network Computing: Offloads collective operations (MPI, NCCL, SHMEM) using NVIDIA SHARP technology.
  • Block-Level Encryption: Hardware AES-XTS 256/512-bit encryption/decryption without CPU overhead; FIPS compliant.
  • NVMe-oF Offloads: Target and initiator offloads for NVMe over Fabrics, reducing CPU utilization.
  • Advanced Virtualization: SR-IOV up to 1K VFs, ASAP² acceleration for OVS and virtual switching.
Technology & Standards

The MCX653106A-ECAT integrates NVIDIA In-Network Computing engines (SHARP), RDMA (IBTA 1.3), RoCE, and NVMe-oF. It supports PCIe Gen 4.0 (x16) and Gen 3.0, PAM4 and NRZ SerDes, and advanced features like Dynamically Connected Transport (DCT), On-Demand Paging (ODP), and Adaptive Routing. Overlay offloads for VXLAN, NVGRE, Geneve are hardware-accelerated. Compliant with IEEE 802.3bj, 802.3bm, 802.3by, and InfiniBand Trade Association specifications.
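
Verbs-level features such as On-Demand Paging can be checked from software through the standard libibverbs API. The following is a minimal sketch, assuming the adapter appears as the first RDMA device on the host and the program is linked with -libverbs; the capability check is illustrative, not vendor documentation.

    /* Sketch: query On-Demand Paging (ODP) support via libibverbs. */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num = 0;
        struct ibv_device **list = ibv_get_device_list(&num);
        if (!list || num == 0) {
            fprintf(stderr, "no RDMA devices found\n");
            return 1;
        }

        /* Assumption: the ConnectX-6 is the first device (e.g. mlx5_0). */
        struct ibv_context *ctx = ibv_open_device(list[0]);
        struct ibv_device_attr_ex attr = {0};
        if (ctx && ibv_query_device_ex(ctx, NULL, &attr) == 0) {
            printf("device: %s\n", ibv_get_device_name(list[0]));
            printf("ODP supported: %s\n",
                   (attr.odp_caps.general_caps & IBV_ODP_SUPPORT) ? "yes" : "no");
        }

        if (ctx)
            ibv_close_device(ctx);
        ibv_free_device_list(list);
        return 0;
    }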

Working Principle: Smart Offload Architecture

ConnectX-6 offloads communication and storage tasks from the host CPU to the adapter hardware. For MPI collectives, the adapter processes data in transit using SHARP, reducing endpoint traffic. For storage, NVMe-oF commands are processed directly on the adapter, freeing CPU cores. Block encryption/decryption occurs inline at wire speed. The result is lower latency, a higher message rate (up to 215 million messages per second), and improved application scalability even at 100Gb/s speeds.
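
To make the offload model concrete: an RDMA application registers its buffers with the adapter once, and subsequent transfers are driven by the NIC rather than the host CPU. The sketch below shows only that registration step using standard libibverbs calls; the 1 MiB buffer size and the choice of the first device are arbitrary assumptions, and error handling is trimmed for brevity.

    /* Sketch: register a buffer with the adapter so RDMA transfers can
     * bypass the host CPU (libibverbs; error handling trimmed). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        struct ibv_device **list = ibv_get_device_list(NULL);
        if (!list || !list[0]) { fprintf(stderr, "no RDMA device\n"); return 1; }

        struct ibv_context *ctx = ibv_open_device(list[0]);
        struct ibv_pd *pd = ibv_alloc_pd(ctx);                      /* protection domain */
        struct ibv_cq *cq = ibv_create_cq(ctx, 16, NULL, NULL, 0);  /* completion queue */

        size_t len = 1 << 20;                                       /* 1 MiB example buffer */
        void *buf = malloc(len);
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_READ |
                                       IBV_ACCESS_REMOTE_WRITE);
        printf("registered %zu bytes, lkey=0x%x rkey=0x%x\n", len, mr->lkey, mr->rkey);

        /* A queue pair would be created and connected next; the adapter then
         * moves data to/from this buffer without per-byte CPU involvement. */
        ibv_dereg_mr(mr);
        free(buf);
        ibv_destroy_cq(cq);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(list);
        return 0;
    }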

Applications & Deployment
  • Mid-Range HPC Clusters: MPI-based simulations requiring cost-effective 100Gb/s interconnect.
  • AI Inference & Training: GPU clusters with GPUDirect RDMA and NCCL collectives.
  • NVMe-oF Storage: Target/initiator offload for high-performance NVMe storage access.
  • Virtualized Data Centers: SR-IOV and ASAP² for OVS offload in NFV and cloud.
  • Enterprise Cloud: 100Gb Ethernet connectivity for virtualization and storage convergence.
Technical Specifications & Ordering Options
Model | Ports & Speed | Host Interface | Form Factor | Encryption | Protocols | OPN
ConnectX-6 | 2x QSFP56 (100Gb/s IB/Eth) | PCIe 3.0/4.0 x16 | PCIe stand-up (low-profile) | AES-XTS 256/512-bit | InfiniBand, Ethernet, NVMe-oF | MCX653106A-ECAT
ConnectX-6 | 1x QSFP56 (100Gb/s) | PCIe 4.0 x8 | PCIe stand-up | AES-XTS | IB/Eth | MCX651105A-EDAT
ConnectX-6 | 2x QSFP56 (200Gb/s) | PCIe 4.0 x16 | PCIe stand-up | AES-XTS | IB/Eth | MCX653106A-HDAT

Note: MCX653106A-ECAT supports 100Gb/s InfiniBand (EDR/HDR100) and 100/50/40/25/10GbE. Dimensions: 167.65mm x 68.90mm (without bracket). Ships with the tall bracket installed; a short bracket is included as an accessory. Typical power consumption is under 15W.

Advantages & Differentiators
  • vs. ConnectX-5: The ConnectX-6 family doubles the maximum bandwidth (200Gb/s vs. 100Gb/s) and adds integrated SHARP in-network computing and block-level encryption at no extra cost.
  • vs. Competitor NICs: True hardware offload for NVMe-oF and MPI collectives—not just stateless offloads.
  • Cost-Optimized 100G: Ideal for balancing performance and budget in mid-size clusters.
  • FIPS Compliance: Hardware encryption meets government security standards.
Service & Support

We offer 24/7 technical consultation, RMA services, and integration support for ConnectX-6 adapters. Each card is backed by a 1-year warranty (extendable). Our team provides driver validation for major Linux distributions, Windows, and VMware. Pre-sales configuration assistance for InfiniBand/Ethernet fabric design is available.

Frequently Asked Questions (FAQ)

Q: Is the MCX653106A-ECAT compatible with 200Gb/s Quantum switches?

A: Yes, it is interoperable with NVIDIA Quantum QM8700/QM8790 switches when using HDR100 mode (100Gb/s per port).

Q: Can this adapter be used for Ethernet as well as InfiniBand?

A: Yes, it supports both InfiniBand and Ethernet protocols. The firmware auto-detects the switch type and configures the appropriate mode.
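
If you need to confirm which personality a port is currently running, the active link layer can be read back through the verbs API. Below is a minimal sketch, assuming the adapter is the first RDMA device and that port 1 is the port of interest.

    /* Sketch: report whether port 1 is currently in InfiniBand or Ethernet mode. */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        struct ibv_device **list = ibv_get_device_list(NULL);
        if (!list || !list[0]) { fprintf(stderr, "no RDMA device\n"); return 1; }

        struct ibv_context *ctx = ibv_open_device(list[0]);
        struct ibv_port_attr port;
        if (ctx && ibv_query_port(ctx, 1, &port) == 0) {
            const char *ll = (port.link_layer == IBV_LINK_LAYER_ETHERNET)
                                 ? "Ethernet (RoCE)" : "InfiniBand";
            printf("%s port 1: link layer %s\n", ibv_get_device_name(list[0]), ll);
        }

        if (ctx)
            ibv_close_device(ctx);
        ibv_free_device_list(list);
        return 0;
    }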

Q: Does it support RoCE (RDMA over Converged Ethernet)?

A: Yes, ConnectX-6 fully supports RoCE, providing low-latency RDMA in Ethernet environments.

Q: What is the maximum message rate?

A: The adapter delivers up to 215 million messages per second, ideal for small-packet HPC workloads.

Q: Is the card compatible with PCIe Gen 3.0 slots?

A: Yes, it is fully compatible with PCIe Gen 3.0 x16 slots. A single 100Gb/s port can run at full rate; only combined dual-port traffic is bounded by the bandwidth of the Gen 3.0 x16 link (see the estimate below).
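
As a rough sanity check, assuming PCIe Gen 3's standard 128b/130b line coding:

    16 lanes × 8 GT/s × 128/130 ≈ 126 Gb/s per direction

so one 100Gb/s port runs at full line rate, while simultaneous full-rate traffic on both ports becomes PCIe-bound.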

Precautions & Compatibility Notes
  • PCIe Slot Requirement: For optimal performance, install in a PCIe Gen 3.0 x16 or Gen 4.0 x8/x16 slot.
  • Cooling: Ensure adequate airflow in server chassis; passive cooling requires minimum 200 LFM.
  • Cabling: Use QSFP56 passive/active copper or optical modules rated for 100Gb/s (EDR/HDR100).
  • Driver Support: Use latest NVIDIA MLNX_OFED for Linux or WinOF-2 for Windows.
  • Operating Temperature: 0°C to 70°C; store between -40°C and 85°C.
Company Introduction

With over a decade of experience, we operate a large-scale factory backed by a strong technical team. Our extensive customer base and domain expertise enable us to offer competitive pricing without compromising on quality. As authorized distributors for Mellanox, Ruckus, Aruba, and Extreme, we stock original network switches, network cards (NICs), wireless access points, controllers, and cabling. We maintain a 10 million USD inventory to ensure rapid fulfillment across diverse product lines. Every shipment is verified for accuracy, and we provide 24/7 consultation and technical support. Our professional sales and technical teams have earned a high reputation in global markets; partner with us for reliable infrastructure solutions.
