
HHHL 100Gb QSFP56 1 port NVIDIA MELLANOX MCX653105A-ECAT ConnectX-6 PCI-E X16 Gen 4

  • Up to HDR100/EDR InfiniBand and 100GbE Ethernet connectivity per port
  • Max bandwidth of 200Gb/s
  • Up to 215 million messages/sec
  • Sub-600ns latency
  • Block-level XTS-AES mode hardware encryption
  • FIPS capable
  • Advanced storage capabilities including block-level encryption and checksum offloads
  • Supports both 50G SerDes (PAM4) and 25G SerDes (NRZ)-based ports (see the lane-rate arithmetic after this list)
  • Best-in-class packet pacing with sub-nanosecond accuracy
  • PCIe Gen3 and PCIe Gen4 support
  • RoHS-compliant
  • ODCC-compatible
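
The SerDes bullet above maps directly onto the supported port speeds. As a quick back-of-envelope check (my arithmetic, not from the datasheet):

```latex
% 4-lane QSFP56 port, 50G PAM4 lanes vs. 25G NRZ lanes:
4 \times 50\,\mathrm{Gb/s}\ (\text{PAM4}) = 200\,\mathrm{Gb/s} \quad (\text{HDR InfiniBand / 200GbE})
4 \times 25\,\mathrm{Gb/s}\ (\text{NRZ}) = 100\,\mathrm{Gb/s} \quad (\text{EDR InfiniBand / 100GbE})
% HDR100 runs on two of the PAM4 lanes:
2 \times 50\,\mathrm{Gb/s}\ (\text{PAM4}) = 100\,\mathrm{Gb/s} \quad (\text{HDR100})
```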

  • USD $954.00

    *RRP Pricing*


Mellanox ConnectX®-6 VPI Adapter 

Single-Port Adapter Card supporting 200Gb/s with Virtual Protocol Interconnect (VPI)

ConnectX-6 Virtual Protocol Interconnect® (VPI) is a groundbreaking addition to the Mellanox ConnectX series of industry-leading adapter cards. Providing up to two ports of 200Gb/s for InfiniBand and Ethernet connectivity, sub-600ns latency and 215 million messages per second, ConnectX-6 VPI delivers the highest-performance and most flexible solution for meeting the continually growing demands of data center applications.

In addition to all the existing innovative features of past versions, ConnectX-6 offers a number of enhancements to further improve performance and scalability.

ConnectX-6 VPI supports up to HDR, HDR100, EDR, FDR, QDR, DDR and SDR InfiniBand speeds as well as up to 200, 100, 50, 40, 25, and 10Gb/s Ethernet speeds.
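
To make VPI concrete: the card enumerates as a standard RDMA device, and software can discover at runtime whether each port came up as InfiniBand or Ethernet. Below is a minimal sketch against the standard libibverbs API (an illustration under the assumption that libibverbs and its headers are installed; it is not vendor sample code):

```c
/* query_ports.c -- list RDMA devices and each port's link layer.
 * Build: gcc query_ports.c -o query_ports -libverbs */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs) {
        perror("ibv_get_device_list");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        if (!ctx)
            continue;

        struct ibv_device_attr dev_attr;
        if (ibv_query_device(ctx, &dev_attr) == 0) {
            /* Port numbering in verbs starts at 1. */
            for (int p = 1; p <= dev_attr.phys_port_cnt; p++) {
                struct ibv_port_attr port_attr;
                if (ibv_query_port(ctx, p, &port_attr))
                    continue;
                /* link_layer reports how this VPI port is configured. */
                printf("%s port %d: %s, state %s\n",
                       ibv_get_device_name(devs[i]), p,
                       port_attr.link_layer == IBV_LINK_LAYER_ETHERNET
                           ? "Ethernet" : "InfiniBand",
                       ibv_port_state_str(port_attr.state));
            }
        }
        ibv_close_device(ctx);
    }
    ibv_free_device_list(devs);
    return 0;
}
```

On a host with this card in InfiniBand mode, the output shows an InfiniBand link layer for the single QSFP56 port; after switching the port to Ethernet via the vendor's configuration tooling, the same program reports Ethernet without any code changes.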

Benefits

  • Industry-leading throughput, low CPU utilization and high message rate
  • Highest performance and most intelligent fabric for compute and storage infrastructures
  • Cutting-edge performance in virtualized networks including Network Function Virtualization (NFV)
  • Mellanox Host Chaining technology for economical rack design
  • Smart interconnect for x86, Power, Arm, GPU and FPGA-based compute and storage platforms
  • Flexible programmable pipeline for new network flows
  • Efficient service chaining enablement
  • Increased I/O consolidation efficiencies, reducing data center costs & complexity


Physical

Adapter Card Size: 6.6 in. x 2.71 in. (167.65mm x 68.90mm)

Connector: Single QSFP56 InfiniBand and Ethernet (copper and optical)

Protocol Support

InfiniBand: IBTA v1.4 (a)

Auto-Negotiation: 1X/2X/4X SDR (2.5Gb/s per lane), DDR (5Gb/s per lane), QDR (10Gb/s per lane), FDR10 (10.3125Gb/s per lane), FDR (14.0625Gb/s per lane), EDR (25Gb/s per lane), HDR100 (2 lanes x 50Gb/s per lane)


Ethernet: 100GBASE-CR4, 100GBASE-CR2, 100GBASE-KR4, 100GBASE-SR4, 50GBASE-R2, 50GBASE-R4, 40GBASE-CR4, 40GBASE-KR4, 40GBASE-SR4, 40GBASE-LR4, 40GBASE-ER4, 40GBASE-R2, 25GBASE-R, 20GBASE-KR2, 10GBASE-LR, 10GBASE-ER, 10GBASE-CX4, 10GBASE-CR, 10GBASE-KR, 10GBASE-SR, SGMII, 1000BASE-CX, 1000BASE-KX

Data Rate

InfiniBand: SDR / DDR / QDR / FDR / EDR / HDR100

Ethernet: 1/10/25/40/50/100 Gb/s

PCIe Gen3/Gen4: SerDes @ 8.0GT/s / 16GT/s, x16 lanes (PCIe 2.0 and 1.1 compatible)
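
As a sanity check on the host interface (my arithmetic, not from the datasheet), the raw x16 link rates with 128b/130b encoding work out to:

```latex
\text{Gen3: } 8\,\mathrm{GT/s} \times 16\ \text{lanes} \times \tfrac{128}{130} \approx 126\,\mathrm{Gb/s}
\text{Gen4: } 16\,\mathrm{GT/s} \times 16\ \text{lanes} \times \tfrac{128}{130} \approx 252\,\mathrm{Gb/s}
```

A Gen3 x16 slot therefore comfortably feeds this card's 100Gb/s (HDR100 / 100GbE) port, while the ConnectX-6 family's full 200Gb/s HDR rate leaves essentially no headroom on Gen3 and effectively requires Gen4 x16.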

Power and Airflow

Voltage: 3.3V Aux

Maximum current: 100mA


Power

Typical power (b) with passive cables: 15.6W

Maximum power: refer to the NVIDIA ConnectX-6 InfiniBand/VPI Power and Airflow Specifications (requires NVOnline login credentials)

Maximum power available through the QSFP56 port: 5W

Airflow (LFM) / Ambient Temperature

Cable Type                | Airflow: Heatsink to Port | Airflow: Port to Heatsink
Passive Cables            | 300 LFM / 55°C            | 200 LFM / 35°C
NVIDIA Active 2.7W Cables | 300 LFM / 55°C            | 200 LFM / 35°C

Environmental

Temperature (Operational): 0°C to 55°C

Temperature (Non-operational): -40°C to 70°C (c)

Humidity (Operational): 10% to 85% relative humidity

Humidity (Non-operational): 10% to 90% relative humidity

Altitude (Operational): 3050m

Regulatory

Safety: CB / cTUVus / CE

EMC: CE / FCC / VCCI / ICES / RCM / KC

RoHS: RoHS Compliant

Notes:

a. The ConnectX-6 adapters supplement the IBTA auto-negotiation specification to get better bit error rates and longer cable reaches. This supplemental feature only initiates when connected to another NVIDIA InfiniBand product.

b. Typical power for ATIS traffic load.

c. The non-operational storage temperature specifications apply to the product without its package.


Tags: HHHL, 100Gb, QSFP56, 1 port, NVIDIA, MELLANOX, MCX653105A-ECAT, ConnectX-6, PCI-E X16, Gen 4, InfiniBand, Ethernet Card