ConnectX®-4 Lx EN Cards

1/10/25/40/50 Gigabit Ethernet Adapter Cards supporting Mellanox Multi-Host® technology, RDMA, Overlay Networks Encapsulation/Decapsulation and more

The ConnectX-4 Lx EN Network Controller with 1/10/25/40/50Gb/s Ethernet connectivity addresses virtualized infrastructure challenges, delivering best-in-class performance to demanding markets and applications. It provides true hardware-based I/O isolation with unmatched scalability and efficiency, making it a cost-effective and flexible solution for Web 2.0, Cloud, data analytics, database, and storage platforms.

ConnectX-4 Lx EN provides an unmatched combination of 1, 10, 25, 40, and 50GbE bandwidth, sub-microsecond latency, and a 75 million packets per second message rate. It includes native hardware support for RDMA over Converged Ethernet (RoCE), Ethernet stateless offload engines, Overlay Networks, and GPUDirect® technology.
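
RoCE-capable adapters such as this one are programmed through the standard verbs API (libibverbs on Linux). As a rough illustration only — not taken from Mellanox documentation, and not specific to this card — the following C sketch enumerates the RDMA-capable devices a driver exposes and prints a few of their reported limits:

    /* Minimal sketch: list RDMA-capable devices and query basic
     * attributes via the standard libibverbs API. A RoCE NIC exposes
     * this same verbs interface over Ethernet.
     * Build (assuming libibverbs is installed): gcc query.c -libverbs
     */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num;
        struct ibv_device **list = ibv_get_device_list(&num);
        if (!list || num == 0) {
            fprintf(stderr, "no RDMA devices found\n");
            return 1;
        }

        for (int i = 0; i < num; i++) {
            struct ibv_context *ctx = ibv_open_device(list[i]);
            if (!ctx)
                continue;

            struct ibv_device_attr attr;
            if (!ibv_query_device(ctx, &attr))   /* returns 0 on success */
                printf("%s: max_qp=%d max_cq=%d max_mr=%d\n",
                       ibv_get_device_name(list[i]),
                       attr.max_qp, attr.max_cq, attr.max_mr);
            ibv_close_device(ctx);
        }

        ibv_free_device_list(list);
        return 0;
    }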



The Open Compute Project (OCP) mission is to develop and specify the most cost-efficient, energy-efficient, and scalable enterprise and Web 2.0 data centers. Mellanox ConnectX-4 Lx EN OCP adapter cards deliver leading Ethernet connectivity for performance-driven server and storage applications in Web 2.0, Enterprise Data Center, and Cloud environments. The OCP Mezzanine adapter form factor is designed to mate into OCP servers. ConnectX-4 Lx EN for OCP improves network performance by increasing available bandwidth while decreasing the associated transport load on the CPU, especially in virtualized server environments. Moreover, ConnectX-4 Lx EN introduces Multi-Host technology, which enables innovative rack designs that maximize CAPEX and OPEX savings without compromising network performance.



Benefits

  • Highest-performing boards for applications requiring high bandwidth, low latency and high message rate
  • 1/10/25/40/50GbE connectivity for servers and storage
  • Standard PCIe and Open Compute Project form factor
  • Industry leading throughput and latency for Web 2.0, Cloud and Big Data applications
  • Maximizing data centers’ return on investment (ROI) with Multi-Host technology
  • Smart interconnect for x86, Power, ARM, and GPU-based compute and storage platforms
  • Cutting-edge performance in virtualized overlay networks
  • Efficient I/O consolidation, lowering data center costs and complexity
  • Virtualization acceleration
  • Power efficiency

Features

  • 1/10/25/40/50Gb/s speeds
  • Single and dual-port options available
  • Multi-Host technology (specific OPNs)
  • Connectivity to up to 4 independent hosts (specific OPNs)
  • OCP Specification 2.0 and 0.5, as applicable (specific OPNs)
  • Virtualization
  • Low-latency RDMA over Converged Ethernet (RoCE)
  • CPU offloading of transport operations (see the verbs setup sketch after this list)
  • Application offloading
  • Mellanox PeerDirect™ communication acceleration
  • Hardware offloads for NVGRE, VXLAN and GENEVE encapsulated traffic
  • End-to-end QoS and congestion control
  • Hardware-based I/O virtualization
  • Erasure Coding offload
  • RoHS-R6
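
To make "CPU offloading of transport operations" concrete: with verbs, the application sets up its resources once, and from then on work requests go straight to the NIC, which executes the transport in hardware. The following C sketch shows that one-time setup (protection domain, registered memory region, completion queue, queue pair) under the standard libibverbs API; connection establishment (e.g. via librdmacm) and most error handling are omitted for brevity, and nothing here is specific to this card:

    /* Sketch of the resource setup that precedes any RDMA transfer.
     * Build (assuming libibverbs is installed): gcc setup.c -libverbs
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        struct ibv_device **list = ibv_get_device_list(NULL);
        if (!list || !list[0]) { fprintf(stderr, "no device\n"); return 1; }

        struct ibv_context *ctx = ibv_open_device(list[0]);
        struct ibv_pd *pd = ibv_alloc_pd(ctx);

        /* Register a buffer so the NIC can DMA to/from it directly,
         * without involving the CPU in the data path. */
        size_t len = 4096;
        void *buf = malloc(len);
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_WRITE);

        /* Completions are reported here instead of via per-packet
         * interrupts to the host. */
        struct ibv_cq *cq = ibv_create_cq(ctx, 16, NULL, NULL, 0);

        struct ibv_qp_init_attr qpa = {
            .send_cq = cq, .recv_cq = cq,
            .cap = { .max_send_wr = 16, .max_recv_wr = 16,
                     .max_send_sge = 1, .max_recv_sge = 1 },
            .qp_type = IBV_QPT_RC,   /* reliable connected transport */
        };
        struct ibv_qp *qp = ibv_create_qp(pd, &qpa);
        printf("QP %u ready for connection setup\n", qp->qp_num);

        /* Teardown */
        ibv_destroy_qp(qp);
        ibv_destroy_cq(cq);
        ibv_dereg_mr(mr);
        free(buf);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(list);
        return 0;
    }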
