Mellanox Announces ConnectX-2, An Advanced High-Performance, Low Power, Connectivity Solution for Virtualized and Cloud Data Centers

ConnectX®-2 Extends Mellanox’s Leadership in High-Speed Data Communication with Breakthrough Latency, Throughput, Efficiency and Agility

VMWORLD 2009, SAN FRANCISCO, CA – August 31, 2009 – Mellanox® Technologies, Ltd. (NASDAQ: MLNX; TASE: MLNX), a leading supplier of end-to-end connectivity solutions for data center servers and storage, today announced the availability of its ConnectX-2 line of I/O adapter products, which expands the performance potential of applications in data center, high-performance computing and embedded environments. The advanced 40Gb/s InfiniBand, 10 Gigabit Ethernet, and Fibre Channel over Ethernet (FCoE) or Fibre Channel over InfiniBand (FCoIB) features in ConnectX-2 enhance data center productivity and service flexibility, reduce power and costs, and deliver the best return on investment and value among interconnect connectivity solutions. ConnectX-2 addresses the rapidly increasing demand for high-bandwidth interconnect products for enterprise-class server and storage systems.

ConnectX-2 delivers up to 30% lower power consumption than its predecessor, helping data centers lower their power and cooling costs for server and storage I/O. ConnectX-2, with its integrated NIC and PHY, provides additional cost and power savings by minimizing board real estate.

“Mellanox end-to-end InfiniBand and Ethernet-based connectivity solutions close the gap between processing power and I/O by delivering breakthrough latency, utilization and efficiency,” said John Monson, vice president of marketing at Mellanox Technologies. “ConnectX-2 supports green initiatives with low power consumption, and its superior performance and network unification capabilities for both InfiniBand and Ethernet improve the productivity of high-performance, cloud and virtualized data center environments.”

“Deployments of virtualization and cloud computing infrastructure are becoming greater in scale and complexity,” said Cindy Borovick, Research Vice President, Datacenter Networks at IDC. “Mellanox’s ConnectX-2 adapters, with their combination of high performance, scalability and protocol flexibility, can help simplify I/O system design and make it easier for IT managers to deploy infrastructure that meets the challenges of a dynamic data center.”

“The increasing demands to provide critical business services have expanded the need for higher bandwidth and capabilities,” said Raju Penumatcha, Vice President of Netra Systems and Networking at Sun. “Organizations leveraging the Sun Blade X6275 server module, powered by the new Intel Xeon processor 5500 series and Mellanox’s industry-leading 40Gb/s InfiniBand technology, can maximize the full potential, power, and performance of their systems.”

ConnectX-2’s advanced virtualization support removes roadblocks to server virtualization adoption, improving performance and security through hypervisor offloads and a complete implementation of PCI-SIG Single Root I/O Virtualization (SR-IOV) technology. SR-IOV support, in conjunction with system virtualization technologies, allows multiple operating systems running simultaneously within a single server to natively share PCI Express® devices. The advanced virtualization capabilities of ConnectX-2 free up CPU resources, helping each virtual machine achieve native operating system performance, while resource isolation and protection ensure virtual machine security.
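For readers unfamiliar with SR-IOV, the mechanism is visible from the host side: each SR-IOV-capable adapter advertises how many virtual functions (VFs) it can expose, and the operating system carves them out as independent PCI devices that can be assigned to virtual machines. A minimal sketch, assuming a Linux host and the kernel’s generic sysfs PCI interface (standard kernel plumbing, not a Mellanox-specific API):

```python
# Hypothetical sketch: enumerate SR-IOV-capable PCI devices on a Linux host
# via the generic sysfs interface. On hosts without SR-IOV hardware (or on
# non-Linux systems) this simply returns an empty list.
from pathlib import Path

def sriov_capable_devices(pci_root="/sys/bus/pci/devices"):
    """Return (pci_address, total_vfs) for each device advertising SR-IOV."""
    devices = []
    root = Path(pci_root)
    if not root.is_dir():  # e.g. non-Linux host
        return devices
    for dev in root.iterdir():
        totalvfs = dev / "sriov_totalvfs"  # exposed only by SR-IOV devices
        if totalvfs.is_file():
            devices.append((dev.name, int(totalvfs.read_text())))
    return devices

print(sriov_capable_devices())
```

On a host with an SR-IOV adapter, writing a count to the device’s `sriov_numvfs` sysfs file would then instantiate that many VFs, each appearing as a separate PCI function assignable to a guest.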

ConnectX-2 brings the highest bandwidth and lowest latency available to high-performance and transaction-sensitive applications. The InfiniBand adapter products deliver up to 40Gb/s of bandwidth with latencies as low as 1 microsecond. The Ethernet adapter products deliver up to two ports of 10Gb/s bandwidth with 6 microsecond TCP latency or 3 microsecond RDMA latency and kernel bypass for Low Latency Ethernet (LLE) environments. ConnectX-2’s Virtual Protocol Interconnect™ unified I/O technology provides a one-wire solution for any networking, clustering, storage and management application with an enhanced quality of service to guarantee high application productivity.
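Latency figures like these are conventionally obtained with a ping-pong microbenchmark: bounce a small message between two endpoints many times and halve the average round-trip time. A minimal sketch of that pattern in Python, run over TCP loopback (illustrative only; it measures the host’s own OS network stack, not a ConnectX-2 result):

```python
# Minimal ping-pong latency sketch (hypothetical, loopback only).
# Sends a small fixed-size message back and forth many times, then
# reports round-trip time divided by two as approximate one-way latency.
import socket
import threading
import time

ITERATIONS = 1000
MSG = b"x" * 64  # small payload: latency-bound, not bandwidth-bound

def recv_exact(conn, n):
    """Read exactly n bytes, since TCP recv() may return partial data."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed")
        buf += chunk
    return buf

def echo_server(listener):
    conn, _ = listener.accept()
    with conn:
        for _ in range(ITERATIONS):
            conn.sendall(recv_exact(conn, len(MSG)))

server = socket.socket()
server.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
t = threading.Thread(target=echo_server, args=(server,))
t.start()

client = socket.socket()
client.connect(("127.0.0.1", port))
client.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # disable Nagle

start = time.perf_counter()
for _ in range(ITERATIONS):
    client.sendall(MSG)
    recv_exact(client, len(MSG))
elapsed = time.perf_counter() - start

one_way_us = elapsed / ITERATIONS / 2 * 1e6
print(f"approx one-way latency: {one_way_us:.1f} us")
client.close()
t.join()
server.close()
```

Disabling Nagle’s algorithm (`TCP_NODELAY`) matters here: without it, small messages may be coalesced and the measurement would reflect batching delay rather than per-message latency.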

ConnectX-2’s support for FCoE and FCoIB provides a substantial performance boost while also enabling I/O consolidation over Ethernet or InfiniBand in data center environments, giving IT managers a simplified, cost-effective data center to manage and support.

ConnectX-2 products are available in a variety of configurations. ConnectX-2 VPI delivers both InfiniBand and Ethernet with auto-sensing or preconfigured QSFP and SFP+ ports. ConnectX-2 EN delivers 10 Gigabit Ethernet, and ConnectX-2 ENt delivers 10 Gigabit Ethernet with integrated 10GBASE-T PHYs. Samples are available today with general availability in October. Pricing is available upon request.

About Mellanox
Mellanox Technologies is a leading supplier of end-to-end connectivity solutions for servers and storage that optimize data center performance. Mellanox products deliver market-leading bandwidth, performance, scalability, power conservation and cost-effectiveness while converging multiple legacy network technologies into one future-proof solution. For the best in performance and scalability, Mellanox is the choice for Fortune 500 data centers and the world’s most powerful supercomputers. Founded in 1999, Mellanox Technologies is headquartered in Sunnyvale, California and Yokneam, Israel. For more information, visit Mellanox at www.mellanox.com.

Mellanox, ConnectX, InfiniBlast, InfiniBridge, InfiniHost, InfiniRISC, InfiniScale, and InfiniPCI are registered trademarks of Mellanox Technologies, Ltd. BridgeX, PhyX, and Virtual Protocol Interconnect are trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.
