Mellanox Adapter Architecture Delivers Seamless Connectivity to InfiniBand and Ethernet

“ConnectX” Spans Server and Storage Protocols to Enable Fast, Flexible, and Future-Proofed Data Center Connectivity

SUPERCOMPUTING 2006, TAMPA, FLORIDA, NOVEMBER 13, 2006 – Mellanox™ Technologies Ltd, a leading supplier of semiconductor-based high-performance interconnect products, today announced ConnectX™, its fourth-generation adapter architecture, which extends InfiniBand’s field-proven price/performance and service-oriented features by incorporating connectivity to 1- and 10 Gigabit Ethernet fabrics. For data center managers, ConnectX will enable enhanced price/performance, a service-oriented I/O infrastructure, and investment protection with transparent upgrades to higher-speed protocols. For OEMs, the fourth-generation ConnectX architecture enhances server design flexibility and shortens time to revenue.

Server virtualization, storage growth, and the increasing use of clustered multi-core servers are all driving the need for faster and more reliable data center communications fabrics. Products based on the ConnectX architecture offer a comprehensive solution to these needs by supporting the industry’s fastest connectivity along with integrated support for common data center protocols. The ConnectX architecture will enable products that support up to 40Gb/s InfiniBand data rates (initial products will support 10Gb/s and 20Gb/s data rates), with latencies as low as 1 microsecond, in addition to 1- and 10 Gigabit Ethernet. It supports protocols using IPv4 or IPv6, as well as popular data networking (TCP/IP, sockets, MPI, etc.) and storage (SCSI, iSCSI, FC, etc.) protocols. This broad support gives data center managers maximum flexibility in equipment choices for enterprise data centers, high-performance computing, and embedded applications.

Note: For more detailed technical information, visit the Mellanox ConnectX Architecture Web page.

With ConnectX, data center managers can provide the optimum connections to any compute or storage resource, including both InfiniBand and 10 Gigabit Ethernet, without the need for forklift upgrades. In stateless offload mode, ConnectX 10 Gigabit Ethernet products integrate seamlessly with existing operating systems and applications. Alternatively, protocols such as RDMA can bring some of the performance advantages of InfiniBand to the Ethernet environment. And, with an architecture designed to support 40Gb/s InfiniBand interfaces, data center managers have a future transparent upgrade path to the industry’s highest-speed connectivity, powering dramatic improvements in application performance.

“Customers porting applications to 1U servers and server blades with little room for network connectivity (network, storage, and server) are deploying InfiniBand adapters powered by Mellanox silicon in increasing numbers,” said Brian Garrett, an industry analyst with the Enterprise Strategy Group. “As the market for high bandwidth low latency Mellanox silicon expands out of the high performance computing market and into the commercial computing and storage markets, ESG believes that Mellanox has made a brilliant move by incorporating 10 Gigabit Ethernet support into their amazingly affordable family of single chip solutions.”

The ConnectX architecture implements proven, common channel-based I/O services across all protocol and fabric options, including Remote Direct Memory Access (RDMA) and the transport offload protocols supported by the OpenFabrics Alliance, allowing for coherent end-to-end I/O services with easy application portability and migration across different connectivity options.

“While low-power multi-core CPUs and virtualization promise to reduce power and thermal loads, they are only part of the solution. Servers, storage, and networking must be integrated into a single design if data center scalability and agility are to be achieved,” said Joe Skorupa, Research Vice President, Gartner Inc.

“To enable the best I/O services over Ethernet and InfiniBand, the ConnectX architecture adds value in the areas of virtualization, storage and networking acceleration, enhanced end-to-end QoS, congestion control, and I/O consolidation,” said Eyal Waldman, Chairman and CEO of Mellanox Technologies. “ConnectX will spawn a family of products that will provide best-in-class cluster and grid performance, scalability and manageability.”

Next-generation InfiniBand products using the ConnectX architecture (including dual-port silicon devices and adapter cards supporting 10 and 20Gb/s InfiniBand speeds) will be available in the first quarter of 2007. Also available will be dual-port, multi-protocol silicon devices and adapter cards supporting 20Gb/s InfiniBand as well as RDMA and transport acceleration over 1- or 10 Gigabit Ethernet. 40Gb/s products are expected to reach the market in the 2008 timeframe.

Visit Mellanox Technologies at SuperComputing '06 - November 13-16, 2006
Visit the Mellanox booth (#1535) at SuperComputing ’06, and listen to daily presentations about the new Mellanox ConnectX adapter architecture.

About Mellanox

Mellanox Technologies is a leading supplier of semiconductor-based, high-performance interconnect products that facilitate data transmission between servers and storage systems through communications infrastructure equipment. Our products are an integral part of a total solution focused on computing, storage and communication applications used in enterprise data centers, high-performance computing and embedded systems. Based on InfiniBand technology, our field-proven adapter and switch integrated circuits deliver industry-leading performance and capabilities, and serve as the building blocks for creating reliable and scalable interconnect solutions.

Founded in 1999, Mellanox Technologies is headquartered in Santa Clara, California and Yokneam, Israel. For more information on Mellanox’s solutions, please visit

Mellanox is a registered trademark of Mellanox Technologies, Inc. and ConnectX, InfiniBlast, InfiniBridge, InfiniHost, InfiniRISC, InfiniScale, and InfiniPCI are trademarks of Mellanox Technologies, Inc. All other trademarks are property of their respective owners.

For more information:
Mellanox Technologies, Inc.
Brian Sparks
