Mellanox Ships 20Gb/s and 60Gb/s InfiniBand Adapters and Switch Silicon

Double Data Rate Interconnect Technology Ready for Data Center Server Fabric Deployments at Less Than a 30% Premium over 10Gb/s InfiniBand

SANTA CLARA, Calif. and Heidelberg, Germany, INTERNATIONAL SUPERCOMPUTER CONFERENCE – June 20, 2005 – Mellanox™ Technologies Ltd., the leader in business and technical computing interconnects, has announced the immediate availability of industry-standard, double data rate (DDR) InfiniBand Host Channel Adapter (HCA) silicon and PCI Express cards, along with switch silicon that supports 20Gb/s server-to-server and server-to-storage connectivity and 60Gb/s switch-to-switch I/O connections over a single copper cable. These solutions offer twice the bandwidth of current 10Gb/s InfiniBand solutions and up to ten times the price/performance of competing interconnects.

Mellanox has shipped more than 500,000 10Gb/s InfiniBand ports into enterprise data centers and high-performance computing (HPC) clusters. These markets are now demanding InfiniBand DDR interconnect solutions because the technology:

  1. Delivers today on a performance roadmap unparalleled in the industry
  2. Balances compute resources and interconnect bandwidth
  3. Unifies compute and storage networks with increased cable manageability
  4. Offers double the bandwidth at less than a 30% price premium

“The production availability of InfiniBand DDR solutions signals a new era of clustered computing and storage performance,” said Eyal Waldman, President and CEO of Mellanox Technologies. “20Gb/s and 60Gb/s I/O technology is in lock-step with emerging multi-core CPU server architectures and widely deployed PCI Express in-the-box interconnect. At an incremental cost over 10Gb/s InfiniBand, 20Gb/s technology truly facilitates a unified computing and storage fabric that offers the highest performance, yet reduces the total cost of ownership in both data center and HPC environments.”

Mellanox Delivers on InfiniBand Roadmap with Unparalleled Performance

Delivering 1475MB/s of uni-directional throughput with 2.7µs of latency on standard Ohio State MPI benchmarks, DDR solutions extend Mellanox's InfiniBand fabric performance leadership over other 10Gb/s I/O solutions by more than 55%.

“As InfiniBand increasingly becomes an interconnect of choice in not just high performance computing environments, but also in mainstream enterprise grids and data center virtualization solutions, DDR technology will only help to deliver an even more compelling price/performance metric,” said Krish Ramakrishnan, vice president and general manager for Server Networking and Virtualization at Cisco Systems. “The performance of InfiniBand coupled with the economic benefits of our consolidation and virtualization capabilities offers our customers an ideal combination as they build out their data center server fabrics to be both high performance and highly flexible.”

InfiniBand DDR Gets the Most Out of Dense Computing and Storage Solutions

The recent introduction of dual-core processors and the ubiquity of PCI Express unleash significant compute power that can only be efficiently balanced with InfiniBand DDR solutions. InfiniBand DDR is the only I/O technology that can saturate a PCI Express x8 link over a single cable; moreover, with its industry-standard remote direct memory access (RDMA) and built-in hardware transport, InfiniBand DDR ensures the highest utilization of compute resources for application processing rather than network-related tasks.

“Innovations that push the I/O performance curve, such as InfiniBand DDR technology, nicely balance the in-the-box bandwidth capabilities of PCI Express and complement increasingly dense computing platforms based on the 64-bit Intel® Xeon™ processor,” said Tom Macdonald, vice president and general manager, Platform Components Group of Intel Corporation.

In addition, InfiniBand DDR benefits clustered storage systems by providing the interconnect performance to serve applications such as high-throughput data warehouses, streaming video services, and high resolution graphics editing.

“InfiniBand has enabled the Isilon IQ clustered storage system to easily scale past 150 Terabytes of capacity with 3 gigabytes per second of storage performance from a single file system,” said Brett Goodwin, VP of Marketing and Business Development for Isilon Systems. “We picked InfiniBand with its aggressive roadmap to 20Gb/s DDR technology and beyond as it gives us a competitive edge over all other interconnect solutions and allows us to scale our clustered storage solutions for many generations to come. We applaud Mellanox for driving this technology into the marketplace.”

InfiniBand DDR Facilitates a Unified Network at the Lowest Cost of Ownership

With compute clusters growing in size from hundreds to thousands of nodes, demand has increased for a unified network to contain costs and improve manageability. Driving 20Gb/s on a single 4X InfiniBand cable and 60Gb/s on a single 12X cable reduces cable requirements by over 40% and increases cluster manageability for multi-thousand node, non-blocking clusters.

In addition, native InfiniBand DDR storage and compute solutions deliver the performance and cost necessary to support the most I/O hungry computing and storage applications over a single network.

“DDR technology further increases the strategic importance of using InfiniBand to directly attach servers to storage through a unified fabric,” said Bret Weber, Engenio Director of Architecture. “A 20Gb/s IB connection provides the highest performance available on a single industry standard I/O Storage link, ultimately delivering world-class performance for high-throughput HPC storage applications.”

InfiniBand DDR Product Family Offers Complete I/O Solutions

Mellanox offers six InfiniBand DDR products for use in rack-mounted servers, server blades, embedded applications, storage, and networking switch/router platforms. Single-port and dual-port 20Gb/s InfiniBand HCA silicon devices of the InfiniHost III Lx and Ex families featuring MemFree technology are ideal for Landed-on-Motherboard (LOM) designs and draw approximately 3.3W per 20Gb/s InfiniBand port, roughly one-fifth the power of 10GbE iWARP solutions that deliver only half the data throughput. Three InfiniBand DDR HCA adapter cards are also available that feature the single-chip HCA devices and plug directly into any standard server with a PCI Express x8 slot.

In addition, a DDR version of the InfiniScale III 24-port, single-chip switch that supports 20Gb/s per port delivers nearly a Terabit (1000 Gigabits) per second of aggregate switching bandwidth and consumes only 1.4 watts per port. Similar to the HCA devices, Mellanox’s own 5Gb/s SerDes technology is integrated directly into the switch chip—an impressive 96 total SerDes ports in each device. InfiniBand DDR switch systems are available from Flextronics, while several other switch vendors plan to release systems over the coming year.
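The switch figures above follow from simple lane arithmetic. The sketch below (in Python, using only values stated in the release: 24 ports, 4-lane 4X links, 5Gb/s per SerDes lane) shows how the 96-SerDes count and the "nearly a Terabit" aggregate figure are derived:

```python
# Bandwidth arithmetic behind the InfiniScale III DDR switch figures.
# All input values are taken from the release text.
PORTS = 24            # 24-port single-chip switch
LANES_PER_PORT = 4    # a 4X InfiniBand link uses 4 SerDes lanes
GBPS_PER_LANE = 5     # integrated 5Gb/s SerDes (DDR signaling)

port_rate = LANES_PER_PORT * GBPS_PER_LANE   # 20 Gb/s per port
serdes_count = PORTS * LANES_PER_PORT        # 96 SerDes per device
aggregate = PORTS * port_rate * 2            # both directions: 960 Gb/s

print(port_rate, serdes_count, aggregate)    # 20 96 960
```

The ×2 factor counts both link directions, which is how the 480Gb/s of one-way port bandwidth becomes "nearly a Terabit" of aggregate switching capacity.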

Pricing and Availability

All InfiniBand DDR products are available today in production and priced at less than a 30% premium over 10Gb/s InfiniBand solutions. In high volume, OEM pricing for the DDR version of the InfiniHost III Lx single-port HCA silicon device is $85, and the DDR version of the InfiniScale III 24-port switch silicon device is less than $25 per port.

About InfiniBand

InfiniBand is the only 10Gb/s and 20Gb/s interconnect solution, with 30Gb/s and 60Gb/s switch-to-switch links shipping in volume today, that addresses low-latency compute clustering, communications, storage, and embedded applications. An industry-standard technology, InfiniBand provides reliability, availability, serviceability, and manageability features to address the needs of both data centers and performance computing environments. With leading price/performance, low power, and small footprint combined with proven, open-source software support and a well-defined roadmap to 120Gb/s, InfiniBand is the ideal in-the-box and out-of-the-box performance interconnect technology.

About Mellanox

Mellanox’s field-proven offering of interconnect solutions for communications, storage, and compute clustering is changing the shape of both business and technical computing. As the leader in industry-standard, InfiniBand-based silicon and card-based solutions, Mellanox is the driving force behind the most cost-effective, highest-performance interconnect solutions available.

For more information:
Mellanox Technologies, Inc.
Ted Rado, Vice-President Marketing
