Mellanox Showcases Microsoft Windows Server 2003 Compute Cluster Edition over InfiniBand

Low-latency InfiniBand fabric enables high-performance workgroup and personal compute clusters with the simplicity of Windows

SAN FRANCISCO, CA – Aug. 22, 2005 – Mellanox™ Technologies, Ltd., the leader in business and technical computing interconnects, announced that InfiniBand is the only industry-standard, low-latency 20 gigabit-per-second (Gb/s) fabric supported by Microsoft Windows Server 2003 Compute Cluster Edition, scheduled to be available in the first half of 2006. InfiniBand’s proven scalability and leading price/performance, combined with Windows Server 2003 Compute Cluster Edition, are designed to drive supercomputing into mainstream commercial applications including finance, clustered databases, engineering, life sciences and geosciences. The only interconnect to show growth in new cluster deployments on the June 2005 list of the world’s most powerful supercomputers, InfiniBand is the fastest-growing fabric in high-performance computing. A technology demonstration of a cluster of servers running Windows Server 2003 Compute Cluster Edition over an InfiniBand fabric will be on display in Mellanox’s booth (#315) during Intel Developer Forum exhibition hours.

“Microsoft’s entrance into the high performance computing market is an important milestone for the expansion of InfiniBand fabrics into departmental, workgroup and personal compute clusters,” said Ted Rado, vice president of marketing for Mellanox Technologies. “Scientists and financial analysts will speed up their job run times with their own affordable and easy-to-use compute cluster that sits in a closet or even underneath a desk.”

Windows and InfiniBand – An Industry-wide Effort

In an April 2005 press release, the OpenIB Alliance disclosed that a high-performance, reliable InfiniBand driver for Windows is in development. The OpenIB Alliance is an industry association chartered with the development of unified InfiniBand software stacks. Members include both technology vendors and end-user organizations, all of which contribute code to accelerate InfiniBand cluster deployment and interoperability.

“Microsoft is working to bring high performance computing to the mainstream and thus we’re interested in the price/performance benefits that industry-standard hardware, such as InfiniBand, can deliver to customers,” said Kyril Faenov, director of high performance computing, Microsoft Corp. “The broad support that InfiniBand has received from hardware vendors is an additional testament to the level of market interest in fast interconnects for high performance computing.”

“We appreciate companies such as Mellanox helping to initiate the evolution from proprietary solutions to open standards through their contributions to the OpenIB stack,” noted Jim Pappas, Director of Technology Initiatives, Server Platforms Group, Intel Corporation.  “The industry collaboration in support of OpenIB will solidify the maturity and adoption of the technology.”

The Simplicity and Ubiquity of Windows and the Performance of InfiniBand

Windows Server 2003 Compute Cluster Edition is designed to provide a complete platform to run Message Passing Interface (MPI)-based applications, integrated cluster setup and management capabilities, and secure programmable job scheduling tools. The InfiniBand fabric enables this package to utilize industry-standard, remote direct memory access-enabled InfiniBand solutions that are proven to optimize performance of a wide selection of parallel MPI and sockets-based solutions.

With Mellanox’s recently announced InfiniBand DDR solutions enabling server-to-server and server-to-storage connections at 20Gb/s and switch-to-switch connections at 60Gb/s, the performance advantage of InfiniBand continues to extend beyond all other proprietary and standard interconnect solutions. In fact, pairing the DDR InfiniBand adapters with Intel’s latest PCI Express chipset, scheduled for production in 1Q 2006, yields bi-directional low-level data throughput of 3 GB/s over a single link – 330 percent faster than PCI-X based adapters.
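The link arithmetic behind these figures can be sketched as a back-of-the-envelope check. This sketch assumes a standard InfiniBand 4x DDR link (4 lanes at 5 Gb/s signaling each) with 8b/10b encoding; the 3 GB/s figure is Mellanox’s measured number, not derived here:

```python
# Back-of-the-envelope check of InfiniBand 4x DDR link bandwidth.
# Assumptions: 4 lanes, 5 Gb/s DDR signaling per lane, 8b/10b encoding.
LANES = 4
LANE_SIGNAL_GBPS = 5.0

signaling_gbps = LANES * LANE_SIGNAL_GBPS   # 20 Gb/s signaling per direction
data_gbps = signaling_gbps * 8 / 10         # 16 Gb/s after 8b/10b encoding overhead
gbytes_per_dir = data_gbps / 8              # 2 GB/s of usable data per direction
bidir_gbytes = 2 * gbytes_per_dir           # 4 GB/s theoretical bi-directional ceiling

print(f"Signaling rate:            {signaling_gbps:.0f} Gb/s per direction")
print(f"Post-encoding data rate:   {data_gbps:.0f} Gb/s per direction")
print(f"Theoretical bi-directional: {bidir_gbytes:.0f} GB/s")
# The measured 3 GB/s sits below this 4 GB/s theoretical ceiling, which is
# consistent with protocol and host-chipset overheads on a real system.
```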

Come See Mellanox @ IDF, Booth #315

The Windows development effort of the OpenIB group will be shown in Mellanox’s booth at the Intel Developer Forum, August 23-25. A cluster of Windows Server 2003 Compute Cluster Edition servers, connected over an InfiniBand fabric using a standard driver, will be on display. In addition, Mellanox will demonstrate InfiniBand DDR adapters generating 3 GB/s of I/O performance with the latest Intel processor and server technology. Lastly, Mellanox’s InfiniBand technology, in partnership with Diversified Technology, Inc., will be shown in a robust, embedded AdvancedTCA form factor, and native InfiniBand storage will be demonstrated using solutions from DataDirect Networks and Supermicro.

About InfiniBand

InfiniBand is the only interconnect shipping in volume today with 10Gb/s and 20Gb/s host connections and 30Gb/s and 60Gb/s switch-to-switch links, addressing low-latency compute clustering, communication, storage and embedded applications. An industry-standard technology, InfiniBand provides reliability, availability, serviceability and manageability features to address the needs of both data centers and performance computing environments. With leading price/performance, low power, and small footprint combined with proven, open-source software support and a well-defined roadmap to 120Gb/s, InfiniBand is the ideal in-the-box and out-of-the-box performance interconnect technology.

About Mellanox

Mellanox’s field-proven offering of interconnect solutions for communications, storage and compute clustering is changing the shape of both business and technical computing. As the leader in industry-standard, InfiniBand-based silicon and card-based solutions, Mellanox is the driving force behind the most cost-effective, highest-performance interconnect solutions available. For more information, visit

Product and company names herein may be trademarks of their respective owners

For more information:
Mellanox Technologies, Inc.
Ted Rado, Vice-President Marketing
