Mellanox and AMD Introduce “Neptune” High Performance Cluster for Applications Development and Testing

Mellanox 20Gb/s InfiniBand and AMD Dual-Core Opteron-based servers empower an advanced development environment for industry partners and customers

SANTA CLARA, CA. – August 14, 2006 – Mellanox™ Technologies, Ltd., the leader in business and technical computing interconnects, today announced the availability of Neptune, a high-performance 20Gb/s InfiniBand cluster for industry partners, customers and end-users. Neptune, powered by AMD dual-core Opteron processors and Mellanox InfiniBand technology, provides a world-class system for testing, benchmarking and development. Multiple commercial applications will be available, and users will also be able to run their own applications.

“InfiniBand’s low-latency, 20Gb/s performance unleashes the compute power in multi-core environments and eliminates storage I/O bottlenecks,” said Thad Omura, vice president of product marketing at Mellanox Technologies. “We are pleased to provide our customers and partners a testing platform that showcases how Mellanox’s leading-edge InfiniBand technology benefits their solutions.”

“AMD’s low-latency, multi-core architecture is a perfect complement to Mellanox’s InfiniBand solutions, providing the important elements for high-performance applications,” said Terri Hall, vice president of software alliances and solutions, AMD. “By eliminating traditional bottlenecks inherent in legacy architectures, the AMD Opteron processor’s Direct Connect Architecture allows the Neptune cluster to offer users a significant improvement in system performance and greater application efficiency.”

Cluster Configuration
Neptune comprises 32 Colfax International servers, each containing two Dual-Core AMD Opteron processors Model 275, for a total of 128 cores. Mellanox 20Gb/s InfiniBand PCI Express adapters with MemFree technology connect the servers through Flextronics 24-port 20Gb/s switches in a fully non-blocking fat-tree network architecture. Lightweight 30 AWG InfiniBand cables from W. L. Gore & Associates, Inc. interconnect the servers and switches. Linpack benchmark runs on the full 128-core Neptune configuration achieved 425.9 GFlops, with a network efficiency of more than 95%.

“As Mellanox continues to offer innovative solutions for high-performance computing, Colfax International’s server platforms help deliver the clustering performance needed to achieve such impressive results,” said Gautam Shah, CEO, Colfax International. “As Mellanox successfully responds to the need for multi-core testing and development in compute-intensive environments, Colfax International is fully aligned with and proud to be part of the Mellanox Neptune cluster.”

“In most cases, simple benchmarks do not represent real application behavior,” said Gilad Shainer, senior technical marketing manager at Mellanox Technologies. “Instead, our customers and partners can test their applications and observe the inherent low-latency, high-bandwidth, and optimal CPU utilization benefits of Mellanox InfiniBand.”

Multiple Environments Supported 
Neptune provides multiple operating system environments that can be rapidly configured and brought up for testing. New operating system distributions from Microsoft, Novell and Red Hat will be available with the OpenFabrics Enterprise Distribution (OFED). For example, Microsoft Windows Compute Cluster Server 2003 can be used to test a broad set of high-performance computing applications, and Microsoft’s Message Passing Interface (MPI) utilizes Mellanox InfiniBand for latency-sensitive and high-bandwidth applications.

“Microsoft is working with Mellanox Technologies to provide an optimal computing environment to our customers running a variety of custom and packaged applications on Windows Compute Cluster Server 2003,” said John Borozan, group product manager, Windows Server Division at Microsoft Corp.

“Novell has been collaborating with Mellanox on various fronts, including support of Mellanox InfiniBand solutions and OFED with SUSE® Linux Enterprise Server and SUSE® Linux Enterprise Real Time,” said Moiz Kohari, director of Real-Time Computing at Novell. “The Neptune cluster is a great vehicle for customers and ISVs to experience the benefits of the integrated Mellanox and Novell solutions and to test their applications on top of them.”

Located in Santa Clara, California, the Mellanox cluster facility provides on-site technical support and enables scheduled sessions onsite or remotely.

About Mellanox

Mellanox Technologies is the leader in high-performance interconnect solutions that consolidate communications, computing, management, and storage onto a single fabric. Based on InfiniBand technology, Mellanox adapters and switch silicon are the foundation for virtualized data centers and high-performance computing fabrics that deliver optimal performance, scalability, reliability, manageability and total cost of ownership.

Founded in 1999, Mellanox Technologies is headquartered in Santa Clara, California and Yokneam, Israel. For more information on Mellanox’s solutions, please visit www.mellanox.com.

Mellanox is a registered trademark of Mellanox Technologies, Inc., and InfiniBlast, InfiniBridge, InfiniHost, InfiniRISC, InfiniScale, and InfiniPCI are trademarks of Mellanox Technologies, Inc. All other trademarks are property of their respective owners.

For more information:
Mellanox Technologies, Inc.
Brian Sparks
408-970-3400
media@mellanox.com
