The data centers that today's IT professionals are asked to manage are more than just tools in a larger operation; they are critical to their company's business. For many, the data center is the business itself. It must deliver the best performance at the lowest possible total cost of ownership and scale as the business grows. As such, the majority of IT organizations have either already adopted or are in the process of adopting virtualization technologies. Virtualization enables a company to run multiple applications on the same servers, maximizing server utilization, dramatically improving total cost of ownership, and enabling elastic provisioning and scalability.
However, adopting a virtualization architecture creates significant data center interconnect challenges. To overcome them, the majority of IT organizations are deploying advanced interconnect technologies that enable a faster, flatter, and fully virtualized data center infrastructure. The right interconnect technology, connecting servers to servers and servers to storage, reduces cost while supporting an increasingly virtualized and agile data center. This is exactly what Mellanox end-to-end interconnect solutions deliver.
Highest I/O Performance
Mellanox products and solutions are uniquely designed to address the challenges of virtualized infrastructure, delivering best-in-class server and storage connectivity to demanding markets and applications. They combine true hardware-based I/O isolation and network convergence with unmatched scalability and efficiency. Mellanox solutions also simplify deployment and maintenance through automated monitoring and provisioning and through seamless integration with the major cloud frameworks.
Figure 1: By using the ConnectX-3 40GbE adapter, a user can deliver much faster I/O traffic than by using multiple 10GbE ports from competitors.
Hardware-Based I/O Isolation
Mellanox ConnectX® adapters and Mellanox switches provide a high degree of traffic isolation in hardware, allowing true fabric convergence without compromising service quality and without taking additional CPU cycles for the I/O processing. Mellanox solutions provide end-to-end traffic and congestion isolation for fabric partitions, and granular control of allocated fabric resources.
Figure 2: Mellanox ConnectX provides hardware-enforced I/O virtualization, isolation, and Quality of Service (QoS)
Every ConnectX adapter can provide thousands of I/O channels (queues) and more than a hundred virtual PCI (SR-IOV) devices, which can be assigned dynamically to form virtual NICs and virtual storage HBAs. The channels and virtualized I/O are governed by an advanced multi-stage scheduler that controls bandwidth and priority per virtual NIC/HBA or per group of virtual I/O adapters. This ensures that traffic streams are isolated and that bandwidth is allocated and prioritized according to application and business needs.
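The multi-stage scheduling described above can be pictured with a small software model. This is a conceptual sketch only: the real arbitration runs in ConnectX hardware, and the class names, vNIC names, and weights below are invented for illustration.

```python
# Toy model of weighted bandwidth scheduling across virtual NICs/HBAs.
# Names and weights are illustrative; real arbitration happens in NIC hardware.

class VirtualNIC:
    def __init__(self, name, weight):
        self.name = name
        self.weight = weight  # relative share of link bandwidth

def allocate_bandwidth(link_gbps, vnics):
    """Split link bandwidth across vNICs in proportion to their weights
    (a simple weighted-fair-share model of the hardware scheduler)."""
    total = sum(v.weight for v in vnics)
    return {v.name: link_gbps * v.weight / total for v in vnics}

vnics = [
    VirtualNIC("storage-hba", weight=4),  # storage traffic gets the largest share
    VirtualNIC("vm-lan", weight=2),
    VirtualNIC("migration", weight=1),
]

shares = allocate_bandwidth(40, vnics)  # e.g. one 40GbE ConnectX port
for name, gbps in shares.items():
    print(f"{name}: {gbps:.1f} Gb/s share")
```

The key property this models is that each virtual adapter's share is enforced independently of the others, so a noisy neighbor cannot starve storage or migration traffic.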
Accelerating Storage Access
In addition to providing better network performance, ConnectX's RDMA capabilities can be used to accelerate hypervisor traffic such as storage access, VM migration, and data and VM replication. RDMA offloads node-to-node data movement to the ConnectX hardware, yielding much faster performance, lower latency and access time, and lower CPU overhead.
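The CPU-offload benefit can be sketched with a toy accounting model. The cost constants and function names below are invented for illustration, and actual gains depend on workload and hardware; the point is only the shape of the difference: with TCP/IP the host CPU copies every byte between user and kernel buffers, while with RDMA the CPU posts a work request and the NIC moves the payload.

```python
# Illustrative accounting of host-CPU work per transfer. With TCP/IP the
# payload is copied through the kernel on both send and receive; with RDMA
# the NIC DMAs directly between registered buffers. Cost units are invented.

COPY_COST_PER_BYTE = 1   # assumed CPU cost to copy one byte
POST_WR_COST = 500       # assumed fixed cost to post one RDMA work request

def tcp_cpu_cost(bytes_moved):
    # user->kernel copy on send plus kernel->user copy on receive
    return 2 * bytes_moved * COPY_COST_PER_BYTE

def rdma_cpu_cost(bytes_moved):
    # CPU only posts the work request; payload size does not add CPU work
    return POST_WR_COST

for size in (4_096, 1_048_576):
    print(f"{size} B: tcp={tcp_cpu_cost(size)} rdma={rdma_cpu_cost(size)}")
```

Note that the RDMA cost in this model is flat in transfer size, which is why the advantage grows with larger storage and migration transfers.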
Figure 3: Using RDMA-based iSCSI (iSER), users can achieve 10X faster performance compared to traditional TCP/IP-based iSCSI.
In today's virtualized data center, I/O is the key bottleneck leading to degraded application performance and poor service levels. Exacerbating the issue, infrastructure consolidation and the cloud model mandate that I/O and network resources be partitioned, secured, and automated.
Mellanox products and solutions enable a high-performance, efficient cloud infrastructure. With Mellanox, users do not need to compromise on performance, application service levels, security, or usability in virtualized environments. Mellanox provides the most cost-effective cloud infrastructure.
Our solutions deliver the following features:
- Fastest I/O adapters with 10/25/40/50/100Gb/s per port and sub-600ns latency
- Low-latency and high-throughput VM-to-VM performance with full OS bypass and RDMA
- Hardware-based I/O virtualization and network isolation
- I/O consolidation of LAN, IPC, and storage over a single wire
- Cost-effective, high-density switches and fabric architecture
- End-to-end I/O and network provisioning, with native integration into key cloud frameworks
- AI Composability and Virtualization: Mellanox Network Attached GPUs
- VSAN™ Networking Done Right
- vSphere® Networking Done Right
- MySQL Database Acceleration over Mangstor Low Latency Storage and Mellanox Low Latency Networking
- Achieving vMotion Acceleration Over Efficient Virtualized Network (EVN)
- Solution Brief: Virtual High Performance Computing - HPC for the Traditional Data Center
- Reference Architecture: Iron Networks Microsoft Fast Track Architecture
- Edgenet Case Study: Private Cloud Provider Boosts Application Performance at a Lower Cost of Ownership
- Solution Brief: Accelerating Virtual Machine Migration
- Implementing VMware's Virtual SAN with Micron SSDs and the Mellanox Interconnect
- Next-Generation VDI Appliance Powered by EMC2 ScaleIO
- Solution Brief: Virtual Desktop Infrastructure (VDI) Acceleration Using LSI and Mellanox Low Latency Interconnect Technologies
- White Paper: Faster Interconnects for Next-Generation Data Centers
- White Paper: Solving I/O Bottlenecks to Enable Superior Cloud Efficiency
- Ecommerce Hosting Provider Drastically Cuts Server Provisioning Costs Using a Virtualized InfiniBand-Based Solution
- Virtualized Hadoop Performance on VMware vSphere® 5
- Introduction to Cloud Design
- RDMA Performance in Virtual Machines Using QDR InfiniBand on VMware vSphere® 5
- Virtual Machine Migration Acceleration