Mellanox - NVIDIA GPUDirect

Report
Paving The Road to Exascale Computing
Highest Performing, Most Efficient
End-to-End Connectivity for Servers and Storage
Peter Waxman
VP of HPC Sales
April 2011
[email protected]
Company Overview
Ticker: MLNX
 Leading connectivity solutions provider for data center servers and storage systems
• Foundation for the world’s most powerful and energy-efficient systems
• >7.0M ports shipped as of Dec. ’10
 Company headquarters:
• Yokneam, Israel; Sunnyvale, California
• ~700 employees; worldwide sales & support
 Solid financial position
• Record Revenue in FY’10; $154.6M
• Q4’10 revenue = $40.7M
 Completed acquisition of Voltaire, Ltd.
© 2011 MELLANOX TECHNOLOGIES
- MELLANOX CONFIDENTIAL -
Connectivity Solutions for Efficient Computing
Enterprise HPC
High-end HPC
HPC Clouds
Mellanox Interconnect Networking Solutions
ICs
Adapter Cards
Host/Fabric Software
Switches/Gateways
Cables
Leading Connectivity Solution Provider For Servers and Storage
Combining Best-in-Class Systems Knowledge and Software with Best-in-Class Silicon
InfiniBand
 Mellanox brings – InfiniScale & ConnectX
• HCA and switch silicon; HCA adapters, FIT; scalable switch systems
• Silicon, adapter, and systems leadership; IB market share leadership
• Full service offering; strong customer and OEM engagements (Dell, HP, IBM, Oracle)
 Voltaire brings – InfiniBand market leader (Grid Directors & software)
• UFM fabric management SW; application acceleration SW
• Enterprise-class switches (HP, IBM)
• InfiniBand and 10GbE switch systems experience; IB switch market share leadership
 Combined entity
• End-to-end silicon, systems, and software solutions
• FDR/EDR roadmap
• Application acceleration and fabric management software
• Full OEM coverage
Ethernet/VPI
 Mellanox brings – Ethernet innovator
• InfiniBand and 10GbE silicon technology & roadmap; adapter leadership
• Advanced HW features; end-to-end experience; strong OEM engagements
• Highest-performance Ethernet silicon; 10GbE LOM and mezzanine adapters at Dell, HP, and IBM
 Voltaire brings – 10GbE Vantage switches & SW; 10GbE and 40GbE adapters
• UFM fabric management SW; application acceleration SW
• 24-, 48-, and 288-port 10GbE switches (HP, IBM)
• End-to-end SW & systems solutions; strong enterprise customer engagements
 Combined entity
• End-to-end silicon, systems, and software solutions
• 10GbE, 40GbE, and 100GbE roadmap
• Application acceleration and fabric management software
• Strong OEM coverage
Connecting the Data Center Ecosystem
Hardware OEMs
Software Partners
End Users
Enterprise Data Centers
Servers
High-Performance Computing
Storage
Embedded
Most Complete End-to-End InfiniBand Solutions
 Adapter market and performance leadership
• First to market with 40Gb/s (QDR) adapters
– Roadmap to end-to-end 56Gb/s (FDR) in 2011
• Delivers next-gen application efficiency capabilities
• Global Tier-1 server and storage availability
- Bull, Dawning, Dell, Fujitsu, HP, IBM, Oracle, SGI, T-Platforms
 Comprehensive, performance-leading switch family
• Industry’s highest density and scalability
• World’s lowest port-to-port latency (25-50% lower than competitors)
 Comprehensive and feature-rich management/acceleration
software
• Enhancing application performance and network ease-of-use
 High-performance converged I/O gateways
• Optimal scaling, consolidation, energy efficiency
• Reduces space and power requirements while increasing application performance
 Copper and Fiber Cables
• Exceeds IBTA mechanical & electrical standards
• Ultimate reliability and signal integrity
Expanding End-to-End Ethernet Leadership
 Industry’s highest performing Ethernet NIC
• 10/40GigE w/FCoE with hardware offload
• Ethernet industry’s lowest end-to-end latency: 1.3μs
• Faster application completion, better server utilization
 Tremendous ecosystem support momentum
• Multiple Tier-1 OEM design wins (Dell, IBM, HP)
– Servers, LAN on Motherboard (LOM), and storage systems
• Comprehensive OS Support
- VMware, Citrix, Windows, Linux
 High capacity, low latency 10GigE switches
• 24 to 288 ports with 600-1200ns latency
• Sold through multiple Tier-1 OEMs (IBM, HP)
• Consolidation over shared fabrics
 Integrated, complete management offering
• Service Oriented Infrastructure Management, with Open APIs
Mellanox in the TOP500
[Chart: Top500 InfiniBand trends – number of InfiniBand clusters: 142 (Nov ’08), 182 (Nov ’09), 215 (Nov ’10)]
 Mellanox InfiniBand builds the most powerful clusters
• Connects 4 out of the Top 10 and 61 systems in the Top 100
 InfiniBand represents 43% of the TOP500
• 98% of the InfiniBand clusters use Mellanox solutions
 Mellanox InfiniBand enables the highest utilization on the TOP500
• Up to 96% system utilization
 Mellanox 10GigE is the highest ranked 10GigE system (#126)
Mellanox Accelerations for Scalable HPC
 Scalable offloading for MPI/SHMEM: 10s–100s% boost
 GPUDirect – accelerating GPU communications: 30+% boost
 Maximizing network utilization through routing & management (3D-torus, fat-tree): 80+% boost
Highest Throughput and Scalability – Paving the Road to Exascale Computing
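The GPUDirect gain cited above comes from removing a host-side copy: in first-generation GPUDirect, the CUDA and InfiniBand drivers share a single pinned host buffer instead of each pinning their own, so the CPU no longer copies data between the two pinned regions before an RDMA send. A toy Python model of the two data paths (the step strings and function name are purely illustrative, not driver APIs):

```python
# Illustrative model (not driver code) of the host-side steps in a
# GPU-to-remote-node send, with and without GPUDirect.
def gpu_send_path(gpudirect):
    path = ["GPU memory -> pinned host buffer (CUDA driver)"]
    if not gpudirect:
        # Without GPUDirect the IB stack pins its own separate region,
        # so the CPU must copy between the two pinned buffers.
        path.append("CUDA pinned buffer -> IB pinned buffer (CPU memcpy)")
    path.append("pinned host buffer -> HCA RDMA send to remote node")
    return path

for mode in (False, True):
    label = "with GPUDirect" if mode else "staged copy"
    print(f"{label}: {len(gpu_send_path(mode))} steps")
```

The eliminated memcpy is what frees CPU cycles and shortens the GPU-to-wire path.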
Software Accelerators
[Diagram: software acceleration stack spanning MPI performance, iSCSI storage, and highest-performance, lowest-latency messaging]
UFM Fabric Management
 Provides Deep Visibility
• Real-time and historical monitoring of fabric health and performance
• Central fabric dashboard
• Unique fabric-wide congestion map
 Optimizes performance
• Quality of Service
• Traffic Aware Routing Algorithm (TARA)
• Multicast routing optimization
 Eliminates Complexity
• One pane of glass to monitor and configure fabrics of thousands of nodes
• Enables advanced features such as segmentation and QoS by automating provisioning
• Abstracts the physical layer into logical entities such as jobs and resource groups
 Maximizes Fabric Utilization
• Threshold based alerts to quickly identify fabric faults
• Performance optimization for maximum link utilization
• Open architecture for integration with other tools, in-context actions, and a fabric database
LLNL Hyperion Cluster
 1152 nodes, dedicated cluster for development testing
 Open Environment
 CPUs: mix of Intel 4-core Xeon L5420 and 4-core Xeon E5530
 Mellanox InfiniBand QDR switches and adapters
Mellanox MPI Optimizations – MPI Natural Ring
Mellanox MPI Optimization – MPI Random Ring
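The natural-ring and random-ring results above follow the HPC Challenge ring-bandwidth pattern: in the natural ring each MPI rank exchanges data with its immediate neighbor, so traffic stays local, while the random ring permutes the ranks first, stressing the fabric's bisection bandwidth. A minimal sketch of the two communication patterns (pure Python, computing only the rank pairs; function names are illustrative, not benchmark code):

```python
# Sketch of the neighbor pairings behind the "natural ring" and
# "random ring" MPI bandwidth tests (illustrative, not HPCC source).
import random

def natural_ring(num_ranks):
    """Rank i sends to rank (i+1) % N: traffic between adjacent ranks."""
    return [(i, (i + 1) % num_ranks) for i in range(num_ranks)]

def random_ring(num_ranks, seed=0):
    """Ranks are placed in random order, then each sends to its successor
    in that order -- most pairs now cross distant parts of the fabric."""
    order = list(range(num_ranks))
    random.Random(seed).shuffle(order)
    return [(order[i], order[(i + 1) % num_ranks])
            for i in range(num_ranks)]

print("natural:", natural_ring(8))
print("random: ", random_ring(8))
```

Topology-aware routing matters far more for the random ring, which is why it is the harder of the two tests.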
Mellanox MPI Optimization – Highest Scalability at LLNL
 Mellanox MPI optimizations enable linear strong scaling for LLNL applications
World Leading Performance and Scalability
Summary
Financial • Academic Research • Clustered Database • Computer-Aided Engineering • Cloud & Web 2.0 • Bioscience • Oil and Gas • Weather
 Performance: lowest latency, highest throughput, highest message rate
 Scalability: highest application scalability through network accelerations
 Reliability: from silicon to system, highest signal/data integrity
 Efficiency: highest CPU/GPU availability through advanced offloading
Mellanox Connectivity Solutions
Thank You
[email protected]
Mellanox Scalable InfiniBand Solutions
 Mellanox InfiniBand solutions are Petascale-proven
• Connecting 4 of 7 worldwide Petascale systems
• Delivering highest scalability, performance, and robustness
• Advanced offloading/acceleration capabilities for MPI/SHMEM
• Efficient, congestion-free networking solutions
 Mellanox InfiniBand solutions enable flexible HPC
• Complete hardware offloads – transport, MPI
• Allows CPU interventions and PIO transactions
• Latency: ~1us ping pong; Bandwidth: 40Gb/s with QDR, 56Gb/s with FDR per port
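The ~1us figure above comes from a ping-pong microbenchmark: one rank sends a small message, the peer echoes it back, and half the average round-trip time is reported as one-way latency. A self-contained sketch of that measurement pattern over TCP loopback (illustrative only; loopback TCP reports tens of microseconds, far above RDMA hardware, but the methodology is the same):

```python
# Ping-pong latency microbenchmark over TCP loopback (methodology sketch).
import socket
import threading
import time

def _recv_exact(conn, n):
    """Receive exactly n bytes (TCP may deliver fewer per recv call)."""
    data = b""
    while len(data) < n:
        chunk = conn.recv(n - len(data))
        if not chunk:
            raise ConnectionError("peer closed")
        data += chunk
    return data

def _echo_server(srv, iters, size):
    conn, _ = srv.accept()
    with conn:
        for _ in range(iters):
            conn.sendall(_recv_exact(conn, size))  # pong each ping back

def pingpong_latency_us(iters=1000, size=8):
    """Return estimated one-way latency in microseconds."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    t = threading.Thread(target=_echo_server, args=(srv, iters, size))
    t.start()
    cli = socket.create_connection(srv.getsockname())
    cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # no batching
    msg = b"x" * size
    start = time.perf_counter()
    for _ in range(iters):
        cli.sendall(msg)            # ping
        _recv_exact(cli, size)      # pong: one full round trip
    elapsed = time.perf_counter() - start
    cli.close()
    t.join()
    srv.close()
    return elapsed / iters / 2 * 1e6  # half the mean RTT, in microseconds

print(f"one-way latency: {pingpong_latency_us():.1f} us")
```

Small messages and many iterations keep the measurement dominated by per-message latency rather than bandwidth.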
 Delivering advanced HPC technologies and solutions
• Fabric Collectives Acceleration (FCA) for MPI/SHMEM collectives offload
• GPUDirect for GPU accelerations
• Congestion control and adaptive routing
 Mellanox MPI optimizations
• Optimize and accelerate the InfiniBand channel interface
• Optimize resource management and resource utilization (HW, SW)
Mellanox Advanced InfiniBand Solutions
Host/Fabric Software Management
- UFM, Mellanox-OS
- Integration with job schedulers
- Inbox drivers
Application Accelerations
- Collectives accelerations (FCA/CORE-Direct)
- GPU accelerations (GPUDirect)
- MPI/SHMEM
- RDMA
- Quality of Service
Networking Efficiency/Scalability
- Adaptive routing
- Congestion management
- Traffic-aware routing (TARA)
Server and Storage High-Speed Connectivity
- Latency
- Bandwidth
- CPU utilization
- Message rate
Scalable Performance
Leading End-to-End Connectivity Solution Provider for
Servers and Storage Systems
[Diagram: Server/Compute – Virtual Protocol Interconnect (40G InfiniBand, 10/40GigE) – Switch/Gateway – Virtual Protocol Interconnect (40G IB & FCoIB, 10/40GigE & FCoE, Fibre Channel) – Storage Front/Back-End]
Industry’s Only End-to-End InfiniBand and Ethernet Portfolio
ICs • Adapter Cards • Host/Fabric Software • Switches/Gateways • Cables
