Emulex 16GFC for IBM Flex
Emulex I/O Solutions
Storage Trends 2014
Hilmar Beck, Senior Sales Engineer
Next Generation Data Center Trends
IT Trends Driving New Networking Requirements
• Enterprise Virtualization: virtualization and consolidation growth continues unabated and drives I/O bandwidth
• Cloud: cloud strategies and converged networks require technology and tools to measure quality of service and guarantee it
• Network Security: increasing predatory attacks on IT portals press the need for security solutions, and forensics requires capture solutions
• Application Delivery: application-specific performance demands are driving down latencies and the move to 10/40/100Gb converged fabrics
• Flash Storage: deployment of SSDs in storage arrays is pushing networking bandwidth and latency requirements
Emulex Confidential - © 2013 Emulex Corporation
What is Gen5 Fibre Channel?
Speed-based naming changed to generation-based naming
New advanced protocol services and features running at multiple speeds
– Not just a bandwidth improvement
– 16GFC, 8GFC, and 4GFC speeds
– Maintains backward compatibility with previous FC generations
Fibre Channel generations:
• Gen 1: 1Gb Fibre Channel
• Gen 2: 2Gb Fibre Channel
• Gen 3: 4Gb Fibre Channel
• Gen 4: 8Gb Fibre Channel
• Gen 5: 16Gb Fibre Channel
• Gen 6: 32Gb Fibre Channel

Gen 5 highlights: advanced features; multi-speed (16GFC/8GFC/4GFC)
Emulex ExpressLane™

SSD Latency Challenge
• Flash storage shares the SAN with traditional rotating media
• Mission-critical requests are stuck behind requests to slow storage
• Current queuing mechanisms are optimized for throughput, not latency

QoS Solution
• ExpressLane creates separate queues for low-latency storage, identified by LUN
• Individual queues are coalesced for latency, not overall bandwidth
• Queue associations are made from OneCommand Manager
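The per-LUN priority-queuing idea can be sketched as follows (a hypothetical illustration, not the actual Emulex driver implementation; the LUN number is invented):

```python
import heapq
import itertools

# Hypothetical sketch of per-LUN priority queuing: requests for LUNs
# marked as low-latency (flash) are dispatched ahead of requests to
# slower rotating media, while each priority class stays FIFO.
LOW_LATENCY_LUNS = {7}        # e.g. LUN 7 backs an SSD tier (assumed)

_counter = itertools.count()  # FIFO tie-breaker within a priority class
_queue = []

def submit(lun, request):
    priority = 0 if lun in LOW_LATENCY_LUNS else 1
    heapq.heappush(_queue, (priority, next(_counter), lun, request))

def dispatch():
    """Pop the next request; flash-LUN requests always win."""
    _, _, lun, request = heapq.heappop(_queue)
    return lun, request

submit(3, "bulk read")       # rotating media
submit(7, "critical read")   # flash LUN
print(dispatch())            # -> (7, 'critical read')
```

The real feature does this in the HBA firmware and driver, so the prioritization happens below the OS queuing layers rather than in application code.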
Emulex ExpressLane™
Congestion at the HBA (multiple VMs requesting I/O)
1. VM1 on Server 1 requests low-priority I/O from the disk array
2. VM2 requests high-priority I/O from the disk/flash array
3. Congestion occurs at the HBA: VM2 traffic is stuck behind slower VM1 traffic
4. ExpressLane prioritizes VM2 traffic over VM1
5. Provides Quality of Service: I/Os have consistent latency performance
Emulex CrossLink™

SSD Isolation Challenge
• Flash requires coordination between nodes
• Current solutions (TCP/IP and UDP) suffer from “roundabout” stack-hopping
• Latency and QoS issues over Ethernet hamper coordination with storage devices
• Trust issues (storage networks are deemed implicitly secure)
• Separate Ethernet connectivity requires additional wiring and management

SSD Coordination Solution
• In-band FC messaging solves latency and stack-hopping for cache or device coordination
• Uses the standard, proven FC-CT protocol for FC and FCoE
• Simple interface: kernel or API
Emulex CrossLink™
Example: Cache Prefill (Virtual Machine Migration)
1. VM migrates from Server A (source) to Server B (destination)
2. Server A also sends cache metadata via CrossLink
3. Caching software in Server B processes the metadata to create a “to do” list of prefill operations
4. Caching software issues standard read operations to the tiered array to load the cache
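The four-step prefill flow above can be sketched as plain functions (an illustrative outline only; real CrossLink carries the metadata as in-band FC-CT messages, and the field names and LUN/LBA values here are invented):

```python
# Hypothetical sketch of the CrossLink cache-prefill flow.

def source_cache_metadata():
    # Step 2: the source server exports which blocks were hot in its cache.
    return [{"lun": 4, "lba": 1024, "blocks": 8},
            {"lun": 4, "lba": 4096, "blocks": 16}]

def build_prefill_list(metadata):
    # Step 3: the destination turns the metadata into a "to do" list.
    return [(m["lun"], m["lba"], m["blocks"]) for m in metadata]

def prefill_cache(todo, read_fn):
    # Step 4: issue standard reads against the tiered array to warm the cache.
    return {item: read_fn(*item) for item in todo}

todo = build_prefill_list(source_cache_metadata())
cache = prefill_cache(todo, read_fn=lambda lun, lba, n: b"\x00" * (512 * n))
print(len(cache))  # -> 2
```

The point of the design is that only small metadata crosses the fabric out of band; the bulk data still arrives via ordinary reads that any array can serve.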
Emulex & Brocade ClearLink (D_Port) Support
• ClearLink is a rich SAN diagnostic mode on Brocade Gen 5 FC (16GFC) switches
• ClearLink is now supported by all XE201-based, host-based Emulex LightPulse Gen 5 FC HBAs (16GFC only)
• Identifies and isolates marginal link-level failures and performance issues: SFP, port, and cable
Emulex #1 Gen 5 HBAs + Brocade #1 Gen 5 switches, together providing superior SAN-wide diagnostics
ClearLink D_Port Saves Time & Money Troubleshooting
A cable is faulty, but how do you find it? There can be hundreds of cables and SFPs in the SAN.
• You could start replacing cables and SFPs one by one, using trial and error (wastes time)
• Or replace all cables and SFPs in the environment and try again (expensive and wastes time)
Emulex Gen 5 Fibre Channel
Advanced Features for Best Flash/Cache, VMs
• Emulex ExpressLane™ (Priority Queuing): Quality of Service (QoS) and performance to meet SLAs for latency-sensitive application data and flash/cache; maximizes ROI on flash/cache systems
• Emulex CrossLink™ (In-band Message-passing): significantly reduces latency and improves CPU utilization; alleviates congested networks; simplifies management
• ClearLink Enablement (Rich Diagnostic D_Port): reduces downtime; saves time and money troubleshooting problems; increases reliability
Industry-leading reliability across the family
Emulex Gen5 Accelerates Application Performance (versus 8GFC)
• Database applications: 41% faster workload completion for SQL Server (data warehousing workload)
• SQL Server: 75% more transactions per second (100-400 users)
• Virtualization/cloud: 33% better throughput with VMware ESXi
• Exchange workload: 3x IOPS vs. 8GFC
Ethernet Connectivity
Discrete Networking
– 3X cost, 3X management, 3X cables, 3X switching
Converged Networking
– FCoE driving 10GbE
– Virtual networking
– Led by blade servers
– Telco and web giants
Software-Defined Convergence
– RDMA over Converged Ethernet (RoCE)
– Application acceleration
– Virtual I/O (OVN, SR-IOV)
– Driving 40 and 100GbE
– Cloud, HPC, Big Data
Unique “switch agnostic” positioning
Performance - What is New with the OCe14000?
Themes: Cloud, Big Data & SDN; VNeX Virtualization; Web-scale Performance; Operational Efficiency
• 70% faster hybrid cloud
• Hybrid cloud NVGRE/VXLAN
• Secure multi-tenant clouds
• SDN & workload optimization
• 50% better CPU efficiency
• 4X small-packet performance
• 50% better IOPS
• Lowest CPU utilization
• Save up to 50W per server
• Highest bandwidth & IOPS per Watt
High Performance Networking
Skyhawk RoCE

What is Remote Direct Memory Access (RDMA)?
• The ability to access memory remotely, enabling server-to-server data movement directly between application memory spaces without any CPU involvement

What is RDMA over Converged Ethernet (RoCE)?
• A mechanism to provide this efficient data transfer with very low latencies on Ethernet
• Essentially InfiniBand transport over Ethernet (with PFC, etc.); classic Ethernet is a best-effort protocol
• RoCE is IB over existing network infrastructure

Benefits of Skyhawk-R with RoCE
• Delivers low latency for performance-critical and transaction-intensive applications
• Better OPEX vs. InfiniBand infrastructure, which requires a unique fabric that is difficult to deploy and manage
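As a rough feel for “direct memory placement,” shared memory between two local processes makes a useful analogy (an analogy only: real RoCE uses verbs libraries such as libibverbs over the network, not this API, and the segment name is invented):

```python
# Illustrative analogy: RDMA places data directly into a remote
# application's registered memory, bypassing the remote CPU and the
# TCP/IP stack. Here, two handles to one shared segment mimic that.
from multiprocessing import shared_memory

# "Registered" buffer on the receiver side.
buf = shared_memory.SharedMemory(create=True, size=16, name="rdma_demo")

# "RDMA write": the sender writes straight into the receiver's buffer;
# no receive-side code runs to accept the data.
peer = shared_memory.SharedMemory(name="rdma_demo")
peer.buf[:5] = b"hello"

# The receiver simply finds the data already in its own memory.
print(bytes(buf.buf[:5]))  # -> b'hello'

peer.close()
buf.close()
buf.unlink()
```

The key property the analogy captures is zero receive-side CPU work: no socket read, no kernel buffer copy, no protocol processing on the target.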
Enterprise Cloud Needs RDMA
[Chart: RDMA read efficiency gains vs. TCP/IP, ranging from 0% up to roughly 1200%, across block sizes 512B, 1K, 2K, 32K, 64K, 128K, 256K, 512K, and 1M.]
Internal testing: Emulex OCe14000 using SMB 3.0 on Windows Server 2012 R2
Importance of File Transfer Performance
Our pockets are generating Big Data
– Growing amount of digital content on mobile devices
RDMA delivers 77% faster transfers
[Chart: transfer time for a 12GB file, RDMA vs. TCP/IP, on a 0-35 scale.]
Internal testing: Emulex OCe14000 using OFED 3.5.2
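A back-of-envelope check of what “77% faster” implies for a 12GB file (the 30-second TCP/IP baseline below is a hypothetical value chosen for illustration, not a figure from the chart):

```python
# "77% faster" is read here as 1.77x the throughput, so the transfer
# time shrinks by the same factor. Baseline time is assumed, not measured.
FILE_GB = 12
tcp_seconds = 30.0                     # hypothetical TCP/IP baseline
rdma_seconds = tcp_seconds / 1.77      # 1.77x throughput => 1/1.77 the time

tcp_throughput = FILE_GB / tcp_seconds       # GB/s
rdma_throughput = FILE_GB / rdma_seconds     # GB/s

print(round(rdma_seconds, 1))                      # -> 16.9
print(round(rdma_throughput / tcp_throughput, 2))  # -> 1.77
```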
April 2-3, 2014 | #2014IBUG
Move to Software-Defined Convergence
• Multi-Fabric Block I/O (FC, FCoE & iSCSI): 6% CAGR through 2016*
• Software-Defined Convergence: 25% 10/40G CAGR through 2016*
Sources: *Crehan Research: Server-class Adapter and LOM Controller Long-range Forecast, July 2013; **Dell’Oro Group: Fibre Channel Adapter Vendor Report 2Q2013, Aug. 2013
Emulex RoCE Offerings
• XE100 Series 10/40GbE Network Controller
• OCe14101/2 10GbE SFP+ Ethernet Adapters
• OCe14401 40GbE QSFP+ Ethernet Adapters
Where to use RDMA
Initial applications:
– Windows Server 2012 SMB Direct
• Which implies SQL Server, Hyper-V VM migration, etc.
– Linux NFS/RDMA
Microsoft will actively market Windows Server 2012 R2 and SMB Direct
– Better CPU efficiency, faster VM migrations, etc.
– Convince end users they need, or might want, the option
Others will be added in the future
– Based on OEM, partner, and end-user requests and an appropriate business case
RoCE in Action
Without RoCE (standard 10GbE adapter): outgoing and incoming data traverse the full stack (application, user buffer, TCP/UDP, IP, adapter), adding latency at each layer.
With RoCE (RDMA-enabled 10GbE adapter): the TCP, UDP, and IP layers are bypassed, and data moves directly between the user buffer and the adapter.
Thank You!
