Selecting the Correct Hypervisor
Boston Virtualization Deep Dive Day 2011
Tim Mackey, XenServer Evangelist
What to Expect Today ….
• Balanced representation of each hypervisor
• Where the sweet spots are for each vendor
• No discussion of performance
• No discussion of ROI and TCO
• What you should be thinking of with cloud
The Land Before Time …
• Virtualization meant mainframe/mini
• x86 was “real mode”
• Until 1986, when the 80386DX changed the world
• Now “protected mode” and rings of execution (typically ring 0 and ring 3)
• Real mode OS vs. Protected mode
• x86 always boots to real mode (even today)
• The kernel takes control at power-on and enables the protection model
• Early kernels performed poorly in protected mode
• Focus was on application virtualization, not OS virtualization
VMware Creates Mainstream x86 Virtualization
• Early 2001 ESX released as first type-1 for x86
• ESX uses an emulation model known as “binary translation” to trap privileged operations and execute them safely in the VMkernel (a toy sketch follows this list)
• Heavily tuned over years of experience
• Leverages 80386 protection rings and exception handlers
• Can result in FASTER code execution
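To make the mechanism concrete, here is a deliberately toy Python sketch of the rewriting idea, not VMware's actual translator: scan guest code for a privileged opcode and substitute a byte that traps into a hypervisor handler.

# Toy illustration of binary translation, not VMware's implementation:
# scan "guest code" for a privileged opcode and rewrite it so execution
# traps into a hypervisor handler instead of running on the bare CPU.

CLI = 0xFA        # x86 CLI (clear interrupt flag), a privileged instruction
TRAP = 0xCC       # stand-in byte representing "call the VMkernel handler"

def translate(code: bytes) -> bytes:
    """Rewrite privileged opcodes so the hypervisor regains control."""
    return bytes(TRAP if b == CLI else b for b in code)

guest = bytes([0x90, 0xFA, 0x90])      # NOP, CLI, NOP
print(translate(guest).hex())          # "90cc90" -- the CLI now traps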
Enter Hardware Assist
• 2005-2006 Intel and AMD introduce hardware assist
• The idea was to take non-trappable privileged CPU opcodes and isolate them (a host-detection sketch follows this list)
• Introduced “user mode” and “kernel mode”
• Introduced “Ring -1”
• Binary translation could still be faster
• 2008-2009 Intel and AMD introduce memory assist
• CPU opcode assist only addressed part of the problem
• Memory paging seen as key to future performance
• Hardware + Moore’s Law > Software + Tuning
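A quick way to see whether a host has these assists is to inspect the CPU flags Linux exposes; a minimal sketch using the standard Linux flag names (vmx/svm for CPU assist, ept/npt for the memory assist):

# Probe /proc/cpuinfo for hardware-assist flags on a Linux host:
# "vmx" = Intel VT-x, "svm" = AMD-V (CPU assist, 2005-2006);
# "ept"/"npt" = second-level page tables (memory assist, 2008-2009).

def cpu_flags() -> set:
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
print("CPU assist:   ", bool(flags & {"vmx", "svm"}))
print("Memory assist:", bool(flags & {"ept", "npt"}))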
What About IO?
• Shared IO bottlenecks
• VM density magnifies the problem
• Throughput demands impact peer VMs
• Enter SR-IOV in 2010
• Hardware is virtualized in hardware
• A Virtual Function is presented to the guest (see the enumeration sketch below)
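On a Linux host new enough to expose the standard sysfs attributes, SR-IOV capability and the Virtual Function count can be read directly; a minimal sketch:

# List SR-IOV-capable PCI devices via sysfs (sriov_totalvfs is a
# standard kernel attribute; assumes a kernel recent enough to expose it).
import glob, os

for dev in glob.glob("/sys/bus/pci/devices/*"):
    path = os.path.join(dev, "sriov_totalvfs")
    if os.path.exists(path):
        with open(path) as f:
            total = f.read().strip()
        print(f"{os.path.basename(dev)}: up to {total} Virtual Functions")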
The Core Architectures
vSphere Hypervisor
• ESX
• VMkernel provides hypervisor
• Service console is for management
• IO is managed through emulated devices
• ESX is EOL; long live ESXi
• Service console is gone
• Management via API/CLI
• VMkernel now includes management,
agents and support consoles
• Security vastly improved over ESX
XenServer
• Based on Open Source Xen
• Requires hardware assist
• Management through a Linux control domain (dom0)
• IO managed using split drivers
Hyper-V
• Requires hardware assist
• Management through the Windows 2008 “parent partition”
• VMs run as child partitions
• Linux enabled using “Xenified” kernels
• IO is managed through the parent partition and enlightened drivers
KVM
• Requires hardware assist
• KVM modules part of Linux kernel
• Converts Linux into a type-1 hypervisor
• Each VM is a process
• Defined as “guest mode”
• IO managed via Linux and VirtIO (a minimal /dev/kvm probe follows this list)
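Because KVM lives in the kernel, the whole hypervisor is reachable through a device node; a minimal sketch, assuming the kvm modules are loaded and the user may open /dev/kvm:

# Probe the in-kernel hypervisor: KVM_GET_API_VERSION (0xAE00 in
# <linux/kvm.h>) asks /dev/kvm for its API version; stable kernels
# return 12. Each VM created from this fd runs as an ordinary process.
import fcntl, os

KVM_GET_API_VERSION = 0xAE00

fd = os.open("/dev/kvm", os.O_RDWR)
try:
    print("KVM API version:", fcntl.ioctl(fd, KVM_GET_API_VERSION))
finally:
    os.close(fd)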
Commercial Free Contenders for Your Budget
VMware vSphere Hypervisor (ESXi)
• Manageability: single-server management via the vSphere client
• Scalability: 256 GB host RAM; 2 physical cores
• Key features: thin provisioning
• Guest support: very broad OS support
• Costs: edition- and feature-based licensing; support priced as a percentage of sale
Microsoft Hyper-V Server R2 SP1
• Manageability: single-server management via Remote Server Admin Tools
• Scalability: 1 TB host RAM; 8 logical CPUs per host
• Key features: host clustering; live migration
• Guest support: Windows Vista and Windows Server 2003 and higher; RHEL 5.2 and SLES 10 and higher
• Costs: edition- and VM-based pricing; support and SA extra
Red Hat Enterprise Virtualization (KVM)
• Manageability: centralized multi-server management; resource pools
• Scalability: 1 TB host RAM, 256 GB guest RAM; 96 logical CPUs per host, 16 vCPUs per guest
• Key features: all RHEL 5 devices and storage types; memory overcommit (KSM)
• Guest support: Windows XP and Windows Server 2003 and higher; RHEL 3 and higher
• Costs: annual support options priced per six sockets
Oracle VM
• Manageability: centralized multi-server management; resource pools
• Scalability: 1 TB host RAM, 32 GB guest RAM; 128 logical CPUs per host, 32 vCPUs per guest
• Key features: secure live migration using shared storage (NFS, OCFS2, iSCSI); load balancing and cluster high availability
• Guest support: Windows 2000 and higher; Oracle Linux, RHEL
• Costs: annual per-host support options priced per socket
Citrix XenServer
• Manageability: centralized multi-server management; resource pools
• Scalability: 512 GB host RAM, 128 GB guest RAM; 64 logical CPUs per host, 16 vCPUs per guest
• Key features: live migration using shared storage (NFS, iSCSI, Fibre Channel); VM snapshot and revert
• Guest support: Windows XP and higher; CentOS, Debian, Oracle, SuSE, RHEL
• Costs: edition-based per-host licensing; support is incident-based
The hypervisor is now a commodity!
Maximizing Your Budget
• A single-hypervisor model is flawed
• Wasted dollars, wasted performance
• Spend your resources where you need to
• OS compatibility
• VM density
• IO performance
• Application support models
• Application availability
Deconstructing Key Functionality
Memory Over Commit
• Objective: Increase VM density and efficiently use host RAM
• Risks: Performance and Security
• Options: ballooning, page sharing, compression, swap (a page-sharing sketch follows the comparison below)
vSphere 4.1
• Ballooning: starts large; Windows and Linux
• Page sharing: 4K pages only, with hash; latent coalesce with copy-on-write
• Compression: compresses memory during oversubscription
• Performance/Security: hash collisions; recovery from swap; compatible-page scans
XenServer 5.6
• Ballooning: starts large; Windows and Linux
• Page sharing: none
• Compression: none
• Performance/Security: doesn’t resize up
Hyper-V SP1
• Ballooning: starts small; Windows only
• Page sharing: none
• Compression: none
• Performance/Security: memory space growth
RHEV (KVM)
• Ballooning: Linux only
• Page sharing: Kernel Samepage Merging; copy-on-write
• Compression: none
• Performance/Security: B-tree collisions; can use swap
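To ground the page-sharing rows above, here is a toy Python sketch of the content-hashing idea behind KSM and transparent page sharing; real implementations add copy-on-write and must handle the hash collisions listed as a risk:

# Toy model of page sharing: hash each 4K page, keep one copy of
# identical pages, and report the memory that sharing could reclaim.
from collections import defaultdict
import hashlib, os

PAGE = 4096

def reclaimable_bytes(memory: bytes) -> int:
    counts = defaultdict(int)
    for off in range(0, len(memory), PAGE):
        counts[hashlib.sha256(memory[off:off + PAGE]).digest()] += 1
    return sum(n - 1 for n in counts.values()) * PAGE

mem = bytes(PAGE) * 3 + os.urandom(PAGE)   # three zero pages + one random
print(reclaimable_bytes(mem))              # 8192 -- two duplicates reclaimed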
Load Balancing
• Objective: Ensure optimal performance of guests and hosts
• Risks: Performance and Security
• Options: input metrics, reporting, variable usage models (a toy placement pass follows the comparison below)
vSphere 4.1
• Feature name: Dynamic Resource Scheduling
• Input metrics: CPU; memory
• Reporting: none
• Control points: host affinity/anti-affinity; initial placement 100%
XenServer 5.6
• Feature name: Workload Balancing
• Input metrics: CPU; memory; disk IO R/W; network IO R/W
• Reporting: pool/host; VM; audit; consolidation
• Control points: schedulable; historical placement
Hyper-V R2
• Feature name: PRO (SCVMM)
• Input metrics: CPU; memory
• Reporting: SCVMM + SCOM
• Control points: initial placement 100%
RHEV (KVM)
• Feature name: Load Balancing
• Input metrics: none
• Reporting: none
• Control points: N/A
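For a feel of what the initial-placement control point does, here is a toy Python pass that scores hosts on CPU and memory headroom; the hosts, numbers, and weights are illustrative, not any vendor's algorithm:

# Toy initial placement: pick the host with the most weighted headroom
# for a new VM. Illustrative only -- real schedulers (DRS, WLB, PRO)
# use richer metrics, history, and affinity rules.
hosts = {
    "host-a": {"cpu_free": 0.40, "mem_free_gb": 12},
    "host-b": {"cpu_free": 0.15, "mem_free_gb": 48},
}

def place(vm_mem_gb: float, cpu_weight: float = 0.5) -> str:
    fits = {h: m for h, m in hosts.items() if m["mem_free_gb"] >= vm_mem_gb}
    return max(fits, key=lambda h: cpu_weight * fits[h]["cpu_free"]
                                 + (1 - cpu_weight) * fits[h]["mem_free_gb"] / 64)

print(place(vm_mem_gb=8))   # host-b: less CPU headroom, far more free memory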
Virtual Networking
• Objective: Support data center and cloud networking
• Risks: Data leakage and performance
• Requirement: make server virtualization compatible with networking (a Linux-bridge sketch follows the comparison below)
vSphere 4.1
• Feature name: Virtual Distributed Switch
• Key features: centralized management; full Cisco Nexus features
• Reporting: NetFlow v9
• Dependencies: Cisco Nexus 1000V
XenServer 5.6 FP1
• Feature name: Distributed Virtual Switch
• Key features: centralized management; RSPAN; QoS; ACLs
• Reporting: NetFlow v5
• Dependencies: none
Hyper-V R2
• Feature name: Windows network stack
• Key features: N/A
• Reporting: N/A
• Dependencies: N/A
RHEV (KVM)
• Feature name: Linux bridge
• Key features: N/A
• Reporting: N/A
• Dependencies: N/A
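For the KVM row, the Linux bridge is plain kernel networking; a sketch of wiring it up with the pyroute2 netlink library (my choice for illustration; brctl or the ip command do the same), with hypothetical interface names and root privileges assumed:

# Sketch of KVM's Linux-bridge model: create a bridge and enslave a
# physical NIC to it. Interface names are hypothetical; run as root.
from pyroute2 import IPRoute

ip = IPRoute()
ip.link("add", ifname="br0", kind="bridge")    # create the bridge
br = ip.link_lookup(ifname="br0")[0]
eth = ip.link_lookup(ifname="eth0")[0]
ip.link("set", index=eth, master=br)           # attach eth0 to br0
ip.link("set", index=br, state="up")           # bring the bridge up
ip.close()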
The Sweet Spots
VMware vSphere 4.1
Key play: Legacy server virtualization
• Large operating system support
• Large ecosystem => experienced talent readily available
Bonus opportunities
• Feature rich data center requirements
• Cloud consolidation through Cisco Nexus 1000V
Weaknesses
• Complex licensing model
• Reliance on SQL Server management database
Microsoft Hyper-V R2 SP1
Key play: Desktop virtualization
• VM density is key
• Memory over commit + deep understanding of Windows 7 => success
Bonus opportunities
• Microsoft Server software
• Ease of management for System Center customers
Weaknesses
• Complex desktop virtualization licensing model
• Complex setup at scale
• “Patch Tuesday” reputation
Red Hat KVM
Key plays: Linux virtualization
• RHEL data centers
Weaknesses
• Limited enterprise level feature set
• Niche deployments and early adopter syndrome
• Support only model may limit feature set
Oracle VM
Key play: Hosted Oracle Applications
• Oracle only supports its products on OVM
Bonus opportunities
• Server virtualization
• Applications requiring application level high availability
• Data centers requiring secure VM motion
Weaknesses
• Limited penetration outside of Oracle application suite
• Support only model may limit future development
Citrix XenServer 5.6 FP1
Key play: Cloud platforms
• Largest public cloud deployments
Bonus opportunities
• Citrix infrastructure
• Linux data centers
• General purpose virtualization
• Windows XP/Vista desktop virtualization
Weaknesses
• Application support statements
• HCL gaps
Beyond the Data Center and into the Cloud
Hybrid Cloud
Traditional Datacenter
• On premise
• High fixed cost
• Full control
• Known security
Hybrid Cloud
• On/off premise
• Low utility cost
• Self-service
• Fully elastic
• Trusted security
• Corporate control
Public Cloud
• Off premise
• Low utility cost
• Self-service
• Fully elastic
Transparency is a Key Requirement
Traditional Datacenter
• On premise
• High fixed cost
• Full control
• Known security
Hybrid Cloud
• On/off premise
• Low utility cost
• Self-service
• Fully elastic
• Trusted security
• Corporate control
Issues
• Disparate networks
• Disjoint user experience
• Unpredictable SLAs
• Different locations
Public Cloud
• Off premise
• Low utility cost
• Self-service
• Fully elastic
Enabling Transparency Enables Hybrid Cloud
Traditional
Datacenter
Cloud
Provider
OpenCloud Bridge
• Network transparency for Disparate Networks
• Latency transparency to preserve the same User Experience
• Services transparency to make SLAs predictable
• Location transparency to allow Anywhere Access
OpenCloud Bridge Use-Case
[Diagram: a premise datacenter and a cloud provider, each running a hypervisor with a vSwitch, a physical switch, and storage, bridged at the private/public boundary by NetScaler VPX appliances. A workload at IP 192.168.1.100 (subnet 255.255.254.0) requires DB, Web, and LDAP services; the LDAP and DB servers remain on the premise network 10.2.1.0 (subnet 255.255.254.0).]
It’s Your Budget … Spend it Wisely
Single vendor
• Vendor lock-in is great for the vendor
• Beware product lifecycles and tool-set changes
ROI can be manipulated
• ROI calculators always show the vendor author as best
• Use your own numbers
Understand the support model
• Over-buying is costly; get what you need
• Support-call priority varies with tiered models
Use the correct tool
• Some projects have requirements best suited to a specific tool
• Understand the deployment and licensing impact
Leverage costly features as required
• Blanket purchases benefit only the vendor
• Charge feature requirements back to the project
Shameless XenServer Plug
• Social Media
• Twitter: @XenServerArmy
• Facebook: http://www.facebook.com/CitrixXenServer
• LinkedIn: http://www.linkedin.com/groups?mostPopular=&gid=3231138
• Major Events
• XenServer Master Class – next edition March 23rd
• Citrix Synergy – San Francisco May 25-27 2011 (http://citrixsynergy.com)
