
Report

Innovative Storage Technology for Business Continuity

Andrey Pilipenko
Senior Solutions Consultant
November 2014
Customer Challenges and Market Dynamics

Customer Challenges

• Site Failure
  ‒ Catastrophic events (e.g., an airplane crash taking down a data center) and the recovery issues that follow
• Business Challenge
  ‒ Non-stop operation
  ‒ Reaching zero RPO and RTO
• Limited Failure Scenarios
• Gaining Connectivity from the Surviving Cluster

BUSINESS CONTINUITY
Market Dynamics

• A game changer for customers needing active-active compute and storage configurations
• Global-active device allows customers to move to an active-active, fully fault-tolerant storage configuration
  ‒ All that is required is global-active device, multipathing software, and VSP G1000 systems (not counting the quorum)
  ‒ Enterprise-class customers with the strictest IT availability requirements will gain from global-active device when failure is not an option
New HDS Storage: Main Directions

• NONDISRUPTIVE MIGRATION ‒ seamless identity absorption and transfer from the old systems to the new one; 3X performance
• INTEGRATED ACTIVE-ACTIVE (Site A / Site B) ‒ continuous availability, virtual storage machine agility, workload mobility, seamless availability
• UNIFIED MANAGEMENT ‒ extensible, self-managing, policy-driven, streamlined; app-level SLA and business customization; zero downtime; flexible mobility; unified file and block
Continuous Cloud Infrastructure Foundation: Powered by Hitachi

Storage Virtualization Operating System
Virtual Storage Platform G1000 ‒ Redefining Mission-Critical Storage Virtualization
Command Suite V8
Hardware Product Packaging

Unified Block and File Configurations
• File modules: add up to 8 nodes to address the customer's file-sharing needs
• Up to 2 controller chassis

Block-Only Configurations
• Entry, Small, Medium, Large, and Extra-Large models
• Flash-only configuration
• Up to 6 racks
• Same upgrade flexibility options as with VSP

Mainframe-Only Package

All configurations run the Storage Virtualization Operating System.
VSP G1000: Block Storage Specifications

| | VSP G1000 Single Chassis | VSP G1000 Dual Chassis | VSP |
|---|---|---|---|
| Maximum flash devices | 96 FMD / 192 SSD | 192 FMD / 384 SSD | 192 FMD / 256 SSD |
| Maximum internal drives | 1152 2.5-inch disks; 576 3.5-inch disks | 2304 2.5-inch disks (2.7PB max); 1152 3.5-inch disks (4.5PB max) | 2048 2.5-inch disks (2.4PB max); 1280 3.5-inch disks (5.0PB max) |
| Maximum bandwidth, data path | 384GB/second | 768GB/second | 128GB/second |
| Maximum bandwidth, control path | 64GB/second | 128GB/second | 64GB/second |
| Maximum Virtual Storage Director pairs and total cores | 4 VSD pairs, 64 cores | 8 VSD pairs, 128 cores | 4 VSD pairs, 32 cores |
| Maximum host ports (if no BEDs) | 64 (96) Fibre Channel; 64 (80) IBM® FICON®; 80 FCoE* | 128 (192) Fibre Channel; 128 (176) FICON; 176 FCoE* | 192 Fibre Channel; 176 FICON; 88 FCoE |
| Maximum cache memory | 1TB | 2TB | 1TB |
| Power comparison: dual controller, 4 VSD pairs, 1TB cache, 2 BEDs, 2048 × 900GB drives, 160 Fibre Channel ports | ‒ | 30.1 kVA | 34.0 kVA |
| Max local copy pairs | 32K | 32K | 16K |
| Max remote copy pairs | 64K | 64K | 32K |
| LUNs/LDEVs | 64K | 64K | 64K |
Simplified Data Center Planning

AGILE ‒ Deploy to Fit Data Center Requirements
• Increase floor-space efficiency
• Eliminate data center hotspots
• Flexibly scale performance and capacity

Traditional system layout → Today: VSP G1000 flexible deployment (separate storage controllers; 5M, 30M, and 100M options) → Coming soon: ultimate deployment flexibility (separate controller racks and disk racks)
New Storage Functionality

Virtual Storage Machine
CONTINUOUS ‒ Distributed Multi-Tenancy

Before (VSP): each physical system (System 1, System 2, System 3) holds its own resource group.
Now (VSP G1000): a single virtual storage machine spans Systems 1-3.

A virtual disk controller that enables multi-tenancy and collaboration of distributed resource groups across physical storage system boundaries.
N-Way Scale-Out

EXTENSIBLE ‒ Dynamic and Intelligent

• Growth: add a new system (System 3) to expand an existing virtual storage machine spanning Systems 1 and 2
• Load balancing: redistribute workload between Systems 1 and 2 within the virtual storage machine
• Data migration: move data between Systems 1 and 2 without leaving the virtual storage machine
Non-Disruptive Migration

CONTINUOUS ‒ Maintain application and data copy relationships

Data migrates from System 1 (source) to System 2 (target), with simplex and paired devices on both sides remaining inside one virtual storage machine.

• Supports LDEV migration while replication relationships remain in place
• Fault protection in the event of a failure of the target storage during migration
• Supports SCSI-3 reservations
• Available to NDA customers/partners (not GSS)
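The key property above can be shown with a toy model (not the HDS implementation): data is copied to the target system while the volume keeps its host-visible virtual LDEV, so the application never sees the device change. All names here are illustrative.

```python
# Toy model of nondisruptive migration: the host addresses the volume by
# its virtual LDEV, which stays fixed while the physical location moves.

class System:
    def __init__(self, name):
        self.name, self.ldevs = name, {}

def migrate(source, target, virtual_ldev):
    """Copy the volume to the target, then serve it from there."""
    target.ldevs[virtual_ldev] = source.ldevs[virtual_ldev]  # background copy
    del source.ldevs[virtual_ldev]    # source side retires the copy
    return target                     # virtual LDEV (identity) is unchanged

src, dst = System("System 1 (source)"), System("System 2 (target)")
src.ldevs["00:01"] = b"application data"
migrate(src, dst, "00:01")
assert dst.ldevs["00:01"] == b"application data"
assert "00:01" not in src.ldevs
```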
Hitachi Innovation: Global Storage Virtualization

Global Storage Virtualization
• Create virtual storage machines
• Abstract addresses from physical storage controllers
• Leverage global-active device functionality
• Deliver shared read-write volumes

Example (global-active device): VSP G1000 S/N 12345 (Resource Group 1, LDEVs 10:00, 10:01, 10:02) and VSP G1000 S/N 23456 (Resource Group 2, LDEVs 20:00, 20:01, 20:02) both present Virtual Storage Identity 12345, with virtual LDEVs 00:01 and 00:02 ‒ Hitachi VSP G1000 with Storage Virtualization Operating System.
What Is Global-Active Device?

• A synchronous remote copy of a data volume
• Active/active configuration
• In a campus-distance configuration, a host is connected to both storage systems
• In a metro-distance configuration, a host has a primary path and a non-preferred path to ensure the shortest path to the data is always used
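The metro-distance behavior above can be sketched as path selection logic. This is an illustrative model, not the actual HDS multipath software: the driver prefers the local (shortest) path and falls back to the non-preferred remote path only when local paths fail.

```python
# Sketch of preferred/non-preferred path selection in a metro-distance
# active-active setup. Path names are hypothetical.
from dataclasses import dataclass

@dataclass
class Path:
    name: str
    preferred: bool     # local, shortest path vs. non-preferred remote path
    healthy: bool = True

def select_path(paths):
    """Return a healthy preferred path if any, else a healthy fallback."""
    for want_preferred in (True, False):
        for p in paths:
            if p.healthy and p.preferred == want_preferred:
                return p
    raise RuntimeError("no healthy path to the volume")

paths = [Path("local->primary", preferred=True),
         Path("wan->secondary", preferred=False)]
assert select_path(paths).name == "local->primary"   # shortest path wins
paths[0].healthy = False                             # local path failure
assert select_path(paths).name == "wan->secondary"   # fall back, I/O continues
```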
What Is Global-Active Device? (cont.)

• The host sees a single volume on a single storage system
  ‒ The secondary storage system has a virtual storage machine (VSM) that looks like the primary storage system
  ‒ The primary and secondary volumes are assigned the same virtual LDEV number in the VSM
  ‒ The pair volumes are seen by the host as a single volume on a single storage system
  ‒ Both volumes receive the same data from the host
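A hypothetical model of the VSM idea described above: the host identifies a volume by (virtual serial number, virtual LDEV), and both physical systems present the same virtual identity, so the two pair volumes look like one volume on one array. The serial numbers and LDEV IDs below follow the example earlier in the deck but are otherwise illustrative.

```python
# Model of virtual storage machine (VSM) identity mapping: two physical
# LDEVs on two arrays map to one host-visible virtual volume.
VSM_SERIAL = "12345"

physical_ldevs = {
    # (physical serial, physical LDEV) -> (virtual serial, virtual LDEV)
    ("12345", "10:01"): (VSM_SERIAL, "00:01"),  # primary storage system
    ("23456", "20:01"): (VSM_SERIAL, "00:01"),  # secondary system's VSM
}

def host_view(mapping):
    """What host inquiry sees: the set of distinct virtual volumes."""
    return set(mapping.values())

# Two physical volumes exist, but the host sees exactly one volume.
assert len(physical_ldevs) == 2
assert host_view(physical_ldevs) == {("12345", "00:01")}
```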
Global-Active Device

Global-active device achieves concurrent reads and updates of mirrored volumes while keeping data consistency.

• Main DKC / Reserve DKC
  ‒ HA mirrored volumes for production servers
  ‒ Mirrored volumes accept read/write I/O on both sides
  ‒ Requires two G1000 storage systems
• Quorum DKC
  ‒ The quorum disk determines the HA owner node in the case of a failure
  ‒ Any storage system can be used as long as it is supported by UVM
• Production Servers
  ‒ Clustered app/DBMS (Prod. Server-1 and Prod. Server-2, both active)
  ‒ Active-active processing sharing the same data
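The HA mirroring above can be sketched as a synchronous write path. This is a simplified model of the assumed semantics, not the actual firmware: a write is acknowledged to the host only after both the main and reserve DKC have applied it, which is what keeps RPO at zero.

```python
# Sketch of synchronous HA mirroring between a main and a reserve DKC.
class Dkc:
    def __init__(self, name):
        self.name, self.blocks = name, {}
    def write(self, lba, data):
        self.blocks[lba] = data

def mirrored_write(main, reserve, lba, data):
    main.write(lba, data)      # apply on the main DKC
    reserve.write(lba, data)   # and on the reserve DKC before the ack
    return "ack"               # host ack implies both copies are current

main, reserve = Dkc("main"), Dkc("reserve")
assert mirrored_write(main, reserve, 0, b"payload") == "ack"
assert main.blocks[0] == reserve.blocks[0] == b"payload"
```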
Global-Active Device with Clustering Management

HCS offers efficient management of global-active device while providing central control of multiple DKCs.

• Storage Management Server
  ‒ A clustered HCS server runs at each site (active at the local site, passive at the remote site); the local HCS server handles HA management
  ‒ In the case of a local site failure, the remote HCS server takes over HA management
  ‒ The HCS DB should be replicated with either TC or HA
• Pair Management Servers
  ‒ Run HDvM Agent/CCI
  ‒ HCS management requests configure and operate the HA mirrors via the command device
• Production Servers
  ‒ Prod. Server-1 and Prod. Server-2 (both active) run the clustered app/DBMS with HDvM Agent and CCI
  ‒ HA mirroring runs between the main DKC (MDKC) and reserve DKC (RDKC), with the quorum DKC (QRM) as arbiter
GAD Failure Cases

• Failure cases and failback procedures
  ‒ Single path failure
  ‒ Primary storage system failure
  ‒ All active paths to the P-VOL fail for a single host in the cluster
  ‒ Quorum disk failure
  ‒ Storage replication link failure
  ‒ WAN storage connection failure
  ‒ Primary site failure
  ‒ Secondary site failure
(Diagram: Prod. Server-1 and Prod. Server-2 attached to the primary and secondary storage systems, with the quorum between them.)
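For the replication-link failure case above, the quorum's role can be sketched as an arbitration rule. This is an illustrative model, not the actual GAD algorithm: when the link drops, the side the quorum designates as owner keeps serving I/O and the other side suspends, avoiding split-brain.

```python
# Sketch of quorum arbitration after a storage replication link failure.
def arbitrate(quorum_winner, link_up, side):
    """Return True if `side` ('primary'/'secondary') may keep serving I/O."""
    if link_up:
        return True                  # normal operation: both sides active
    return side == quorum_winner     # link down: only the quorum winner survives

assert arbitrate("primary", link_up=True, side="secondary") is True
assert arbitrate("primary", link_up=False, side="primary") is True
assert arbitrate("primary", link_up=False, side="secondary") is False
```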
System Configuration ‒ Planning the Data Center Site Configuration

3 Data Centers
• Each storage system is located at a separate site (local site: primary storage system; remote site: secondary storage system; quorum site: quorum)
• Provides the maximum level of business continuity for any type of storage system failure or site failure (local site, remote site, or quorum site)

2 Data Centers
• The primary storage system and the quorum are located at the local site; the secondary storage system is located at the remote site
• Provides a moderate level of business continuity for any type of storage system failure, or a remote site failure

A Single Data Center
• All the storage systems are located at the same site
• Provides business continuity for storage system failures, but cannot continue the business through a site failure
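The three layouts above can be summarized as a small coverage table. This is an assumed simplification of the slide's wording, capturing which failure types each layout rides through.

```python
# Which failures each site layout survives, per the descriptions above.
COVERAGE = {
    "3 data centers": {"storage failure", "primary site failure",
                       "remote site failure", "quorum site failure"},
    "2 data centers": {"storage failure", "remote site failure"},
    "1 data center":  {"storage failure"},
}

def survives(layout, failure):
    return failure in COVERAGE[layout]

assert survives("3 data centers", "primary site failure")
assert not survives("2 data centers", "primary site failure")  # quorum is local
assert not survives("1 data center", "remote site failure")    # single site
```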
New Management Functionality

Hitachi Command Suite V8
Operational Efficiency Through Simplified, Integrated Management

STREAMLINED ‒ Management efficiency for the modern data center
• Hitachi VSP G1000 support
• HNAS support enhancements
• Multipath management enhancements
• 64-bit architecture

GLOBAL STORAGE VIRTUALIZATION REALIZED
• Create virtual storage machines
• Global-active device
Setting Up GAD ‒ the "Set up global-active device" Dialog

• Launching "Set up global-active device"
  There are two launch points that invoke the landing page:
  ‒ General Tasks in the Replication tab
  ‒ A shortcut in the [Actions] menu
Global-Active Device Implementation Service

Global-Active Device Implementation Service from Hitachi Data Systems ‒ Scope of the Service

• Implementation based on the global-active device configuration:
  ‒ Consulting service to arrive at the optimum solution design
  ‒ Integrate the global-active device configuration into the customer's environment and validate system recovery
  ‒ Configure global-active device, zoning for the global-active device cluster, multipathing software, and production pairs
  ‒ Create design documentation, perform testing on a test server, and demonstrate recovery from various path and quorum array failures
  ‒ Provide platform best practices to a limited staff, demonstrate features and functionality, and perform knowledge transfer
  ‒ Provide project management and technical oversight
Thank You