Coding for Modern Distributed
Storage Systems: Part 1.
Locally Repairable Codes
Parikshit Gopalan
Windows Azure Storage, Microsoft.
The data deluge …
Problem Statement:
We generate insane amounts of digital data.
We expect it to be stored reliably and accessible anytime, anywhere.
And for free!
• Total data in the cloud is of the order of a few hundred exabytes.
• A terabyte hard drive costs $100.
• Even storing the raw data costs hundreds of millions of dollars.
Hardware is no longer cheap.
Currently, data centers consume up to 3 percent of all global electricity
production while producing 200 million metric tons of carbon dioxide.
Data Storage: the basics
Goal: Tolerate one disk failure.
• Duplication
2X overhead.
Quick recovery.
• Simple XOR [RAID5]
Treat each disk as a bit vector.
1.2X overhead.
Slower recovery.
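A minimal sketch of the XOR idea (illustrative, not from the talk): the parity disk stores the XOR of the data disks, and any single lost disk is the XOR of the survivors.

```python
# Minimal sketch of single-parity (RAID-5 style) protection.  Block names and
# sizes are made up for illustration.
def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

data = [b"disk-0..", b"disk-1..", b"disk-2.."]   # toy data blocks, equal length
parity = xor_blocks(data)                        # the extra "parity disk"

# If disk 1 is lost, XOR the surviving data blocks with the parity block.
recovered = xor_blocks([data[0], data[2], parity])
assert recovered == data[1]
```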
Data Storage: the basics
Goal: Tolerate two disk failures.
• Triplication
3X overhead.
Quick recovery.
• [6,4] Reed-Solomon code [RAID6]
1.5X overhead.
Slower recovery.
Need a larger field: each disk is a byte-vector.
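To see why two failures push us to a larger field, here is a sketch in the RAID-6 style (one common two-parity instantiation, not necessarily the exact [6,4] code of the slide): a plain XOR parity P plus a second parity Q over GF(2^8), so each disk is indeed treated as a byte-vector. The generator g = 2 and the polynomial 0x11d are the usual choices and are assumptions of this sketch.

```python
def gf_mul(a, b, poly=0x11d):
    """Multiplication in GF(2^8)."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= poly
    return p

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def gf_inv(a):
    """Brute-force inverse in GF(2^8) (fine for a sketch)."""
    return next(x for x in range(1, 256) if gf_mul(a, x) == 1)

def encode(data):
    """P = XOR of the data bytes, Q = sum of g^i * data[i] over GF(2^8)."""
    P, Q = 0, 0
    for i, d in enumerate(data):
        P ^= d
        Q ^= gf_mul(gf_pow(2, i), d)
    return P, Q

def recover_two(survivors, i, j, P, Q):
    """Rebuild the bytes of lost disks i and j from the survivors and P, Q."""
    Pp, Qp = P, Q
    for m, d in survivors.items():          # survivors: disk index -> byte
        Pp ^= d
        Qp ^= gf_mul(gf_pow(2, m), d)
    gi, gj = gf_pow(2, i), gf_pow(2, j)
    di = gf_mul(Qp ^ gf_mul(gj, Pp), gf_inv(gi ^ gj))   # solve the 2x2 system
    return di, Pp ^ di                      # second value is the byte of disk j

data = [0x12, 0x34, 0x56, 0x78]             # one byte from each of 4 data disks
P, Q = encode(data)
survivors = {0: data[0], 2: data[2]}        # disks 1 and 3 are lost
assert recover_two(survivors, 1, 3, P, Q) == (data[1], data[3])
```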
Data storage: the basics
Reed-Solomon codes
• k data symbols, n − k parity checks.
• Field size O(n).
• Any k symbols suffice for full data recovery (MDS).
How many parity checks do you need?
o(k) redundancy seems to be sufficient [G.-Huang-Jenkins-Yekhanin’13]:
• Failure rate p is tiny (assume p ⋅ n < 0.5).
• Goal is (only) to be as reliable as 3-way replication.
Should be getting overheads close to 0!
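A back-of-envelope illustration of why the overhead can shrink (my own toy model, assuming independent disk failures with probability p each and ignoring repair dynamics and correlated failures): an MDS code with 1.4X overhead is already more reliable than 3-way replication for small p. The [14,10] parameters are an illustrative choice, not from the talk.

```python
from math import comb

def data_loss_prob(n, tolerated, p):
    """Probability that more than `tolerated` of the n disks fail."""
    return sum(comb(n, f) * p ** f * (1 - p) ** (n - f)
               for f in range(tolerated + 1, n + 1))

p = 0.01
print(data_loss_prob(3, 2, p))    # 3-way replication: ~1e-6
print(data_loss_prob(14, 4, p))   # a [14,10] MDS code (1.4X overhead): ~2e-7
```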
But recovery cost would be prohibitively high:
• Need to read k other disks (MDS).
• Limits us to small values of k (< 25).
Degraded Reads
Typical failure scenario: a single disk fails or is prohibitively slow.
Reed Solomon:
• Need to read k disks.
• Any k suffice.
Disk Failure
Typical failure scenario: a single disk fails and needs to be replaced.
Reed Solomon:
• Need to read k disks.
• Any k suffice.
Can we do better?
Regenerating Codes [Dimakis-Godfrey-Wu-Wainwright-Ramchandran’10]
Metric: Network bandwidth.
Optimize the amount of data communicated to repair a single lost node.
Locally Repairable Codes [G.-Huang-Simitci-Yekhanin’12]
Metric: Number of disk reads.
Optimize the number of disk reads needed to repair a single lost node.
Part 1 of this Tutorial: LRCs
Part 1.1: Locality
1. Locality of codeword symbols.
2. Rate-distance-locality tradeoffs.
Part 1.2: Reliability
1. Beyond minimum distance: Maximum recoverability.
2. Constructions of Maximally Recoverable LRCs.
Locality
[Chen-Huang-Li’07, Oggier-Datta’11, G.-Huang-Simitci-Yekhanin’12,
Papailiopoulos-Luo-Dimakis-Huang-Li’12]
[G.-Huang-Simitci-Yekhanin’12] A coordinate in a linear code has locality r if
it can be expressed as a linear combination of r other coordinates.
If a coordinate is lost, it can be recovered by reading just r other symbols.
• Data locality r: all data symbols have locality r.
• All-symbol locality r: all symbols have locality r.
Decouples the typical decoding complexity r from the length n.
• r reads for single failures and degraded reads.
• No guarantees for multiple, worst-case failures.
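A toy sketch of local repair (my own illustration, using XOR as the simplest linear combination; real LRCs use parities over a larger field): symbols are grouped in blocks of r with one local parity each, so a lost symbol is rebuilt from r reads instead of k.

```python
def xor_all(symbols):
    out = 0
    for s in symbols:
        out ^= s
    return out

def add_local_parities(data, r):
    """Group the data symbols in blocks of r and append one local parity per group."""
    groups = [data[i:i + r] for i in range(0, len(data), r)]
    return [g + [xor_all(g)] for g in groups]

def local_repair(group, lost):
    """Read the r surviving symbols of the group instead of k symbols."""
    return xor_all(s for i, s in enumerate(group) if i != lost)

groups = add_local_parities([3, 1, 4, 1, 5, 9], r=3)   # k = 6, two local groups
assert local_repair(groups[0], 1) == 1                 # repaired with r = 3 reads
```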
Locally Decodable/Testable Codes
Locally Decodable Codes [Goldreich-Levin’89, Katz-Trevisan’00, Yekhanin’12]
• Implicit in early work on Majority Logic Decoding [Reed’52].
• Aims for locality up to the minimum distance.
• Super-linear length lower bounds known for locality r = O(1) and constant
relative distance [Katz-Trevisan’00].
Such codes would have given fault-tolerant storage with unlimited
scalability.
• Best known constructions have super-polynomial length [Yekhanin’07, Efremenko’09].
• Can get high rate with larger locality [Kopparty-Saraf-Yekhanin’11, …, Meir’14].
Locally Testable Codes [Blum-Luby-Rubinfeld’90, Rubinfeld-Sudan’92].
Codes with data locality
Def: An [n, k, d] code has data locality r if each information symbol
can be expressed as a linear combination of r other coordinates.
Pyramid Codes [Chen-Huang-Li’07]:
Take a [k + d − 1, k, d] Reed-Solomon code and split the first parity into
k/r local parities, one per group of r data symbols.
This gives
n = k(1 + 1/r) + d − 2.
Is the linear overhead necessary?
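A small sketch of the "split the first parity" step (my own illustration; arithmetic mod a small prime rather than the field a real Pyramid code would use): with k = 6 and r = 3, the first parity splits into two local parities that still determine it, and each data symbol becomes repairable from r symbols. Together with the remaining d − 2 global parities this gives n = k(1 + 1/r) + d − 2.

```python
q = 97
data = [11, 22, 33, 44, 55, 66]        # k = 6 data symbols
coeffs = [1, 2, 3, 4, 5, 6]            # coefficients of the first RS parity

p1 = sum(c * d for c, d in zip(coeffs, data)) % q          # the original parity

# Split p1 into two local parities, one per group of r = 3 data symbols.
pA = sum(c * d for c, d in zip(coeffs[:3], data[:3])) % q
pB = sum(c * d for c, d in zip(coeffs[3:], data[3:])) % q
assert (pA + pB) % q == p1             # the split parities still determine p1

# data[0] is now repairable from just r = 3 symbols: data[1], data[2] and pA.
assert (pA - 2 * data[1] - 3 * data[2]) % q == data[0]
```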
Codes with all-symbol locality
Def: An [n, k, d] linear code has all-symbol locality r if each coordinate can be
expressed as a linear combination of r other coordinates.
Add a local parity to every group of r parity symbols as well.
This gives
n = (k + d − 2)(1 + 1/r).
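Length bookkeeping for the two constructions above, as a small sketch (illustrative parameters of my own; assumes r divides k, and r divides d − 2 in the all-symbol case, so the formulas come out to integers, with ceilings otherwise):

```python
def data_locality_length(k, r, d):
    return k + k // r + (d - 2)               # k/r local + (d - 2) global parities

def all_symbol_locality_length(k, r, d):
    return (k + d - 2) * (r + 1) // r         # local parities on the global parities too

print(data_locality_length(12, 3, 5))         # 19
print(all_symbol_locality_length(12, 3, 5))   # 20
```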
Rate-distance-locality tradeoffs
What are the tradeoffs between n, k, d, and r?
[G.-Huang-Simitci-Yekhanin’12]: In any linear code with information locality r,
n − k ≥ ⌈k/r⌉ + d − 2.
Rate-distance-locality tradeoffs
What are the tradeoffs between n, k, d, and r?
[G.-Huang-Simitci-Yekhanin’12]: In any linear code with information locality r,
n ≥ k(r + 1)/r + d − 2.
Moreover, any code achieving this bound for r | k and small d looks like a
Pyramid code: k/r disjoint local groups of r data symbols, each with its own
local parity, plus d − 2 heavy parities over all the data.
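A quick sanity check of the bound on an illustrative parameter set (the LRC reportedly deployed in Windows Azure Storage: k = 12 data symbols in two groups of r = 6, two local and two global parities, so n = 16 and d = 4; these numbers are quoted as an example, not from this talk):

```python
from math import ceil

def satisfies_bound(n, k, r, d):
    """n - k >= ceil(k/r) + d - 2, the rate-distance-locality bound."""
    return n - k >= ceil(k / r) + d - 2

print(satisfies_bound(n=16, k=12, r=6, d=4))   # True, and it holds with equality
```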
Rate-distance-locality tradeoffs
What are the tradeoffs between n, k, d, and r?
[G.-Huang-Simitci-Yekhanin’12]: In any linear code with all-symbol locality r,
n ≥ k(r + 1)/r + d − 2.
Let r | k. For equality, (r + 1) must divide n; since (r + 1) divides k(r + 1)/r,
this forces (r + 1) | (d − 2), hence d ≥ r + 3.
Codes that achieve equality could look like the construction above, with every
symbol, data and parity alike, covered by a local group of size r + 1.
Such codes do exist for sufficiently large field size q (non-explicit construction).
Explicit codes with all-symbol locality.
[Tamo-Papailiopoulos-Dimakis’13]
• Optimal length codes with all-symbol locality for q = exp(n).
• Construction based on RS codes, analysis via matroid theory.
[Silberstein-Rawat-Koyluoglu-Vishwanath’13]
• Optimal length codes with all-symbol locality for q = 2^n.
• Construction based on Gabidulin codes (aka linearized RS codes).
[Barg-Tamo’14]
• Optimal length codes with all-symbol locality for q = O(n).
• Construction based on Reed-Solomon codes.
Rate-distance-locality tradeoffs
What are the tradeoffs between n, k, d, and r?
[G.-Huang-Simitci-Yekhanin’12]: In any linear code with all-symbol locality r,
n ≥ k(r + 1)/r + d − 2.
For equality, (r + 1) | n. Hence d ≥ r + 3.
• Algorithmic proof using linear algebra.
• [Papailiopoulos-Dimakis’12] Replace linear algebra with information theory.
• [Prakash-Lalitha-Kamath-Kumar’12] Generalized Hamming weights.
• [Barg-Tamo’13] Graph theoretic proof.
Generalizations
• Non-linear codes
[Papailiopoulos-Dimakis, Forbes-Yekhanin].
• Vector codes
[Papailiopoulos-Dimakis, Silberstein-Rawat-Koyluoglu-Vishwanath, Kamath-Prakash-Lalitha-Kumar]
• Codes over bounded alphabets
[Cadambe-Mazumdar]
• Codes with short local MDS codes
[Prakash-Lalitha-Kamath-Kumar, Silberstein-Rawat-Koyluoglu-Vishwanath]
• Codes with local regeneration
[Silberstein-Rawat-Koyluoglu-Vishwanath, Kamath-Prakash-Lalitha-Kumar, …]
Towards locally decodable codes
• Codes with short local MDS codes [Prakash-Lalitha-Kamath-Kumar,
Silberstein-Rawat-Koyluoglu-Vishwanath]
Avoids the slowest-node bottleneck [Shah-Lee-Ramchandran]
• Sequential local recovery [Prakash-Lalitha-Kumar]
• Multiple disjoint local parities [Wang-Zhang, Barg-Tamo]
Can serve multiple read requests in parallel.
Problem: Consider an [n, k] linear code in which, even after t arbitrary failures,
every (information) symbol still has locality r. How large does n need to be?
[Barg-Tamo’14] bound might be a good starting point.
