Depot: Cloud Storage with Minimal Trust

OSDI 2010
Prince Mahajan, Srinath Setty, Sangmin Lee,
Allen Clement, Lorenzo Alvisi,
Mike Dahlin, and Michael Walfish
The University of Texas at Austin
Presented by: Masoud SAEIDA ARDEKANI
• Cloud services are:
– Fault-prone
– Black-box
• Clients
– Hesitate to trust cloud services
– Rely on end-to-end checks of properties
What is Depot?
A cloud storage system with minimal trust
• Eliminates trust for:
– Put availability
– Eventual consistency
– Dependency preservation
• Minimizes trust for:
– Get availability
– Durability
Depot Overview
Put(k, value)
Get(k)
{nodeID, key, H(value), localClock, History}nodeID
• Consistency Despite Faults!
– Add metadata to Puts
– Add local states to nodes
– Add checks on received metadata
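To make the metadata concrete, here is a minimal sketch of building and checking the signed per-Put record {nodeID, key, H(value), localClock, History}nodeID. Field names follow the slide; HMAC-SHA256 stands in for the public-key signatures Depot actually uses, and the helper names are mine, not the paper's.

```python
import hashlib
import hmac
import json

def make_update(node_id, key, value, local_clock, history, signing_key):
    # Build the metadata attached to every Put:
    # {nodeID, key, H(value), localClock, History}_nodeID.
    # HMAC-SHA256 stands in for the paper's public-key signature.
    body = {
        "nodeID": node_id,
        "key": key,
        "valueHash": hashlib.sha256(value).hexdigest(),
        "localClock": local_clock,
        "history": history,  # ids of the updates this one depends on
    }
    payload = json.dumps(body, sort_keys=True).encode()
    body["sig"] = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return body

def verify_update(update, signing_key):
    # Recompute the signature over everything except the signature itself.
    body = {k: v for k, v in update.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(update["sig"], expected)
```

Any tampering with the metadata (e.g. bumping localClock) makes verification fail, which is what lets receivers run end-to-end checks without trusting the server that relayed the update.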
Checks upon receiving an update
• Accept an update u sent by node N only if:
– u is properly signed
– There is no omission
• All updates in u's history are also in the local history
– History is not modified
• u is newer than any prior update by N
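The three acceptance checks above can be sketched as one predicate; the argument and field names here are illustrative, not Depot's actual interfaces.

```python
def accept_update(u, local_history, latest_clock, verify_sig):
    # Receive-time checks on an update u from writer u["nodeID"]:
    # 1. the update is properly signed by its writer;
    if not verify_sig(u):
        return False
    # 2. no omission: every update in u's history is already known locally;
    if not all(dep in local_history for dep in u["history"]):
        return False
    # 3. u is newer than any prior accepted update by the same writer,
    #    so a writer cannot silently rewrite its own history.
    if u["localClock"] <= latest_clock.get(u["nodeID"], -1):
        return False
    return True
```

Rejecting an update that fails check 3 is exactly how a receiver notices that a writer has signed two divergent histories, i.e. a fork.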
But, faults can cause forks!
• Forks:
– Node’s local view is consistent!
– Inconsistent views between different nodes!
• Prevent eventual consistency!
Join forks for eventual consistency
• Faulty node → two (correct) virtual nodes
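A minimal sketch of that join step: relabel the two divergent branches signed by a faulty node N as if they came from two correct virtual writers. The N' / N'' naming and the dict-based update records are my illustration, not the paper's representation.

```python
def join_fork(node_id, branch_a, branch_b):
    # A faulty node that signed two divergent histories is treated as
    # two correct virtual nodes, one per branch. Relabeling the writer
    # lets correct nodes accept both branches as concurrent writes and
    # still converge.
    virt_a, virt_b = node_id + "'", node_id + "''"
    return ([dict(u, nodeID=virt_a) for u in branch_a],
            [dict(u, nodeID=virt_b) for u in branch_b])
```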
Faults vs Concurrency
• Converting faults into concurrency
– Allow correct nodes to converge
• Concurrency can introduce conflicts!
– Already possible due to decentralized servers!
– Applications built for high availability already allow concurrent updates
• Depot exposes the conflicts to the application
– GET operation returns the set of most recent concurrent updates
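One way to picture "most recent concurrent updates" is as the maxima under version-vector dominance; this is a sketch under that assumption, with a `vv` field of my own naming rather than Depot's actual metadata layout.

```python
def dominates(v1, v2):
    # v1 dominates v2 if it is at least as recent for every writer.
    writers = set(v1) | set(v2)
    return all(v1.get(w, 0) >= v2.get(w, 0) for w in writers)

def latest_concurrent(updates):
    # Keep exactly the updates no other update strictly dominates:
    # the set of most recent concurrent updates a Get would expose.
    return [u for u in updates
            if not any(dominates(v["vv"], u["vv"]) and v["vv"] != u["vv"]
                       for v in updates)]
```

With writes {a:1}, {a:2}, and {b:1}, the first is superseded by the second, while {a:2} and {b:1} are concurrent, so a Get returns both and leaves conflict resolution to the application.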
• Causal Consistency (CC)
– If update u1 by a node depends on an update u0
by any node, then u0 becomes observable before
u1 at any node.
• Fork-Join Causal (FJC) Consistency
– If update u1 by a correct node depends on an
update u0 by any node, then u0 becomes
observable before u1 at any correct node.
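The FJC condition at a single correct node can be checked mechanically: every update a node observes must be preceded in its observation log by everything that update depends on. The function and parameter names below are illustrative.

```python
def fjc_holds_at_node(log, deps):
    # Check the FJC condition at one correct node: every update in the
    # node's observation log is preceded by all updates it depends on.
    # `deps` maps an update id to the set of update ids it depends on.
    seen = set()
    for u in log:
        if not deps.get(u, set()) <= seen:
            return False
        seen.add(u)
    return True
```

Note the only difference from causal consistency is scope: FJC constrains the logs of correct nodes while allowing faulty writers' forks, which is what makes it enforceable without trusting any node.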
Ensuring Properties
• Safety (FJC consistency)
– Local checks
• Liveness
– Reduce failures to concurrency
– Joining forks
Evaluation Setup
• 8 Clients + 4 Servers
– Quad-core Intel Xeon 2.4 GHz
– 8 GB RAM
– Two local 7200 RPM disks
– 2 clients are connected to each server
• 1 Gbps link
• Each client issues 1 request per minute
• Baseline
– Clients trust the servers (no local data, no checks)
• B+hash
– Clients attach hashes of values and verify hashes
• B+hash+Signing
– Clients sign the values and verify signatures
• B+hash+Signing+Store
– Like B+hash+Signing, plus clients locally store the values that they put
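As a concrete picture of the B+hash baseline, here is a minimal sketch in which a client remembers H(value) at Put time and re-checks it at Get time; the class name is mine and the "server" is just a dict standing in for untrusted storage.

```python
import hashlib

class HashVerifyingClient:
    # Sketch of the B+hash baseline: the client attaches/records a
    # SHA-256 hash of each value on Put and verifies it on Get, so a
    # faulty server cannot return a corrupted value undetected.
    def __init__(self, server):
        self.server = server   # untrusted storage, modeled as a dict
        self.hashes = {}       # key -> expected hash of the value

    def put(self, key, value):
        self.hashes[key] = hashlib.sha256(value).hexdigest()
        self.server[key] = value

    def get(self, key):
        value = self.server[key]
        if hashlib.sha256(value).hexdigest() != self.hashes[key]:
            raise ValueError("server returned a value with a bad hash")
        return value
```

The stronger baselines add signatures (so the check survives client restarts and third-party reads) and local storage of written values (so Gets can fall back on the local copy).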
Latency of Depot
Cost of Depot
Behavior Under Faults
• 50% Put, 50% Get
• Total server failure after 300 seconds
Fork by faulty clients
• 50% Reads, 50% Writes
• Failure after 300 seconds
• No effect on Get or Put
• Depot: Cloud storage with minimal trust
• Any node could fail in any way!
• Eliminates trust for
– Put availability
– Eventual consistency
• Minimizes trust for
– Get availability
– Durability