From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <0972fca12a15f15ebc88e32409d859a1@quanstro.net>
From: erik quanstrom
Date: Fri, 23 Jan 2009 22:36:39 -0500
To: 9fans@9fans.net
In-Reply-To: <1232766921.22808.40.camel@goose.sun.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
Subject: Re: [9fans] Changelogs & Patches?
Topicbox-Message-UUID: 862e44b6-ead4-11e9-9d60-3106f5b1d025

> You never know when end-to-end data consistency will start to really
> matter.  Just the other day I attended the cloud conference where
> some Amazon EC2 customers were swapping stories of Amazon's networking
> "stack" malfunctioning and silently corrupting data that was written
> onto EBS.  All of a sudden, something like ZFS started to sound like
> a really good idea to them.

i know we need to bow down before zfs's greatness, but i still have
some questions.  ☺

does ec2 corrupt all one's data en masse?  how does one do meaningful
redundancy in a cloud where one controls none of the failure-prone
pieces?

finally, if p is the probability of a lost block, when does p become
too large for zfs' redundancy to overcome failures?  does this depend
on the amount of i/o one does on the data, or does zfs scrub at a
minimum rate anyway?  if it does, that would be expensive.

maybe ec2 is heads amazon wins, tails you lose?

- erik
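
[the redundancy question above can be sketched with a toy model.  this
is an illustration only, not zfs's actual repair math: assume n-way
redundancy, independent per-copy loss probability p between scrubs, and
a pool of nblocks blocks.  the values of p and nblocks below are made-up
assumptions.]

```python
# Toy model (assumption, not ZFS internals): with n-way redundancy a
# block is unrecoverable only if all n copies are lost before a scrub
# can repair them.  Losses are assumed independent with probability p.

def p_block_unrecoverable(p, n):
    """Probability that all n independent copies of one block are lost."""
    return p ** n

def p_any_loss(p, n, nblocks):
    """Probability that at least one of nblocks blocks is unrecoverable."""
    return 1 - (1 - p_block_unrecoverable(p, n)) ** nblocks

if __name__ == "__main__":
    p = 1e-6          # assumed per-copy loss probability between scrubs
    nblocks = 10**9   # assumed pool size: ~4 TB of 4 KB blocks
    for n in (1, 2, 3):
        print(n, p_any_loss(p, n, nblocks))
```

[the model suggests why scrub rate matters: scrubbing more often shrinks
the window in which p accumulates, so the effective p per window drops
and p**n falls off sharply with each added copy.  with no redundancy
(n=1) the same p makes loss across a large pool nearly certain.]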