From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 6 Mar 2008 13:39:41 +0100
From: Enrico Weigelt
To: Fans of the OS Plan 9 from Bell Labs <9fans@cse.psu.edu>
Subject: Re: [9fans] thoughs about venti+fossil
Message-ID: <20080306123941.GE18329@nibiru.local>
References: <20080305040019.GA13663@nibiru.local> <20080305143425.5A0FE1E8C52@holo.morphisms.net>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20080305143425.5A0FE1E8C52@holo.morphisms.net>
User-Agent: Mutt/1.4.1i
Topicbox-Message-UUID: 70349454-ead3-11e9-9d60-3106f5b1d025

* Russ Cox wrote:

> > 1. how stable is the keying ? sha-1 has only 160 bits, while
> > data blocks may be up to 56k long. so, the mapping is only
> > unique into one direction (not one-to-one). how can we be
> > *really sure*, that - even on very large storages (TB or
> > even PB) - data to each key is alway (one-to-one) unique ?
>
> if you change lump.c to say
>
> int verifywrites = 1;
>
> then venti will check every block as it is written to make
> sure there is no hash collision. this is not the default (anymore).
> the snippet that forsyth quoted only applies if the block is
> present in the in-memory cache.

Okay, for the centralistic approach of venti this will catch
collisions (BTW: what does it do when it detects one?), but for my
idea of a distributed superstorage this won't help much (we couldn't
safely use hashing for traffic reduction).

> > 2. what approx. compression level could be assumed on large
> > storages by putting together equal data blocks (on several
> > kind of data, eg. typical office documents vs. media) ?
>
> i don't know of any studies that break it up by data type.
> the largest benefit is due to coalescing of exact files,
> and that simply depends on the duplicate rate.
> it's very workload dependent, and i don't know of any
> good studies to point you at.
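(Aside: the verify-on-write behaviour described above is easy to model. Below is a toy Python sketch, not venti's actual code; the class and method names are invented. It shows the point of the check: only comparing the stored bytes, not just the sha-1 score, actually catches a collision, and duplicate blocks get coalesced for free.)

```python
import hashlib

class VentiLikeStore:
    """Toy content-addressed store illustrating verify-on-write.
    Not venti's implementation; all names here are invented."""

    def __init__(self):
        self.blocks = {}  # score (sha-1 digest) -> block bytes

    def write(self, block: bytes) -> bytes:
        score = hashlib.sha1(block).digest()
        prev = self.blocks.get(score)
        if prev is not None:
            # The verifywrites = 1 behaviour: compare the stored
            # bytes, not merely the score, before trusting the mapping.
            if prev != block:
                raise RuntimeError("sha-1 collision for score "
                                   + score.hex())
            return score  # duplicate block: coalesced, nothing stored
        self.blocks[score] = block
        return score
```

Writing the same block twice stores it once and returns the same score both times, which is exactly the coalescing effect discussed under question 2.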
Hmm, I already suspected that :( Of course, in the fossil case the
reuse rate will be very high, but that's not the only use of this
approach I'm currently thinking of. Much more interesting, IMHO, is
the idea of using it to compress a large data store and to build an
easily extensible and redundant storage system out of it (maybe even
replacing RAID, and reducing disk traffic through much more effective
caching).

> > 3. what happens on the space consumtion if venti is used as
> > storage for heavily rw filesystems for a longer time
> > (not as permanent archive) - how much space will be wasted ?
> > should we add some method for block expiry (eg. timeouts
> > of reference counters) ?
>
> no. venti is for archiving. if you don't want to use it for
> archiving, fine. but timeouts and reference counts would
> break it for the rest of us (neither is guaranteed correct --
> what if a reference is written on a whiteboard or in a
> rarely-accessed safe-deposit box?).

Okay, let's assume some venti-2 which has timeouts; it would then
require some kind of refresh from time to time. For an eternal archive
this would of course cause more trouble than it's worth (it would also
require some way of reclaiming unused space). But as a backend for a
"normal" filesystem it would be an interesting feature. The fs on top
then MUST refresh from time to time, but this can be done while the
system is idle (good for situations with high load peaks and enough
idle time in between). Explicit reference counting, by contrast, must
act immediately (okay, removing large trees could somehow be moved
into the background).

> you can do snapshots in fossil instead of archives and
> those *can* be timed out. but they don't use venti
> and don't coalesce storage as much as venti does,
> which is why timeouts are possible.
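(To make the refresh idea concrete, here is a toy sketch of that hypothetical "venti-2" lease scheme; everything here is my assumption, not real venti. Each block carries a deadline, the fs on top refreshes its blocks while idle, and a sweep reclaims blocks whose lease lapsed.)

```python
import hashlib, time

class ExpiringStore:
    """Sketch of a hypothetical lease-based "venti-2".
    Blocks that are not refreshed within ttl seconds may be reclaimed.
    Purely illustrative; all names are invented."""

    def __init__(self, ttl):
        self.ttl = ttl
        self.blocks = {}  # score -> (block bytes, lease deadline)

    def write(self, block, now=None):
        now = time.time() if now is None else now
        score = hashlib.sha1(block).digest()
        self.blocks[score] = (block, now + self.ttl)
        return score

    def refresh(self, score, now=None):
        # The fs on top calls this while idle to keep blocks alive.
        now = time.time() if now is None else now
        block, _ = self.blocks[score]
        self.blocks[score] = (block, now + self.ttl)

    def reclaim(self, now=None):
        # Background sweep: drop every block whose lease has lapsed.
        now = time.time() if now is None else now
        expired = [s for s, (_, dl) in self.blocks.items() if dl <= now]
        for s in expired:
            del self.blocks[s]
        return len(expired)
```

The design trade-off against reference counting is visible here: reclaim() can run whenever the system is idle, while a refcount scheme has to do its bookkeeping on every remove.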
Hmm, that doesn't satisfy me; I'd really like to have both benefits
together ;-P As I'm already planning a distributed filesystem and
going to use a venti-like approach for it, I'll do some experiments
with a "venti-2" which allows reclaiming expired blocks.

> > 4. assuming #1 can be answered 100% yes - would it suit for
> > an very large (several PB) heavily distributed storage
> > (eg. for some kind of distributed, redundant filesystem) ?
>
> there are many systems that have been built using
> content-addressed storage, just not on top of venti.
> Here are a few.
>
> http://pdos.csail.mit.edu/pastwatch/
> http://pdos.csail.mit.edu/ivy/
> http://en.wikipedia.org/wiki/Distributed_hash_table

Thanks for the links. The text about DHTs (and the linked material) is
very interesting; my ideas were already close to the PHT and trie
approaches :) I'll have to do some more research to bring
redundancy/fault tolerance and dynamic space allocation together. In
an ideal storage network, individual storage arenas (even within a
single server node) can be added and removed easily on the fly.

cu
--
---------------------------------------------------------------------
 Enrico Weigelt    ==   metux IT service - http://www.metux.de/
---------------------------------------------------------------------
 Please visit the OpenSource QM Taskforce:
	http://wiki.metux.de/public/OpenSource_QM_Taskforce
 Patches / Fixes for dozens of packages in dozens of versions:
	http://patches.metux.de/
---------------------------------------------------------------------
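P.S. A minimal sketch of how scores could be placed on such a storage network via consistent hashing, the placement trick behind most DHTs. This is my own assumption about the layout, not the scheme of any of the systems linked above; node names are invented.

```python
import hashlib
from bisect import bisect_right

class ScoreRing:
    """Toy consistent-hashing ring placing venti-style scores on
    storage nodes with k replicas. Nodes can join or leave and only
    the scores near their ring position move. Illustrative only."""

    def __init__(self, nodes, replicas=2):
        self.replicas = replicas
        # Each node sits on the ring at sha-1(name).
        self.ring = sorted((hashlib.sha1(n.encode()).digest(), n)
                           for n in nodes)

    def locate(self, score):
        # Walk clockwise from the score's position, collecting
        # the first `replicas` distinct nodes.
        keys = [k for k, _ in self.ring]
        i = bisect_right(keys, score)
        out = []
        for j in range(len(self.ring)):
            node = self.ring[(i + j) % len(self.ring)][1]
            if node not in out:
                out.append(node)
            if len(out) == self.replicas:
                break
        return out
```

Because placement is a pure function of the score and the ring, any client can find a block's replicas without a central directory, which is what makes adding and removing arenas on the fly cheap.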