9fans - fans of the OS Plan 9 from Bell Labs
From: Enrico Weigelt <weigelt@metux.de>
To: Fans of the OS Plan 9 from Bell Labs <9fans@cse.psu.edu>
Subject: Re: [9fans] thoughs about venti+fossil
Date: Thu,  6 Mar 2008 13:39:41 +0100	[thread overview]
Message-ID: <20080306123941.GE18329@nibiru.local> (raw)
In-Reply-To: <20080305143425.5A0FE1E8C52@holo.morphisms.net>

* Russ Cox <rsc@swtch.com> wrote:
> > 1. how stable is the keying ? sha-1 has only 160 bits, while
> >    data blocks may be up to 56k long. so, the mapping is only
> >    unique in one direction (not one-to-one). how can we be
> >    *really sure* that - even on very large storages (TB or
> >    even PB) - the data for each key is always (one-to-one) unique ?
>
> if you change lump.c to say
>
> 	int verifywrites = 1;
>
> then venti will check every block as it is written to make
> sure there is no hash collision.  this is not the default (anymore).
> the snippet that forsyth quoted only applies if the block is
> present in the in-memory cache.

Okay, for the centralized approach of venti this will catch
collisions (BTW: what does it do when it detects one ?), but for
my idea of a distributed superstorage this won't help much
(we couldn't safely use hashing to reduce traffic).

> > 2. what approx. compression level could be assumed on large
> >    storages by putting together equal data blocks (on several
> >    kind of data, eg. typical office documents vs. media) ?
>
> i don't know of any studies that break it up by data type.
> the largest benefit is due to coalescing of exact files,
> and that simply depends on the duplicate rate.
> it's very workload dependent, and i don't know of any
> good studies to point you at.

hmm, I already suspected that :(
Of course, for the fossil case the reuse rate will be very high,
but that's not the only use of this approach I'm currently thinking of.

Much more interesting, IMHO, is the idea of using it to compress a
large data store and also to build an easily extensible and redundant
storage system out of it (maybe even replacing RAID and reducing disk
traffic through much more effective caching).

> > 3. what happens to the space consumption if venti is used as
> >    storage for heavily rw filesystems for a longer time
> >    (not as permanent archive) - how much space will be wasted ?
> >    should we add some method for block expiry (eg. timeouts
> >    of reference counters) ?
>
> no.  venti is for archiving.  if you don't want to use it for
> archiving, fine.  but timeouts and reference counts would
> break it for the rest of us (neither is guaranteed correct --
> what if a reference is written on a whiteboard or in a
> rarely-accessed safe-deposit box?).

Okay, let's assume some venti-2 which has timeouts; it would then
require some kind of refresh from time to time. For an eternal
archive this would of course cause more trouble than it's worth
(it would also require some way of reclaiming unused space).

But if it is to be used as the backend for some "normal" filesystem,
it would be an interesting feature. Of course the fs on top then
MUST refresh from time to time, but this can be done while the
system is idle (good for situations with high load peaks and enough
idle time in between). Explicit reference counting, in contrast,
must act immediately (okay, removing large trees could somehow be
moved into the background).

> you can do snapshots in fossil instead of archives and
> those *can* be timed out.  but they don't use venti
> and don't coalesce storage as much as venti does,
> which is why timeouts are possible.

hmm, that doesn't satisfy me, I'd really like to have both
benefits together ;-P

As I'm already planning a distributed filesystem and going to
use a venti-like approach for it, I'll do some experiments with
a "venti-2" which allows reclaiming expired blocks.

> > 4. assuming #1 can be answered 100% yes - would it suit
> >    a very large (several PB) heavily distributed storage
> >    (eg. for some kind of distributed, redundant filesystem) ?
>
> there are many systems that have been built using
> content-addressed storage, just not on top of venti.
> Here are a few.
>
> http://pdos.csail.mit.edu/pastwatch/
> http://pdos.csail.mit.edu/ivy/
> http://en.wikipedia.org/wiki/Distributed_hash_table

Thanks for the links. The text about DHTs (and the linked material) is
very interesting. My ideas were already close to the PHT and trie :)

I'll have to do a bit more research to bring redundancy/fault
tolerance and dynamic space allocation together. In an ideal
storage network, individual storage arenas (even within a single
server node) can be added and removed easily on the fly.


cu
--
---------------------------------------------------------------------
 Enrico Weigelt    ==   metux IT service - http://www.metux.de/
---------------------------------------------------------------------
 Please visit the OpenSource QM Taskforce:
 	http://wiki.metux.de/public/OpenSource_QM_Taskforce
 Patches / Fixes for dozens of packages in dozens of versions:
	http://patches.metux.de/
---------------------------------------------------------------------

