From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <139057a9c235501d50466e05081eb208@quanstro.net>
From: erik quanstrom
Date: Thu, 12 Feb 2009 11:28:51 -0500
To: 9fans@9fans.net
In-Reply-To: <1234455577.4957.393.camel@goose.sun.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: 7bit
Subject: Re: [9fans] source browsing via http is back
Topicbox-Message-UUID: 9f221a10-ead4-11e9-9d60-3106f5b1d025

> On Thu, 2009-02-12 at 07:49 -0500, erik quanstrom wrote:
> > exactly. the point i was trying to make, and evidently
> > was being too coy about, is that 330 odd gb wouldn't
> > be as useful a number as the sum of the sizes of all the
> > new/changed files from all the dump days. this would
> > be a useful comparison because this would give a
> > measure of how much space is saved with venti over
> > the straightforward algorithm of copying the changed
> > blocks, as ken's fileserver does.
>
> I'm confused. Since when did kenfs enter this conversation?
> I thought we were talking about how sources are managed today
> and how replica might make you waste some bandwidth (albeit,
> not all that much given how infrequently sources themselves
> change).

exactly. i don't believe replica would transfer anything close to
330gb. that's over 1000x the size of the distribution, and i'm
pretty sure sources doesn't see that much churn.

the algorithm replica uses will move about the same amount of data
as kenfs would store, since they use similar algorithms. old unix
dump would use a similar amount of space. mkfs would use more,
since it is file based, not block based.

- erik
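
p.s. a rough back-of-envelope sketch, in c, of the file- vs
block-granularity difference above. the file names, sizes,
changed-block counts and block size are all invented, and this is
not replica's, kenfs's or mkfs's actual code; it only shows the
accounting:

	/* made-up numbers: how much a file-based copier (mkfs-like)
	 * vs a block-based one (kenfs/venti-like) would move for a
	 * handful of files changed since the last dump. */
	#include <stdio.h>

	enum { Blocksize = 4096 };	/* assumed block size */

	typedef struct {
		char *name;
		long size;	/* file size in bytes */
		long nchanged;	/* blocks changed since last dump */
	} File;

	int
	main(void)
	{
		File f[] = {
			{ "/sys/src/cmd/foo.c", 120000, 2 },
			{ "/sys/man/1/bar", 8000, 1 },
			{ "/386/bin/baz", 900000, 5 },
		};
		long filebytes = 0, blockbytes = 0;
		int i, n = sizeof f / sizeof f[0];

		for(i = 0; i < n; i++){
			filebytes += f[i].size;				/* whole file copied */
			blockbytes += f[i].nchanged * Blocksize;	/* only changed blocks */
		}
		printf("file based (mkfs-like): %ld bytes\n", filebytes);
		printf("block based (kenfs-like): %ld bytes\n", blockbytes);
		return 0;
	}

summed over all the files touched on a dump day, the first number is
the "sum of the sizes of all the new/changed files" figure quoted
above; the second is closer to what a block-based scheme would store
or transfer.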