* [9fans] venti
@ 2009-01-09 8:20 Tharaneedharan Vilwanathan
2009-01-09 9:44 ` Sape Mullender
0 siblings, 1 reply; 27+ messages in thread
From: Tharaneedharan Vilwanathan @ 2009-01-09 8:20 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs
hi,
i noticed something odd with venti and i am just trying to see if this
is an issue and if it needs to be addressed.
i think, by default, the plan9 installation sets localhost as the address
for venti in plan9.ini. but at some point i wanted venti to announce on
any (*) address, so i simply started venti/venti -a 'tcp!*!17034' to
make it so. after this, i did vac from another machine and it archived
data. after some time, i realized i might be running venti on the same
venti data area twice. when i did 'ps', it looked like that was the
case. then i rebooted the machine and tried the 'dumpvacroots.new'
command (from contrib). i noticed that the venti arena was corrupted.
this is what i remember; if required, i can try to reproduce the issue.
the question i have is: is there any protection that prevents someone
from doing this accidentally? something like the last active venti
arena being marked 'IN_USE', so that if another instance of venti is
started (with read-write access), it can check for this? i agree the
user has to be careful, but given that the venti archive is so critical,
we should avoid any accidental damage.
please share your views.
thanks
dharani
* Re: [9fans] venti
2009-01-09 8:20 [9fans] venti Tharaneedharan Vilwanathan
@ 2009-01-09 9:44 ` Sape Mullender
2009-01-09 12:18 ` Richard Miller
2009-01-09 20:11 ` Dave Eckhardt
0 siblings, 2 replies; 27+ messages in thread
From: Sape Mullender @ 2009-01-09 9:44 UTC (permalink / raw)
To: 9fans
> i noticed something odd with venti and i am just trying to see if this
> is an issue and if it needs to be addressed.
>
> i think, by default, the plan9 installation sets localhost as the address
> for venti in plan9.ini. but at some point i wanted venti to announce on
> any (*) address, so i simply started venti/venti -a 'tcp!*!17034' to
> make it so. after this, i did vac from another machine and it archived
> data. after some time, i realized i might be running venti on the same
> venti data area twice. when i did 'ps', it looked like that was the
> case. then i rebooted the machine and tried the 'dumpvacroots.new'
> command (from contrib). i noticed that the venti arena was corrupted.
> this is what i remember; if required, i can try to reproduce the issue.
>
> the question i have is: is there any protection that prevents someone
> from doing this accidentally? something like the last active venti
> arena being marked 'IN_USE', so that if another instance of venti is
> started (with read-write access), it can check for this? i agree the
> user has to be careful, but given that the venti archive is so critical,
> we should avoid any accidental damage.
Hmm. Running two ventis on the same data is, of course, bad. It's also
something I haven't seen happen before. A check could be put in by having
venti put some sort of lock somewhere on the disk. But that would lead
to problems if venti doesn't shut down properly: venti would be gone
but the lock would still be there. You run the risk of not being able
to boot your machine. I think you should just learn to be careful.
Sape
* Re: [9fans] venti
2009-01-09 9:44 ` Sape Mullender
@ 2009-01-09 12:18 ` Richard Miller
2009-01-09 12:38 ` Sape Mullender
2009-01-09 13:39 ` erik quanstrom
2009-01-09 20:11 ` Dave Eckhardt
1 sibling, 2 replies; 27+ messages in thread
From: Richard Miller @ 2009-01-09 12:18 UTC (permalink / raw)
To: 9fans
> Hmm. Running two ventis on the same data is, of course, bad. It's also
> something I haven't seen happen before.
I think it may have happened to me. Recently I had a "missing score" error
when I was looking for something in my dump fs. To assess the damage I did
a 'venti/copy -r' to a spare server, which told me 13 pointers had to be
rewritten.
I do sometimes get errors like this from my fossil server:
could not write super block; waiting 10 seconds
blistAlloc: called on clean block
but I've been assuming they are benign. Am I wrong?
My understanding of fossil is that venti writes are ordered such that
the store should not be left in an inconsistent state even after a
sudden shutdown. If that's the case, I'm guessing the most likely
cause of corruption was a careless experiment with venti when there
was already one running.
* Re: [9fans] venti
2009-01-09 12:18 ` Richard Miller
@ 2009-01-09 12:38 ` Sape Mullender
2009-01-09 13:39 ` erik quanstrom
1 sibling, 0 replies; 27+ messages in thread
From: Sape Mullender @ 2009-01-09 12:38 UTC (permalink / raw)
To: 9fans
Yes, but even if one venti always leaves the store in a consistent
state, running two may still cause them to write different things to
the same place on disk, and this is likely to result in havoc.
Sape
* Re: [9fans] venti
2009-01-09 12:18 ` Richard Miller
2009-01-09 12:38 ` Sape Mullender
@ 2009-01-09 13:39 ` erik quanstrom
1 sibling, 0 replies; 27+ messages in thread
From: erik quanstrom @ 2009-01-09 13:39 UTC (permalink / raw)
To: 9fans
> I do sometimes get errors like this from my fossil server:
> could not write super block; waiting 10 seconds
> blistAlloc: called on clean block
> but I've been assuming they are benign. Am I wrong?
i looked into this.
/sys/src/cmd/fossil/cache.c:1210,1217
	if(p->index < 0){
		/*
		 * We don't know how to temporarily undo
		 * b's dependency on bb, so just don't write b yet.
		 */
		if(0) fprint(2, "blockWrite skipping %d %x %d %d; need to write %d %x %d\n",
			b->part, b->addr, b->vers, b->l.type, p->part, p->addr, bb->vers);
		return 0;
	}
what if bb == b?
it'd be interesting to see what that outputs with if(1).
i think _cacheLocalLookup is finding the superblock
itself. perhaps there was no write between snaps, meaning that
only the sb itself is dirty?
- erik
* Re: [9fans] venti
2009-01-09 9:44 ` Sape Mullender
2009-01-09 12:18 ` Richard Miller
@ 2009-01-09 20:11 ` Dave Eckhardt
2009-01-09 20:27 ` erik quanstrom
` (2 more replies)
1 sibling, 3 replies; 27+ messages in thread
From: Dave Eckhardt @ 2009-01-09 20:11 UTC (permalink / raw)
To: 9fans
> A check could be put in by having venti put some sort of lock
> somewhere on the disk. But that would lead to problems if
> venti doesn't shut down properly: venti would be gone but the
> lock would still be there.
Post an (ignored) mode 000 fd in /srv? Since nobody could open
it, it would always have one reference, and would go away when
the venti did?
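A minimal sketch, in Plan 9 C, of how such a guard might look. The entry
name venti.lock, the parked pipe descriptor, and the assumption that devsrv
honors ORCLOSE (removing the entry when the posting descriptor is closed)
are all illustrative, not anything venti actually does:

#include <u.h>
#include <libc.h>

/* hypothetical guard: refuse to start if another instance holds the lock */
static void
takelock(void)
{
	int fd, p[2];

	/* create fails if the entry already exists; mode 0 blocks opens */
	fd = create("/srv/venti.lock", ORCLOSE|OWRITE, 0);
	if(fd < 0)
		sysfatal("another venti seems to be running: %r");
	if(pipe(p) < 0)
		sysfatal("pipe: %r");
	fprint(fd, "%d", p[0]);	/* park a descriptor in the entry */
	close(p[0]);
	close(p[1]);
	/* fd stays open for the life of the process; at exit the entry vanishes */
}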
Dave Eckhardt
* Re: [9fans] venti
2009-01-09 20:11 ` Dave Eckhardt
@ 2009-01-09 20:27 ` erik quanstrom
2009-01-09 22:18 ` Dave Eckhardt
2009-01-09 20:34 ` Steve Simon
2009-01-09 21:08 ` Roman V. Shaposhnik
2 siblings, 1 reply; 27+ messages in thread
From: erik quanstrom @ 2009-01-09 20:27 UTC (permalink / raw)
To: 9fans
> > A check could be put in by having venti put some sort of lock
> > somewhere on the disk. But that would lead to problems if
> > venti doesn't shut down properly: venti would be gone but the
> > lock would still be there.
>
> Post an (ignored) mode 000 fd in /srv? Since nobody could open
> it, it would always have one reference, and would go away when
> the venti did?
dns tries a similar trick. unfortunately, this means that dns
doesn't like to be restarted when it's gone haywire, as slay dns|rc
doesn't seem to remove dns's entry from /srv.
it also assumes direct-attach storage.
- erik
* Re: [9fans] venti
2009-01-09 20:11 ` Dave Eckhardt
2009-01-09 20:27 ` erik quanstrom
@ 2009-01-09 20:34 ` Steve Simon
2009-01-09 21:08 ` Roman V. Shaposhnik
2 siblings, 0 replies; 27+ messages in thread
From: Steve Simon @ 2009-01-09 20:34 UTC (permalink / raw)
To: 9fans
> Post an (ignored) mode 000 fd in /srv? Since nobody could open
> it, it would always have one reference, and would go away when
> the venti did?
Kudos to Mr Eckhardt for a very neat solution.
-Steve
* Re: [9fans] venti
2009-01-09 20:11 ` Dave Eckhardt
2009-01-09 20:27 ` erik quanstrom
2009-01-09 20:34 ` Steve Simon
@ 2009-01-09 21:08 ` Roman V. Shaposhnik
2 siblings, 0 replies; 27+ messages in thread
From: Roman V. Shaposhnik @ 2009-01-09 21:08 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs
On Fri, 2009-01-09 at 15:11 -0500, Dave Eckhardt wrote:
> > A check could be put in by having venti put some sort of lock
> > somewhere on the disk. But that would lead to problems if
> > venti doesn't shut down properly: venti would be gone but the
> > lock would still be there.
>
> Post an (ignored) mode 000 fd in /srv? Since nobody could open
> it, it would always have one reference, and would go away when
> the venti did?
Very neat! One question though: since a hostowner can change
permissions on the entry later, this still doesn't offer
a guarantee of a single reference, does it?
Thanks,
Roman.
* Re: [9fans] venti
2009-01-09 20:27 ` erik quanstrom
@ 2009-01-09 22:18 ` Dave Eckhardt
2009-01-09 22:27 ` erik quanstrom
0 siblings, 1 reply; 27+ messages in thread
From: Dave Eckhardt @ 2009-01-09 22:18 UTC (permalink / raw)
To: 9fans
>> Post an (ignored) mode 000 fd in /srv? Since nobody could open
>> it, it would always have one reference, and would go away when
>> the venti did?
> dns tries a similar trick.
I think /srv/dns serves an actual file system, so there are
potentially many references to it (not just one).
> it also assumes direct-attach storage.
Not exactly, but you are right that ventis running on two
kernels could mount the same storage.
I seem to recall being surprised at some point to observe
/dev/sdXX/data silently imposing a one-open-at-a-time policy
(I think subsequent opens stalled). Maybe a script run at
system boot time could turn on DMEXCL for appropriate things
in /dev/sd*/*?
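A sketch of what the core of that boot-time script might do, again in
Plan 9 C. Whether the sd driver actually honors a wstat that adds DMEXCL
is an assumption here, so this is illustrative only:

#include <u.h>
#include <libc.h>

/* hypothetical: mark a disk file exclusive-use so a second open fails */
static void
mkexcl(char *path)
{
	Dir d, *old;

	if((old = dirstat(path)) == nil)
		sysfatal("stat %s: %r", path);
	nulldir(&d);		/* change nothing but the mode */
	d.mode = old->mode | DMEXCL;
	if(dirwstat(path, &d) < 0)
		sysfatal("wstat %s: %r", path);
	free(old);
}

(From the shell the same operation is chmod +l, assuming the device
permits it.)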
Dave Eckhardt
* Re: [9fans] venti
2009-01-09 22:18 ` Dave Eckhardt
@ 2009-01-09 22:27 ` erik quanstrom
0 siblings, 0 replies; 27+ messages in thread
From: erik quanstrom @ 2009-01-09 22:27 UTC (permalink / raw)
To: 9fans
> I seem to recall being surprised at some point to observe
> /dev/sdXX/data silently imposing a one-open-at-a-time policy
> (I think subsequent opens stalled). Maybe a script run at
> system boot time could turn on DMEXCL for appropriate things
> in /dev/sd*/*?
i think you mean the raw file.
; cat fu.c
#include <u.h>
#include <libc.h>

void
main(void)
{
	int i, fd[2];

	for(i = 0; i < nelem(fd); i++){
		fd[i] = open("/dev/sdC0/data", OREAD);
		print("fd = %d\n", fd[i]);
	}
	exits("");
}
; 8c fu.c && 8l fu.8 && 8.out
fd = 3
fd = 4
- erik
* Re: [9fans] venti
2007-10-04 1:51 Russ Cox
@ 2007-10-07 20:51 ` Steve Simon
0 siblings, 0 replies; 27+ messages in thread
From: Steve Simon @ 2007-10-07 20:51 UTC (permalink / raw)
To: 9fans
> I have fixed the sync livelock bug that anothy and others reported.
I too had the "hangs at conf..." problem but it seemed to go
away when I disabled a cpu.
The latest change to venti has solved my problems and I
am back on two cpus.
Thanks,
-Steve
* [9fans] venti
@ 2007-10-04 1:51 Russ Cox
2007-10-07 20:51 ` Steve Simon
0 siblings, 1 reply; 27+ messages in thread
From: Russ Cox @ 2007-10-04 1:51 UTC (permalink / raw)
To: 9fans
I have fixed the sync livelock bug that anothy and others reported.
To make it easier to debug problems in the future (and to inspect
your venti servers from other machines), I suggest changing your
plan9.ini from reading
venti=/dev/sdC0/arenas
to
venti=/dev/sdC0/arenas tcp!127.1!17034 tcp!*!8000
The two additional addresses are where venti listens for venti
and http traffic, respectively. The defaults are tcp!127.1!17034
and tcp!127.1!8000. Using * instead of 127.1 in the latter will
make it so the http server is accessible from other machines.
(If you want to make your venti server accessible too, change
the first address to tcp!*!17034.) The default is 127.1 for
security reasons.
Having done this, you will be able to load pages like
http://venti:8000/proc/threads
to get a list of threads and what code they are blocked in. And
http://venti:8000/proc/stacks
will give the stack for each thread, much like running stacks()
in an "acid -l thread" session.
If you do encounter problems where venti appears to be
hung in some form, I would appreciate if you could save
the output of
http://venti:8000/proc/all
and mail it to me when reporting the problem.
Note that although there are new venti binaries on sources,
the kernels do not yet use the new venti. If you boot from
a combined venti+fossil server, you will need to rebuild
9pccpuf/9pcf yourself in order to get it.
Thanks.
Russ
* [9fans] Venti
@ 2007-07-03 4:58 Lucio De Re
0 siblings, 0 replies; 27+ messages in thread
From: Lucio De Re @ 2007-07-03 4:58 UTC (permalink / raw)
To: 9fans
A fresh installation of Fossil+Venti (CD from June 28 - 9pccpuf built
without alterations) seems to cause Venti trouble.
I start the system up in the late evening and shut it down around 05:00,
but I have moved the Venti archiving function to 01:00 (snap -a 0100).
This morning I got the attached errors. Well, I would have attached
them, but the floppy disk failed on me :-( so we get the first few:
The gist is a whole lot of "unwritten clumps" that go away after a
"checkarenas -f". Something fell off the update truck?
++L
[-- Attachment #2: venti.chk --]
[-- Type: text/plain, Size: 4140 bytes --]
unwritten clump info for clump=61490 score=0860e754dc1654ae7e50e48bce44d873e2f5bd9f type=13
unwritten clump info for clump=61491 score=135d0fa4d2ee35fbf725d1d4459605de5ea68565 type=13
unwritten clump info for clump=61492 score=cfa64faf2975bcc2016f8339344d34496f672651 type=13
unwritten clump info for clump=61493 score=179ce20fdb20c895048a55b7b8af431e00dce6da type=2
unwritten clump info for clump=61494 score=02fd78e36bdf30f897489228d7ba591744f9121a type=13
unwritten clump info for clump=61495 score=dbc05aaaeb555b6c8f359bab2c9c5a0449079675 type=13
unwritten clump info for clump=61496 score=c27057e59ca43886fcf799b1f54f9d5564b19a47 type=2
unwritten clump info for clump=61497 score=44499aaab6f13f01b860dd62f68450d34947b78a type=13
unwritten clump info for clump=61498 score=463510b5675611440e0bf665e3c16f38bad63615 type=13
unwritten clump info for clump=61499 score=7f86bb329206d5b0be49fdeda3faa74491dccd24 type=2
unwritten clump info for clump=61500 score=ab6f69145259eee0fc90f633fdc2b4ca3b46b9f5 type=13
unwritten clump info for clump=61501 score=f15b45e306bd8335b8a127f9dfbe32748714f87b type=2
unwritten clump info for clump=61502 score=8920bb02529e41f65937d2defff5589419a5ee53 type=13
unwritten clump info for clump=61503 score=efc693b46063ef62b5c380f8ee29ff7004a35ad9 type=2
unwritten clump info for clump=61504 score=9833974abec1a0617a2261f3597216f7cc4cf03f type=13
unwritten clump info for clump=61505 score=82a4cf34000d1311caa6b0632a7fa7d2e99903df type=2
unwritten clump info for clump=61506 score=6be5fa65dd41f918d611c12ee25e902eba668587 type=13
unwritten clump info for clump=61507 score=6b2eed0ceeec06debae31a8ba75bb2413dd857f5 type=13
unwritten clump info for clump=61508 score=bf58c52de0abed8c693da15f290beba937df85b3 type=13
unwritten clump info for clump=61509 score=42dcb7eac6ca00d94c60ecd55e577170f97a825d type=13
unwritten clump info for clump=61510 score=9eb8621106fde5b827b4a36c68ad8ba5216f60a0 type=13
unwritten clump info for clump=61511 score=a1c88fc8e707cd3efae8eb62ee427fd5d033b9bc type=13
unwritten clump info for clump=61512 score=02cadf200ad0c422a05b2a711629acc77f243e31 type=13
unwritten clump info for clump=61513 score=562a63303eec23ad1e9488855d58769c673d35d9 type=2
unwritten clump info for clump=61514 score=ac69e9fc45030c172c8046bd7f1ad0276e9e0587 type=13
unwritten clump info for clump=61515 score=9c4de45cedf8d1981151bae318af883fe6bca4c0 type=13
unwritten clump info for clump=61516 score=773041b6cd72be21f3f255e41aac3e37926abf2c type=2
unwritten clump info for clump=61517 score=d33c06bc7fa6c50692bd6707a6c3e5d2f1830392 type=3
unwritten clump info for clump=61518 score=781a674e66e3d8002a056b16dbfa31ad74a8fbbd type=13
unwritten clump info for clump=61519 score=ca81cf2d4e98827d6a30471528e523c33b76d04a type=13
unwritten clump info for clump=61520 score=9c34ffb4192da23393682d7250cf10aeb5dc446d type=13
unwritten clump info for clump=61521 score=34f6057f0f839d2580a0c0750762766a8121aa6e type=3
unwritten clump info for clump=61522 score=30c65b53b202a4dcac2de992eb4a409cde24e881 type=2
unwritten clump info for clump=61523 score=41724f3c5016052ff1f1d4ca1faa1299904a4e91 type=13
unwritten clump info for clump=61524 score=6293b09977241d09aaf4c11a2fafc20f14bf1da7 type=13
unwritten clump info for clump=61525 score=bb3ff1d517677dc7ac158140a4416c0c02b589bf type=13
unwritten clump info for clump=61526 score=70fdd1b2179e5d9fbed0862eae4ad922b7e0a678 type=13
unwritten clump info for clump=61527 score=7bc69e87a0db7f5c4435a46dfdb7befc6ce0efaf type=2
unwritten clump info for clump=61528 score=cbc59724623d2c52b3bd7e0eab471694b381d842 type=13
unwritten clump info for clump=61529 score=9bbc6c175331223ef9973b8534b72a4d3050c7ce type=13
unwritten clump info for clump=61530 score=b620b786813702d0d50b186149b08aa64f7eb703 type=13
unwritten clump info for clump=61531 score=e540dc94f339e1963cb499dbaae7b8aa9437846f type=3
unwritten clump info for clump=61532 score=b217dca96e4629595a328ba86b24259420fc1154 type=13
unwritten clump info for clump=61533 score=6d7098719406013797606870c300e5c8b277db84 type=2
unwritten clump info for clump=61534 score=7b3250f73ed703f7bc3
* [9fans] venti
@ 2007-03-30 16:32 Steve Simon
0 siblings, 0 replies; 27+ messages in thread
From: Steve Simon @ 2007-03-30 16:32 UTC (permalink / raw)
To: 9fans
Just to put another point of view...
I have been using venti at work and at home, and I have no UPSs (UPI ?).
As yet (touch wood) I have only ever lost a maximum of days work, and this
was due to a disk failure taking out my fossil drive which I don't bother
to mirror.
I have never had any data corruption or loss in my venti archive.
Perhaps this is because I only write to it using fossil snapshots at 4am
and not using vac - so the window when the system is vunerable is smaller.
-Steve
* Re: [9fans] venti
2003-02-06 1:11 Kenji Arisawa
@ 2003-02-06 1:21 ` Russ Cox
0 siblings, 0 replies; 27+ messages in thread
From: Russ Cox @ 2003-02-06 1:21 UTC (permalink / raw)
To: 9fans
that's what we use too,
except we sleep for 10 seconds.
it's a wart yet to be removed.
* [9fans] venti
@ 2003-02-06 1:11 Kenji Arisawa
2003-02-06 1:21 ` Russ Cox
0 siblings, 1 reply; 27+ messages in thread
From: Kenji Arisawa @ 2003-02-06 1:11 UTC (permalink / raw)
To: 9fans
Hello,
How can we find out that venti has started working?
The following is my script in /rc/bin/termrc but I am not satisfied:
v=/usr/arisawa/venti
venti/venti -h tcp!*!8088 -w -c $v/venti.conf >[2] /dev/null &
sleep 5 ### I dislike this sloppy solution but don't know an alternative
fossil/fossil -c '. '$v/flproto
How are you doing with this problem?
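One hypothetical alternative to the fixed sleep, sketched in Plan 9 C:
poll venti's dial address until it accepts a connection, then start
fossil. The address and the retry limit are assumptions for illustration:

#include <u.h>
#include <libc.h>

void
main(void)
{
	int i, fd;

	for(i = 0; i < 60; i++){
		fd = dial("tcp!127.1!17034", nil, nil, nil);
		if(fd >= 0){
			close(fd);	/* venti is answering */
			exits(nil);
		}
		sleep(1000);	/* milliseconds */
	}
	exits("venti did not come up");
}

This would run between the venti and fossil lines in termrc, in place of
the sleep.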
Kenji Arisawa
E-mail: arisawa@aichi-u.ac.jp
* Re: [9fans] venti
@ 2002-11-29 18:59 Russ Cox
0 siblings, 0 replies; 27+ messages in thread
From: Russ Cox @ 2002-11-29 18:59 UTC (permalink / raw)
To: 9fans, fst
Skip writes [in private mail]:
> Russ, I have been having the same problem for the last
> 1-2 days. The problem seems to be with timeouts and
> not the DNS; Notice the ping and the before/after
> behavior of adding the entry sources.cs. to /net/ndb.
Right, the echo to /net/ndb will make the name resolve at all
(after the real DNS times out).
Srv only gives dial() 5 seconds to establish the connection,
so you're still getting problems. To work around this, you
can establish /srv/sources yourself:
srv 204.178.31.8 sources
Then there will still be a long pause for "9fs sources"
while factotum dials sources for tickets, but at least it
will work.
In summary, the right incantation is:
echo 'dom=sources.cs.bell-labs.com sys=sources ip=204.178.31.8' >>/net/ndb
echo -n refresh >/net/cs
srv 204.178.31.8 sources
9fs sources # takes a while
Sorry for the inconvenience. I have no idea what's wrong
with DNS right now.
Russ
* [9fans] venti
@ 2002-11-29 18:27 Russ Cox
0 siblings, 0 replies; 27+ messages in thread
From: Russ Cox @ 2002-11-29 18:27 UTC (permalink / raw)
To: 9fans
[A side note:
A number of people have reported not being able to connect
to sources.cs.bell-labs.com. Our external DNS seems to be
very slow. I'm not sure exactly what's wrong, but until we
fix it you can use:
echo 'dom=sources.cs.bell-labs.com sys=sources ip=204.178.31.8' >>/net/ndb
echo -n refresh >/net/cs
as a workaround.]
There's a new command venti/wrarena, which complements rdarena.
From ventiaux(8):
venti/wrarena [ -o fileoffset ] [ -h host ] arenafile [ clumpoffset ]
Wrarena writes the blocks contained in the arena arenafile
(typically, the output of rdarena) to a Venti server. It is
typically used to reinitialize a Venti server from backups
of the arenas. For example,
venti/rdarena /dev/sdC0/arenas arena.0 >external.media
venti/wrarena -h venti2 external.media
writes the blocks contained in arena.0 to the Venti server
venti2 (typically not the one using /dev/sdC0/arenas).
The -o option specifies that the arena starts at byte
fileoffset (default 0) in arenafile. This is useful for
reading directly from the Venti arena partition:
venti/wrarena -h venti2 -o 335872 /dev/sdC0/arenas
(In this example, 335872 is the offset shown in the Venti
server's index list (344064) minus one block (8192). You
will need to substitute your own arena offsets and block
size.)
Finally, the optional offset argument specifies that the
writing should begin with the clump starting at offset
within the arena. Wrarena prints the offset it stopped at
(because there were no more data blocks). This could be
used to incrementally back up a Venti server to another
Venti server:
last=`{cat last}
venti/wrarena -h venti2 -o 335872 /dev/sdC0/arenas $last >output
awk '/^end offset/ { print $3 }' output >last
(Of course, one would need to add wrapper code to keep track
of which arenas have been processed.)
Russ
* Re: [9fans] venti
2002-06-14 2:19 ` Sean Quinlan
@ 2002-06-14 5:05 ` Tharaneedharan Vilwanathan
0 siblings, 0 replies; 27+ messages in thread
From: Tharaneedharan Vilwanathan @ 2002-06-14 5:05 UTC (permalink / raw)
To: 9fans
hi,
sean's mail was useful.
i figured out that i had created the arena file with exactly 512MB size and
did fmtarenas. that probably didn't make any useful arenas. i raised the size
to nearly 2GB and then it created about 4 arenas, and that solved the problem.
wow! i think venti is going to be a great tool! all along, i was
uncomfortable using source code control systems. i think venti is going to
be a pain killer.
now, i have some questions:
- when the arena file is too small to create any useful arenas, does
fmtarenas throw an appropriate error? and would it be a good idea to give a
clue (in the man page) on the additional space needed for storing additional
details (e.g. the space needed is (n * arenasize) + some_constant_size)?
- in the arena file i created, the last arena seemed to be smaller than
the other arenas. is that fine? if so, is it that the arena file should
at least hold one arena, and the last arena could be a smaller one?
- how do i gracefully shut down venti? i closed the window to shut down venti.
- the built-in http server didn't seem to start by default. however, when i
used the -h option, it worked. bug? feature? and when venti is killed (the
way i did it), the built-in http server didn't seem to die.
dharani
> you are right, the manual page should give you a hint.
> 5% is probably a good choice.
>
> The "no space left" sounds like venti is not detecting your arenas.
> you might try
> hget http://yourmachine/storage
> to get a dump of what the venti server thinks it has for storage.
>
> hope this helps
> seanq
>
> Tharaneedharan Vilwanathan wrote:
> >
> > > I used regular files to try venti and did just like the example in the man
> > > page says.
> > >
> > > I faced 2 problems. The man page says what should be the size of the index
> > i meant, the man page doesn't say a reasonable file size for the index file.
> >
> > > file nor the utility takes a default value (in contrast, fmtarenas takes a
> > > default value). For a 512MB arena file, what should be the size of the index
> > > file? I created two 50MB files. Am I right?
> > >
> > > Next, after doing all the steps given in the example, I tried to do 'vac' on
> > > a directory. It said something like "no space left". I tried to vac a small
> > > file. Still the same problem.
> > >
> > > Any clues?
> > >
> > > Thanks
> > > dharani
* Re: [9fans] venti
2002-06-13 22:51 ` [9fans] venti Tharaneedharan Vilwanathan
@ 2002-06-14 2:19 ` Sean Quinlan
2002-06-14 5:05 ` Tharaneedharan Vilwanathan
0 siblings, 1 reply; 27+ messages in thread
From: Sean Quinlan @ 2002-06-14 2:19 UTC (permalink / raw)
To: 9fans
you are right, the manual page should give you a hint.
5% is probably a good choice.
The "no space left" sounds like venti is not detecting your arenas.
you might try
hget http://yourmachine/storage
to get a dump of what the venti server thinks it has for storage.
hope this helps
seanq
Tharaneedharan Vilwanathan wrote:
>
> > I used regular files to try venti and did just like the example in the man
> > page says.
> >
> > I faced 2 problems. The man page says what should be the size of the index
> i meant, the man page doesn't say a reasonable file size for the index file.
>
> > file nor the utility takes a default value (in contrast, fmtarenas takes a
> > default value). For a 512MB arena file, what should be the size of the index
> > file? I created two 50MB files. Am I right?
> >
> > Next, after doing all the steps given in the example, I tried to do 'vac' on
> > a directory. It said something like "no space left". I tried to vac a small file.
> > Still the same problem.
> >
> > Any clues?
> >
> > Thanks
> > dharani
* [9fans] venti
2002-06-13 21:34 ` Tharaneedharan Vilwanathan
@ 2002-06-13 22:51 ` Tharaneedharan Vilwanathan
2002-06-14 2:19 ` Sean Quinlan
0 siblings, 1 reply; 27+ messages in thread
From: Tharaneedharan Vilwanathan @ 2002-06-13 22:51 UTC (permalink / raw)
To: 9fans
> I used regular files to try venti and did just like the example in the man
> page says.
>
> I faced 2 problems. The man page says what should be the size of the index
i meant, the man page doesn't say a reasonable file size for the index file.
> file nor the utility takes a default value (in contrast, fmtarenas takes a
> default value). For a 512MB arena file, what should be the size of the index
> file? I created two 50MB files. Am I right?
>
> Next, after doing all the steps given in the example, I tried to do 'vac' on
> a directory. It said something like "no space left". I tried to vac a small file.
> Still the same problem.
>
> Any clues?
>
> Thanks
> dharani
* [9fans] venti
@ 2002-01-30 20:35 George Michaelson
0 siblings, 0 replies; 27+ messages in thread
From: George Michaelson @ 2002-01-30 20:35 UTC (permalink / raw)
To: 9fans
Say you coalesce the same blockset down to one common instance for
a large number of files, and then lose that block because a fly shits
on the write-once media.
Didn't you just lose that bit pattern in time, across your entire archival
filestore? Oooh. me no like. I suppose that is exactly what happens in
99% of data loss anyway, if you believe it's mostly unique data.
This is precisely why holotype fossils are not on display: they are too
valuable to expose to the risk of catastrophic loss.
Protection of an entire venti filestore would thus mean either 2x the data
size to replicate, or some sub-fraction to ECC-check it, for some chosen
level of protection. I suppose that's still better than having 300 repeat
instances of the BSD copyright on every manpage. I notice in the paper
the authors say "doable, but we didn't bother", which means that until somebody
in the operations space does budget up for the offsite archive, and some
checks to make sure the copy is 1:1 acceptable, the venti filestore is
a bit more risky than you might want. Even backed on RAID5, it can lose
bigtime from some failures. Might be nice for an @home fs, on IDE RAID, mind you!
This also looks to have resonances with rsync, in a data-persistent state
instead of for update of two copies of the same resource. The rsync papers
discuss finding good hash algorithms and good block sizes to make this work. No
citation; perhaps if you followed the rsync citation index back they'd wind
up at the same prime sources. I guess the venti authors knew about rsync but
don't see it as having relevance.
Do you get nice COW-style savings in memory/load time because every pointer
can walk through the same memory reference to find the string? That could
be really nice!
And it would make smaller footprints on tiny devices as well.
cheers
-George
* Re: [9fans] venti
2002-01-30 5:52 ` rob pike
2002-01-30 6:23 ` George Michaelson
2002-01-30 8:07 ` paurea
@ 2002-01-30 11:17 ` Boyd Roberts
2 siblings, 0 replies; 27+ messages in thread
From: Boyd Roberts @ 2002-01-30 11:17 UTC (permalink / raw)
To: 9fans
rob pike wrote:
> ... a hash is a universal address space.
Yes, the hash buys you a lot. I really like it.
I really like the 'server lied' case :)
* Re: [9fans] venti
2002-01-30 5:52 ` rob pike
2002-01-30 6:23 ` George Michaelson
@ 2002-01-30 8:07 ` paurea
2002-01-30 11:17 ` Boyd Roberts
2 siblings, 0 replies; 27+ messages in thread
From: paurea @ 2002-01-30 8:07 UTC (permalink / raw)
To: 9fans
rob pike writes:
> all those things are made trivial by the realization that
> a hash is a universal address space.
I read some time ago about a filesystem called LBFS, which uses the same idea
for a network filesystem. Any opinions on it?
--
Saludos,
Gorka
"Curiosity sKilled the cat"
* Re: [9fans] venti
2002-01-30 5:52 ` rob pike
@ 2002-01-30 6:23 ` George Michaelson
2002-01-30 8:07 ` paurea
2002-01-30 11:17 ` Boyd Roberts
2 siblings, 0 replies; 27+ messages in thread
From: George Michaelson @ 2002-01-30 6:23 UTC (permalink / raw)
To: 9fans
> Of course, but the beauty of Venti is the ease with which it provides
> the hooks you need to address such issues. Replication, mirroring,
> load balancing, all those things are made trivial by the realization that
> a hash is a universal address space.
Except, as the paper says, get above petabytes and the *choice* of hash
gets harder/slower because it has to be 'better'. i.e., too universal makes
the hash function less nice.
I like the comments about certain operations on older data being extremely
expensive. This resonated with my memories of using archival storage on the
TOPS-10 system, where you stalled for Operators to find the tape, load the
temporary pack, copy the tape to the pack, put that content into exposed
filespace in a place you could find it, and get back to you.
ICL CAFS might have been there, except it was the usual UK R&D 20+ years
before viable market/technology. Oh, the joys of doing research too far
ahead of the game...
if they made the hash function URI/URN compatible, passing around
a referent on the web might wind up going back to the canonical data anyway.
cheers
-George
* Re: [9fans] venti
@ 2002-01-30 5:52 ` rob pike
2002-01-30 6:23 ` George Michaelson
` (2 more replies)
0 siblings, 3 replies; 27+ messages in thread
From: rob pike @ 2002-01-30 5:52 UTC (permalink / raw)
To: 9fans
Of course, but the beauty of Venti is the ease with which it provides
the hooks you need to address such issues. Replication, mirroring,
load balancing, all those things are made trivial by the realization that
a hash is a universal address space.
-rob
Thread overview: 27+ messages
2009-01-09 8:20 [9fans] venti Tharaneedharan Vilwanathan
2009-01-09 9:44 ` Sape Mullender
2009-01-09 12:18 ` Richard Miller
2009-01-09 12:38 ` Sape Mullender
2009-01-09 13:39 ` erik quanstrom
2009-01-09 20:11 ` Dave Eckhardt
2009-01-09 20:27 ` erik quanstrom
2009-01-09 22:18 ` Dave Eckhardt
2009-01-09 22:27 ` erik quanstrom
2009-01-09 20:34 ` Steve Simon
2009-01-09 21:08 ` Roman V. Shaposhnik
-- strict thread matches above, loose matches on Subject: below --
2007-10-04 1:51 Russ Cox
2007-10-07 20:51 ` Steve Simon
2007-07-03 4:58 [9fans] Venti Lucio De Re
2007-03-30 16:32 [9fans] venti Steve Simon
2003-02-06 1:11 Kenji Arisawa
2003-02-06 1:21 ` Russ Cox
2002-11-29 18:59 Russ Cox
2002-11-29 18:27 Russ Cox
2002-06-13 20:58 [9fans] bug or a feature? Dan Cross
2002-06-13 21:34 ` Tharaneedharan Vilwanathan
2002-06-13 22:51 ` [9fans] venti Tharaneedharan Vilwanathan
2002-06-14 2:19 ` Sean Quinlan
2002-06-14 5:05 ` Tharaneedharan Vilwanathan
2002-01-30 20:35 George Michaelson
[not found] <rob@plan9.bell-labs.com>
2002-01-30 5:52 ` rob pike
2002-01-30 6:23 ` George Michaelson
2002-01-30 8:07 ` paurea
2002-01-30 11:17 ` Boyd Roberts