9fans - fans of the OS Plan 9 from Bell Labs
* [9fans] A potentially useful venti client
@ 2017-12-12  9:33 Ole-Hjalmar Kristensen
  2017-12-12 14:07 ` Steve Simon
  0 siblings, 1 reply; 27+ messages in thread
From: Ole-Hjalmar Kristensen @ 2017-12-12  9:33 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

Based on copy.c and readlist.c, I have cobbled together a venti client to
copy a list of venti blocks from one venti server to another. I am thinking
of using it to incrementally replicate the contents of one site to another.
It could even be used for two-way replication, since the CAS and
deduplicating properties of venti ensure that you will never have write
conflicts at the block level.

I have tried it out by feeding it the output of printarenas, and it seems
to work reasonably well. Does anyone have any good ideas about how to
incrementally extract the set of scores that have been added to a venti
server? You could, of course, extract the whole set of scores and diff it
against an old set, but that's rather inefficient.

Ole-Hj.


#include <u.h>
#include <libc.h>
#include <thread.h>
#include <venti.h>
#include <bio.h>

enum
{
    // XXX venti.h may already define VtMaxLumpSize (plan9port uses 56*1024);
    // if so, drop this enum and use the library's value instead.
    VtMaxLumpSize = 65535,
};

char *srchost;
char *dsthost;
Biobuf b;
VtConn *zsrc;
VtConn *zdst;
uchar *buf;
void run(Biobuf*);
int nn;

void
usage(void)
{
    fprint(2, "usage: copylist srchost dsthost [list ...]\n");
    threadexitsall("usage");
}

/* parsescore: decode a 40-character hex string into a VtScoreSize-byte score */
int
parsescore(uchar *score, char *buf, int n)
{
    int i, c;

    memset(score, 0, VtScoreSize);

    if(n != VtScoreSize*2){
        werrstr("score wrong length %d", n);
        return -1;
    }
    for(i=0; i<VtScoreSize*2; i++) {
        if(buf[i] >= '0' && buf[i] <= '9')
            c = buf[i] - '0';
        else if(buf[i] >= 'a' && buf[i] <= 'f')
            c = buf[i] - 'a' + 10;
        else if(buf[i] >= 'A' && buf[i] <= 'F')
            c = buf[i] - 'A' + 10;
        else {
            c = buf[i];
            werrstr("bad score char %d '%c'", c, c);
            return -1;
        }

        if((i & 1) == 0)
            c <<= 4;

        score[i>>1] |= c;
    }
    return 0;
}

void
threadmain(int argc, char *argv[])
{
    int fd, i;

    ARGBEGIN{
    default:
        usage();
        break;
    }ARGEND

    if(argc < 2)
        usage();

    fmtinstall('V', vtscorefmt);
    buf = vtmallocz(VtMaxLumpSize);

    srchost = argv[0];
    zsrc = vtdial(srchost);
    if(zsrc == nil)
        sysfatal("could not dial src server: %r");
    if(vtconnect(zsrc) < 0)
        sysfatal("vtconnect src: %r");

    dsthost = argv[1];
    zdst = vtdial(dsthost);
    if(zdst == nil)
        sysfatal("could not dial dst server: %r");
    if(vtconnect(zdst) < 0)
        sysfatal("vtconnect dst: %r");

    /* with no list files named, read the work list from standard input */
    if(argc == 2){
        Binit(&b, 0, OREAD);
        run(&b);
    }else{
        for(i=2; i<argc; i++){
            if((fd = open(argv[i], OREAD)) < 0)
                sysfatal("open %s: %r", argv[i]);
            Binit(&b, fd, OREAD);
            run(&b);
        }
    }
    threadexitsall(nil);
}

/*
 * run: process one work list.  Each input line holds a score and a
 * block type; read that block from the source server and write it
 * to the destination.
 */
void
run(Biobuf *b)
{
    char *p, *f[10];
    int nf;
    uchar score[VtScoreSize];
    int type, n;

    while((p = Brdline(b, '\n')) != nil){
        p[Blinelen(b)-1] = 0;    /* chop the trailing newline */
        nf = tokenize(p, f, nelem(f));
        if(nf != 2)
            sysfatal("syntax error in work list");
        if(parsescore(score, f[0], strlen(f[0])) < 0)
            sysfatal("bad score %s in work list", f[0]);
        type = atoi(f[1]);
        n = vtread(zsrc, score, type, buf, VtMaxLumpSize);
        if(n < 0)
            sysfatal("could not read %s %s: %r", f[0], f[1]);
        n = vtwrite(zdst, score, type, buf, n);
        if(n < 0)
            sysfatal("could not write %s %s: %r", f[0], f[1]);
        if(++nn%1000 == 0)
            print("%d...", nn);    /* progress marker every 1000 blocks */
    }
}
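
A hypothetical way to drive it (an untested sketch: it assumes venti/printarena
prints one line per block with the score in field 2 and the type in field 3,
which is what the dumpvacroots script quoted later in this thread relies on):

venti/printarena /dev/sdC0/arenas |
    awk '{print $2, $3}' > /tmp/worklist
copylist tcp!oldserver!venti tcp!newserver!venti /tmp/worklist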


* Re: [9fans] A potentially useful venti client
  2017-12-12  9:33 [9fans] A potentially useful venti client Ole-Hjalmar Kristensen
@ 2017-12-12 14:07 ` Steve Simon
  2017-12-12 15:45   ` Steven Stallion
  2017-12-12 18:33   ` Ole-Hjalmar Kristensen
  0 siblings, 2 replies; 27+ messages in thread
From: Steve Simon @ 2017-12-12 14:07 UTC (permalink / raw)
  To: 9fans

printarenas is a script - it walks through all your arenas at each offset.

You could craft another script that remembers the last arena and offset you successfully
transferred and only send those after that.

I think there is a pattern where you can save the last arena,offset in the local
fossil. Then you could mount the remote venti to check that last arena,offset
that actually arrived and stuck to the disk on the remote site.
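
A rough rc sketch of that idea (hypothetical throughout: the state file is an
invention, printarena's -o offset flag is the one dumpvacroots uses, and
copylist is the program from the first message):

#!/bin/rc
# incremental transfer - sketch only, untested
state=$home/lib/venti.offset    # hypothetical state file
last=`{cat $state}
venti/printarena -o $last /dev/sdC0/arenas |
    awk '{print $2, $3}' |
    copylist tcp!src!venti tcp!dst!venti
# recording the new offset afterwards is the missing piece; printarena
# would have to be taught to report how far it got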

On a similar subject I have 10 years of backups from a decommissioned work server
that I need to merge into my home venti one of these days...

-Steve




* Re: [9fans] A potentially useful venti client
  2017-12-12 14:07 ` Steve Simon
@ 2017-12-12 15:45   ` Steven Stallion
  2017-12-12 16:11     ` Steve Simon
  2017-12-12 18:42     ` Ole-Hjalmar Kristensen
  2017-12-12 18:33   ` Ole-Hjalmar Kristensen
  1 sibling, 2 replies; 27+ messages in thread
From: Steven Stallion @ 2017-12-12 15:45 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

Get ready to wait! It took almost a month for me to import about 30GB
from a decommissioned file server. It was well worth the wait though -
if you place the resulting .vac file under /lib/vac (or
$home/lib/vac) you can just use 9fs to mount with zero fuss.

On a related note, once sources started having issues with
availability, I started running nightly snaps of my contrib directory
via cron:

contrib=/n/sources/contrib/$user
9fs sources
@{cd $contrib && vac -a $home/lib/vac/contrib.vac .} >[2]/dev/null

Now I have a dump-like history of changes I've made to my contrib
directory without the need to connect to sources:

% 9fs contrib.vac
% lc /n/contrib
2015    2016    2017

Cheers,
Steve


* Re: [9fans] A potentially useful venti client
  2017-12-12 15:45   ` Steven Stallion
@ 2017-12-12 16:11     ` Steve Simon
  2017-12-12 16:23       ` Steven Stallion
  2017-12-12 18:42     ` Ole-Hjalmar Kristensen
  1 sibling, 1 reply; 27+ messages in thread
From: Steve Simon @ 2017-12-12 16:11 UTC (permalink / raw)
  To: 9fans

Interesting.

how did you do the import? did you use vac -q and vac -d previous-score for each
imported day to try and speed things up?

Previously I imported stuff into venti by copying it into fossil first
and then taking a snap. I always wanted a better solution, like being able
to use vac and then installing the score into my main filesystem through
a special fscons command. Sadly I never got around to it.

-Steve




* Re: [9fans] A potentially useful venti client
  2017-12-12 16:11     ` Steve Simon
@ 2017-12-12 16:23       ` Steven Stallion
  0 siblings, 0 replies; 27+ messages in thread
From: Steven Stallion @ 2017-12-12 16:23 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

It depends - the 30GB I was mentioning before was from an older Ken's
fs that I imported with a modified cwfs. Rather than deal with all of
the history, I just took a snap with vac -s of the latest state of the
file system. I keep the original dump along with the cwfs binary in
case I ever need to dig into the dump (I haven't needed to in years).

The last venti store I needed to move around I was able to just use
rdarena/wrarena to reconstitute the fs on new hardware.
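
For reference, that route looks roughly like this (a sketch from memory;
check rdarena(8) and wrarena(8) for the exact arguments):

# copy one sealed arena to a new venti server - hedged, untested
venti/rdarena /dev/sdC0/arenas arenas0 > /tmp/arenas0
venti/wrarena -h tcp!newserver!venti /tmp/arenas0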

I think it comes down to what you want to preserve - life gets easier
if you don't need to worry about the dump. I don't think it would be
too tough to script the dump though. You probably would just need to
walk through each successive vac and archive it using vac -a. Probably
easier said than done though :-)
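
Something along these lines, perhaps (a sketch that assumes the old dump is
mounted at /n/dump with one directory per day; ordering and error handling
are glossed over):

#!/bin/rc
# replay an old dump into one growing vac archive, oldest day first
for(day in `{ls -p /n/dump}){
    @{cd /n/dump/$day && vac -a $home/lib/vac/dump.vac .}
}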

Steve


* Re: [9fans] A potentially useful venti client
  2017-12-12 14:07 ` Steve Simon
  2017-12-12 15:45   ` Steven Stallion
@ 2017-12-12 18:33   ` Ole-Hjalmar Kristensen
  2017-12-12 19:53     ` Steve Simon
  2017-12-12 20:15     ` Steve Simon
  1 sibling, 2 replies; 27+ messages in thread
From: Ole-Hjalmar Kristensen @ 2017-12-12 18:33 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

Hmm. On both my plan9port and on a 9front system I find printarenas.c, but
no script. Maybe you are thinking of the script for backup of individual
arenas to file? Yes, that could be a starting point.

Anyway, printarenas.c doesn't look too scary, basically a loop checking all
(or matching) arenas. It seems possible to modify the logic to start at a
specific offset.

Not running fossil at the moment, btw.; my main file server is a Linux box,
but I use vac for backup, both at home and at work. Fossil is definitely on
my todo list, although the reported behavior when running out of space is a
bit scary. Do you know why it does not simply block further requests while
checkpointing to venti, or, even better, start a snapshot before it runs
out of space?


* Re: [9fans] A potentially useful venti client
  2017-12-12 15:45   ` Steven Stallion
  2017-12-12 16:11     ` Steve Simon
@ 2017-12-12 18:42     ` Ole-Hjalmar Kristensen
  2017-12-12 19:16       ` Steven Stallion
  1 sibling, 1 reply; 27+ messages in thread
From: Ole-Hjalmar Kristensen @ 2017-12-12 18:42 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

Thanks for the tip about mounting with 9fs. I have used vacfs on Linux,
though.
But why so slow? Did you import a root with lots of backup versions? That is
partly why I made this client, which can import venti blocks without needing
to traverse a file tree over and over again.


* Re: [9fans] A potentially useful venti client
  2017-12-12 18:42     ` Ole-Hjalmar Kristensen
@ 2017-12-12 19:16       ` Steven Stallion
  2017-12-12 20:31         ` hiro
  2017-12-12 23:36         ` Skip Tavakkolian
  0 siblings, 2 replies; 27+ messages in thread
From: Steven Stallion @ 2017-12-12 19:16 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

I ran back through my old notes. Turns out I inflated the numbers a
bit - it was about a week rather than a month. I suspect the main
culprit is the fact that 9p doesn't support multiple outstanding. I
wasn't in much of a hurry at the time, so I'm sure there are more
efficient ways than simply firing up cwfs and using vac -s.


* Re: [9fans] A potentially useful venti client
  2017-12-12 18:33   ` Ole-Hjalmar Kristensen
@ 2017-12-12 19:53     ` Steve Simon
  2017-12-12 20:03       ` Steve Simon
  2017-12-12 20:07       ` Ole-Hjalmar Kristensen
  2017-12-12 20:15     ` Steve Simon
  1 sibling, 2 replies; 27+ messages in thread
From: Steve Simon @ 2017-12-12 19:53 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

/sys/src/cmd/venti/words/printarenas

no idea why it lived there though.

-Steve



* Re: [9fans] A potentially useful venti client
  2017-12-12 19:53     ` Steve Simon
@ 2017-12-12 20:03       ` Steve Simon
  2017-12-12 20:07       ` Ole-Hjalmar Kristensen
  1 sibling, 0 replies; 27+ messages in thread
From: Steve Simon @ 2017-12-12 20:03 UTC (permalink / raw)
  To: 9fans

sorry I meant /sys/src/cmd/venti/words/dumpvacroots of course.




* Re: [9fans] A potentially useful venti client
  2017-12-12 19:53     ` Steve Simon
  2017-12-12 20:03       ` Steve Simon
@ 2017-12-12 20:07       ` Ole-Hjalmar Kristensen
  1 sibling, 0 replies; 27+ messages in thread
From: Ole-Hjalmar Kristensen @ 2017-12-12 20:07 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

Same place as I found another useful script, dumpvacroots:

#!/bin/rc
# dumpvacroots - dumps all the vac scores ever stored to the venti server
# if nothing else, this illustrates that you have to control access
# to the physical disks storing the archive!

ventihttp=`{
    echo $venti | sed 's/^[a-z]+!([0-9\.]+)![a-z0-9]+$/\1/
        s/^[a-z]+!([0-9\.]+)/\1/; s/$/:8000/'
}

hget http://$ventihttp/index |
    awk '
         /^index=/ { blockSize = 0 + substr($3, 11) }
         /^arena=/ { arena = substr($1, 7) }
         /^    arena=/ {
            start = (0 + substr($5, 2)) - blockSize
            printf("venti/printarena -o %.0f %s\n", start, $3 "")
        }
    ' |
    rc |
    awk '$3 == 16 { printf("vac:%s\n", $2 "") }'

This definitely looks like it could be hacked to support an incremental
dump of scores.
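
One possible shape for that hack (hypothetical and untested; the state file is
an invention here): remember the /index listing from the previous run and
rescan only the arenas whose lines have changed:

#!/bin/rc
# incremental dumpvacroots - sketch only
# ($ventihttp computed as in dumpvacroots above)
state=$home/lib/venti.index.last    # hypothetical state file
hget http://$ventihttp/index > /tmp/index.new
# keep only the arena lines that changed since last time, then feed
# them through the same awk | rc | awk pipeline as above
if(test -e $state)
    comm -13 $state /tmp/index.new > /tmp/index.delta
if not
    cp /tmp/index.new /tmp/index.delta
cp /tmp/index.new $state

One wrinkle: the index= line carrying the block size would have to be carried
into the delta as well for the pipeline above to work unchanged.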

No printarenas there on my (9front) system, though. I'll have to see on a
proper plan9 system, maybe.


* Re: [9fans] A potentially useful venti client
  2017-12-12 18:33   ` Ole-Hjalmar Kristensen
  2017-12-12 19:53     ` Steve Simon
@ 2017-12-12 20:15     ` Steve Simon
  2017-12-12 20:31       ` Ole-Hjalmar Kristensen
  2017-12-12 21:02       ` Steven Stallion
  1 sibling, 2 replies; 27+ messages in thread
From: Steve Simon @ 2017-12-12 20:15 UTC (permalink / raw)
  To: 9fans

Re: fossil

Fossil must not fill up; however, I would say the real drawback was the lack of clear
documentation stating this.

Fossil has two modes of operation.

As a stand-alone filesystem, it is not really intended (I believe) as a production
system, more as a replacement for kfs - for laptops or installation systems.

A full fossil system is one combined with a local venti (venti on the same
machine or on a fast, low-latency network connection). Here most files are pulled
from venti (in the limit fossil contains only a single score which redirects the root
of the filesystem to a venti score). However, as you change files the new versions
are stored on fossil.

Every night at 4 or 5 am (by convention) fossil does a snap and bumps its epoch, which
marks all the changed files as readonly; further changes create new files.
The readonly files are then written to venti in the background and their space in fossil
is reclaimed.

This means the fossil only needs to be big enough to contain all the changes you
are likely to make in a day - in reality a 10Gb fossil will never fill up unless
you decide to archive your entire dvd collection on the same day.
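
(For the record: the snap schedule is set from the fossil console, reached with
con -l /srv/fscons. Flags per fossilcons(8); the times below are only an
example - an archival snap at 05:00 and ephemeral snaps every 15 minutes, kept
for 2880 minutes, i.e. two days:)

fsys main
snaptime -a 0500 -s 15 -t 2880
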
I have been running fossil and venti since 2004. Fossil did have problems doing
ephemeral dumps (short-lived dumps every 15 mins which live for a few days).
This bug used to cause occasional fossil crashes, but venti never lost a byte.

The bug was fixed before the labs froze and fossil has been solid since.

I use an ssd for venti, which helps its performance, though even with this it will
never match Linux filesystem performance (cwfs may well do better), but I know it
and it's fast enough for me for now.

-Steve




* Re: [9fans] A potentially useful venti client
  2017-12-12 20:15     ` Steve Simon
@ 2017-12-12 20:31       ` Ole-Hjalmar Kristensen
  2017-12-12 20:38         ` Steve Simon
  2017-12-12 21:02       ` Steven Stallion
  1 sibling, 1 reply; 27+ messages in thread
From: Ole-Hjalmar Kristensen @ 2017-12-12 20:31 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

I can understand that it must not fill up. What I do not understand is why
there are no safeguards in place to ensure that it doesn't. (And my inner
geek wants to know.)
As you say, in reality it will not fill up unless you dump huge amounts of
data on it at once. Unfortunately, this is just what I intended to do, dump
a 1.5 TB Linux file system on it. :-)


* Re: [9fans] A potentially useful venti client
  2017-12-12 19:16       ` Steven Stallion
@ 2017-12-12 20:31         ` hiro
  2017-12-12 23:36         ` Skip Tavakkolian
  1 sibling, 0 replies; 27+ messages in thread
From: hiro @ 2017-12-12 20:31 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

"the fact that 9p doesn't support multiple outstanding"

that's not a sentence, but i'm not sure it's thus a joke.




* Re: [9fans] A potentially useful venti client
  2017-12-12 20:31       ` Ole-Hjalmar Kristensen
@ 2017-12-12 20:38         ` Steve Simon
  2017-12-12 21:40           ` Ole-Hjalmar Kristensen
  0 siblings, 1 reply; 27+ messages in thread
From: Steve Simon @ 2017-12-12 20:38 UTC (permalink / raw)
  To: 9fans

The best solution (imho) for what you want to do is the feature I never added.

It would be great if you could vac up your linux fs and then just cut and paste the
vac score into fossil's console with a command like this:

main import -v 7478923893289ef928932a9888c98b2333 /active/usr/ole/linux

the alternative is a 1.6Tb fossil.

-Steve




* Re: [9fans] A potentially useful venti client
  2017-12-12 20:15     ` Steve Simon
  2017-12-12 20:31       ` Ole-Hjalmar Kristensen
@ 2017-12-12 21:02       ` Steven Stallion
  2017-12-12 21:55         ` Ole-Hjalmar Kristensen
  1 sibling, 1 reply; 27+ messages in thread
From: Steven Stallion @ 2017-12-12 21:02 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

I have a similar setup. On my file server I have a mirrored pair of
high-endurance SSDs tied together via devfs with two fossil file
systems: main and other. main is a 32GB write cache which is dumped
each night at midnight (this is similar to the labs configuration for
sources). other is the remaining 96GB for data that doesn't need to
survive if both SSDs happen to fail at the same time.

My venti store is run on a large Linux machine (~6TB of RAID6 storage)
and is served via plan9port. Another highly recommended setup is if
you happen to have a Coraid EtherDrive (I'm biased towards the SRX
line) this make fantastic stores via the magic of AoE. Unfortunately I
don't have the rack space, otherwise I'd be using one of those
instead.

If you're curious about the venti-on-linux setup, I have some scripts
and a README posted on sources:
https://9p.io/magic/webls?dir=/sources/contrib/stallion/venti

Somewhat more recently, I wrote a collectd client for plan9 and I also
monitor my file server using nagios. If there's any interest, I'd be
happy to post those sources as well.

Cheers,
Steve


* Re: [9fans] A potentially useful venti client
  2017-12-12 20:38         ` Steve Simon
@ 2017-12-12 21:40           ` Ole-Hjalmar Kristensen
  2017-12-13  0:03             ` Steve Simon
  0 siblings, 1 reply; 27+ messages in thread
From: Ole-Hjalmar Kristensen @ 2017-12-12 21:40 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

Yes, I know. I was thinking along the same lines a while ago; we even
discussed it here on this mailing list. I did some digging, and I found
this interesting comment in vac/file.c:

/*
 <snip>
 *
 * Fossil generates slightly different vac files, due to a now
 * impossible-to-change bug, which contain a VtEntry
 * for just one venti file, that itself contains the expected
 * three directory entries.  Sigh.
 */
VacFile*
_vacfileroot(VacFs *fs, VtFile *r)

Ole-Hj


* Re: [9fans] A potentially useful venti client
  2017-12-12 21:02       ` Steven Stallion
@ 2017-12-12 21:55         ` Ole-Hjalmar Kristensen
  0 siblings, 0 replies; 27+ messages in thread
From: Ole-Hjalmar Kristensen @ 2017-12-12 21:55 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

Yes, you'd better have high-endurance SSDs. I put the venti index at work on
an ordinary SSD, and it lasted six months. The log itself was fine, of
course, so I only had to rebuild the index to recover. This was plan9port
on Solaris, btw.
Now this venti runs on an ordinary disk; the speed is lower, but not by that
much, since I moved it to another machine with about 1G allocated to the
venti buffer caches.
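
(For reference, those cache sizes are set in venti.conf; the excerpt below is
only an example of the kind of sizing meant above, not a recommendation:)

# venti.conf excerpt - example values only
index main
arenas /dev/sdC0/arenas
isect /dev/sdC0/isect
# lump cache, disk block cache and index cache, respectively
mem 256m
bcmem 256m
icmem 512m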


* Re: [9fans] A potentially useful venti client
  2017-12-12 19:16       ` Steven Stallion
  2017-12-12 20:31         ` hiro
@ 2017-12-12 23:36         ` Skip Tavakkolian
  2017-12-13 10:17           ` Bakul Shah
  1 sibling, 1 reply; 27+ messages in thread
From: Skip Tavakkolian @ 2017-12-12 23:36 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

i think it's a matter of it not being taken advantage of, rather than the ability being missing:

https://github.com/0intro/plan9/blob/7524062cfa4689019a4ed6fc22500ec209522ef0/sys/src/cmd/fcp.c


On Tue, Dec 12, 2017 at 11:38 AM Steven Stallion <sstallion@gmail.com>
wrote:

> I suspect the main
> culprit is the fact that 9p doesn't support multiple outstanding.
>


* Re: [9fans] A potentially useful venti client
  2017-12-12 21:40           ` Ole-Hjalmar Kristensen
@ 2017-12-13  0:03             ` Steve Simon
  2017-12-13  7:29               ` Ole-Hjalmar Kristensen
  0 siblings, 1 reply; 27+ messages in thread
From: Steve Simon @ 2017-12-13  0:03 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

grief, sorry. 

what can i say, too old, too many kids. important stuff gets pushed out of my brain (against my will) to make room for the lyrics of “Let it go”.



* Re: [9fans] A potentially useful venti client
  2017-12-13  0:03             ` Steve Simon
@ 2017-12-13  7:29               ` Ole-Hjalmar Kristensen
  2017-12-13  9:44                 ` hiro
  2017-12-13 11:00                 ` Steve Simon
  0 siblings, 2 replies; 27+ messages in thread
From: Ole-Hjalmar Kristensen @ 2017-12-13  7:29 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

No need to be sorry. I've been looking at the code now and then, but
haven't really got the hang of the difference between the vac and venti
formats.


* Re: [9fans] A potentially useful venti client
  2017-12-13  7:29               ` Ole-Hjalmar Kristensen
@ 2017-12-13  9:44                 ` hiro
  2017-12-13 11:00                 ` Steve Simon
  1 sibling, 0 replies; 27+ messages in thread
From: hiro @ 2017-12-13  9:44 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

thanks for backing me skip.




* Re: [9fans] A potentially useful venti client
  2017-12-12 23:36         ` Skip Tavakkolian
@ 2017-12-13 10:17           ` Bakul Shah
  0 siblings, 0 replies; 27+ messages in thread
From: Bakul Shah @ 2017-12-13 10:17 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

On Dec 12, 2017, at 3:36 PM, Skip Tavakkolian <skip.tavakkolian@gmail.com> wrote:
> 
> i think it's not being taken advantage of, rather than ability:
> 
> https://github.com/0intro/plan9/blob/7524062cfa4689019a4ed6fc22500ec209522ef0/sys/src/cmd/fcp.c
> 
> 
> On Tue, Dec 12, 2017 at 11:38 AM Steven Stallion <sstallion@gmail.com> wrote:
> I suspect the main
> culprit is the fact that 9p doesn't support multiple outstanding.

{fossil,vac}<==>venti uses the venti networking protocol, not 9p.
You can have up to 256 outstanding requests, but I don't think
libventi exploits this. It seems to do strict RPC.



* Re: [9fans] A potentially useful venti client
  2017-12-13  7:29               ` Ole-Hjalmar Kristensen
  2017-12-13  9:44                 ` hiro
@ 2017-12-13 11:00                 ` Steve Simon
  2017-12-13 12:22                   ` Richard Miller
  2017-12-13 13:37                   ` Ole-Hjalmar Kristensen
  1 sibling, 2 replies; 27+ messages in thread
From: Steve Simon @ 2017-12-13 11:00 UTC (permalink / raw)
  To: 9fans

I don't think there is any difference between vac and what fossil uses,
just where it appears in the hierarchy (though maybe I am wrong).

Fossil adds a fixed upper layer of hierarchy:

	active
	dump
		<year>
			<month><mday>
	snap
		<year>
			<month><mday>
				<hour><min>

The difficulty is how to convince fossil to install a score into its hierarchy as though
it's one that it created.

I am pretty sure this is doable; it just needs a rather deep understanding of how fossil
works, and when I tried to do it I discovered fossil is really rather complex.

-Steve




* Re: [9fans] A potentially useful venti client
  2017-12-13 11:00                 ` Steve Simon
@ 2017-12-13 12:22                   ` Richard Miller
  2017-12-13 14:13                     ` Ole-Hjalmar Kristensen
  2017-12-13 13:37                   ` Ole-Hjalmar Kristensen
  1 sibling, 1 reply; 27+ messages in thread
From: Richard Miller @ 2017-12-13 12:22 UTC (permalink / raw)
  To: 9fans

> The difficulty is how to convince fossil to install a score into its hierarchy as though
> its one that it created.

Wouldn't that cause a problem with the two origin file systems
having overlapping Qid spaces?  I think you would need to walk
and rebuild the directory tree of the vac being inserted, to
assign new Qid.path values.





* Re: [9fans] A potentially useful venti client
  2017-12-13 11:00                 ` Steve Simon
  2017-12-13 12:22                   ` Richard Miller
@ 2017-12-13 13:37                   ` Ole-Hjalmar Kristensen
  1 sibling, 0 replies; 27+ messages in thread
From: Ole-Hjalmar Kristensen @ 2017-12-13 13:37 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

I don't know either, but when I tried flfmt with a vac score as an
experiment, I got this:

ole@ole-TECRA-R940 ~/Desktop/plan9 $ bin/fossil/flfmt -h 192.168.0.101 -v
f648dbae0075eb73bc394ad6cd4c059e655e127c fossil.dat
fs header block already exists; are you sure? [y/n]: y
fs file is mounted via devmnt (is not a kernel device); are you sure?
[y/n]: y
0xfb1e734c
0x1d1feaf1
c85978546e4048fce83120d3992cfc2f57ff2f8c
bin/fossil/flfmt: bad root: no qidSpace

    /*
     * Maximum qid is recorded in root's msource, entry #2 (conveniently in e).
     */
    ventiRead(e.score, VtDataType);
    if(!mbUnpack(&mb, buf, bsize))
        sysfatal("bad root: mbUnpack");
    meUnpack(&me, &mb, 0);
    if(!deUnpack(&de, &me))
        sysfatal("bad root: dirUnpack");
    if(!de.qidSpace)
        sysfatal("bad root: no qidSpace");
    qid = de.qidMax;

It seems that the vac archive does not contain the max qid that
flfmt needs. This seems strange to me, as vac -a should need this info just
as much as fossil needs it. Maybe it's tucked away somewhere else. Guess I
need to look some more at the code.

Digging further, I found the comment in file.c, but did not pursue the matter:

/*
 * Fossil generates slightly different vac files, due to a now
 * impossible-to-change bug, which contain a VtEntry
 * for just one venti file, that itself contains the expected
 * three directory entries.  Sigh.
 */
VacFile*
_vacfileroot(VacFs *fs, VtFile *r)



* Re: [9fans] A potentially useful venti client
  2017-12-13 12:22                   ` Richard Miller
@ 2017-12-13 14:13                     ` Ole-Hjalmar Kristensen
  0 siblings, 0 replies; 27+ messages in thread
From: Ole-Hjalmar Kristensen @ 2017-12-13 14:13 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

Here is a pointer to a discussion on comp.os.plan9, but I did not really
get a clear understanding of whether it was possible or not. It seems to me
that it was possible at some time, but based on my own findings, changes to
the format may have made vac and fossil incompatible.

