9fans - fans of the OS Plan 9 from Bell Labs
From: Anthony Sorace <a@9srv.net>
To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
Subject: Re: [9fans] RAID box with plan9 filesystem
Date: Sat, 24 Dec 2011 11:47:02 -0500
Message-ID: <C760DC80-03EB-4C04-B017-B9F6B8A3C000@9srv.net>
In-Reply-To: <CAEAzY38nfrD698DPbjP=sd=9XTRtNWUxw0d7u=XgEis21vTkgQ@mail.gmail.com>

> I want to use venti for permanent storage.  For temporary storage that
> might or might not be saved to venti, I don't know, I've been thinking
> of cwfs.  I've heard it's as reliable as kfs, though I've never had any
> issues with any Plan 9 filesystem.

Make sure you're not confusing kfs with Ken's file server, sometimes
(confusingly) called "kenfs". kenfs formed the basis for cwfs, and the
two share closely related code and function; kfs is essentially unrelated,
and is a much more conventional disk file system.

Ken's file server is extremely solid, and I believe cwfs is comparable
(although I don't have nearly as much experience with it). kfs is much less
so; I've had it become unrecoverable several times after an unexpected
shutdown. My experience with fossil has been somewhere between the two,
but it's very easy to recover if you're using it with venti.
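
For example (a sketch: the score and partition name below are placeholders,
not from this thread), if fossil's disk state is wrecked but the last
archival snapshot made it to venti, you can reformat the partition from
the vac score fossil printed when it archived:

    % fossil/flfmt -v $score /dev/sdC0/fossil  # $score, sdC0: hypothetical

That gives you a fresh fossil whose root points back at the venti archive,
so everything up to the last dump comes back.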

> Does it make sense for cwfs to dump to venti?  I wish to use venti
> because I'll be using it for other things as well, like backing up my
> Windows and Mac machines, not only for cwfs dumps.  Should I just use
> fossil?

Using fossil with venti would give you the most seamless integration. You
can dump any file system to venti using vac, though. Using cwfs with venti
would seem a bit redundant, as you'd now have two unrelated systems
providing archival, write-once storage (unless cwfs grew some form of
venti integration while I wasn't watching).
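
For instance, a minimal sketch with vac (the venti dial string and paths
here are assumptions, not anything from this thread):

    % venti=tcp!fs!venti     # assumption: your venti server's address
    % vac -f mac.vac /n/mac  # archive a tree; its blocks go to venti
    % vacfs mac.vac          # later: browse the archive under /n/vac

vac leaves only a short score in mac.vac; the data itself lives in venti,
where identical blocks are coalesced with everything else you've stored.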

> I've heard Plan9 software RAID doesn't notify on hardware failure. Should
> I use a true, hardware RAID card (one that doesn't require drivers)?

It's true that fs(3), which provides RAID-like functionality, doesn't do failure
notification in a particularly useful way. As per the man page:
          Mirrors are RAID-like but not RAID.  There is no fancy
          recovery mechanism and no automatic initial copying from a
          master drive to its mirror drives.
I use it and have been happy with it, but it's not a complete solution for
genuinely critical data. I don't know much about RAID cards, but the main
thing I'd want to avoid there is proprietary on-disk formats. It might be a bit
much for home use, but if I had a bit of a budget I'd use Coraid's AoE
stuff as the basis for my storage.
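
To make the fs(3) point concrete, here's a sketch of a two-way mirror
(assuming the stock fs(3) control syntax; the partitions are hypothetical):

    % bind -b '#k' /dev                  # attach the fs device
    % echo mirror data /dev/sdC0/data /dev/sdD0/data >/dev/fs/ctl
    % ls /dev/fs                         # the mirror appears as fs/data

Writes go to every member, and reads fall over to the next member on
error, but, as the quote above warns, nothing notifies you when a member
fails; you have to watch the drives yourself.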


Thread overview: 8+ messages
2011-12-21 16:14 tlaronde
2011-12-24 13:42 ` Aram Hăvărneanu
2011-12-24 15:03   ` Steve Simon
2011-12-24 16:47   ` Anthony Sorace [this message]
2011-12-25 16:30     ` Aram Hăvărneanu
2011-12-26 11:34       ` Yaroslav
2011-12-27 14:34         ` Aram Hăvărneanu
2011-12-28 13:16       ` Charles Forsyth
