9fans - fans of the OS Plan 9 from Bell Labs
From: Roman V Shaposhnik <rvs@sun.com>
To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
Subject: Re: [9fans] ceph
Date: Mon,  3 Aug 2009 18:32:02 -0700
Message-ID: <1249349522.479.15191.camel@work.SFBay.Sun.COM>
In-Reply-To: <13426df10908010847v5f4f891fq9510ad4b671660ea@mail.gmail.com>

On Sat, 2009-08-01 at 08:47 -0700, ron minnich wrote:
> > What are their requirements as
> > far as POSIX is concerned?
>
> 10,000 machines, working on a single app, must have access to a common
> file store with full posix semantics, and it all has to work as if it
> were one machine (their desktop, of course).
>
> This gets messy. It turns into an exercise in managing a competing set
> of race conditions. It's like tuning a multi-carbureted engine from
> years gone by, assuming we ever had an engine with 10,000 cylinders.

Well, with Linux you at least have the benefit of gazillions of FS
clients being available, either natively or via FUSE. With Solaris...
oh well...
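
(As a parenthetical illustration, and only as a sketch: on a reasonably
recent Linux kernel a 9P export can be mounted natively with the
in-kernel v9fs client, no FUSE required. The server name "ionode" and
the mount point below are made up:)

  # load the 9P TCP transport, then mount the export on the standard port
  modprobe 9pnet_tcp
  mount -t 9p -o trans=tcp,port=564,version=9p2000.u ionode /mnt/9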

> > How much storage are we talking about?
> In round numbers, for the small clusters, usually a couple hundred T.
> For anything else, more.

Is all of this storage attached to a very small number of IO nodes, or
is it evenly spread across the cluster?

In fact, I'm interested in both scenarios, so here come two questions:
  1. do we have anybody successfully managing that much storage (let's
     say ~100T) via something like a humongous fossil installation (or
     kenfs, for that matter)?

  2. do we have anybody successfully managing that much storage that is
     also spread across the nodes? And if so, what are the best practices
     out there for keeping the client from worrying about where the
     storage actually comes from (IOW, any kind of proxying of I/O, etc.)?

I'm trying to see what life after NFSv4 or AFS might look like for
clients still clinging to the old ways of doing things, yet trying to
cooperatively use hundreds of T of storage.

> > I'd be interested in discussing some aspects of what you're trying to
> > accomplish with 9P for the HPC guys.
>
> The request: for each of the (lots of) compute nodes, have them mount
> over 9p to, say, 100x fewer IO nodes, each of those running lustre.

Sorry for being dense, but what exactly is going to be accomplished
by proxying I/O in such a way?
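
For concreteness, here is the shape I understand the request to have,
as a sketch only, assuming Linux compute nodes using the in-kernel v9fs
client and something like u9fs re-exporting the Lustre tree on each IO
node (all names below are made up):

  # on each IO node: mount lustre, then re-export the tree over 9P;
  # u9fs is normally spawned from inetd on the 9P port (564)
  mount -t lustre mds@tcp:/scratch /lustre

  # on each compute node: mount its assigned IO node over 9p
  mount -t 9p -o trans=tcp,aname=/lustre ionode17 /work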

Thanks,
Roman.





Thread overview: 21+ messages
2009-07-30  0:43 Roman V Shaposhnik
2009-07-30 16:31 ` sqweek
2009-08-01  5:24   ` Roman Shaposhnik
2009-08-01  5:41     ` ron minnich
2009-08-01  5:53       ` Roman Shaposhnik
2009-08-01 15:47         ` ron minnich
2009-08-04  1:32           ` Roman V Shaposhnik [this message]
2009-08-04  2:56             ` ron minnich
2009-08-04  5:07               ` roger peppe
2009-08-04  8:23                 ` erik quanstrom
2009-08-04  8:52                   ` roger peppe
2009-08-04  9:55                 ` C H Forsyth
2009-08-04 10:25                   ` roger peppe
2009-08-04 14:45                   ` ron minnich
2009-08-04 23:56                   ` Roman V Shaposhnik
2009-08-04 23:51               ` Roman V Shaposhnik
2009-08-04  7:23             ` Tim Newsham
2009-08-05  0:27               ` Roman V Shaposhnik
2009-08-05  3:30                 ` Tim Newsham
2009-08-04  8:43             ` Steve Simon
2009-08-05  0:01               ` Roman V Shaposhnik
