9fans - fans of the OS Plan 9 from Bell Labs
From: Roman V Shaposhnik <rvs@sun.com>
To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
Subject: Re: [9fans] ceph
Date: Tue,  4 Aug 2009 17:27:06 -0700
Message-ID: <1249432026.479.16637.camel@work.SFBay.Sun.COM>
In-Reply-To: <Pine.BSI.4.64.0908032121460.20861@malasada.lava.net>

On Mon, 2009-08-03 at 21:23 -1000, Tim Newsham wrote:
> >  2. do we have anybody successfully managing that much storage that is
> >     also spread across the nodes? And if so, what are the best practices
> >     out there for making the client not worry about where the storage
> >     actually comes from (IOW, any kind of proxying of I/O, etc.)?
>
> http://labs.google.com/papers/gfs.html
> http://hadoop.apache.org/common/docs/current/hdfs_design.html
>
> > I'm trying to see what life after NFSv4 or AFS might look like for
> > the clients still clinging to the old ways of doing things, yet
> > trying to cooperatively use hundreds of TB of storage.
>
> the two I mention above are both used in conjunction with
> distributed map/reduce calculations.  Calculations are done
> on the nodes where the data is stored...

Hadoop and GFS are good examples, and they work great for the
single distributed application that is *written* with them
in mind.

Unfortunately, I cannot stretch my imagination hard enough
to see them as general-purpose filesystems backing data
for gazillions of non-cooperating applications, which is the
sort of thing NFS and AFS were built to accomplish.

In that respect, ceph is closer to what I have in mind: it
assembles storage from clusters of unrelated OSDs (object
storage devices) into a hierarchy with a single point of
entry for every user/application.
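
What makes that single point of entry cheap is deterministic
placement: every client computes, from an object's name alone,
which OSDs hold its replicas, so no central table sits on the
read path (ceph does this with its CRUSH algorithm). A toy
sketch in Plan 9 C; the hash, the OSD count, the replica count
and the example object name are all made up for illustration:

    #include <u.h>
    #include <libc.h>

    enum { NOSD = 8, NREP = 3 };    /* hypothetical: 8 OSDs, 3 replicas */

    /* FNV-1a; any decent string hash would do */
    uint
    hash(char *s)
    {
        uint h;

        h = 2166136261;
        while(*s)
            h = (h ^ (uchar)*s++) * 16777619;
        return h;
    }

    /* OSD holding the i-th replica of an object, from its name alone */
    int
    placement(char *path, int i)
    {
        return (hash(path) + i) % NOSD;
    }

    void
    main(void)
    {
        int i;
        char *p;

        p = "/usr/rvs/file.0";    /* example object name */
        for(i = 0; i < NREP; i++)
            print("%s replica %d -> osd%d\n", p, i, placement(p, i));
        exits(nil);
    }

The real CRUSH also deals with weights, failure domains and
rebalancing as OSDs come and go; none of that is attempted
here.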

The question, however, is how to avoid the complexity of
ceph and still have it look like a humongous kenfs or
fossil from the outside.
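
One cheap way to get at least the illusion, assuming the OSDs
are plain file servers already mounted under /n/osd0../n/osd7:
push the same placement function into a thin client-side
library and try the replicas in order. Everything below
(readobj(), the mount points, the fallback policy) is a
hypothetical sketch, not what ceph, kenfs or fossil actually
do:

    #include <u.h>
    #include <libc.h>

    enum { NOSD = 8, NREP = 3 };    /* same hypothetical cluster as above */

    /* same FNV-1a placement as in the previous sketch */
    uint
    hash(char *s)
    {
        uint h;

        h = 2166136261;
        while(*s)
            h = (h ^ (uchar)*s++) * 16777619;
        return h;
    }

    int
    placement(char *path, int i)
    {
        return (hash(path) + i) % NOSD;
    }

    /* read an object through whichever replica answers first */
    long
    readobj(char *path, void *buf, long n, vlong off)
    {
        int i, fd;
        long r;
        char mnt[128];

        for(i = 0; i < NREP; i++){
            snprint(mnt, sizeof mnt, "/n/osd%d%s", placement(path, i), path);
            if((fd = open(mnt, OREAD)) < 0)
                continue;    /* replica down or missing; try the next */
            r = pread(fd, buf, n, off);
            close(fd);
            if(r >= 0)
                return r;
        }
        return -1;    /* all replicas unreachable */
    }

Wrap a 9P server around readobj() and a matching writeobj(),
and the whole cluster shows up as one humongous tree, which is
all a client ever sees of kenfs or fossil anyway.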

Thanks,
Roman.
