9fans - fans of the OS Plan 9 from Bell Labs
From: Eric Van Hensbergen <ericvh@gmail.com>
To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
Subject: Re: [9fans] Interested in improving networking in Plan 9
Date: Mon, 31 Aug 2009 11:12:51 -0500	[thread overview]
Message-ID: <a4e6962a0908310912o5f462a38ta0e87afb9e783dfa@mail.gmail.com> (raw)
In-Reply-To: <41f21b94249accd4df903782d3ed243e@coraid.com>

On Mon, Aug 31, 2009 at 10:52 AM, erik quanstrom <quanstro@coraid.com> wrote:
>
> so plunkers like us with a few hundred machines are just "casual users"?
> i'd hate for plan 9 to become harder to use outside an hpc environment.
> it would be good to be flexible enough to support fairly degenerate cases
> like just flat files.
>

I don't disagree.  Worst case, this is a complementary platform for
large-scale deployments; best case, it's an alternative interface that
also improves the experience for the "casual" user -- I think the main
benefit here will be in establishing better mechanisms for
collaboration amongst "casual" users.  If any aspect of this makes
things more complicated, we are doing something wrong.  The whole
point of going zeroconf is to make configuration simple.
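
For reference, the flat-file case is already about as simple as it
gets: a handful of ndb(6) tuples publish the file, auth, and cpu
servers for a subnet.  Something along these lines (all names and
addresses below are made up) is what the degenerate setup looks like:

  ipnet=mynet ip=192.168.1.0 ipmask=255.255.255.0
  	fs=fserve
  	auth=authserve
  	cpu=cpuserve
  sys=fserve ip=192.168.1.2
  sys=authserve ip=192.168.1.3
  sys=cpuserve ip=192.168.1.4
  authdom=example.com auth=authserve

Whatever discovery layer gets added should coexist with -- or fall
back to -- exactly this.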

>> > i also don't know what you mean by "transient, task specific services".
>> > i can only think of things like ramfs or cdfs.  but they live in my
>> > namespace so ndb doesn't enter into the picture.
>> >
>>
>> There are the relatively mundane configuration examples of publishing
>> multiple file servers, authentication servers, and cpu servers.
>
> how many file servers and authentication servers are you running?
>

From a file server perspective on Blue Gene, a full-scale rack will
have thousands.  We're currently operating without auth (in part due
to configuration issues), so I don't know how well it will scale.  The
other aspect here is that in current configurations, every "run" has a
different machine configuration based on what you request from the job
scheduler and what you actually get.  We pretty much get different IP
addresses every time, with different front ends, different file
servers, and so on.

Again, though, the idea is to use file systems more pervasively within
the applications as well -- so there may be multiple file servers per
node providing different services depending on workload needs at the
particular point of computation.  Read our MTAGS paper from last
year's Supercomputing conference for a bigger-picture view of how we
see services coming, going, migrating, and adapting to changing
application usage and failure.
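
To give a flavor of what "coming and going" means at the namespace
level: a transient, task-specific service is just a file server that
gets posted, mounted, and torn down on demand.  Roughly the following,
with made-up names and paths:

  % srvfs run1424 /tmp/run1424      # post a per-task tree as /srv/run1424
  % mount /srv/run1424 /n/scratch   # another process on the node attaches it
  % import io3 /n/scratch /n/peer   # on another node: pull io3's tree over 9P

The open question is how a node learns that such a service exists on a
peer without a hand-maintained ndb entry; that is where the discovery
piece comes in.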

      -eric




Thread overview: 46+ messages
2009-08-30  7:21 Vinu Rajashekhar
2009-08-30 14:21 ` ron minnich
2009-08-30 16:19   ` Eric Van Hensbergen
2009-08-30 17:07   ` erik quanstrom
2009-08-30 17:40     ` Venkatesh Srinivas
2009-08-30 17:53       ` erik quanstrom
     [not found]   ` <c81a07350908300842w3df8321aidf55096b65eb9b77@mail.gmail.com>
2009-08-30 18:17     ` ron minnich
2009-08-30 18:35       ` ron minnich
2009-08-31  3:34         ` erik quanstrom
2009-08-31 12:45           ` Eric Van Hensbergen
2009-08-31 12:51             ` erik quanstrom
2009-08-31 12:59               ` Vinu Rajashekhar
2009-08-31 13:05                 ` erik quanstrom
2009-08-31 13:48                   ` Anthony Sorace
2009-08-31 14:51                 ` Devon H. O'Dell
2009-08-31 15:03                   ` Vinu Rajashekhar
2009-08-31 13:42               ` Eric Van Hensbergen
2009-08-31 14:04                 ` erik quanstrom
2009-08-31 14:25                   ` Eric Van Hensbergen
2009-08-31 14:36                     ` erik quanstrom
2009-08-31 14:57                       ` Eric Van Hensbergen
2009-08-31 15:52                         ` erik quanstrom
2009-08-31 16:12                           ` Eric Van Hensbergen [this message]
2009-08-31 14:55                     ` Francisco J Ballesteros
2009-08-31 15:09                       ` Eric Van Hensbergen
2009-08-31 15:56                         ` erik quanstrom
2009-08-31 16:16                     ` Bakul Shah
2009-08-31 16:33                       ` Eric Van Hensbergen
2009-09-02 16:50                         ` Bakul Shah
2009-09-02 17:10                           ` Robert Raschke
2009-09-03 11:21                           ` matt
2009-09-03 12:50                             ` erik quanstrom
2009-09-03 16:13                               ` Anthony Sorace
2009-09-03 16:29                                 ` erik quanstrom
2009-09-03 16:34                                 ` Robert Raschke
2009-09-03 21:50                                 ` Daniel Lyons
2009-09-03 19:08                               ` Steve Simon
2009-08-31 17:09                       ` Devon H. O'Dell
2009-08-31 17:20                         ` erik quanstrom
2009-08-31 17:32                           ` Devon H. O'Dell
2009-08-31 18:00                             ` erik quanstrom
2009-08-31  4:17 ` Federico G. Benavento
2009-08-31  4:53   ` Vinu Rajashekhar
2009-08-30 17:22 erik quanstrom
2009-08-30 17:36 ` Vinu Rajashekhar
2009-08-30 17:56   ` erik quanstrom
