9fans - fans of the OS Plan 9 from Bell Labs
From: Eric Van Hensbergen <ericvh@gmail.com>
To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
Subject: Re: [9fans] Interested in improving networking in Plan 9
Date: Mon, 31 Aug 2009 09:57:33 -0500	[thread overview]
Message-ID: <a4e6962a0908310757m6aadf842rff3b0c725f13234b@mail.gmail.com> (raw)
In-Reply-To: <5c420c57e47d4277e80d51801186f929@quanstro.net>

On Mon, Aug 31, 2009 at 9:36 AM, erik quanstrom<quanstro@quanstro.net> wrote:
>
> i can see in principle how this could be a good idea (no more
> comments, though).  could you elaborate, though.  i have found
> editing /lib/ndb/local works well at the scales i see.
>

I think the main issue with just editing /lib/ndb/local is a
combination of scale and number of writers.  Writing static config
files could work fine in a "typical" plan 9 network of hundreds of
machines, even with multiple admins.  I have a feeling it starts to
break down with thousands of machines, particularly in an environment
where machines are appearing and disappearing at regular intervals
(clouds, HPC partitioning, or Blue Gene).  Hundreds of thousands of
nodes with this sort of behavior probably make it impractical.  Of
course -- this won't affect the casual user, but it's something that
affects us.  It's also possible that such a dynamic approach would
better support a 9grid style environment.
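For context, the kind of static entry being hand-edited looks roughly
like this -- a hypothetical /lib/ndb/local fragment, with made-up names,
not from any real site:

```
# hypothetical /lib/ndb/local fragment -- every new cpu server
# means another hand-edited entry like these
ipnet=mynet ip=192.168.1.0 ipmask=255.255.255.0
	fs=fs.example.net
	auth=auth.example.net
	cpu=cpu1.example.net

ip=192.168.1.10 sys=cpu1 dom=cpu1.example.net
```

Fine at this size; the argument above is about what happens when
thousands of such entries churn constantly.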

>
> i also don't know what you mean by "transient, task specific services".
> i can only think of things like ramfs or cdfs.  but they live in my
> namespace so ndb doesn't enter into the picture.
>

There are the relatively mundane configuration examples of publishing
multiple file servers, authentication servers, and cpu servers.
There's a slightly more interesting example of more pervasive sharing
(imagine sharing portions of your ACME file system to collaborate with
several co-authors, or for code review).  The more applications which
export synthetic file systems, the more opportunity there is for
sharing -- and the more need for a broader pub/sub interface.

There is another option here which I'm side-stepping because it's
something I'm actively working on -- instead of building such a
pub/sub interface into ndb and CS, extending srv to provide a
registry for cluster/grid/cloud environments.  However, underneath it
may still be nice to have zeroconf as a pub/sub mechanism for
interoperation with non-Plan 9 systems.
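To give a flavor of the zeroconf-style pub/sub being discussed, here's
a loose Python sketch: a service announces itself on a well-known
multicast group, and peers listening on that group learn about it
without any static config.  The wire format, group address, port, and
service names here are all invented for illustration; real zeroconf
(mDNS/DNS-SD, RFC 6762/6763) is considerably more involved.

```python
# Toy zeroconf-style announce: publish a service plus attributes on a
# multicast group, so listeners discover it without static config.
import socket

GROUP, PORT = "239.0.0.9", 9999   # hypothetical announce channel

def encode_announcement(service, attrs):
    """Flatten a service name plus attribute dict into one datagram."""
    fields = [service] + ["%s=%s" % kv for kv in sorted(attrs.items())]
    return "\n".join(fields).encode("utf-8")

def decode_announcement(data):
    """Recover (service, attrs) from a datagram built by encode_announcement."""
    lines = data.decode("utf-8").split("\n")
    attrs = dict(line.split("=", 1) for line in lines[1:])
    return lines[0], attrs

def announce(service, attrs):
    """Publish one announcement datagram to the multicast group."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    s.sendto(encode_announcement(service, attrs), (GROUP, PORT))
    s.close()

if __name__ == "__main__":
    # a cpu server might publish itself like this at boot
    announce("cpu!bored1", {"load": "0.03", "arch": "386", "auth": "yes"})
```

A srv-based registry would play the same role from the Plan 9 side;
the zeroconf layer would mainly buy interoperability with the non-Plan
9 world.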

>
>> When I publish a service, in the Plan 9 case primarily by exporting a
>> synthetic file system.  I shouldn't have to have static configuration
>> for file servers, it should be much more fluid.  I'm not arguing for a
>> microsoft style registry -- but the network discovery environment on
>> MacOSX is much nicer than what we have today within Plan 9.  An even
>> better example is the environment on the OLPC, where many of the
>> applications are implicitly networked and share resources based on
>> Zeroconf pub/sub interfaces.
>
> sounds interesting.  but i don't understand what you're talking about
> exactly.  maybe you're thinking that cpu could be rigged so that
> cpu with no host specifier would be equivalent to cpu -h '$boredcpu'
> where '$boredcpu' would be determined by cs via dynamic mapping?
> or am i just confused?
>

Actually, the idea would be that cpu's default behavior would be to go
grab $boredcpu -- but that's only part of the idea.  That would make
adding cpu servers as easy as booting them -- they'd publish their
service with zeroconf and everyone would automatically pick it up and
be able to query them for additional attributes related to
utilization, authentication, or even fee structures.  Of course, as I
alluded to earlier, I think it's much more interesting in the presence
of pervasive network services exported through file systems.  I'd
suggest that those who haven't should grab "sugar on a stick" from the
OLPC folks and run it under vmware or whatever.  I'm not broadly
endorsing every aspect of their environment, but I really liked their
loosely coupled sharing/collaboration framework built upon zeroconf
and their mesh networks (it may be a bit hard to see this on sugar on
a stick, but there were in the past several public OLPC zeroconf
servers you could attach to and play around with).
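To make the $boredcpu idea concrete: if every cpu server published a
load attribute along with its announcement, cs (or cpu itself) could
resolve $boredcpu to the least-loaded candidate.  A toy Python sketch
of that selection step, with invented attribute names (load, fee):

```python
# Toy sketch of resolving $boredcpu: given the announcements collected
# from self-publishing cpu servers, pick the least loaded free one.
def pick_boredcpu(announcements):
    """announcements: dict mapping server name -> attribute dict."""
    candidates = [
        (float(attrs.get("load", "inf")), name)
        for name, attrs in announcements.items()
        if attrs.get("fee", "0") == "0"    # skip servers that charge
    ]
    if not candidates:
        return None
    return min(candidates)[1]   # lowest load wins

if __name__ == "__main__":
    seen = {
        "cpu1": {"load": "0.90"},
        "cpu2": {"load": "0.05"},
        "cpu3": {"load": "0.10", "fee": "1"},
    }
    print(pick_boredcpu(seen))   # cpu2: idle and free
```

The interesting part isn't the selection itself, but that the
candidate set is assembled dynamically from whatever happens to be
announcing at the moment, rather than from a hand-edited ndb file.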

       -eric



