From: Eric Van Hensbergen
To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
Date: Mon, 31 Aug 2009 09:57:33 -0500
Subject: Re: [9fans] Interested in improving networking in Plan 9

On Mon, Aug 31, 2009 at 9:36 AM, erik quanstrom wrote:
>
> i can see in principle how this could be a good idea (no more
> comments, though).  could you elaborate, though.  i have found
> editing /lib/ndb/local works well at the scales i see.
>

I think the main issue with just editing /lib/ndb/local is a
combination of scale and number of writers.  Writing static config
files can work fine in a "typical" Plan 9 network of hundreds of
machines, even with multiple admins.  I have a feeling it starts to
break down with thousands of machines, particularly in an environment
where machines are appearing and disappearing at regular intervals
(clouds, HPC partitioning, or Blue Gene).  With hundreds of thousands
of nodes behaving that way it probably becomes impractical.  Of
course, this won't affect the casual user, but it's something that
affects us.  It's also possible that such a dynamic environment would
better support a 9grid-style environment.

> i also don't know what you mean by "transient, task specific services".
> i can only think of things like ramfs or cdfs.  but they live in my
> namespace so ndb doesn't enter into the picture.
>

There are the relatively mundane configuration examples of publishing
multiple file servers, authentication servers, and cpu servers.
There's a slightly more interesting example of more pervasive sharing
(imagine sharing portions of your ACME file system to collaborate
with several co-authors, or for code review).  The more applications
that export synthetic file systems, the more opportunity there is for
sharing, and the more we need a broader pub/sub interface.

There is another option here which I'm side-stepping because it's
something I'm actively working on -- which is, instead of doing such
a pub/sub interface within ndb and cs, extending srv to provide a
registry for cluster/grid/cloud.  However, underneath it may still be
nice to have zeroconf as a pub/sub mechanism for interoperating with
non-Plan 9 systems.

>> When I publish a service, in the Plan 9 case primarily by exporting a
>> synthetic file system.  I shouldn't have to have static configuration
>> for file servers, it should be much more fluid.  I'm not arguing for a
>> microsoft style registry -- but the network discovery environment on
>> MacOSX is much nicer than what we have today within Plan 9.  An even
>> better example is the environment on the OLPC, where many of the
>> applications are implicitly networked and share resources based on
>> Zeroconf pub/sub interfaces.
>
> sounds interesting.  but i don't understand what you're talking about
> exactly.  maybe you're thinking that cpu could be rigged so that
> cpu with no host specifier would be equivalent to cpu -h '$boredcpu'
> where '$boredcpu' would be determined by cs via dynamic mapping?
> or am i just confused?
>

Actually, the idea is that cpu's default behavior would be to go grab
$boredcpu -- but that's only part of the idea.  A rough sketch of the
contrast with today's static setup is below.
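Just to make the contrast concrete, here's roughly what it looks like
today (made-up host names, and only a sketch -- I'm not proposing any
syntax).  Every cpu server is a hand-maintained entry in the static
database, and picking one is either explicit or pinned to a hand-set
default:

	# /lib/ndb/local today: each cpu server is an entry an admin
	# typed in, and has to remember to remove when the box goes away
	sys=cycles dom=cycles.example.com ip=192.168.1.7
	sys=crunch dom=crunch.example.com ip=192.168.1.8

	% cpu -h cycles		# pick one explicitly, or
	% cpu			# fall back to $cpu, which is also set by hand

In the dynamic version nobody edits anything: a freshly booted server
announces itself, cs (or a registry) maps $boredcpu to whatever is
currently announcing itself and looks least busy, and a bare cpu just
uses that.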
That would make adding cpu servers as easy as booting them -- they'd
publish their service with zeroconf, and everyone would automatically
pick it up and be able to query them for additional attributes related
to utilization, authentication, or even fee structures (there's a
rough sketch of that publish/query step in the P.S. below).  Of
course, as I alluded to earlier, I think it's much more interesting in
the presence of pervasive network services exported through file
systems.  I'd suggest those who haven't tried it go grab "Sugar on a
Stick" from the OLPC folks and run it under vmware or whatever.  I'm
not broadly endorsing every aspect of their environment, but I really
liked their loosely coupled sharing/collaboration framework built upon
zeroconf and their mesh networks (it may be a bit hard to see this
with Sugar on a Stick, but in the past there were several public OLPC
zeroconf servers you could attach to and play around with).

	-eric
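P.S. For anyone who wants to poke at the publish/query step on a
system that already has the plumbing, the dns-sd tool that ships with
MacOSX will do it from a shell.  The service type and the attributes
below are made up purely for illustration -- I just picked names to
show the shape of it:

	# advertise a cpu server, carrying a couple of queryable
	# attributes in the TXT record
	dns-sd -R cycles _ncpu._tcp . 17010 load=0.15 auth=tarkin

	# on another machine, browse for whoever is advertising ...
	dns-sd -B _ncpu._tcp

	# ... and resolve a particular instance to get its host, port,
	# and attributes
	dns-sd -L cycles _ncpu._tcp .

A Plan 9 version would presumably bury all of that behind cs/srv, but
the model -- announce yourself at boot, let clients discover and
filter on attributes -- is what I'm after.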