9fans - fans of the OS Plan 9 from Bell Labs
* [9fans] message passing.. sci programming
@ 2001-03-29  4:24 Nehal N. Desai
  2001-03-29  4:31 ` Nehal N. Desai
  2001-03-29  6:14 ` Jim Choate
  0 siblings, 2 replies; 6+ messages in thread
From: Nehal N. Desai @ 2001-03-29  4:24 UTC (permalink / raw)
  To: 9fans

hi,
has anyone looked into using message passing on
plan9?  We are looking into building a
smallish plan9 cluster -- between 128 and 512 processors --
that we would like to run some
physics codes on (e.g. weather modeling, QCD, etc.)...
right now, most codes here use MPI or OpenMP,
but is there a better way that uses the plan
9 architecture in a more optimal (scalable) way?

nehal


^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [9fans] message passing.. sci programming
  2001-03-29  4:24 [9fans] message passing.. sci programming Nehal N. Desai
@ 2001-03-29  4:31 ` Nehal N. Desai
  2001-03-29  5:30   ` Andrey A Mirtchovski
  2001-03-29  6:14 ` Jim Choate
  1 sibling, 1 reply; 6+ messages in thread
From: Nehal N. Desai @ 2001-03-29  4:31 UTC (permalink / raw)
  To: 9fans

sorry for the elliptic (and incorrect) nature of the previous email.
here's a corrected one..

stupidly yours,
nehal 
> 
> hi,
> has anyone looked into using message passing on
> plan9.  We are looking into building a 
> smallish plan9 cluster -- between 128 and 512 processors
> that we would like to run some
> physics codes on (eg. weather modeling, QCD,etc)...
> right now, most codes here use MPI or OpenMP.
> but is there a better way that uses the plan
> 9 architecture in a more optimal (scalable) way.  
> 
> nehal
> 




* Re: [9fans] message passing.. sci programming
  2001-03-29  4:31 ` Nehal N. Desai
@ 2001-03-29  5:30   ` Andrey A Mirtchovski
  0 siblings, 0 replies; 6+ messages in thread
From: Andrey A Mirtchovski @ 2001-03-29  5:30 UTC (permalink / raw)
  To: 9fans

I was looking at the 9P protocol for implementing some sort of message
passing in my project... I had to abandon it for lack of time, though.

Depending on how close one keeps to the 'all is file' paradigm, 9P could
really save the day :)
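As a concrete illustration of why 9P is attractive here: every 9P message is a self-delimiting, little-endian, length-prefixed packet, so message framing comes for free. A minimal sketch, using the message layout of the 9P2000 revision as documented in intro(5); it encodes a single Tversion message and is nowhere near a working client:

```python
import struct

TVERSION = 100   # message type for Tversion, per the 9P2000 spec
NOTAG = 0xFFFF   # Tversion always carries the special NOTAG tag

def p9string(s: str) -> bytes:
    """9P string: 2-byte little-endian length followed by UTF-8 bytes."""
    b = s.encode("utf-8")
    return struct.pack("<H", len(b)) + b

def tversion(msize: int, version: str = "9P2000") -> bytes:
    """Encode a Tversion message: size[4] type[1] tag[2] msize[4] version[s]."""
    body = struct.pack("<BHI", TVERSION, NOTAG, msize) + p9string(version)
    # size[4] counts the whole packet, including the size field itself,
    # so a reader can always peel one complete message off the stream.
    return struct.pack("<I", 4 + len(body)) + body

msg = tversion(8192)
assert msg[:4] == (19).to_bytes(4, "little")   # self-delimiting frame
```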

andrey

On Wed, 28 Mar 2001, Nehal N. Desai wrote:

> sorry for the elliptic (and incorrect) nature of the previous email.
> here's a corrected one.. 
> 




* Re: [9fans] message passing.. sci programming
  2001-03-29  4:24 [9fans] message passing.. sci programming Nehal N. Desai
  2001-03-29  4:31 ` Nehal N. Desai
@ 2001-03-29  6:14 ` Jim Choate
  1 sibling, 0 replies; 6+ messages in thread
From: Jim Choate @ 2001-03-29  6:14 UTC (permalink / raw)
  To: 9fans; +Cc: hangar18, sci-tech, Remotely Piloted - Unmanned Autonomous Vehicles


I'm also curious about such things. Please keep me in the loop...

http://einstein.ssz.com/hangar18

On Wed, 28 Mar 2001, Nehal N. Desai wrote:

> hi,
> has anyone looked into using message passing on
> plan9?  We are looking into building a
> smallish plan9 cluster -- between 128 and 512 processors --
> that we would like to run some
> physics codes on (e.g. weather modeling, QCD, etc.)...
> right now, most codes here use MPI or OpenMP,
> but is there a better way that uses the plan
> 9 architecture in a more optimal (scalable) way?
> 
> nehal
> 




* Re: [9fans] message passing.. sci programming
  2001-03-29 17:05 presotto
@ 2001-03-29 17:17 ` Ronald G Minnich
  0 siblings, 0 replies; 6+ messages in thread
From: Ronald G Minnich @ 2001-03-29 17:17 UTC (permalink / raw)
  To: 9fans

On Thu, 29 Mar 2001 presotto@plan9.bell-labs.com wrote:

> What semantics are you looking for with the messages?  If it's
> just n-to-1 with in-order delivery and message boundaries, you
> can just use pipes bound into the file system.  The reader does
> the bind and then exports (via exportfs) his name space to every
> system that wants to send him messages.  Each party can do the same
> to effect 2-way communication.  No special tools necessary.


I've been wondering about this. Does anyone know how much bandwidth
you lose compared to raw IL or TCP links?


ron




* Re: [9fans] message passing.. sci programming
@ 2001-03-29 17:05 presotto
  2001-03-29 17:17 ` Ronald G Minnich
  0 siblings, 1 reply; 6+ messages in thread
From: presotto @ 2001-03-29 17:05 UTC (permalink / raw)
  To: 9fans

[-- Attachment #1: Type: text/plain, Size: 464 bytes --]

What semantics are you looking for with the messages?  If it's
just n-to-1 with in-order delivery and message boundaries, you
can just use pipes bound into the file system.  The reader does
the bind and then exports (via exportfs) his name space to every
system that wants to send him messages.  Each party can do the same
to effect 2-way communication.  No special tools necessary.

Of course, you have to provide your own pickling/marshaling routines...
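Since the pipe already preserves write boundaries and ordering, that marshaling layer is the only code left to write. A minimal sketch of such a fixed-format pickler (the record fields and format are illustrative assumptions, not from any existing code):

```python
import struct

# One message = rank (int32), timestep (int32), value (float64),
# packed big-endian so all nodes agree on byte order.
FMT = ">iid"

def marshal(rank: int, step: int, value: float) -> bytes:
    """Pickle one record into the bytes for a single pipe write."""
    return struct.pack(FMT, rank, step, value)

def unmarshal(msg: bytes) -> tuple:
    """Recover the record from the bytes of a single pipe read."""
    return struct.unpack(FMT, msg)

# Round trip: one write on a sender becomes one read on the receiver.
assert unmarshal(marshal(7, 1000, 0.5)) == (7, 1000, 0.5)
```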

[-- Attachment #2: Type: message/rfc822, Size: 1931 bytes --]

From: "Nehal N. Desai" <nehal@acl.lanl.gov>
To: 9fans@cse.psu.edu
Subject: [9fans] message passing.. sci programming
Date: Wed, 28 Mar 2001 21:24:12 -0700 (MST)
Message-ID: <200103290424.VAA06811@fred.acl.lanl.gov>

hi,
has anyone looked into using message passing on
plan9?  We are looking into building a
smallish plan9 cluster -- between 128 and 512 processors --
that we would like to run some
physics codes on (e.g. weather modeling, QCD, etc.)...
right now, most codes here use MPI or OpenMP,
but is there a better way that uses the plan
9 architecture in a more optimal (scalable) way?

nehal


end of thread, other threads:[~2001-03-29 17:17 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2001-03-29  4:24 [9fans] message passing.. sci programming Nehal N. Desai
2001-03-29  4:31 ` Nehal N. Desai
2001-03-29  5:30   ` Andrey A Mirtchovski
2001-03-29  6:14 ` Jim Choate
2001-03-29 17:05 presotto
2001-03-29 17:17 ` Ronald G Minnich
