9fans - fans of the OS Plan 9 from Bell Labs
* [9fans] message passing.. sci programming
@ 2001-03-29  4:24 Nehal N. Desai
  2001-03-29  4:31 ` Nehal N. Desai
  2001-03-29  6:14 ` Jim Choate
  0 siblings, 2 replies; 6+ messages in thread
From: Nehal N. Desai @ 2001-03-29  4:24 UTC (permalink / raw)
  To: 9fans

hi,
has anyone looked into using message passing on
Plan 9?  We are looking into building a
smallish Plan 9 cluster -- between 128 and 512 processors --
that we would like to run some
physics codes on (e.g. weather modeling, QCD, etc.).
Right now, most codes here use MPI or OpenMP,
but is there a better way that uses the Plan
9 architecture in a more optimal (scalable) way?

nehal


* Re: [9fans] message passing.. sci programming
@ 2001-03-29 17:05 presotto
  2001-03-29 17:17 ` Ronald G Minnich
  0 siblings, 1 reply; 6+ messages in thread
From: presotto @ 2001-03-29 17:05 UTC (permalink / raw)
  To: 9fans

What semantics are you looking for with the messages?  If it's
just n-to-1 with in-order delivery and message boundaries, you
can just use pipes bound into the file system.  The reader does
the bind and then exports (via exportfs) his name space to every
system that wants to send him messages.  Each party can do the same
to effect 2-way communication.  No special tools necessary.
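
As a concrete sketch (the path /n/inbox is made up for illustration,
and must already exist as a directory to bind onto), the reader can
bind the kernel pipe device and read whole messages from one end:

	#include <u.h>
	#include <libc.h>

	void
	main(void)
	{
		char buf[8192];
		int fd;
		long n;

		/* '#|' is the pipe device; binding it at /n/inbox
		   gives /n/inbox/data and /n/inbox/data1, the two
		   ends of a pipe that preserves write boundaries */
		if(bind("#|", "/n/inbox", MREPL) < 0)
			sysfatal("bind: %r");
		fd = open("/n/inbox/data1", OREAD);
		if(fd < 0)
			sysfatal("open: %r");
		/* each read returns at most one sender's write */
		while((n = read(fd, buf, sizeof buf)) > 0)
			write(1, buf, n);
		exits(nil);
	}

Senders then attach the reader's name space and write to the other
end, e.g. import node0 /n/inbox /n/inbox (node0 being the reader's
hypothetical machine name) followed by writes to /n/inbox/data.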

Of course, you have to provide your own pickling/marshaling routines...
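
For instance, a tiny hand-rolled packer (the tag/count wire format
here is invented, purely to show the idea) might look like:

	#include <u.h>
	#include <libc.h>

	/* pack a tag byte and a 32-bit count, little-endian;
	   returns the number of bytes used (5) */
	int
	packmsg(uchar *buf, int tag, ulong count)
	{
		buf[0] = tag;
		buf[1] = count;
		buf[2] = count>>8;
		buf[3] = count>>16;
		buf[4] = count>>24;
		return 5;
	}

A single write of the packed buffer to the pipe then travels as one
message, so the reader never sees a partial record.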


Thread overview: 6+ messages
2001-03-29  4:24 [9fans] message passing.. sci programming Nehal N. Desai
2001-03-29  4:31 ` Nehal N. Desai
2001-03-29  5:30   ` Andrey A Mirtchovski
2001-03-29  6:14 ` Jim Choate
2001-03-29 17:05 presotto
2001-03-29 17:17 ` Ronald G Minnich
