Date: Wed, 17 Oct 2007 09:39:10 -0700
From: "ron minnich"
To: "Fans of the OS Plan 9 from Bell Labs" <9fans@cse.psu.edu>
Subject: Re: [9fans] parallel/distributed computation

On 10/17/07, David Leimbach wrote:
>
> Who's "we"? I have experience implementing MPI against Portals, TCP,
> and funky shared memory on Mac OS X.

"we" is jim, charles, eric, and me.

I'm porting some of the HPCS apps to Plan 9. After thinking hard about
an "MPI lite" approach, I'm taking the "write these apps as though it
were NOT 1970" approach. In particular, if you have

1) threads
2) sizeof
3) function pointers (and prototypes; now there's a concept)
4) a functioning type system

there's a lot of MPI you (well, I) don't need (there's a toy sketch of
what I mean after my sig).

Seen in retrospect, MPI is really a band-aid for the failings of the
CM-5 and T3D runtime systems. The application to clusters was actually
an afterthought.

> Of course for me to do so for Plan 9 would probably violate several
> employment non-competes and other IP contracts I have.

Ah, c'mon David, it's just a few years in a pound-me-in-the-ass federal
prison. They might even allow conjugal visits.

ron
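
P.S. To make that list concrete, here is a toy sketch of the style I
mean: a ring exchange done with Plan 9's libthread channels (see
thread(2)) instead of MPI_Send/MPI_Recv. The ring topology, the Rank
struct, and N=4 are made up for the example; chancreate, sendul,
recvul, proccreate, and threadexitsall are the real library calls.
This is not one of the HPCS apps, just the flavor of the thing.

#include <u.h>
#include <libc.h>
#include <thread.h>

enum { N = 4, STACK = 8192 };

Channel *in[N];		/* one inbox per rank */
Channel *done;		/* ranks report completion here */

typedef struct Rank Rank;
struct Rank { int id; };

void
rank(void *a)
{
	Rank *r;
	ulong got;

	r = a;
	/* the "send" half: pass my id to my neighbor's inbox */
	sendul(in[(r->id+1)%N], r->id);
	/* the "recv" half: take whatever landed in mine */
	got = recvul(in[r->id]);
	print("rank %d got %lud\n", r->id, got);
	sendul(done, 1);
}

void
threadmain(int argc, char *argv[])
{
	int i;
	static Rank r[N];

	USED(argc); USED(argv);
	done = chancreate(sizeof(ulong), 0);
	for(i = 0; i < N; i++)
		in[i] = chancreate(sizeof(ulong), 1);	/* 1-deep buffer: the ring can't deadlock */
	for(i = 0; i < N; i++){
		r[i].id = i;
		proccreate(rank, &r[i], STACK);	/* procs preempt, like ranks */
	}
	for(i = 0; i < N; i++)
		recvul(done);	/* a barrier, the cheap way */
	threadexitsall(nil);
}

No MPI_Init, no communicators, no datatype registration: sizeof and
the type system do that work, and a channel per rank replaces the tag
matching.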