From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <818e82f41eaf90756fdf63b4791ddf4b@plan9.escet.urjc.es>
To: 9fans@cse.psu.edu
Subject: Re: [9fans] Process distribution?
From: Fco.J.Ballesteros
Date: Fri, 21 May 2004 16:22:56 +0200
In-Reply-To: <71b00c64409f2dda50adeea8d576ab4f@9srv.net>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="upas-idwsqazpqmspeteuqaztoqqymm"
Topicbox-Message-UUID: 8337dfd0-eacd-11e9-9e20-41e7f4b1d025

This is a multi-part message in MIME format.
--upas-idwsqazpqmspeteuqaztoqqymm
Content-Disposition: inline
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: 7bit

We do something similar for Plan B, and we are in the process of
backporting it to Plan 9. See

	http://lsub.org/ls/export/bman/4/proc.html

The man page is out of date, but that doesn't matter here.

Regarding process migration: please don't ;-)

--upas-idwsqazpqmspeteuqaztoqqymm
Content-Type: message/rfc822
Content-Disposition: inline

Message-ID: <71b00c64409f2dda50adeea8d576ab4f@9srv.net>
Date: Fri, 21 May 2004 09:24:08 -0400
From: a@9srv.net
To: 9fans@cse.psu.edu
Subject: Re: [9fans] Process distribution?
In-Reply-To: <20040521113733.LDBQ14728.mxfep02.bredband.com@mxfep02>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"

i suspect a random distribution of processes would yield worse results
than you expect. you really want a scheduler of sorts that'll find the
most appropriate system to run your proc on.

i think the process dispatch stuff you're looking for can all be done
in user mode, with just an app that essentially calls cpu and does a
few namespace ops. the slightly trickier part (still not *too* hard)
is choosing who to dispatch to, but i think that can be managed
properly by having the participating cpu servers provide their own
status information and having a dispatch daemon on the client poll
them when an app needs to be dispatched. you could even define a way
for the app to specify its resource requirements to the dispatch
daemon. a rough sketch is below.

i really should look at the plan 9 and inferno grid stuff.
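here's a back-of-the-envelope rc sketch of the dispatch part, just to
make it concrete. everything in it is assumed rather than existing
practice: the server names are made up, it expects mntgen(4) on /n so
the import mount points exist, and it treats field 8 of /dev/sysstat
summed over processors as "how busy is this machine".

	#!/bin/rc
	# dispatch: run a command on the least-loaded cpu server.
	rfork en			# private namespace and environment
	servers=(alice bob carol)	# hypothetical participating cpu servers
	best=()
	bestload=999999
	for(s in $servers){
		# import the server's root so we can read its /dev/sysstat
		if(import $s / /n/$s >[2]/dev/null){
			# sum the load column over all processors;
			# field 8 is assumed to be the load average
			load=`{awk '{n += $8} END{print 0+n}' /n/$s/dev/sysstat}
			if(test $load -lt $bestload){
				bestload=$load
				best=$s
			}
		}
	}
	if(~ $#best 0){
		echo 'dispatch: no cpu server reachable' >[1=2]
		exit unreachable
	}
	# cpu(1) exports the local namespace to the remote side,
	# so the "few namespace ops" mostly come for free
	exec cpu -h $best -c $*

usage would be something like `dispatch mk all'. polling at dispatch
time keeps the servers dumb; the resource-requirements idea would just
swap a richer metric in for the sysstat sum.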
i forget (and drawterm doesn't work from behind my work firewall right
now): does cpu preserve stdout and stderr?

things get tricky - downright *hard* - when you start looking for a
clean way to do process *migration*.

--upas-idwsqazpqmspeteuqaztoqqymm--