From: ron minnich
To: 9fans@cse.psu.edu
Subject: [9fans] some #s
Date: Tue, 3 Jun 2003 22:21:32 -0600

The geode cluster Andrey mentioned has a clone that etherboots linux and
comes up running bproc etc. (see clustermatic.org for info on this). The
two clusters are identical in every way.

Time from power-on to full cluster ready to roll (i.e., ready for the
bproc equivalent of 'cpu'): 10 seconds.

Some timing for the same hardware with 9load/plan 9:

0-10 seconds:  9load is up and has loaded the plan 9 kernel over ether
10-20 seconds: no real output from plan 9
20-30 seconds (or so): until I get the 'boot from [il]' prompt

Obviously we'll try to speed this up just a bit, but still, it's not too
terrible. I need to get it to skip the prompt step, but that doesn't seem
like a big deal (there is no console, really, on these machines, so a
prompt is rather unimportant, esp. given there is only one choice).

Overall there are lots more procs needed to make the plan 9 nodes work as
cluster nodes than the Linux bproc stuff -- about a factor of 5. bproc
does remote exec somewhat faster; I'm working on the #s there too. But
plan 9 does do some things better than bproc. So it's a bit of a wash in
most ways. Nevertheless I'm going to see what else I can eliminate.

Anyway, this is just FYI; I'm hoping to get good ideas from people at
usenix on reducing some overheads on this system.

ron
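
[For the prompt-skipping step, one sketch: plan9.ini(8) documents a
bootargs variable whose value is typed at the boot prompt automatically,
so presetting it should answer the 'boot from [il]' prompt without a
console. The exact fragment below is an assumption about this setup, not
tested on these nodes:]

```
# plan9.ini fragment (hypothetical for this cluster):
# bootargs is auto-entered at the boot prompt, so the
# kernel takes the il choice without waiting for input
bootargs=il
```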