9fans - fans of the OS Plan 9 from Bell Labs
From: "ron minnich" <rminnich@gmail.com>
To: "Fans of the OS Plan 9 from Bell Labs" <9fans@cse.psu.edu>
Subject: Re: [9fans] consterm
Date: Tue,  6 Nov 2007 21:29:16 -0800
Message-ID: <13426df10711062129v12fc7afcgfd7da14060288f79@mail.gmail.com>
In-Reply-To: <a4e6962a0711061628o1b6e49f3q2295f1bd64d664b2@mail.gmail.com>

I use xcpu for a lot of what you are talking about. I'm using it now
on a small PNFS cluster I built. If I want interactive access ...
xcpu <some nodes> /bin/bash

done. And the bash comes from my machine, so if the nodes are busybox
nodes (they usually are), I need not worry that they don't have my
binaries.
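
To make that concrete -- the node names below are made up, and the
binary is just whatever happens to be on my desktop, but it's the same
pattern as the bash line above:

xcpu n0,n1,n2,n3 /usr/bin/lspci

lspci comes from my machine, just like the bash above, so the busybox
nodes don't need to have it installed.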

But, that said, I keep cursing my bad luck at not having a real cpu on
Linux. I could use your consterm for the things I'm doing. I'm finding
that xcpu is fine for some things, but cpu really is nicer. I'm
considering adding namespace support to xcpu so that when you start
(e.g.) a shell via xcpu, once the shell starts up, its namespace is
already there. Hmm, with that I'd have just recreated cpu -- save that
I could still run without the namespace stuff, since it would be
optional. And cpu doesn't do the nice tree-spawn stuff that Lucho put
into xcpu.

Note that we did this on bproc, 5 years ago, and it was pretty nice.
We could define the namespace for a process, and then start the
process up on 1024 nodes and have them all start with the desired
namespace. Handy!
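
To make the idea concrete, here's the flavor of per-process namespace
description I mean, written in the style of Plan 9's namespace(6).
This is only a sketch -- not the format bproc actually consumed -- and
the paths are made up:

bind /$cputype/bin /bin
bind -a /rc/bin /bin
bind -a /usr/$user/bin/rc /bin

Define something like that once, hand it to the spawn, and all 1024
processes start life with the same view of the world.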

One thing you can't easily do with cpu, though, that xcpu is nice for:
xcpu md,d1,d2,d3 fdisk /dev/sda

(That gets me four parallel interactive fdisks on the four nodes
md,d1,d2,d3 -- it seems weird, but it worked wonderfully well. cpu is
just not set up to do this type of thing, and neither would your
consterm be.)

It seems to me from watching it that the real weight in 'cpu' is in
all the stuff that gets run when you log in, not necessarily in cpu
itself. That's not too hard to get around.
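
Most of that weight presumably lives in $home/lib/profile and whatever
it drags in, not in cpu proper. Here's a sketch of the kind of
trimming I mean -- rc syntax, and the case bodies are invented purely
for illustration:

switch($service){
case terminal
	# the heavy stuff stays on the terminal side
	plumber
	rio
case cpu
	# keep this branch nearly empty so cpu connections come up fast
	prompt=('cpu% ' '	')
}

Put the expensive startup in the terminal case and a cpu login has
very little left to do.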

All this points (to me) to the fact that we have not really reached
the final answer on the things we want to do with remote access. Or,
maybe, there's no final answer at all.

ron


Thread overview: 15+ messages
2007-11-06 18:08 Eric Van Hensbergen
2007-11-06 18:15 ` erik quanstrom
2007-11-06 18:23 ` Tim Wiess
2007-11-06 19:40 ` Skip Tavakkolian
2007-11-06 20:00 ` Russ Cox
2007-11-06 21:43   ` Eric Van Hensbergen
2007-11-06 22:39     ` Uriel
2007-11-06 22:54       ` Eric Van Hensbergen
2007-11-06 23:35         ` erik quanstrom
2007-11-06 23:53           ` arisawa
2007-11-07  0:28           ` Eric Van Hensbergen
2007-11-07  3:51             ` matt
2007-11-07  3:55               ` matt
2007-11-07  5:29             ` ron minnich [this message]
2007-11-07  5:48               ` andrey mirtchovski
