From: Bakul Shah <bakul+plan9@bitblocks.com>
To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
Subject: Re: [9fans] channels across machines
Date: Sat, 18 Jul 2009 11:52:23 -0700 [thread overview]
Message-ID: <20090718185223.F2A275B18@mail.bitblocks.com> (raw)
In-Reply-To: Your message of "Sat, 18 Jul 2009 06:25:19 EDT." <158688033fe375ed6e935e472986d876@quanstro.net>
On Sat, 18 Jul 2009 06:25:19 EDT erik quanstrom <quanstro@quanstro.net> wrote:
> On Sat Jul 18 03:46:01 EDT 2009, bakul+plan9@bitblocks.com wrote:
> > Has anyone extended the idea of channels where the
> > sender/receiver are on different machines (or at least in
> > different processes)? A netcat equivalent for channels!
>
> i think the general idea is that if you want to do this between
> arbitrary machines, you provide a 9p interface. you can think
> of 9p as a channel with a predefined set of messages. acme
> does this. kernel devices do this.
>
> however inferno provides file2chan
> http://www.vitanuova.com/inferno/man/2/sys-file2chan.html.
> of course, somebody has to provide the 9p interface, even
> if that's just posting a fd to /srv.
>
> if you wanted to do something like file2chan in plan 9 and c, you're
> going to have to marshal your data. this means that chanconnect
> as specified is impossible. (acme avoids this by using text only.
> the kernel devices keep this under control by using a defined
> byte order and very few binary interfaces.) i don't know how you would
> solve the naming problems e.g. acme would have using dial-string
> addressing (i assume that's what you meant; host-port addressing is
> ip specific.). what would be the port-numbering heuristic for allowing
> multiple acmes on the same cpu server?
>
> after whittling away problem cases, i think one is left with pipes,
> and it seems pretty clear how to connect things so that
> chan <-> pipe <-> chan. one could generalize to multiple
> machines by using tools like cpu(1).
First of all, thanks for all the responses!
I should've mentioned this won't run on top of plan9 (or
Unix). What I really want is alt! I brought up RPC but
really, this is just Limbo's "chan of <type>" idea. In the
concurrent application where I want to use this, it would
factor out a bunch of stuff and simplify communication.
The chanconnect("<host>:<port>") syntax was to get the idea
across. A real building block might look something like
this:
    int fd = <get a pipe end to something somehow>;
    Chan c = fd2chan(fd, <marshalling functions>);
Ideally I'd generate relevant struct definitions and
marshalling functions from the same spec.
-- bakul