From: Charles Forsyth
To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
Date: Thu, 3 Mar 2016 16:04:10 +0000
Subject: Re: [9fans] Fwd: Multiplexing Styx/9P2000 over TCP connections
> So, I've been looking at the source code of Inferno, and I've noticed
> that, when mount(1) wants to connect to a Styx/9P2000 server on a remote
> machine, it generally opens up a new TCP connection... one for each new
> mount... even if it's just an additional connection to the same service
> on the same remote host.

Yes, although as someone observed, you can share that connection by
mounting it in different name spaces. Inferno doesn't provide Plan 9's srv,
but sys->mount takes a file descriptor for any suitable connection
(not just a network connection, and not just tcp/ip) and so it can be done there too.

> Recalling the specification for 9P2000, the protocol supports the
> multiplexing of multiple 9P2000 clients/"sessions" onto a single,
> multiplexed, session with the server. In theory, all a 9P2000
> multiplexer would have to do is map tags and fids so that different
> clients don't use the same values, and negotiate a common version and
> msize in the Tversion/Rversion transactions. All the functionality of
> the protocol, including access control using afids, would be preserved.

The kernel's "mount driver" does the multiplexing for several clients
at one level. Only one Tversion exchange is done per connection,
and the kernel generates unique fids and tags as required.
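
To make that bookkeeping concrete, here is a small sketch in Go (my
illustration, not the kernel's mount driver; the names mux9p, fidMap and
offerMsize are made up for the example). Fids are renamed to values unique
on the shared connection; tags get exactly the same treatment, and each
client's Tversion is answered locally with an msize no larger than the one
agreed in the shared connection's single exchange:

	package mux9p

	// fidMap renames one client's fids so they are unique on the
	// shared server connection.
	type fidMap struct {
		next uint32            // next unused fid on the shared connection
		fids map[uint32]uint32 // client fid -> shared-connection fid
	}

	func newFidMap(base uint32) *fidMap {
		return &fidMap{next: base, fids: make(map[uint32]uint32)}
	}

	// translate returns the shared-connection fid for a client fid,
	// allocating a fresh one the first time it is seen. A real
	// multiplexer must also free entries on clunk/remove and must
	// avoid NOFID (0xFFFFFFFF).
	func (m *fidMap) translate(clientFid uint32) uint32 {
		if f, ok := m.fids[clientFid]; ok {
			return f
		}
		f := m.next
		m.next++
		m.fids[clientFid] = f
		return f
	}

	// offerMsize bounds what a client may be offered by what the
	// single Tversion exchange on the shared connection agreed.
	func offerMsize(clientMsize, sharedMsize uint32) uint32 {
		if clientMsize < sharedMsize {
			return clientMsize
		}
		return sharedMsize
	}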

Plan 9's cpu server typically does multiplex many clients, even different client IDs,
on the same connection from that cpu server to a shared file server (eg, fossil/venti).
Each Tattach fid will typically be authorised to a given uname/aname pair by a Tauth
and subsequent authentication exchange.

/srv/boot is where the initial connection is posted, and it's then shared by a line such as

	mount -a #s/boot /

in /lib/namespace, connecting the server at the far end to / in a union mount.
(The actual line in /lib/namespace is more elaborate, but that's the essence.)

There is an assumption that the kernel is trustworthy, and won't deliberately
or inadvertently use a fid that's authenticated to one uname/aname in a 9P request
resulting from another user's system call.

> I'd assumed, since the protocol allows for this, that this sort of
> multiplexing was done by the Plan 9 and Inferno kernels. Is that not
> the case? And if not, then why not?

It is done, by the mount driver in the kernel. Other specialised applications
can also multiplex requests, although there aren't many examples.
Several provide different forms of shared cache.

> To take a stab at answering my own question, I suspect that it might
> have something to do with the Station to Station protocol and SSL setup
> done on a connection prior to exchanging Styx messages. ...

When a connection is made to a remote machine, the connection itself
might also be authenticated (usually mutually), but that happens before
9P proper begins. The principal authenticated at that connection level
essentially speaks for all users that use that connection, including any
that later authenticate over that connection using Tauth inside 9P.
Thus, a shared cpu server speaks for all users that share its file server
connection. (There is a little mechanism on Plan 9 to control the
"speakfor" role.) Put another way, if you share a cpu server with other
users, you're relying on the probity of the provider of the service not
to cheat. (This isn't different from many other shared services.)
Obviously there are other ways for a shared cpu service to cheat, because
it controls the machine, so it's not particularly a 9P problem.

> In fact, while complying fully with the 9P2000 specification, it should
> also be possible to multiplex sessions in the REVERSE direction
> (connections from clients on the remote "server" host BACK to servers
> listening on the local "client" host) over the same TCP connection used
> to carry the "forward" (local --> remote) sessions.

> Now that I've been typing about this for a few paragraphs, it occurs to
> me that a multiplexer like this could probably be implemented as a
> system service running in userspace, without much (if any) extra support
> from the kernel.

Yes, you can easily write a 9P multiplexor at user level.
In fact, I think Roger Peppe wrote a library to support writing 9P
multiplexers in Inferno, but I can't find it now. (9P and Styx are now the
same protocol.)
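
For the flavour of it, here is a sketch in Go of the per-client tag
rewriting such a user-level multiplexor does (my own illustration, not the
library mentioned above; the names Mux, Forward and Dispatch are made up).
It assumes only the standard 9P framing: size[4] little-endian, then
type[1], then tag[2]. Fids need the same remapping, as in the earlier
sketch:

	package mux9p

	import (
		"encoding/binary"
		"io"
		"sync"
	)

	// pending remembers, for each tag in flight on the shared server
	// connection, the owning client's reply channel and original tag.
	type pending struct {
		replies chan []byte
		oldTag  uint16
	}

	type Mux struct {
		mu      sync.Mutex
		server  io.Writer
		nextTag uint16
		tags    map[uint16]pending
	}

	// Forward rewrites the tag of one client T-message (bytes 5..6,
	// little-endian) to a value unique on the shared connection and
	// sends it on. A real multiplexor must skip tags still in flight
	// and NOTAG (0xFFFF).
	func (m *Mux) Forward(msg []byte, replies chan []byte) error {
		m.mu.Lock()
		t := m.nextTag
		m.nextTag++
		m.tags[t] = pending{replies, binary.LittleEndian.Uint16(msg[5:7])}
		binary.LittleEndian.PutUint16(msg[5:7], t)
		m.mu.Unlock()
		_, err := m.server.Write(msg)
		return err
	}

	// Dispatch takes one R-message read from the shared connection,
	// restores the client's original tag, and hands the reply back
	// to the client that issued the request.
	func (m *Mux) Dispatch(msg []byte) {
		t := binary.LittleEndian.Uint16(msg[5:7])
		m.mu.Lock()
		p, ok := m.tags[t]
		delete(m.tags, t)
		m.mu.Unlock()
		if ok {
			binary.LittleEndian.PutUint16(msg[5:7], p.oldTag)
			p.replies <- msg
		}
	}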

A little different: several years ago, I needed a way for a 9P machine
exporting a service to become a 9P client for the remote, as a way of
getting a certain type of streaming.
Note that 9P message types have a low-order bit that gives the direction,
and normally one end only sends types with 0 and receives types with 1,
while the other only sends types with 1 and receives types with 0.
Thus a 9P message multiplexor looking only at the low-order bit can split
the stream into two 9P conversations, going the opposite way to each other,
so at each end there's a 9P client and server, but part of the same logical
conversation (the flipside of the original conversation).
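
Sketching that splitter in Go, as an illustration of the idea rather than
code that was ever written (readMsg and split are names invented for the
example): T-messages have even type bytes and R-messages odd ones, so one
pass over the framed stream is enough:

	package mux9p

	import (
		"encoding/binary"
		"errors"
		"io"
	)

	// readMsg reads one framed 9P message: size[4] (little-endian,
	// counting itself), type[1], tag[2], then the body. A real
	// implementation would also bound size by the negotiated msize.
	func readMsg(r io.Reader) ([]byte, error) {
		var hdr [4]byte
		if _, err := io.ReadFull(r, hdr[:]); err != nil {
			return nil, err
		}
		n := binary.LittleEndian.Uint32(hdr[:])
		if n < 7 { // size[4]+type[1]+tag[2] is the minimum
			return nil, errors.New("9p: message too short")
		}
		msg := make([]byte, n)
		copy(msg, hdr[:])
		if _, err := io.ReadFull(r, msg[4:]); err != nil {
			return nil, err
		}
		return msg, nil
	}

	// split demultiplexes the incoming byte stream at one end, looking
	// only at the low-order bit of the type byte: even types are
	// T-messages for the local server, odd types are R-messages for
	// the local client.
	func split(conn io.Reader, toServer, toClient io.Writer) error {
		for {
			msg, err := readMsg(conn)
			if err != nil {
				return err
			}
			w := toServer
			if msg[4]&1 == 1 {
				w = toClient
			}
			if _, err := w.Write(msg); err != nil {
				return err
			}
		}
	}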

As sometimes happens, the need for it went away, so I never did write it,
but I thought it might be fun.

>
> So, do Plan 9 or Inferno already do anything like this? If not, do you
> think it would be a smart thing to implement? I'm curious to hear other
> people's thoughts on this.

On 3 March 2016 at 15:32, Charles Forsyth <charles.forsyth@gmail.com> wrote:
>
> On 3 March 2016 at 02:09, <cigar562hfsp952fans@icebubble.org> wrote:
>
>> I recently posted the following to the Inferno mailing list (but
>> received no response). I'm re-posting here, as this applies to Plan 9
>> just as much as to Inferno, anyway...
>
> Sorry. You asked some interesting questions but I was busy with something
> else when I first saw it, and then it slipped my mind.
