9fans - fans of the OS Plan 9 from Bell Labs
* [9fans] 9P, procedure calls, kernel, confusion. :(
From: Siddhant @ 2008-02-05  9:56 UTC (permalink / raw)
  To: 9fans

Greetings!
I have started reading the Plan 9 documentation, beginning with the
main paper, and I am stuck at one point. It would be really kind if
someone could answer these questions, however silly they might look. :)

>>>" for each fid, the kernel maintains a local representation in a data structure called a channel, so all operations on files performed by the kernel involve a channel connected to that fid"
Q. Can I visualise this as a fid connected to a channel that defines
the operations on it? How exactly should I visualise this 'channel'?

>>>"9P has two forms: RPC messages sent on a pipe or network connection and a procedural interface within the kernel. Since kernel device drivers are directly addressable, there is no need to pass messages to communicate with them; instead each 9P transaction is implemented by a direct procedure call."
Q. Does it mean that every RPC message essentially has to end up being
implemented via that procedural interface?

>>>"A table in the kernel provides a list of entry points corresponding one to one with the 9P messages for each device."
Q. Can I relate this 'table in the kernel' to the 'representation in a
data structure called a channel'? And does 'for each device' refer to
individual fids? Moreover, I am totally confused about the
'conversions' between 9P messages, procedure calls, system calls and
operations on fids. What is being converted to what, and when, and
how?

I'd really appreciate it if someone could take the time to reply.
Thanks.
Regards,
Siddhant



* Re: [9fans] 9P, procedure calls, kernel, confusion. :(
From: Charles Forsyth @ 2008-02-05 10:42 UTC (permalink / raw)
  To: 9fans

you can see the structures themselves in /sys/src/9/port/portdat.h

inside the kernel, the traditional integer file descriptor indexes an array of pointers to (possibly shared)
instances of the channel data structure, Chan, which contains the integer fid, an open mode, a current offset,
and, most importantly, a pointer (or an index into a table of pointers)
to the Dev structure for the device driver associated with the Chan.  each device driver implements its
own name space (ie, operations such as walk, dirread, stat) including the file i/o
operations (eg, open, read, write).  most drivers use a small library (/sys/src/9/port/dev.c)
to make implementing simple name spaces mainly a matter of defining a table
and some conventional calls for walk, stat, wstat, open, and dirread (which is handled as a special
case inside the device's read operation, not as a separate entry in Dev).
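
for orientation, here is a heavily abridged sketch of the two structures;
the real declarations in portdat.h carry many more fields, locks and
reference counts.

	/* abridged sketch only; see /sys/src/9/port/portdat.h for the real thing */
	struct Chan
	{
		ushort	type;		/* index of the device driver in devtab[] */
		ulong	fid;		/* fid used when talking 9P through the mount driver */
		ushort	mode;		/* open mode */
		vlong	offset;		/* current file offset */
		Qid	qid;		/* file identity supplied by the driver or server */
	};

	struct Dev
	{
		int	dc;		/* device character, eg 'M' for the mount driver */
		char	*name;
		Chan*	(*attach)(char*);
		Walkqid*(*walk)(Chan*, Chan*, char**, int);
		int	(*stat)(Chan*, uchar*, int);
		Chan*	(*open)(Chan*, int);
		void	(*close)(Chan*);
		long	(*read)(Chan*, void*, long, vlong);
		long	(*write)(Chan*, void*, long, vlong);
		int	(*wstat)(Chan*, uchar*, int);
		/* ... create, remove, bread, bwrite, etc. */
	};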

all open files have a Chan, but not all Chans correspond to open files (they can mark
current directories or, more interestingly, the current location in a walk of a name space).

inside the kernel, operations are done by
	Chan *c = p->fgrp->fd[fd];
	devtab[c->type]->walk(c, ...);
	devtab[c->type]->read(c, va, n, offset);
and so on.  fairly obvious stuff, really.

>Q. Does it mean that every RPC message essentially has to end up being implemented via that procedural interface?

one interesting driver (ie, one Dev) is the `mount driver' (/sys/src/9/port/devmnt.c).  its implementations
of walk, open, stat, read, write, etc build 9P messages representing the corresponding operations and
exchange them on any given file descriptor (ie, Chan) using devtab[...]->write and devtab[...]->read
so it doesn't care what sort of file descriptor is used (pipe, network, ...).

(an alternative might be to use 9P messages throughout.)
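
to make that concrete, a read on a mounted Chan amounts to roughly the sketch
below.  this is an illustration of the idea, not the code in devmnt.c, and
mountrpc here stands in for the real marshalling, tag-matching and flushing
machinery.

	/* illustrative sketch: a read on a mounted Chan becomes a
	 * Tread/Rread exchange on the connection behind the mount */
	static long
	mntread(Chan *c, void *buf, long n, vlong off)
	{
		Fcall t, r;

		t.type = Tread;
		t.fid = c->fid;
		t.offset = off;
		t.count = n;
		mountrpc(c, &t, &r);	/* marshal t, write it out, read back and unmarshal the Rread */
		memmove(buf, r.data, r.count);
		return r.count;
	}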

>>>"A table in the kernel provides a list of entry points corresponding one to one with the 9P messages for each device."
>Q. Can I relate this 'table in the kernel' to the 'representation in a

it's the devtab[] that lists pointers to the Dev structures representing configured devices.



* [9fans] Re: 9P, procedure calls, kernel, confusion. :(
From: Siddhant @ 2008-02-05 12:59 UTC (permalink / raw)
  To: 9fans

On Feb 5, 3:35 pm, fors...@terzarima.net (Charles Forsyth) wrote:
> you can see the structures themselves in /sys/src/9/port/portdat.h
>
> inside the kernel, the traditional integer file descriptor indexes an array of pointers to (possibly shared)
> instances of the channel data structure, Chan, which contains the integer fid, an open mode, a current offset,
> and, most importantly, a pointer (or an index into a table of pointers)
> to the Dev structure for the device driver associated with the Chan.  each device driver implements its
> own name space (ie, operations such as walk, dirread, stat) including the file i/o
> operations (eg, open, read, write).  most drivers use a small library (/sys/src/9/port/dev.c)
> to make implementing simple name spaces mainly a matter of defining a table
> and some conventional calls for walk, stat, wstat, open, and dirread (which is handled as a special
> case inside the device's read operation, not as a separate entry in Dev).
>
> all open files have a Chan, but not all Chans correspond to open files (they can mark
> current directories or, more interestingly, the current location in a walk of a name space).
>
> inside the kernel, operations are done by
>         Chan *c = p->fgrp->fd[fd];
>         devtab[c->type]->walk(c, ...);
>         devtab[c->type]->read(c, va, n, offset);
> and so on.  fairly obvious stuff, really.
>
> >Q. Does it mean that every RPC message essentially has to end up being implemented via that procedural interface?
>
> one interesting driver (ie, one Dev) is the `mount driver' (/sys/src/9/port/devmnt.c).  its implementations
> of walk, open, stat, read, write, etc build 9P messages representing the corresponding operations and
> exchange them on any given file descriptor (ie, Chan) using devtab[...]->write and devtab[...]->read
> so it doesn't care what sort of file descriptor is used (pipe, network, ...).
>
> (an alternative might be to use 9P messages throughout.)
>
> >>>"A table in the kernel provides a list of entry points corresponding one to one with the 9P messages for each device."
> >Q. Can I relate this 'table in the kernel' to the 'representation in a
>
> it's the devtab[] that lists pointers to the Dev structures representing configured devices.

Thanks a lot for the reply.
From what I could make out, does it work like the following?

9P is used universally in Plan 9 to deal with files. If the file is
present in a kernel-resident file system, then a direct procedure
call is enough; otherwise, the mount driver comes into the picture.
If it is in a kernel-resident file system, then the file's channel is
looked up to find the Dev structure (the device driver corresponding
to the file the channel refers to), which defines the procedures that
finally get executed.
If it's not... then...?

One more point: I googled a lot for "kernel-resident file systems" and
"non-kernel-resident file systems", but I could not find a single
useful link. It would be great if you could explain the difference
between the two. I hope that would clear things up a bit.

Thanks once again.



* Re: [9fans] Re: 9P, procedure calls, kernel, confusion. :(
From: erik quanstrom @ 2008-02-05 16:35 UTC (permalink / raw)
  To: 9fans

> see bind(2):
>
>           Finally, the MCACHE flag, valid for mount only, turns on
>           caching for files made available by the mount.  By default,
>           file contents are always retrieved from the server.  With
>           caching enabled, the kernel may instead use a local cache to
>           satisfy read(5) requests for files accessible through this
>           mount point.  The currency of cached data for a file is ver-
>           ified at each open(5) of the file from this client machine.
>

sleazy hacks notwithstanding.

- erik



* Re: [9fans] Re: 9P, procedure calls, kernel, confusion. :(
From: Charles Forsyth @ 2008-02-05 16:41 UTC (permalink / raw)
  To: 9fans

> 9P is used universally in plan9 to deal with files. If the file is
> present in the kernel resident file system, then a direct procedure
> call is enough, else, the mount driver comes into picture.
> If its in the kernel resident file system, then the channel of the
> file is looked up for the Dev structure (the device driver
> corresponding to the file being referred to by the channel) , which
> defines all those procedures, that may be finally executed.

yes, that seems fine.  conventional device drivers are the main
file servers that are built-in, but there are others.  if you look
at section 3 of the programmer's manual you'll see a representative set.
notably, in plan 9, none of them deal with files on discs etc.
that's all done by user-level servers, which are typically found in section 4 of
the manual.
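
for a flavour of the user-level side, a 9P server is just a program that reads
requests, decodes them and answers.  a minimal sketch of the receiving loop
using fcall(2) follows; serveloop is made up for illustration, and real servers
normally use lib9p rather than doing this by hand.

	#include <u.h>
	#include <libc.h>
	#include <fcall.h>

	/* minimal sketch of a user-level 9P server's read loop:
	 * read one message, decode it, dispatch on its type */
	void
	serveloop(int fd)
	{
		uchar buf[8192+IOHDRSZ];
		Fcall f;
		int n;

		while((n = read9pmsg(fd, buf, sizeof buf)) > 0){
			if(convM2S(buf, n, &f) == 0)
				sysfatal("bad 9P message");
			switch(f.type){
			case Tread:
				/* look up f.fid, build an Rread, convS2M it, write it back */
				break;
			/* ... Tversion, Tattach, Twalk, Topen, etc. */
			}
		}
	}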

>there is no client-side caching in the plan 9 kernel.

see bind(2):

          Finally, the MCACHE flag, valid for mount only, turns on
          caching for files made available by the mount.  By default,
          file contents are always retrieved from the server.  With
          caching enabled, the kernel may instead use a local cache to
          satisfy read(5) requests for files accessible through this
          mount point.  The currency of cached data for a file is ver-
          ified at each open(5) of the file from this client machine.



* [9fans] Re: 9P, procedure calls, kernel, confusion. :(
From: Siddhant @ 2008-02-06  9:45 UTC (permalink / raw)
  To: 9fans

On Feb 5, 9:36 pm, quans...@quanstro.net (erik quanstrom) wrote:
> > see bind(2):
>
> >           Finally, the MCACHE flag, valid for mount only, turns on
> >           caching for files made available by the mount.  By default,
> >           file contents are always retrieved from the server.  With
> >           caching enabled, the kernel may instead use a local cache to
> >           satisfy read(5) requests for files accessible through this
> >           mount point.  The currency of cached data for a file is ver-
> >           ified at each open(5) of the file from this client machine.
>
> sleazy hacks notwithstanding.
>
> - erik

Thanks, people, for your replies! Thanks a lot! I'm new to these kernel
thingies. I think I've got the point now. (Almost... :) )
I'll sum it up. Please tell me if I am right or wrong.

9P clients execute remote procedure calls to establish a fid in the
remote (9P) file server.
This fid HAS to be mapped, via a channel, to a driver in devtab[].
If the file is served by a kernel-resident file system, well and good.
Otherwise, the mount driver steps in (from devtab[]) and converts the
local procedure call into a 9P message.
(And then the mount driver is supposed to attach a name space, so it
ends there...)



* Re: [9fans] Re: 9P, procedure calls, kernel, confusion. :(
From: roger peppe @ 2008-02-05 15:18 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

> there is no client-side caching in the plan 9 kernel.

*cough*, MCACHE?



* Re: [9fans] Re: 9P, procedure calls, kernel, confusion. :(
From: erik quanstrom @ 2008-02-05 14:22 UTC (permalink / raw)
  To: 9fans

one thing of note, linux vfs implements a dcache.  this connects the
virtual memory system to the filesystem.  (but oddly in linux network
buffers are handled separately.) there is no client-side caching in
the plan 9 kernel.

there is a notable exception.  there is an executable cache.

>> One more point: I googled a lot for "kernel-resident file systems" and
>> "non-kernel-resident file systems", but I could not find a single
>> useful link. It would be great if you could explain the difference
>> between the two. I hope that would clear things up a bit.

since the devtab[] functions map 1:1 with 9p, all the mount driver
needs to do for calls outside the kernel is to marshal/demarshal
9p messages.
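
as a rough illustration, here is what that marshalling looks like from user
space with fcall(2).  sendread is made up for the example; the fid is assumed
to have been established already by Tversion/Tattach/Twalk/Topen.

	#include <u.h>
	#include <libc.h>
	#include <fcall.h>

	/* sketch: marshal a Tread by hand and send it on a connection to a 9p server */
	void
	sendread(int fd, u32int fid, vlong offset, u32int count)
	{
		Fcall t;
		uchar buf[8192+IOHDRSZ];
		uint n;

		t.type = Tread;
		t.tag = 1;
		t.fid = fid;
		t.offset = offset;
		t.count = count;
		n = convS2M(&t, buf, sizeof buf);
		if(n == 0)
			sysfatal("convS2M failed");
		if(write(fd, buf, n) != n)
			sysfatal("write: %r");
		/* the matching Rread comes back on fd; convM2S decodes it */
	}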

it's important to remember that most in-kernel file servers could easily
exist outside the kernel.  the entire ip stack can be implemented from
user space.  (and it has been in the past.)

> "Kernel resident filesystem" in this context simply means a filesystem
> which was created for use by the kernel; this may or may not be
> visible to user-space applications - I'm not too sure.

every element of devtab[] has an associated device letter.  to
mount the device, one does (typically)
	bind -a '#'^$letter /dev

for example, to bind a second ip stack on /net.alt,
	bind -a '#I1' /net.alt

> To sum up, you
> use the 9 primitive operations provided by each 'Dev' when you work
> with kernel-resident filesystems, while all other filesystems are
> dealt with using regular 9P.

all devices are accessed through devtab.  it may be that that entry
is the mount driver.  the mount driver turns devtab[]->fn into
the corresponding 9p message.  (and vice versa.)

- erik



* Re: [9fans] Re: 9P, procedure calls, kernel, confusion. :(
From: Anant Narayanan @ 2008-02-05 13:43 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

Hi,

> One more point: I googled a lot for "kernel-resident file systems" and
> "non-kernel-resident file systems", but I could not find a single
> useful link. It would be great if you could explain the difference
> between the two. I hope that would clear things up a bit.

I don't claim to know much about how 9P or Plan 9 work, but I will
attempt to answer.

9P is very much like Linux's VFS. It defines how user-space
applications can access files; whether they are stored locally, or on
the network, or whether the information is generated on-the-fly by the
kernel from its internal data structures is of no consequence - 9P
abstracts all that.

Accessing files from within the kernel is a different ball game
(that's true for every kernel), since you don't have 9P-style
access any more - the 'Dev' structure essentially replaces it. These
structures point to methods that are quite analogous to the 9P
operations. The methods do the work of implementing the file
operations - and these would all differ depending on whether the
files are stored on local disk, produced synthetically or accessed
over a network. This makes it easier to work with files in the
kernel, since all the operations are delegated to the 'Dev'
structures, each one doing what it knows best; quite similar to what
happens with 9P in user space.
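
As a tiny, purely hypothetical illustration (not an actual driver), the read
method of a 'Dev' serving a synthetic file can simply generate the bytes on
demand, here using the kernel helper readstr to copy out the requested range:

	/* illustrative only: a Dev read method that synthesizes its file
	 * contents on the fly, the way drivers such as devcons do */
	static long
	exampleread(Chan *c, void *buf, long n, vlong off)
	{
		USED(c);
		return readstr(off, buf, n, "hello from the kernel\n");
	}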

Charles mentions one such 'Dev', the mount driver, which merely passes
on the received request as 9P messages to the file descriptor
(possibly connected to a remote 9P server).

"Kernel resident filesystem" in this context simply means a filesystem
which was created for use by the kernel; this may or may not be
visible to user-space applications - I'm not too sure. To sum up, you
use the 9 primitive operations provided by each 'Dev' when you work
with kernel-resident filesystems, while all other filesystems are
dealt with using regular 9P.

I hope I'm right, and that this helps.

--
Anant

