* [9fans] 9P, procedure calls, kernel, confusion. :(

From: Siddhant
To: 9fans
Date: 2008-02-05 9:56 UTC

Greetings! I have started reading the Plan 9 documentation, beginning with the
main paper, and I am stuck at one point. It would be really kind if someone
could answer these questions, however silly they might look. :)

>>> "for each fid, the kernel maintains a local representation in a data
>>> structure called a channel, so all operations on files performed by the
>>> kernel involve a channel connected to that fid"

Q. Can I visualise this as a fid connected to channels which define operations
within them? How exactly should I picture this 'channel'?

>>> "9P has two forms: RPC messages sent on a pipe or network connection and a
>>> procedural interface within the kernel. Since kernel device drivers are
>>> directly addressable, there is no need to pass messages to communicate with
>>> them; instead each 9P transaction is implemented by a direct procedure
>>> call."

Q. Does it mean that every RPC message essentially has to end up being
implemented via that procedural interface?

>>> "A table in the kernel provides a list of entry points corresponding one to
>>> one with the 9P messages for each device."

Q. Can I relate this 'table in the kernel' to the 'representation in a data
structure called a channel'? And does 'for each device' refer to individual
fids?

Moreover, I am totally confused about the 'conversions' between 9P messages,
procedure calls, system calls, and operations on fids. What is being converted
to what, and when, and how? I'd really appreciate it if someone could take the
time to reply.

Thanks.

Regards,
Siddhant
* Re: [9fans] 9P, procedure calls, kernel, confusion. :(

From: Charles Forsyth
To: 9fans
Date: 2008-02-05 10:42 UTC

you can see the structures themselves in /sys/src/9/port/portdat.h

inside the kernel, the traditional integer file descriptor indexes an array of
pointers to (possibly shared) instances of the channel data structure, Chan,
which contains the integer fid, an open mode, a current offset, and most
important a pointer (or an index into a table of pointers) to the Dev structure
for the device driver associated with the Chan. each device driver implements
its own name space (ie, operations such as walk, dirread, stat) including the
file i/o operations (eg, open, read, write). most drivers use a small library
(/sys/src/9/port/dev.c) to make implementing simple name spaces mainly a matter
of defining a table and some conventional calls for walk, stat, wstat, open,
and dirread (which is handled as a special case inside the device's read
operation, not as a separate entry in Dev).

all open files have a Chan, but not all Chans correspond to open files (they
can mark current directories, or more interesting the current location in a
walk of a name space).

inside the kernel, operations are done by

	Chan *c = p->fgrp->fd[fd];
	devtab[c->type]->walk(c, ...);
	devtab[c->type]->read(c, va, n, offset);

and so on. fairly obvious stuff, really.

> Q. Does it mean that every RPC message essentially has to end up being
> implemented via that procedural interface?

one interesting driver (ie, one Dev) is the `mount driver'
(/sys/src/9/port/devmnt.c).
its implementations of walk, open, stat, read, write, etc build 9P messages
representing the corresponding operations and exchange them on any given file
descriptor (ie, Chan) using devtab[...]->write and devtab[...]->read, so it
doesn't care what sort of file descriptor is used (pipe, network, ...).

(an alternative might be to use 9P messages throughout.)

> >>> "A table in the kernel provides a list of entry points corresponding one
> >>> to one with the 9P messages for each device."
> Q. Can I relate this 'table in the kernel' to the 'representation in a data
> structure called a channel'?

it's the devtab[] that lists pointers to the Dev structures representing
configured devices.
* [9fans] Re: 9P, procedure calls, kernel, confusion. :(

From: Siddhant
To: 9fans
Date: 2008-02-05 12:59 UTC

On Feb 5, 3:35 pm, fors...@terzarima.net (Charles Forsyth) wrote:
> you can see the structures themselves in /sys/src/9/port/portdat.h
>
> inside the kernel, the traditional integer file descriptor indexes an array
> of pointers to (possibly shared) instances of the channel data structure,
> Chan, which contains the integer fid, an open mode, a current offset, and
> most important a pointer (or an index into a table of pointers) to the Dev
> structure for the device driver associated with the Chan. each device driver
> implements its own name space (ie, operations such as walk, dirread, stat)
> including the file i/o operations (eg, open, read, write). most drivers use
> a small library (/sys/src/9/port/dev.c) to make implementing simple name
> spaces mainly a matter of defining a table and some conventional calls for
> walk, stat, wstat, open, and dirread (which is handled as a special case
> inside the device's read operation, not as a separate entry in Dev).
>
> all open files have a Chan, but not all Chans correspond to open files (they
> can mark current directories, or more interesting the current location in a
> walk of a name space).
>
> inside the kernel, operations are done by
>	Chan *c = p->fgrp->fd[fd];
>	devtab[c->type]->walk(c, ...);
>	devtab[c->type]->read(c, va, n, offset);
> and so on. fairly obvious stuff, really.
>
> > Q. Does it mean that every RPC message essentially has to end up being
> > implemented via that procedural interface?
>
> one interesting driver (ie, one Dev) is the `mount driver'
> (/sys/src/9/port/devmnt.c).
> its implementations of walk, open, stat, read, write, etc build 9P messages
> representing the corresponding operations and exchange them on any given
> file descriptor (ie, Chan) using devtab[...]->write and devtab[...]->read,
> so it doesn't care what sort of file descriptor is used (pipe, network, ...).
>
> (an alternative might be to use 9P messages throughout.)
>
> > >>> "A table in the kernel provides a list of entry points corresponding
> > >>> one to one with the 9P messages for each device."
> > Q. Can I relate this 'table in the kernel' to the 'representation in a
> > data structure called a channel'?
>
> it's the devtab[] that lists pointers to the Dev structures representing
> configured devices.

Thanks a lot for the reply. From what I could make out, does it work like the
following?

9P is used universally in Plan 9 to deal with files. If the file is present in
the kernel-resident file system, then a direct procedure call is enough; else,
the mount driver comes into the picture. If it's in the kernel-resident file
system, then the file's channel is looked up to find the Dev structure (the
device driver corresponding to the file the channel refers to), which defines
all those procedures that finally get executed. If it's not... then...?

One more point: I googled a lot for "kernel resident file systems and non
kernel resident file systems", but I could not find a single useful link. It
would be great if you could explain the difference between the two. I hope
that eases up the situation a bit.

Thanks once again.
* Re: [9fans] Re: 9P, procedure calls, kernel, confusion. :(

From: Charles Forsyth
To: 9fans
Date: 2008-02-05 16:41 UTC

> 9P is used universally in Plan 9 to deal with files. If the file is present
> in the kernel-resident file system, then a direct procedure call is enough;
> else, the mount driver comes into the picture. If it's in the kernel-resident
> file system, then the file's channel is looked up to find the Dev structure
> (the device driver corresponding to the file the channel refers to), which
> defines all those procedures that finally get executed.

yes, that seems fine. conventional device drivers are the main file servers
that are built in, but there are others. if you look at section 3 of the
programmer's manual you'll see a representative set. notably, in plan 9, none
of them deal with files on discs etc. that's all done by user-level servers,
which are typically found in section 4 of the manual.

> there is no client-side caching in the plan 9 kernel.

see bind(2):

	Finally, the MCACHE flag, valid for mount only, turns on
	caching for files made available by the mount. By default,
	file contents are always retrieved from the server. With
	caching enabled, the kernel may instead use a local cache to
	satisfy read(5) requests for files accessible through this
	mount point. The currency of cached data for a file is
	verified at each open(5) of the file from this client machine.
* Re: [9fans] Re: 9P, procedure calls, kernel, confusion. :(

From: erik quanstrom
To: 9fans
Date: 2008-02-05 16:35 UTC

> see bind(2):
>
>	Finally, the MCACHE flag, valid for mount only, turns on
>	caching for files made available by the mount. By default,
>	file contents are always retrieved from the server. With
>	caching enabled, the kernel may instead use a local cache to
>	satisfy read(5) requests for files accessible through this
>	mount point. The currency of cached data for a file is
>	verified at each open(5) of the file from this client machine.

sleazy hacks notwithstanding.

- erik
* [9fans] Re: 9P, procedure calls, kernel, confusion. :(

From: Siddhant
To: 9fans
Date: 2008-02-06 9:45 UTC

On Feb 5, 9:36 pm, quans...@quanstro.net (erik quanstrom) wrote:
> > see bind(2):
> >
> >	Finally, the MCACHE flag, valid for mount only, turns on
> >	caching for files made available by the mount. By default,
> >	file contents are always retrieved from the server. With
> >	caching enabled, the kernel may instead use a local cache to
> >	satisfy read(5) requests for files accessible through this
> >	mount point. The currency of cached data for a file is
> >	verified at each open(5) of the file from this client machine.
>
> sleazy hacks notwithstanding.
>
> - erik

Thanks, people, for your replies! I'm new to these kernel thingies, but I
think I've got the point now. (Almost... :) ) I'll sum it up; please tell me
if I am right or wrong.

9P clients execute remote procedure calls to establish a fid in the remote
(9P) file server. This fid has to get mapped, via a channel, to a driver in
devtab[]. If the file is present in a kernel-resident file system, well and
good. Else, the mount driver steps in (from devtab[]) and converts the local
procedure call to a 9P message. (And then the mount driver is supposed to
attach a namespace, so it ends there...)