From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 12 Nov 2008 13:23:51 +0000
From: Eris Discordia
To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
Message-ID: <7D122EF9133395AC4DEA0396@[192.168.1.2]>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii; format=flowed
Content-Transfer-Encoding: 7bit
Content-Disposition: inline
Subject: Re: [9fans] Do we have a catalog of 9P servers?
Topicbox-Message-UUID: 3ecce4a6-ead4-11e9-9d60-3106f5b1d025

First off, thank you so much, sqweek. When someone on 9fans tries to put
things in terms of basic abstract ideas instead of technical ones I really
appreciate it--I actually learn something. By the way, it is very
interesting how a technical question can be sublimated into a
Gedankenexperiment that even non-technical people can take a stab at.
Believe me, such pursuits can be fruitful for the technical people as well
as for the non-technical.

> It has a simple mapping to basic file operations, which makes it easy
> to transparently use on the client side. Aside from the debugging tool
> aux/9pcon, I'm not sure that any plan 9 programs do explicit 9p client
> calls.

That also means that _if_ a resource can't be represented well using the
directory/file metaphor, you will have to go through some trouble fitting
it into the model. In other words, 9P shouldn't be considered a "fits-all"
solution for problems. That in turn explains why other protocols that try
to be "fits-all" solutions end up being complex--not because their
designers were dumb or didn't know better, but because they had a
different problem to solve; namely, the problem of heterogeneity.

> Could do. However, part of the simplicity of 9p stems from its
> implementation in Plan 9:
> * 9p doesn't need to worry about the details of auth - that's the
> responsibility of authsrv + factotum.
> * 9p doesn't need to support symlinks; the kernel provides that sort
> of functionality through bind.
> * 9p doesn't care about different user id mappings because plan 9 only
> deals with user names (and those are verified by the auth server).

Doesn't that imply, as Roman Shaposhnik pointed out, that 9P's edge is
limited to the Plan 9 implementation? It may be--though I'm certainly not
the one to consult on the subject--that combining 9P-esque simplicity with
the natural diversity of platforms is impossible, or very hard to achieve.
Unless, along with suggesting ubiquitous use of 9P, one also suggests that
everyone should run Plan 9.

> It doesn't stop at 9p:
> * rc/bc/dc/hoc don't need to implement line editing or tab completion
> because the "terminal" (win from rio/acme) does the job.
> * rc/awk don't need to explicitly support tcp/ip because we have /net.
> * fossil doesn't need hardlinks, since venti coalesces identical data
> blocks.
> * plan 9 doesn't need to bother with NAT, since you can just
> import /net from your gateway.

This last case, i.e. NAT, is interesting to me. I remember you telling me
some time ago how Plan 9 could allow a "network transparent sort of
networking." At the time I saw no flaws in that argument. Now I do.
There's a reason for NAT being the way it is.

I understand that if you import a gateway's /net on each computer in a
rather large internal network, you will be consuming a huge amount of
mostly redundant resources on the gateway. My impression is that each
imported instance of /net requires a persistent session to be established
between the gateway and the host on the internal network. NAT, in
comparison, is naturally transient. If at any point in time only about
half the network's population is using outside connections, the gateway
can be sized to handle only that much, with a little to spare. Importing
/net is very much like tunneling a network protocol in an application
layer protocol; the drawbacks are the same, and so are the advantages.
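To make the resource argument concrete, here is a toy model (entirely my
own sketch, not Plan 9 or NAT code; all class and method names are
invented) contrasting the transient per-flow state a NAT gateway keeps
with the persistent per-client session an imported /net implies:

```python
# Hypothetical sketch: gateway state under NAT vs. imported /net.
# A NAT box holds an entry only while a flow is alive; a /net
# gateway holds a session for every client that has mounted it,
# whether or not that client is currently talking to the outside.

class NatGateway:
    """State is per-flow and transient; entries vanish on close."""
    def __init__(self):
        self.table = {}          # (src_ip, src_port) -> public port
        self.next_port = 40000

    def open_flow(self, src):
        port = self.next_port
        self.next_port += 1
        self.table[src] = port
        return port

    def close_flow(self, src):
        self.table.pop(src, None)    # state disappears with the flow

class NetImportGateway:
    """State is per-client and persistent: one long-lived session
    per importing host, held until the client unmounts /net."""
    def __init__(self):
        self.sessions = set()

    def import_net(self, client):
        self.sessions.add(client)    # persists regardless of traffic

# 100 clients import /net, but only 10 have live outside flows:
nat, imp = NatGateway(), NetImportGateway()
for c in range(100):
    imp.import_net(c)
for c in range(10):
    nat.open_flow(("10.0.0.%d" % c, 5000))
print(len(nat.table), len(imp.sessions))   # NAT tracks 10, /net 100
```

The point of the toy is only the asymmetry: NAT state is bounded by
concurrent outside flows, while imported-/net state is bounded by the
number of internal hosts.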
If I knew something of network security I could probably also point out
some serious security flaws in the imported /net model, one of them being
the lack of any access control beyond what auth provides--which is
insufficient for networking purposes. A firewalled NAT service should be
able to distinguish between real packets from inside the network and
packets with a spoofed source address. With an imported /net, since there
is no packet rewriting implemented at the network layer (e.g. IP) and the
"redirection" occurs at the application layer, there's no chance of
catching spoofed packets except by hacking whatever makes /net tick (the
kernel?). There's also the problem of setting up a DMZ and/or a
three-legged firewall with this approach. The "width" of the DMZ is always
zero with an imported /net--the entire DMZ exists within the gateway's
kernel (or wherever in the gateway's memory structures that /net resides).

> Just some examples. But you're right, simplicity of design does
> happen elsewhere, but it's less likely to arise elsewhere simply
> because other systems have a lot more crap to deal with. When you're
> stuck with vt100 emulators, of course every shell implements its own
> line editing.

So the gist of this argument is that existing flawed designs are basically
continuations of measures that were once required by constraints which
have since become obsolete, right? Does that mean a new design "from
scratch" is always bound to be better suited to current constraints? Also,
if you think UNIX and its clones are flawed today because of their history
and origins, what makes you think Plan 9 doesn't suffer from "diseases" it
contracted from its original birthplace? I can't forget the Jukebox
example.

> Much of plan 9's simplicity comes from its structure and consistency.
> Note that all the simplicity gains I've listed come from *removing*
> features.
> To be able to do that sort of thing it helps to have a
> system where each component meshes well with the other components.
> Everything in its right place, so to speak.

That's the meaning of Rob Pike's "less is more," then. What do you propose
be done with existing functional and perfectly operable systems? Should
one scrap them to make the scene more consistent? Consistency comes at a
cost. As pointed out previously on this same thread, in jest though, the
Linux community always finds a way to exhaust a (consistent) model's
options, and then come the extensions and "hacks." That, however, is not
the Linux community's fault--it's the nature of an advancing science and
technology that always welcomes not only new users but also new types of
users and entirely novel use cases. What seemed ludicrous yesterday is the
norm today, and I must emphasize that this _doesn't_ mean yesterday was
the last day of a "Golden Age."

What is the Plan 9 community's plan for meeting the demands of new types
of users? Do Plan 9's creators/contributors intend to keep it as their
darling secret, running for the purposes of research and education,
occasionally fitting into some company's (Coraid?) business model or
suiting some "big iron's" (the new BG thing Ron Minnich wrote about) very
special design?

> Does "improve 9p to the point where no one will even minimally
> criticise it" count? :)

It counts, but it won't work. At some point the accumulation of changes
will cause 9P to mutate into a new protocol. I suspect that's why 9P's
designers are loath to "officially" incorporate changes and why some 9fans
think every change to the "original" plan is a disaster. Things could get
out of (their) hand(s) that way.

> The problem is it forces the server and client to synchronise on every
> read/write syscall, which results in terrible bandwidth utilisation.

An example of a disease contracted from a system's birthplace.

> 9p is at a disadvantage because it is so general.
> Having to support
> arbitrary file semantics makes it difficult to come up with any sort of
> safe lookahead scheme to avoid waiting for a network round trip on every
> read()...

Why is there so much emphasis on the "everything-is-a-file" or
"everything-is-a-stream-of-bytes" model? The world is obviously more
diverse than that. Why isn't there provision for file "types" at least
(mknod types b and c come to mind)? Cache the viable ones, pass the others
through. If a resource is to be represented as a file system, what could
go wrong with providing, in the protocol, the capability to represent its
minutiae? What design principles--besides KISS, of course--require
bludgeoning every aspect (read: "file") of a resource into a regular
"file"?

> I'm sure I've missed something, but readahead is safe for all these
> constructs except event files, where if you readahead (and get data)
> but the client closes the file before actually read()-ing the buffered
> data, you really want to tell the file server to "put back" those
> events.

Mark event files with, say, "e" and don't cache them. Or implement
"jitter" control at the cost of extra bandwidth (that's what is currently
done in 9P, I presume). Or require the client to announce a live
"transaction sum total" when closing an "e" type file. Or provide a
"callback" mechanism for events that the client actually processes,
through a control connection, and make the file server record and heed
them. Outside of "one world, one model" paradigms there's no shortage of
nearly optimum solutions.

By the way, I believe most "event" files contain mostly "uneventful" data
and only rarely an actual "event" (e.g. EOF or EOT). So the callback
mechanism might not be as costly as it seems.

And once again, thanks for enlightening me :-)

--On Wednesday, November 12, 2008 1:40 PM +0900 sqweek wrote:

> On Wed, Nov 12, 2008 at 9:32 AM, Eris Discordia
> wrote:
>> And some stuff for troll-clubbers to club me with:
>>
>> 1. What is 9P's edge over text-based protocol X?
>
> It has a simple mapping to basic file operations, which makes it easy
> to transparently use on the client side. Aside from the debugging tool
> aux/9pcon, I'm not sure that any plan 9 programs do explicit 9p client
> calls.
>
>> 4. Intelligence is not something peculiar to 9fans and the creators of
>> Plan 9. Simplicity of design is not something peculiar to 9P. It could
>> appear out of the blue any day with the latest protocol X.
>
> Could do. However, part of the simplicity of 9p stems from its
> implementation in Plan 9:
> * 9p doesn't need to worry about the details of auth - that's the
> responsibility of authsrv + factotum.
> * 9p doesn't need to support symlinks; the kernel provides that sort
> of functionality through bind.
> * 9p doesn't care about different user id mappings because plan 9 only
> deals with user names (and those are verified by the auth server).
>
> It doesn't stop at 9p:
> * rc/bc/dc/hoc don't need to implement line editing or tab completion
> because the "terminal" (win from rio/acme) does the job.
> * rc/awk don't need to explicitly support tcp/ip because we have /net.
> * fossil doesn't need hardlinks, since venti coalesces identical data
> blocks.
> * plan 9 doesn't need to bother with NAT, since you can just
> import /net from your gateway.
>
> Just some examples. But you're right, simplicity of design does
> happen elsewhere, but it's less likely to arise elsewhere simply
> because other systems have a lot more crap to deal with. When you're
> stuck with vt100 emulators, of course every shell implements its own
> line editing. Of course utf8+tcp/ip support creeps into every app one
> by one when you don't have a ubiquitous interface to them.
> Much of plan 9's simplicity comes from its structure and consistency.
> Note that all the simplicity gains I've listed come from *removing*
> features. To be able to do that sort of thing it helps to have a
> system where each component meshes well with the other components.
> Everything in its right place, so to speak. Not that I'm suggesting
> plan 9 is 100% perfect, I'm sure there are shortcomings but it makes
> so much more effort than any other system I've seen as far as elegance
> goes.
>
>> Since protocol X is
>> supposedly new it will break nothing but if it manages to get a foothold
>> by its designers _not_ acting up the way the average 9fan does when 9P is
>> minimally criticized then 9P will be out the window forever. Do you have
>> a solution to that?
>
> Does "improve 9p to the point where no one will even minimally
> criticise it" count? :)
> 9p certainly deserves some criticism as it stands - its sensitivity
> to latency means it works great on a local network, but poorly over
> something as wide as the internet. Though as Erik pointed out not so
> long ago, this is a limitation of plan 9's 9p client code (and probably
> the client portion of every other 9p implementation), not the 9p
> protocol itself. The problem is it forces the server and client to
> synchronise on every read/write syscall, which results in terrible
> bandwidth utilisation. Unless we see some remarkable improvements in
> network technology in the near future, I'd be surprised to see 9p gain
> any foothold outside local networks before this problem is solved (I
> should really take a close look at Op). NFS or something similar that
> is only concerned with serving "regular" files could do a much better
> job here; 9p is at a disadvantage because it is so general. Having to
> support arbitrary file semantics makes it difficult to come up with
> any sort of safe lookahead scheme to avoid waiting for a network round
> trip on every read()...
> OTOH, I wonder what semantics are generally used... the common sorts
> of files are:
>
> * "regular files" - a persistent array of bytes, whatever you write()
> to a certain offset will show up again when you read() the same
> offset.
>
> * ctl files - write()s are not preserved, but affect the state of the
> file server in some arbitrary way. read()s tend to give some brief
> information about the state.
>
> * event files - read()s block until something interesting happens.
> write()s might allow bi-directional events.
>
> * clone files - a read() of this file creates a new hierarchy.
>
> I'm sure I've missed something, but readahead is safe for all these
> constructs except event files, where if you readahead (and get data)
> but the client closes the file before actually read()-ing the buffered
> data, you really want to tell the file server to "put back" those
> events. Buggerit, I'm stumped already. :S
> -sqweek
>
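P.S. A toy sketch of the type-aware readahead I'm suggesting above, in
Python. Everything here is my own invention for illustration--the "e"
qualifier, the class names, and the chunk size are not anything 9P
actually defines:

```python
# Hypothetical sketch: a client-side cache that applies readahead
# only to files whose (invented) type tag marks them as regular,
# while "event"-typed files always go straight to the server so
# nothing ever has to be "put back".

QTREGULAR, QTEVENT = "r", "e"     # invented type tags, not real 9P qids

class Server:
    def __init__(self, files):
        self.files = files        # name -> (type_tag, bytes)

    def read(self, name, off, n):
        _, data = self.files[name]
        return data[off:off + n]

    def type_of(self, name):
        return self.files[name][0]

class ReadaheadClient:
    CHUNK = 8                     # readahead window, in bytes

    def __init__(self, server):
        self.server = server
        self.cache = {}           # name -> (offset, buffered bytes)
        self.round_trips = 0      # how many reads actually hit the wire

    def read(self, name, off, n):
        if self.server.type_of(name) == QTEVENT:
            self.round_trips += 1     # never cache event files
            return self.server.read(name, off, n)
        buf_off, buf = self.cache.get(name, (0, b""))
        if not (buf_off <= off and off + n <= buf_off + len(buf)):
            self.round_trips += 1     # miss: fetch a whole chunk ahead
            buf_off, buf = off, self.server.read(name, off, self.CHUNK)
            self.cache[name] = (buf_off, buf)
        return buf[off - buf_off:off - buf_off + n]

srv = Server({"data": (QTREGULAR, b"abcdefgh"), "ev": (QTEVENT, b"xy")})
cl = ReadaheadClient(srv)
cl.read("data", 0, 2); cl.read("data", 2, 2)   # second read hits cache
cl.read("ev", 0, 1); cl.read("ev", 1, 1)       # events always go through
print(cl.round_trips)                          # → 3
```

Two sequential reads of the regular file cost one round trip; the two
event reads cost one each, which is the whole point: nothing speculative
is ever buffered for an "e" file, so closing it early loses nothing.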