From mboxrd@z Thu Jan 1 00:00:00 1970
From: Dave
Subject: Re: [9fans] blanks in file names
In-reply-to: <200207102222.SAA20812@math.psu.edu>
To: 9fans@cse.psu.edu
Message-id: <200207102301.g6AN1UV00727@dave2.dave.tj>
MIME-version: 1.0
Content-type: text/plain; charset=us-ascii
Content-transfer-encoding: 7BIT
Date: Wed, 10 Jul 2002 19:01:29 -0400
Topicbox-Message-UUID: c8b25f5c-eaca-11e9-9e20-41e7f4b1d025

Reply inline:
 - Dave

Dan Cross wrote:
> 
> > The "offending" program in this case is the program that's so much simpler
> > because it's using the new kernel interface rather than the old one.
> 
> I don't think it would be simpler; I think it would be more
> complicated.  You're replacing a simple, textual representation of an
> object with a binary representation; you have to have some way to do
> canonicalization in the common case, but even that path is fraught with
> danger.

Manipulating text with all sorts of dynamic buffers is substantially
more complicated than simply replacing a node in a linked list, and the
canonicalization would all be done by the kernel or by a library.  (A
sketch of the contrast appears at the end of this message.)

> 
> > Nothing has to be rethought because all my proposed changes do is restore
> > the strict separation between nodes in a filename (something UNIX -
> > and therefore Plan 9 - likes to rely on).
> 
> But they change an already well-established interface.  Have you
> thought through the implications of this, in all their macabre glory?
> What you propose--changing the most basic interface for opening a file
> in a system where everything looks more or less like a file--has huge
> implications.  And all this just to support a strange edge-case, which
> is adequately solved by substitutions in the filename.  Sure, it's not
> perfect in some weird pathological case, but how often is this going to
> come up in practice?  Remember: Optimize for the common case.

Optimizing for the common case is good, but creating a system where the
uncommon case causes major mayhem at the system level is evidence of a
very unclean approach.  (When you consider the reasoning behind the
problem - namely, that spaces and slashes in filenames kill our ability
to separate nodes easily - it makes perfect sense that our solution
isn't very clean.  The only clean solution is to restore the ancient
UNIX ideal of being able to separate nodes easily.  In other words,
either kill spaces altogether and damn interoperability, or promote
spaces to full citizenship.)

> 
> > There's plenty of experience with other systems working on linked lists
> > (including a huge amount of kernel code in my Linux box that I'm typing
> > from, ATM).  Most of the problems with linked lists have been pretty
> > well documented by now.
> 
> It's the huge amount of kernel code that Plan 9 is trying to avoid.

String manipulation is more complex than linked-list manipulation.

> 
> Being forced to conform to a lot of external interfaces *will* kill the
> system.

I don't dispute that point, but the interface I propose is unlike any
other interface currently known to man (it isn't trying to conform to
any external interface).  I'm simply pointing out that failing to
provide at least a one-to-one mapping to capabilities that are already
widely used in the external systems that must interoperate with ours
*will* kill us.

> 
> Besides, the point Nemo was trying to make umpteen posts ago was that,
> yes, you can roll back changes using the dump filesystem, which gives
> you temporal mobility.

He is right.
You can do a lot of things if you're prepared to do by hand the work
that your OS should be doing automatically.  Try running an FTP mirror
to a busy site that way, though, and you'll quickly discover why
automation is a good thing.  The worst part about our system is that
the "solution" you eventually find for an FTP mirror will be useless
for an HTTP proxy.  When "solutions" have to be modified for each
individual application, you know that the system isn't clean.

> 
> - Dan C.
> 

 - Dave C. :-)
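
[The sketch referred to above.  It is a minimal, purely illustrative
piece of ordinary C, not code from Plan 9 or from the proposal in this
thread; the Elem type and the mkelem/replace helpers are hypothetical
names chosen for the example.  It shows the contrast Dave is claiming:
with a path held as a linked list of components, swapping one component
is a pointer walk plus one string swap, and a component may contain
spaces (or even slashes) without any quoting or re-parsing.]

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative only: a path as a singly linked list of components.
 * Component text is stored verbatim, so spaces and '/' are harmless. */
typedef struct Elem Elem;
struct Elem {
	char *name;
	Elem *next;
};

/* Build one component node in front of an existing list tail. */
static Elem*
mkelem(const char *name, Elem *next)
{
	Elem *e = malloc(sizeof *e);
	e->name = strdup(name);
	e->next = next;
	return e;
}

/* Replace the i-th component: walk the list, free the old string,
 * duplicate the new one.  Nothing is quoted, escaped, or re-tokenized. */
static void
replace(Elem *head, int i, const char *newname)
{
	Elem *e;

	for(e = head; e != NULL && i > 0; e = e->next)
		i--;
	if(e == NULL)
		return;
	free(e->name);
	e->name = strdup(newname);
}

int
main(void)
{
	/* usr -> dave -> "my file.txt" */
	Elem *p = mkelem("usr", mkelem("dave", mkelem("my file.txt", NULL)));
	Elem *e;

	/* A replacement name that a flat string path could not hold unescaped. */
	replace(p, 2, "report 2002/07.txt");
	for(e = p; e != NULL; e = e->next)
		printf("[%s]", e->name);
	printf("\n");	/* prints: [usr][dave][report 2002/07.txt] */
	return 0;
}

The string-based equivalent would have to grow or shrink a buffer,
splice the bytes around the replaced component, and decide how to quote
any space or slash inside it, which is the extra complexity being
argued about.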