From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID:
To: 9fans@cse.psu.edu
Subject: Re: [9fans] dumb question
From: nigel@9fs.org
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: 7bit
Date: Thu, 27 Jun 2002 11:29:56 +0100
Topicbox-Message-UUID: bb9bc1c8-eaca-11e9-9e20-41e7f4b1d025

> im arguing about the apparent acceptance of a solution which wastes
> resources, sure its 30k, but you can do quite a bit with 30k. and if you
> actually have lots of users, things start to slow down. point im trying to
> make: tar|tar uses more memory and resources then a recursive copy, ive
> thoroughly explained this many times. whats so hard to grasp here?

Perhaps you cannot grasp that the consensus is not that this is the most
efficient solution, but that it is "efficient enough", given that a
recursive tree copy is not the most commonly used operation. This is
based on experience.

Members of the list have provided numerical evidence that it's efficient
enough. But that was probably a mistake; it just started a techno-pissing
match in which there are no winners.

Telling the list about disk sectors and DMA will not advance your
argument. The majority know about those. Really.

Plan 9 has a philosophy. It has certain aims which are much more
important than the exact path by which file data travels during a tree
copy. One day, when everything else that is more important is done,
cp -r will get tuned till the magneto-optical domains squeak. But that
day will probably never come.

I think I'll close with the Presotto contention. It runs like this: "if
you're unhappy with the way Plan 9 does X, just wait until you see Y".
In this case X = "file tree copies", and Y is always "web browsing".

Positively my last word on the subject.

Nigel