From mboxrd@z Thu Jan 1 00:00:00 1970
From: Andrew Stitt
To: "rob pike, esq."
Cc: 9fans@cse.psu.edu
Subject: Re: [9fans] dumb question
In-Reply-To: <171e27d5f2cfd12a9303117e58818e5b@plan9.bell-labs.com>
Message-ID:
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Date: Wed, 26 Jun 2002 11:27:07 -0700
Topicbox-Message-UUID: ba1f0378-eaca-11e9-9e20-41e7f4b1d025

On Wed, 26 Jun 2002, rob pike, esq. wrote:

> > I beg to differ: tar uses memory, it uses system resources. I fail to
> > see how you think this is just as good as recursively copying the
> > files. The point is I shouldn't have to needlessly use this other
> > program (for Tape ARchives) to copy directories. If this is on a
> > fairly busy file server, needlessly running tar twice is simply
> > wasteful and unacceptable when you could just follow the directory
> > tree.
>
> The command
> 	tar c ... | tar x ...
> will have almost identical cost to any explicit recursive copy, since
> it does the same amount of work. The only extra overhead is writing
> to and reading from the pipe, which is so cheap - and entirely local -
> that it is insignificant.
>
> Whether you cp or tar, you must read the files from one tree and write
> them to another.
>
> -rob

No, there is the extra overhead of running tar to archive and then
dearchive the directory tree; that uses extra memory and CPU time.
Anyone who thinks that archiving and dearchiving is just as efficient
as a sector-for-sector (or even byte-for-byte) copy is simply wrong.
If _nothing_ else, EVEN if tar were somehow optimized to run as fast
as cp, tar is larger than cp, and it wasn't designed for copying
directory trees; it's an archiver. cp is about 60k in Plan 9 and tar
is 80k in Plan 9. cp on 3 separate Unix machines (Linux, SCO Unix,
OS X), in all its overcomplicated tree-copying glory, is under 30k;
on SunOS it happens to be 17k. To tar and then untar requires two
processes sharing an 80k tar, plus intermediate buffers to build the
archive stream and then immediately reverse the process. cp _could_
be written to do all this in 17k, but instead our 60k cp can't do it,
and we need two entries in the process table and twice the number of
user-space pages.

I'm sorry if I'm sounding like a troll, or if it seems like I'm
flaming, but seriously, what's up with this? I've no problem any more
with cp being as simple as it is - in fact I like that idea - but I
hold efficiency high as well. My soon-to-be Plan 9 network isn't made
of superfast zigahertz CPUs. Alternative OSes are my hobby; I'm not a
funded computer lab, I do this in my spare time with computers I've
salvaged and inherited. Maybe you won't notice how long a copy takes
on a superfast machine, but on older, slower computers it's
unacceptable. And waving off efficiency with the excuse that computers
keep getting faster is no excuse: 300 MHz used to be undreamed-of
speed, but we've given way to inefficient, kludged-together solutions,
and computers appear no faster now than they did 10 years ago.

Anyway, I'm done ranting. tar | tar, however you want to argue it, is
not a particularly efficient or simple solution.
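
P.S. To make the "just follow the directory tree" point concrete, here
is a bare-bones sketch of a recursive copy in portable C. It is only
an illustration, not Plan 9's cp and not anyone's real code: it
follows symlinks, skips error recovery, and doesn't preserve modes,
times, or links, and the name treecp and the buffer size are
arbitrary.

/*
 * treecp: minimal recursive copy, one process, no archive format
 * in the middle.  A sketch only; a real cp would preserve modes,
 * times, and links, and would not blindly follow symlinks.
 */
#include <sys/stat.h>
#include <dirent.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static void
copyfile(char *from, char *to)
{
	char buf[8192];
	ssize_t n;
	int in, out;

	in = open(from, O_RDONLY);
	if(in < 0){
		perror(from);
		return;
	}
	out = open(to, O_WRONLY|O_CREAT|O_TRUNC, 0666);
	if(out < 0){
		perror(to);
		close(in);
		return;
	}
	while((n = read(in, buf, sizeof buf)) > 0)	/* plain byte-for-byte copy */
		if(write(out, buf, n) != n)
			break;
	close(in);
	close(out);
}

static void
copytree(char *from, char *to)
{
	char f[4096], t[4096];
	struct stat st;
	struct dirent *e;
	DIR *d;

	if(stat(from, &st) < 0){
		perror(from);
		return;
	}
	if(!S_ISDIR(st.st_mode)){	/* plain file: copy it and stop */
		copyfile(from, to);
		return;
	}
	mkdir(to, 0777);
	d = opendir(from);
	if(d == NULL){
		perror(from);
		return;
	}
	while((e = readdir(d)) != NULL){
		if(strcmp(e->d_name, ".") == 0 || strcmp(e->d_name, "..") == 0)
			continue;
		snprintf(f, sizeof f, "%s/%s", from, e->d_name);
		snprintf(t, sizeof t, "%s/%s", to, e->d_name);
		copytree(f, t);		/* just descend; no tape archive needed */
	}
	closedir(d);
}

int
main(int argc, char *argv[])
{
	if(argc != 3){
		fprintf(stderr, "usage: %s from to\n", argv[0]);
		return 1;
	}
	copytree(argv[1], argv[2]);
	return 0;
}

Compile and run it like any other small tool:

	cc -o treecp treecp.c
	treecp /usr/src /tmp/src

The whole trick is the directory walk; whether such a walk actually
beats tar c | tar x in practice is exactly what is being argued above.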