From: Andrew Stitt
To: 9fans@cse.psu.edu
Subject: Re: [9fans] dumb question
Date: Wed, 26 Jun 2002 17:41:06 +0000

On Wed, 26 Jun 2002, Nigel Roles wrote:

> On Wed, 26 Jun 2002 08:41:06 GMT, Andrew Stitt wrote:
>
> > On Tue, 25 Jun 2002, Fco.J.Ballesteros wrote:
> >
> > > Again, from rsc's tiny tools :-)
> > >
> > > ; cat /bin/dircp
> > > #!/bin/rc
> > >
> > > switch($#*){
> > > case 2
> > > 	@{cd $1 && tar c .} | @{cd $2 && tar x}
> > > case *
> > > 	echo usage: dircp from to >[1=2]
> > > }
> >
> > Why must I needlessly shove all the files into a tar, then unpack them
> > again? That's incredibly inefficient! It uses roughly twice the space
> > that should be required, it has to copy the files twice, and it has the
> > overhead of needlessly running the data through tar. Is there a better
> > solution to this?
>
> Andrew
>
> This does not use any more space. The tar commands are piped together.
> I doubt a specific cp -r type command would be particularly more efficient.

I beg to differ: tar uses memory and other system resources, and I fail
to see how that is just as good as recursively copying the files
directly. The point is that I shouldn't have to run another program (for
Tape ARchives) just to copy directories. On a fairly busy file server,
needlessly running tar twice is wasteful and unacceptable when you could
just walk the directory tree.
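
For the record, a dircp that walks the tree itself, without tar, could
look something like this (an untested sketch; cpdir is a made-up name,
it assumes Plan 9's test(1) and basename(1), and unlike the tar pipe it
does not preserve file modes or times unless you reach for cp's flags):

; cat /bin/cpdir
#!/bin/rc
# cpdir: hypothetical tar-free dircp that recurses by itself
fn copy {
	mkdir -p $2		# create the target directory
	for(f in $1/*){
		name=`{basename $f}
		if(test -d $f)
			copy $f $2/$name	# recurse into subdirectories
		if not
			cp $f $2/$name		# plain file: copy it
	}
}
switch($#*){
case 2
	copy $1 $2
case *
	echo usage: cpdir from to >[1=2]
}

Usage would be the same shape as dircp, e.g. cpdir /sys/src/cmd /tmp/cmd.
(Note that rc expands an unmatched $1/* to itself, so an empty source
directory would need an extra check; the tar version gets all of this,
plus modes and mtimes, for free.)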