9fans - fans of the OS Plan 9 from Bell Labs
* Re: [9fans] dumb question
@ 2002-06-26 17:45 rob pike, esq.
  2002-06-27  9:16 ` Andrew Stitt
  2002-06-27  9:17 ` Douglas A. Gwyn
  0 siblings, 2 replies; 88+ messages in thread
From: rob pike, esq. @ 2002-06-26 17:45 UTC (permalink / raw)
  To: astitt, 9fans

> I beg to differ: tar uses memory, it uses system resources. I fail to see
> how you think this is just as good as recursively copying files. The
> point is I shouldn't have to needlessly use this other program (for Tape
> ARchives) to copy directories. If this is on a fairly busy file server,
> needlessly running tar twice is simply wasteful and unacceptable when you
> could just follow the directory tree.

The command
	tar c ... | tar x ...
will have almost identical cost to any explicit recursive copy, since
it does the same amount of work.  The only extra overhead is writing
to and reading from the pipe, which is so cheap - and entirely local -
that it is insignificant.
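
For instance, one might type something like this (an illustrative rc
sketch; the path names are made up):

	term% @{cd /usr/glenda/src && tar c .} | @{cd /tmp/backup && tar x}

Each side runs in its own subshell, and the pipe hands the archive
bytes directly from one tar to the other.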

Whether you cp or tar, you must read the files from one tree and write
them to another.

-rob




* Re: [9fans] dumb question
  2002-06-27  9:17 ` Douglas A. Gwyn
@ 2002-06-26 19:53   ` arisawa
  2002-06-27 11:43   ` Ralph Corderoy
  1 sibling, 0 replies; 88+ messages in thread
From: arisawa @ 2002-06-26 19:53 UTC (permalink / raw)
  To: 9fans

Hello,

>I guess "ls -R" would almost work as a file
>tree walker (as input to some shell procedure), except for its
>irregular output format and its apparent nonexistence under Plan 9.
I agree.
I would like to have an -R option for ls, but the output format
should be simple so that we can pass it through a pipe.
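
In the meantime du -a already walks the tree, one entry per line; for
example (an illustrative pipeline):

	term% du -a /lib/ndb | awk '{print $2}'

prints just the names, ready to feed to further commands.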

By the way, the real value of my cpdir shows up in this case:
term% cpdir srcdir dstdir
term% touch some-files-under-srcdir
term% cpdir -m srcdir dstdir
Try it.

Kenji Arisawa
E-mail: arisawa@aichi-u.ac.jp



* Re: [9fans] dumb question
  2002-06-26 17:45 [9fans] dumb question rob pike, esq.
@ 2002-06-27  9:16 ` Andrew Stitt
  2002-06-27  9:17 ` Douglas A. Gwyn
  1 sibling, 0 replies; 88+ messages in thread
From: Andrew Stitt @ 2002-06-27  9:16 UTC (permalink / raw)
  To: 9fans

On Wed, 26 Jun 2002, rob pike, esq. wrote:

> > I beg to differ: tar uses memory, it uses system resources. I fail to see
> > how you think this is just as good as recursively copying files. The
> > point is I shouldn't have to needlessly use this other program (for Tape
> > ARchives) to copy directories. If this is on a fairly busy file server,
> > needlessly running tar twice is simply wasteful and unacceptable when you
> > could just follow the directory tree.
>
> The command
> 	tar c ... | tar x ...
> will have almost identical cost to any explicit recursive copy, since
> it does the same amount of work.  The only extra overhead is writing

It does not do the same amount of work: it runs two processes, which use at
least two pages instead of one, and it runs all the data through some sort
of algorithm to place it into an archive, then immediately reverses the
same algorithm to remove it.  I don't see what's so complicated about this
concept; it needlessly processes the data.  I'm sure if you read and wrote
in parallel using two processes it'd be faster.  Guess what would be even
faster?  Parallel cp.  Why?  cp doesn't archive and then dearchive; it's as
simple as that.  I can almost guarantee you that tar c | tar x uses more
clock cycles than cp, because two processes are simultaneously packing and
unpacking data for a _local_ transfer.

In reality the work ought to be done mostly by a DMA controller, which
simply copies sectors from one location to another, while the other
activities, such as adding inodes or table entries, are done in parallel;
you have to remember that I/O is done in the background because it is slow.
With tar you have to at least move the data into some pages in memory,
process it, then run it through a device, where it gets copied into another
process's address space, then processed, then written to disk.  If I'm
copying a large directory tree, I'm going to take a serious performance hit
for that.  If I can just have a well-written cp program let the disk copy
some sectors around, hopefully with DMA, that's going to be faster.

Maybe on your newer, faster computers the difference is so small that "who
cares", but that's exactly what keeps processor manufacturers in business:
cutting corners like that.  You keep replacing speed with blind simplicity,
and two months later your hardware is obsolete, so you buy new, faster
hardware which runs all your older kludges fast enough, and then you cut
corners on the new stuff too.  Take Microsoft, for example.  They wrote
Windows 95, which IIRC runs reasonably well on even a mere 386; now we've
got Windows XP, which lists several hundred MHz as its minimum required
speed.  Yet how much does it really add to the mix?  The UI is nearly the
same; sure, it's got some bells and whistles, but you can take those away.
I wouldn't dare try XP on a 386 (much less use the CD as more than a
coaster), but run Win95 on a P4 and it's pretty darn fast; compare that to
XP on a P4 and I doubt you're going to get the same performance.

I'm extreme, I know, but seriously, my points follow logically: tar|tar is
not a more efficient solution, nor is it in any way acceptable.  Sure, I
may not do cp -R a lot, but when you've got 50-100 people using your
network, maybe they will.  Either way, it's a complete waste of resources
that could be better used elsewhere.  Can someone tell me what's so
difficult to see about that?

> to and reading from the pipe, which is so cheap - and entirely local -
> that it is insignificant.
>
> Whether you cp or tar, you must read the files from one tree and write
> them to another.
>
> -rob



* Re: [9fans] dumb question
  2002-06-26 17:45 [9fans] dumb question rob pike, esq.
  2002-06-27  9:16 ` Andrew Stitt
@ 2002-06-27  9:17 ` Douglas A. Gwyn
  2002-06-26 19:53   ` arisawa
  2002-06-27 11:43   ` Ralph Corderoy
  1 sibling, 2 replies; 88+ messages in thread
From: Douglas A. Gwyn @ 2002-06-27  9:17 UTC (permalink / raw)
  To: 9fans

"rob pike, esq." wrote:
> Whether you cp or tar, you must read the files from one tree and write
> them to another.

But with twin tars (y'all), there is encoding and decoding (aka
format translation) going on for the attribute info, which does
seem like a waste of cycles.

There is frequently, at least in my end of the universe, need for
directory hierarchy walking for a variety of purposes.  The old
Unix tools "find" and "xargs" aren't very satisfactory, but at
least they exist.  I guess "ls -R" would almost work as a file
tree walker (as input to some shell procedure), except for its
irregular output format and its apparent nonexistence under Plan 9.

Do you guys work mostly with flat file hierarchies or what?



* Re: [9fans] dumb question
  2002-06-27 11:43   ` Ralph Corderoy
@ 2002-06-27 11:04     ` Sam
  0 siblings, 0 replies; 88+ messages in thread
From: Sam @ 2002-06-27 11:04 UTC (permalink / raw)
  To: 9fans

> Hi,
>
> > I guess "ls -R" would almost work as a file tree walker (as input to
> > some shell procedure), except for its irregular output format and its
> > apparent nonexistence under Plan 9.
>
> 9ers do it with du, don't they?

Indeed.  I use du on linux systems now as well.  It's simple, easy to
remember, and doesn't really take all that long.  Friends laugh and
repeatedly ask, "why don't you use slocate?"  Then, while they're waiting
ten minutes for slocate -u to update its database because they can't
find a file, I've long since gone back to work.

Simplicity is good.  Human beings aren't *that* different from
machines.  The more store you use maintaining lists of flags to
seldom-run programs, the less you can use for really complex tasks.
What's so hard to grasp here?  (sorry, intentionally catty)

;)

Cheers,

Sam





* Re: [9fans] dumb question
  2002-06-27  9:17 ` Douglas A. Gwyn
  2002-06-26 19:53   ` arisawa
@ 2002-06-27 11:43   ` Ralph Corderoy
  2002-06-27 11:04     ` Sam
  1 sibling, 1 reply; 88+ messages in thread
From: Ralph Corderoy @ 2002-06-27 11:43 UTC (permalink / raw)
  To: 9fans

Hi,

> I guess "ls -R" would almost work as a file tree walker (as input to
> some shell procedure), except for its irregular output format and its
> apparent nonexistence under Plan 9.

9ers do it with du, don't they?


Ralph.



* Re: [9fans] Dumb question.
  2003-07-15  0:37 [9fans] Dumb question Dan Cross
@ 2003-07-15  3:23 ` boyd, rounin
  0 siblings, 0 replies; 88+ messages in thread
From: boyd, rounin @ 2003-07-15  3:23 UTC (permalink / raw)
  To: 9fans

> Does a Java implementation of 9p2000 exist?

Harry Callahan: But being as this is a .44 Magnum, the most powerful
handgun in the world, and would blow your head clean off, you've got
to ask yourself one question: Do I feel lucky? Well, do ya punk?




* [9fans] Dumb question.
@ 2003-07-15  0:37 Dan Cross
  2003-07-15  3:23 ` boyd, rounin
  0 siblings, 1 reply; 88+ messages in thread
From: Dan Cross @ 2003-07-15  0:37 UTC (permalink / raw)
  To: 9fans

Don't shoot me.

Does a Java implementation of 9p2000 exist?

	- Dan C.

(Don't ask.)



* Re: [9fans] dumb question
@ 2002-07-05  9:43 Geoff Collyer
  0 siblings, 0 replies; 88+ messages in thread
From: Geoff Collyer @ 2002-07-05  9:43 UTC (permalink / raw)
  To: 9fans

> I have also damaged whole file trees by doing
> 'tar cf adir - | {cd /missspelled/dir; tar xf -}'

I did too, a long time ago, and the lesson I drew is: you shouldn't
use "cd directory; tar", you should write
	cd directory && tar
instead.  In shell scripts, the shell will generally exit if cd fails,
but obviously interactive shells don't.

On Plan 9 at least, you can avoid destroying the archive (at least by
confusing "c" and "x") by not using the "f" key letter and letting tar
read standard input and write standard output, thus:

	tar c >tarfile afile
	tar x <tarfile afile

(Performance is a little poorer than using the "f" key letter, but
correctness seems more important here.)  To destroy the archive, you
have to get "<" and ">" mixed up, which seems less likely.  If it's
that much of a problem, protect your archives ("chmod a-w tarfile").




* Re: [9fans] dumb question
  2002-06-27 13:20 rob pike, esq.
@ 2002-07-05  9:07 ` Maarit Maliniemi
  0 siblings, 0 replies; 88+ messages in thread
From: Maarit Maliniemi @ 2002-07-05  9:07 UTC (permalink / raw)
  To: 9fans

In article <3ba2a45af5c572e7280e465acb3a805a@plan9.bell-labs.com>,
9fans@cse.psu.edu wrote:

> > Perhaps someone could explain why cp doesn't work recursively (without
> > an option) when given a directory as a first argument.
> 
> Because running cp that way would probably happen more often by mistake
> than by intention, and it could damage a lot of files very fast before the
> user noticed.
> 
> -rob

Since the subject of damage has been raised, I would like to mention that I
have managed to damage tar files by using 'tar cf tarfile afile' (the
intention was 'tar xf tarfile afile').
Tar would be less dangerous if split into two; see Inferno.

I have also damaged whole file trees by doing
'tar cf adir - | {cd /missspelled/dir; tar xf -}'
IMHO, using 'cp -R' is less dangerous.


bengt



* Re: [9fans] dumb question
  2002-07-01 11:03 rog
@ 2002-07-02  8:53 ` Douglas A. Gwyn
  0 siblings, 0 replies; 88+ messages in thread
From: Douglas A. Gwyn @ 2002-07-02  8:53 UTC (permalink / raw)
  To: 9fans

rog@vitanuova.com wrote:
> the benefits of converting to utf-8 throughout were obvious.  does the
> ability to type ' ' rather than (say) ALT-' ' really confer comparable
> advantages?

The benefits of any feature depend on whether you make use of them.
People living in an ASCII world get no particular benefit from UTF8,
and people with spaces in file names (e.g. on a Windows filesystem)
get substantial benefit from having their files handled properly.



* Re: [9fans] dumb question
@ 2002-07-01 11:03 rog
  2002-07-02  8:53 ` Douglas A. Gwyn
  0 siblings, 1 reply; 88+ messages in thread
From: rog @ 2002-07-01 11:03 UTC (permalink / raw)
  To: 9fans

> It strikes me that the arguments people are making against having
> blanks in file names are the same arguments I heard against converting
> Plan 9 to Unicode and UTF

the benefits of converting to utf-8 throughout were obvious.  does the
ability to type ' ' rather than (say) ALT-' ' really confer comparable
advantages?




* Re: [9fans] dumb question
  2002-06-28 14:30                 ` Douglas A. Gwyn
  2002-06-28 15:06                   ` Boyd Roberts
@ 2002-07-01  9:46                   ` Ralph Corderoy
  1 sibling, 0 replies; 88+ messages in thread
From: Ralph Corderoy @ 2002-07-01  9:46 UTC (permalink / raw)
  To: 9fans

Hi Douglas,

> But in general one might be processing files named by somebody else,
> e.g. to move the last user off a mounted filesystem onto a new larger
> one.  And in general it's a serious problem:
>
> 	$ > 'funny file name; rm -rf /'
> 	$ ls
> 	funny file name; rm -rf /
> 	$ find . -print | xargs echo
> 	funny file name
> 	panic -- essential files not found

The shell doesn't necessarily get involved with xargs' processing,
though.  Here's some actual output, rather than theoretical:

    $ ls h*
    hello; date
    $ ls h* | xargs echo
    hello; date

I agree the problem exists, just that it doesn't always trigger.

Cheers,


Ralph.



* Re: [9fans] dumb question
  2002-06-28 16:49 ` rob pike, esq.
@ 2002-06-29  2:23   ` Scott Schwartz
  0 siblings, 0 replies; 88+ messages in thread
From: Scott Schwartz @ 2002-06-29  2:23 UTC (permalink / raw)
  To: 9fans

Rob says:
| conversion, and we handled that one just fine.  All that's missing is
| a concerted effort to push ahead, plus the will to do so.  Neither
| seems to be forthcoming, though.  Oh well.

I suspect it's more the case that people need time to roll
the ideas around in their heads.  You guys wrote a nice paper
that pretty clearly showed how to do utf8 and make it work,
but still it took a while to catch on elsewhere.




* Re: [9fans] dumb question
@ 2002-06-28 16:49 ` rob pike, esq.
  2002-06-29  2:23   ` Scott Schwartz
  0 siblings, 1 reply; 88+ messages in thread
From: rob pike, esq. @ 2002-06-28 16:49 UTC (permalink / raw)
  To: 9fans

It strikes me that the arguments people are making against having
blanks in file names are the same arguments I heard against converting
Plan 9 to Unicode and UTF: too hard, too much code to change, too many
symmetries broken.  But there's no way this problem is as hard as that
conversion, and we handled that one just fine.  All that's missing is
a concerted effort to push ahead, plus the will to do so.  Neither
seems to be forthcoming, though.  Oh well.

-rob




* Re: [9fans] dumb question
@ 2002-06-28 16:39 rob pike, esq.
  0 siblings, 0 replies; 88+ messages in thread
From: rob pike, esq. @ 2002-06-28 16:39 UTC (permalink / raw)
  To: 9fans

> diffs to dosfs? :-)

Putting your fingers in your ears and humming does not constitute
a viable solution to an enduring problem.

-rob




* Re: [9fans] dumb question
@ 2002-06-28 16:31 rog
  0 siblings, 0 replies; 88+ messages in thread
From: rog @ 2002-06-28 16:31 UTC (permalink / raw)
  To: 9fans

doug:
> The question is, what would be a similar mechanism for rc that you'd
> find acceptable?

the same mechanism applies in rc ($ifs rather than $IFS).
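
for example (an untested sketch; the quoted value of $ifs is a single
newline):

	ifs='
'	# now `{} splits on newlines only
	rm `{ls | grep -v precious}

so a name like 'a b' comes back from the backquote as one word.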

but just fixing it in the shell doesn't apply to the multitude of
little nasty things that still need tampering with to work in the
presence of quoted filenames.

whereas before it was possible to just manipulate text irrespective of
whether it was a filename or a word, now you have to unquote filenames
on input and quote on output, so many things that manipulate text will
need to understand rc-style quotes.

in 3e, the internal representation of a filename is the same as its
external representation, and a lot of tools gain simplicity from this
identity transformation.

there are so many things in the system that potentially or actually
break, i really don't see why we can't just transform spaces at the
boundaries of the system (dossrv, ftpfs, et al) and hence be assured
that *nothing* breaks, rather than adding hack after hack to existing
tools.

rob:
> it seems the shell can and should be fixed. i will entertain
> proposals that include code.

diffs to dosfs? :-)

  cheers,
    rog.




* Re: [9fans] dumb question
@ 2002-06-28 16:14 rob pike, esq.
  0 siblings, 0 replies; 88+ messages in thread
From: rob pike, esq. @ 2002-06-28 16:14 UTC (permalink / raw)
  To: 9fans

Well, rc has $ifs so you can always use the same familiar trick, but
it's not intellectually satisfying.  I have been quietly pushing a
uniform quoting convention through Plan 9, with good library support
(tokenize, %q).  The result is that if `{} used tokenize I think that
not only would this specific example work, but the need for hacks like
$ifs could be reduced.  If programs consistently quoted their output,
consistency would improve.

I regret blanks in file names, but they're here to stay if you do any
interoperability with Windows.  Windows being a royal irritation
doesn't mean that the problems it raises are not more generally
apparent.  The quoting stuff in Plan 9 came in for other reasons
having to do with consistent output from commands and devices.  If it
also leads to sensible handling of blanks in file names, good for us.

-rob




* Re: [9fans] dumb question
  2002-06-28 14:10 rob pike, esq.
@ 2002-06-28 15:42 ` Douglas A. Gwyn
  0 siblings, 0 replies; 88+ messages in thread
From: Douglas A. Gwyn @ 2002-06-28 15:42 UTC (permalink / raw)
  To: 9fans

"rob pike, esq." wrote:
> ls quotes its output, ...
> it seems the shell can and should be fixed.

I don't see how to make it consistent with ls quoting.
The root problem seems to be that ` applies parsing
using white-space delimiters both when it is desired
and when it isn't desired.  What I think will be
necessary is finer-grained control over "wordization".
In the Bourne shell, that is done by setting IFS.
It would probably suffice to consider newline reserved
as a delimiter; while I have often found spaces in
filenames to be useful, newline in filenames has never,
in my experience, occurred except by accident.  So
"IFS='<newline>' sh whatever" should keep space-
containing words as units in a similar usage.  The
question is, what would be a similar mechanism for rc
that you'd find acceptable?



* Re: [9fans] dumb question
  2002-06-28 14:30                 ` Douglas A. Gwyn
@ 2002-06-28 15:06                   ` Boyd Roberts
  2002-07-01  9:46                   ` Ralph Corderoy
  1 sibling, 0 replies; 88+ messages in thread
From: Boyd Roberts @ 2002-06-28 15:06 UTC (permalink / raw)
  To: 9fans

"Douglas A. Gwyn" wrote:
> But in general one might be processing files named by somebody else,
> e.g. to move the last user off a mounted filesystem onto a new
> larger one.  And in general it's a serious problem:

Oh definitely.  That's why in this case I made the 'process' run
as the user.  They can trash their own world.  I guess as a side
effect it would have found such lurking traps.

I couldn't see the benefit in going to a lot of trouble for yet
another end case in something that was designed as a convenience,
rather than a guarantee.  Yet another tradeoff.

Didn't 8th Ed find .. -print0 | apply get this right?



* Re: [9fans] dumb question
  2002-06-28  9:31               ` Boyd Roberts
@ 2002-06-28 14:30                 ` Douglas A. Gwyn
  2002-06-28 15:06                   ` Boyd Roberts
  2002-07-01  9:46                   ` Ralph Corderoy
  0 siblings, 2 replies; 88+ messages in thread
From: Douglas A. Gwyn @ 2002-06-28 14:30 UTC (permalink / raw)
  To: 9fans

Boyd Roberts wrote:
> As far as quoting went, I think I decided that if you were stupid
> enough to name your files with newlines you didn't deserve to get
> them copied.

But in general one might be processing files named by somebody else,
e.g. to move the last user off a mounted filesystem onto a new
larger one.  And in general it's a serious problem:
	$ > 'funny file name; rm -rf /'
	$ ls
	funny file name; rm -rf /
	$ find . -print | xargs echo
	funny file name
	panic -- essential files not found



* Re: [9fans] dumb question
@ 2002-06-28 14:10 rob pike, esq.
  2002-06-28 15:42 ` Douglas A. Gwyn
  0 siblings, 1 reply; 88+ messages in thread
From: rob pike, esq. @ 2002-06-28 14:10 UTC (permalink / raw)
  To: 9fans

	rm `{ls | grep -v precious}

	(which is now broken).

ls quotes its output, so i didn't see why this would fail.
Then I tried it:

rob%% date>'a b'
rob%% echo `{ls | grep ' '}
'a b'
rob%% rm `{ls | grep ' '}
rm: 'a: '''a' No such file or directory
rm: b': 'b''' No such file or directory
rob%%

it seems the shell can and should be fixed. i will entertain
proposals that include code.

-rob




* Re: [9fans] dumb question
@ 2002-06-28 14:08 rog
  0 siblings, 0 replies; 88+ messages in thread
From: rog @ 2002-06-28 14:08 UTC (permalink / raw)
  To: 9fans


> From time to time I toy with the idea of implementing a
> fully correct xargs (the Unix one doesn't quote arguments).

if you're guaranteed not to have spaces or newlines in filenames, then
it doesn't matter.

i have a small C xargs (attached) that is probably still correct.  (i
think newlines are probably still banned in filenames).

> I wonder if structural r.e.s can express a
> complete and correct quoting algorithm.

well a normal r.e. can express the rc quoting algorithm.  the problem
is you can't just match it, you've got to translate it as well.

(([^ \t']+)|'([^']|\n|'')*')

the inferno shell provides primitives for quoting and unquoting lists,
which are useful at times.  (e.g.  variables in /env are stored in
quoted form rather than '\0' terminated).

> what we really need is a convenient way to feed macros to
> sam in a sed-like manner and somebody with the time and
> inclination to develop suitable macros and package it all up.

to be honest there's very little that beats the convenience of:

	rm `{ls | grep -v precious}

(which is now broken).

while i'm mentioning the inferno shell, nemo's example works nicely,
as you can pass shell fragments around as words.

e.g.
	walk /source {if {ftest -r $1} {echo $1}}

which lets the shell parse the argument command before passing it on
to the walk script:

	#!/dis/sh
	if {! ~ $#* 2} {
		echo 'usage: walk file cmd' >[1=2]
		exit usage
	}
	(file cmd)=$*
	du -a $file | getlines {
		(nil f) := ${split ' ' $line}
		$cmd $f
	}

if du produced a quoted word for the filename,

	(nil f) := ${split ' ' $line}

would just change to:

	(nil f) := ${unquote ' ' $line}

it's quite nice being able to do this sort of thing, but there are
tradeoffs, naturally.

  cheers,
    rog.


[-- Attachment #2: xargs.c.gz --]
[-- Type: application/octet-stream, Size: 765 bytes --]


* Re: [9fans] dumb question
  2002-06-27 16:57             ` Lucio De Re
  2002-06-28  8:45               ` Douglas A. Gwyn
@ 2002-06-28  9:31               ` Boyd Roberts
  2002-06-28 14:30                 ` Douglas A. Gwyn
  1 sibling, 1 reply; 88+ messages in thread
From: Boyd Roberts @ 2002-06-28  9:31 UTC (permalink / raw)
  To: 9fans

Lucio De Re wrote:
> I keep promising myself I'll write a "stat" modelled on "du" that
> produces a record of stat fields for each entry in the descended
> directory.

I did this once so I could build a magnetic version of /n/dump on ULTRIX.

I wanted to hang onto files modified in the 'recent past' so I needed
something to get at the stat information and then use this to decide
what files to copy (based on type, mtime, size, name, ...).

So I wrote 'walk' and implemented an /n/dump file tree writer as:

    walk | select | push

walk was a program, as it had to call ftw(3), but the others were scripts.

select contained the logic for deciding if a file was worth copying
and if so it output the name of the file.  I also had a plan that
select could be user supplied, but I never implemented it.

push would copy filenames read from stdin to /n/dump/YYYY/MMDD/...

All 3 ran as the user whose home directory was being copied.  This
was imperfect, but it was secure.  The controlling script used 'su'
to become the user, avoiding the yet another program running as root
syndrome.

Time information output by walk was represented as a numeric value
to avoid re-parsing problems and it was never intended to be read
by humans.  Anyway, it's a trivial exercise to write a filter to
mung the output of walk.

As far as quoting went, I think I decided that if you were stupid
enough to name your files with newlines you didn't deserve to get
them copied.  select would catch some of these or push would get
'No such file or directory'.  The name of the file was output
last by walk, after the stat info.

This version of /n/dump was just a convenience so loss of data
was no big drama (the standard backup system would catch that).

The hardest part was getting the script to glue 4 RA-90's [1Gb each]
together so that the last month of stuff was kept/recycled.
Teaching shell scripts about time is no fun.



* Re: [9fans] dumb question
  2002-06-27 18:38 forsyth
@ 2002-06-28  8:45 ` Douglas A. Gwyn
  0 siblings, 0 replies; 88+ messages in thread
From: Douglas A. Gwyn @ 2002-06-28  8:45 UTC (permalink / raw)
  To: 9fans

forsyth@caldo.demon.co.uk wrote:
> i do something like that with some software i use for mirroring in Inferno,
> where i take advantage of a dynamically-loaded
> module that traverses the structures and applies
> another command or module as it goes.

Ooh, "apply" (which I think was in 8th Ed. Unix) but at the
structure level.  Nice.



* Re: [9fans] dumb question
  2002-06-27 16:57             ` Lucio De Re
@ 2002-06-28  8:45               ` Douglas A. Gwyn
  2002-06-28  9:31               ` Boyd Roberts
  1 sibling, 0 replies; 88+ messages in thread
From: Douglas A. Gwyn @ 2002-06-28  8:45 UTC (permalink / raw)
  To: 9fans

Lucio De Re wrote:
> given no xargs or cpio, ...

From time to time I toy with the idea of implementing a
fully correct xargs (the Unix one doesn't quote arguments).
Presumably on Plan9, "rc" should be the default interpreter
for xargs.  Actually xargs probably should just output its
constructed commands rather than execute them; the output
could be fed to "rc" but might be useful for other things.
Then we'll want more control over what xargs does, e.g.
options to specify how many arguments in a batch, "line
length" limit, etc.  Hmm, it's starting to sound like xargs
might be replaced by a stream editor except for the quoting
feature.  Maybe what we need is an "rc-safe" quoting
utility.  I wonder if structural r.e.s can express a
complete and correct quoting algorithm.  Hmm, sounds like
what we really need is a convenient way to feed macros to
sam in a sed-like manner and somebody with the time and
inclination to develop suitable macros and package it all up.



* Re: [9fans] dumb question
  2002-06-27 15:12         ` Sam
@ 2002-06-28  8:44           ` Douglas A. Gwyn
  0 siblings, 0 replies; 88+ messages in thread
From: Douglas A. Gwyn @ 2002-06-28  8:44 UTC (permalink / raw)
  To: 9fans

Sam wrote:
> That'll be the day, when a full development system runs on an 8-bit
> part.

I didn't say anything about an 8-bit part.  But certainly a more
powerful embedded processor can support reasonable operating systems.

> Why port plan9 when you can write your own?

Why buy a car when you can build your own?

> Call me crazy, but it sounds like what you need isn't plan9, it's
> inferno.

Inferno doesn't work due to forcing an interpretive language;
small systems often can't spare the cycles.  Even if they can,
cycles may translate to battery usage, so it's still an issue.



* Re: [9fans] dumb question
@ 2002-06-28  7:52 Fco.J.Ballesteros
  0 siblings, 0 replies; 88+ messages in thread
From: Fco.J.Ballesteros @ 2002-06-28  7:52 UTC (permalink / raw)
  To: 9fans

> What seems to me to make sense would be to cleanly segregate out
> the file-tree walker from *all* those applications, and design
> the application interfaces so they could operate in conjunction
> with the walker.  Sort of like find <criteria> -print | cpio ...

Do you mean something like this?


; cat walk
#!/bin/rc
# $f in cmd gets expanded to each file name.
rfork e
if (~ $#* 0 1) {
	echo 'usage: walk file cmd_using_$f...' >[1=2]
	exit usage
}
file=$1
shift
cd `{basename -d $file}
exec du -a $file | awk '{print $2}' | while (f=`{read}) { echo $* |rc }

I use it like in
	walk /tmp 'test -r $f && echo $f'

But perhaps I could also
	dst=/dst walk /source 'test -d $f && mkdir $dst/$f || cp $f $dst/$f'
although I'm just too used to tar for that purpose.




* Re: [9fans] dumb question
  2002-06-27 22:59 Geoff Collyer
@ 2002-06-28  4:32 ` Lucio De Re
  0 siblings, 0 replies; 88+ messages in thread
From: Lucio De Re @ 2002-06-28  4:32 UTC (permalink / raw)
  To: 9fans

On Thu, Jun 27, 2002 at 03:59:44PM -0700, Geoff Collyer wrote:
>
> Lucio, try this implementation of xargs.  It's what I use when very
> long arg lists cause trouble.
>
Thank you, Geoff.  Something like this certainly will come in handy.
We need some way of collecting these, kind of like the permuted
index, with the code as the "man" page - HTML, perhaps, or Wiki?

++L



* Re: [9fans] dumb question
@ 2002-06-27 22:59 Geoff Collyer
  2002-06-28  4:32 ` Lucio De Re
  0 siblings, 1 reply; 88+ messages in thread
From: Geoff Collyer @ 2002-06-27 22:59 UTC (permalink / raw)
  To: 9fans

Lucio, try this implementation of xargs.  It's what I use when very
long arg lists cause trouble.

#!/bin/rc
# xargs cmd [arg ...] - emulate Unix xargs
rfork ne
ramfs
split -n 500 -f /tmp/x
for (f in /tmp/x*)
	$* `{cat $f}
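
For example (an illustrative use):

	du -a /sys/src/cmd | awk '{print $2}' | xargs echo

which runs echo on the file names 500 at a time.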




* Re: [9fans] dumb question
@ 2002-06-27 18:38 forsyth
  2002-06-28  8:45 ` Douglas A. Gwyn
  0 siblings, 1 reply; 88+ messages in thread
From: forsyth @ 2002-06-27 18:38 UTC (permalink / raw)
  To: 9fans


i do something like that with some software i use for mirroring in Inferno,
where i take advantage of a dynamically-loaded
module that traverses the structures and applies
another command or module as it goes.

[-- Attachment #2: Type: message/rfc822, Size: 2307 bytes --]

To: 9fans@cse.psu.edu
Subject: Re: [9fans] dumb question
Date: Thu, 27 Jun 2002 15:40:23 GMT
Message-ID: <3D1B2CCF.1FD49007@null.net>

Lucio De Re wrote:
> The penalty each instance of cp would have to pay if it included
> directory descent logic would be paid by every user, every time
> they use cp, while being used only a fraction of the time.
> Does it not make sense to leave the logic in tar, where it gets
> used under suitable circumstances and omit it in cp (and mv, for
> good measure and plenty other reasons)?

What seems to me to make sense would be to cleanly segregate out
the file-tree walker from *all* those applications, and design
the application interfaces so they could operate in conjunction
with the walker.  Sort of like find <criteria> -print | cpio ...


* Re: [9fans] dumb question
@ 2002-06-27 17:07 rog
  0 siblings, 0 replies; 88+ messages in thread
From: rog @ 2002-06-27 17:07 UTC (permalink / raw)
  To: 9fans

> For the record, it doesn't have cp -R either.

ahem.
actually it does. i know 'cos i just cleaned up the source.
mind you, at 247 lines and under 4K of binary, it isn't huge.




* Re: [9fans] dumb question
@ 2002-06-27 17:07 forsyth
  2002-06-27 16:33 ` Sam
  0 siblings, 1 reply; 88+ messages in thread
From: forsyth @ 2002-06-27 17:07 UTC (permalink / raw)
  To: 9fans

>>inferno.  For the record, it doesn't have cp -R either.

no, it's called cp -r.




* Re: [9fans] dumb question
  2002-06-27 15:40           ` Douglas A. Gwyn
@ 2002-06-27 16:57             ` Lucio De Re
  2002-06-28  8:45               ` Douglas A. Gwyn
  2002-06-28  9:31               ` Boyd Roberts
  0 siblings, 2 replies; 88+ messages in thread
From: Lucio De Re @ 2002-06-27 16:57 UTC (permalink / raw)
  To: 9fans

On Thu, Jun 27, 2002 at 03:40:23PM +0000, Douglas A. Gwyn wrote:
>
> What seems to me to make sense would be to cleanly segregate out
> the file-tree walker from *all* those applications, and design
> the application interfaces so they could operate in conjunction
> with the walker.  Sort of like find <criteria> -print | cpio ...

I keep promising myself I'll write a "stat" modelled on "du" that
produces a record of stat fields for each entry in the descended
directory.  However, (a) until this afternoon I had no idea what
to do with the output - I have now concluded that a Unix-date-style
format string is called for, no matter how much the Plan 9 purists
may find this revolting - and (b) given no xargs or cpio, such
output is not really very useful.  Is this a chicken-and-egg
situation, perhaps?
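
As a crude first cut one could lean on du and ls (an untested sketch;
a real version would print the full stat fields in a fixed format):

	#!/bin/rc
	# print a long ls line for every file under $1
	du -a $1 | awk '{print $2}' | while (f=`{read}) ls -dl $f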

++L



* Re: [9fans] dumb question
  2002-06-27 17:07 forsyth
@ 2002-06-27 16:33 ` Sam
  0 siblings, 0 replies; 88+ messages in thread
From: Sam @ 2002-06-27 16:33 UTC (permalink / raw)
  To: 9fans

On Thu, 27 Jun 2002 forsyth@vitanuova.com wrote:

> >>inferno.  For the record, it doesn't have cp -R either.
>
> no, it's called cp -r.
>
heheha,

whew ... got away by the skin of my teeth on that one, didn't I?

See what happens when you open your mouth too wide?  Tsk. Tsk.

Sam




* Re: [9fans] dumb question
  2002-06-27 13:19 rob pike, esq.
  2002-06-27 13:21 ` david presotto
@ 2002-06-27 15:40 ` Douglas A. Gwyn
  1 sibling, 0 replies; 88+ messages in thread
From: Douglas A. Gwyn @ 2002-06-27 15:40 UTC (permalink / raw)
  To: 9fans

"rob pike, esq." wrote:
>         #endif ED

?



* Re: [9fans] dumb question
  2002-06-27  9:59         ` Lucio De Re
  2002-06-27 10:05           ` Boyd Roberts
@ 2002-06-27 15:40           ` Douglas A. Gwyn
  2002-06-27 16:57             ` Lucio De Re
  1 sibling, 1 reply; 88+ messages in thread
From: Douglas A. Gwyn @ 2002-06-27 15:40 UTC (permalink / raw)
  To: 9fans

Lucio De Re wrote:
> The penalty each instance of cp would have to pay if it included
> directory descent logic would be paid by every user, every time
> they use cp, while being used only a fraction of the time.
> Does it not make sense to leave the logic in tar, where it gets
> used under suitable circumstances and omit it in cp (and mv, for
> good measure and plenty other reasons)?

What seems to me to make sense would be to cleanly segregate out
the file-tree walker from *all* those applications, and design
the application interfaces so they could operate in conjunction
with the walker.  Sort of like find <criteria> -print | cpio ...



* Re: [9fans] dumb question
  2002-06-27  9:48       ` Boyd Roberts
  2002-06-27 10:07         ` John Murdie
@ 2002-06-27 15:40         ` Douglas A. Gwyn
  1 sibling, 0 replies; 88+ messages in thread
From: Douglas A. Gwyn @ 2002-06-27 15:40 UTC (permalink / raw)
  To: 9fans

Boyd Roberts wrote:
> ... or a find | cpio -pdm ... was always the method.

One nice thing about separation of function is illustrated by
the ability to add the "l" flag to cpio so as to conserve space
(if within the same physical filesystem).  For those not
familiar with it, that created (hard) links instead of new
inodes.
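
A typical invocation of that era would have looked something like this
(illustrative; cpio is not a Plan 9 command):

	cd src && find . -print | cpio -pdlm /new/tree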



* Re: [9fans] dumb question
  2002-06-26 17:39     ` Sam
  2002-06-27  9:17       ` Andrew Stitt
@ 2002-06-27 15:36       ` Douglas A. Gwyn
  2002-06-27 15:12         ` Sam
  1 sibling, 1 reply; 88+ messages in thread
From: Douglas A. Gwyn @ 2002-06-27 15:36 UTC (permalink / raw)
  To: 9fans

Sam wrote:
> really arguing about 30k?  Am I dreaming?  Are you trying to run plan9
> on a machine with 256k of memory?

I recall not long ago when Plan 9 proponents were poking fun at
Windows and even commercial Unix for becoming bloated.

There are those of us who work with much smaller systems who
would like to at least be able to consider Plan 9 for them, but
alas cannot because of size and/or over-dependence on graphics.



* Re: [9fans] dumb question
  2002-06-27 15:36       ` Douglas A. Gwyn
@ 2002-06-27 15:12         ` Sam
  2002-06-28  8:44           ` Douglas A. Gwyn
  0 siblings, 1 reply; 88+ messages in thread
From: Sam @ 2002-06-27 15:12 UTC (permalink / raw)
  To: 9fans

> I recall not long ago when Plan 9 proponents were poking fun at
> Windows and even commercial Unix for becoming bloated.

Uhm, are you saying plan9 is bloated?  (not that there aren't
trimmable parts, but comparing 9bloat to Winbloat is just
silly)

> There are those of us who work with much smaller systems who
> would like to at least be able to consider Plan 9 for them, but
> alas cannot because of size and/or over-dependence on graphics.

That'll be the day, when a full development system runs on an 8-bit
part.  Besides, systems of that nature typically require a
special-purpose OS anyway (if one at all) to achieve any decent
performance and functionality.  Why port plan9 when you can
write your own?

Call me crazy, but it sounds like what you need isn't plan9, it's
inferno.  For the record, it doesn't have cp -R either.

Sam




* Re: [9fans] dumb question
@ 2002-06-27 14:43 Richard Miller
  0 siblings, 0 replies; 88+ messages in thread
From: Richard Miller @ 2002-06-27 14:43 UTC (permalink / raw)
  To: 9fans

> For simplicity, I keep all my work in one huge file and use e.g.
> 	#ifdef ED
> 	....
> 	#endif ED
> to extract the specific lines of C source that make up ed when
> i need to work on ed...

This is pretty much the file system structure advocated by
Jef Raskin in The Humane Interface:

"... Documents are sequences of pages separated by document characters,
each typable, searchable, and deletable, just as is any other character.
There can be higher delimiters, such as folder and volume characters,
even section and library delimiters, but the number of levels depends
on the size of the data... An individual who insists on having explicit
document names can adopt the personal convention of placing the desired
names immediately following document characters... If you wanted a
directory, there could be a command that would assemble a document
comprising all instances of strings consisting of a document character,
followed by any other characters, up to and including the first
Return or higher delimiter." (pp 120-121)

-- Richard





* Re: [9fans] dumb question
@ 2002-06-27 14:33 forsyth
  0 siblings, 0 replies; 88+ messages in thread
From: forsyth @ 2002-06-27 14:33 UTC (permalink / raw)
  To: 9fans


accident is one possible justification, and there's also a matter of
meaning.  if i say `copy this file to that file' i expect to see a
strict copy of the file as a result.  it's fairly clear cut.  on most
systems that provide directory `copying', if i say cp dir1 target (or
cp -r) and dir1 exists in the target, i usually obtain a merge (it's
perhaps one of the reasons that options bristle as finer control over
the operation is provided).  arguably, cp should instead remove files in the
existing destination directory in order to produce a strict copy.


[-- Attachment #2: Type: message/rfc822, Size: 1595 bytes --]

To: 9fans@cse.psu.edu
Subject: Re: [9fans] dumb question
Date: Thu, 27 Jun 2002 09:20:55 -0400
Message-ID: <3ba2a45af5c572e7280e465acb3a805a@plan9.bell-labs.com>

> Perhaps someone could explain why cp doesn't work recursively (without
> an option) when given a directory as a first argument.

Because running cp that way would probably happen more often by mistake
than by intention, and it could damage a lot of files very fast before the
user noticed.

-rob


* Re: [9fans] dumb question
  2002-06-27 13:19 rob pike, esq.
@ 2002-06-27 13:21 ` david presotto
  2002-06-27 15:40 ` Douglas A. Gwyn
  1 sibling, 0 replies; 88+ messages in thread
From: david presotto @ 2002-06-27 13:21 UTC (permalink / raw)
  To: 9fans

I forgot to tell you, I renamed ED to CP...
----- Original Message -----
From: "rob pike, esq." <rob@plan9.bell-labs.com>
To: <9fans@cse.psu.edu>
Sent: Thursday, June 27, 2002 9:19 AM
Subject: Re: [9fans] dumb question


> > Do you guys work mostly with flat file hierarchies or what?
>
> For simplicity, I keep all my work in one huge file and use e.g.
> #ifdef ED
> ....
> #endif ED
> to extract the specific lines of C source that make up ed when
> i need to work on ed. (I'm usually working on ed when I should
> be working on cp; old habits die hard). This style of use means
> for example there is only one copy of
> main(int argc, char *argv[])
> on the system, which saves a lot of space.
>
> I have some ideas for a similar technology to apply to 9fans.
>
> -rob
>
>



* Re: [9fans] dumb question
@ 2002-06-27 13:20 rob pike, esq.
  2002-07-05  9:07 ` Maarit Maliniemi
  0 siblings, 1 reply; 88+ messages in thread
From: rob pike, esq. @ 2002-06-27 13:20 UTC (permalink / raw)
  To: 9fans

> Perhaps someone could explain why cp doesn't work recursively (without
> an option) when given a directory as a first argument.

Because running cp that way would probably happen more often by mistake
than by intention, and it could damage a lot of files very fast before the
user noticed.

-rob




* Re: [9fans] dumb question
@ 2002-06-27 13:19 rob pike, esq.
  2002-06-27 13:21 ` david presotto
  2002-06-27 15:40 ` Douglas A. Gwyn
  0 siblings, 2 replies; 88+ messages in thread
From: rob pike, esq. @ 2002-06-27 13:19 UTC (permalink / raw)
  To: 9fans

> Do you guys work mostly with flat file hierarchies or what?

For simplicity, I keep all my work in one huge file and use e.g.
	#ifdef ED
	....
	#endif ED
to extract the specific lines of C source that make up ed when
i need to work on ed. (I'm usually working on ed when I should
be working on cp; old habits die hard). This style of use means
for example there is only one copy of
	main(int argc, char *argv[])
on the system, which saves a lot of space.

I have some ideas for a similar technology to apply to 9fans.

-rob




* Re: [9fans] dumb question
@ 2002-06-27 12:27 rog
  0 siblings, 0 replies; 88+ messages in thread
From: rog @ 2002-06-27 12:27 UTC (permalink / raw)
  To: 9fans

some great quotes have come out of this little exchange:

"9ers do it with du"
"...techno-pissing contest..."
"human beings aren't that different from machines"
"you keep replacing speed with blind simplicity"

the latter seems like a fair summary of what plan 9 is about:
sacrifice a few CPU cycles here for some more generality there.

> I'm arguing about the apparent acceptance of a solution which wastes
> resources

we accept many solutions that, strictly speaking, waste resources.

e.g.
	ls | wc

but do we care?  not unless the overhead impacts significantly on our
ability to get the job done...  and if it does, then measurement and
judicious optimisation is the way forward, as with any programming
task.

  cheers,
    rog.




* Re: [9fans] dumb question
@ 2002-06-27 12:12 Fco.J.Ballesteros
  0 siblings, 0 replies; 88+ messages in thread
From: Fco.J.Ballesteros @ 2002-06-27 12:12 UTC (permalink / raw)
  To: 9fans


I see, you'd need to sign an NDA just to tell me.
Back to work then.

[-- Attachment #2: Type: message/rfc822, Size: 1613 bytes --]

From: Sam <sah@softcardsystems.com>
To: <9fans@cse.psu.edu>
Subject: Re: [9fans] dumb question
Date: Thu, 27 Jun 2002 07:13:49 -0400 (EDT)
Message-ID: <Pine.LNX.4.30.0206270712440.24208-100000@athena>

On Thu, 27 Jun 2002, Fco.J.Ballesteros wrote:

> :  Human beings aren't *that* different from
> :  machines.
>
> Which OS do you  run?

Model number / type is hard to come by since we're having a hard
reaching the author.

;-P

Sam



* Re: [9fans] dumb question
@ 2002-06-27 12:06 Fco.J.Ballesteros
  2002-06-27 11:13 ` Sam
  0 siblings, 1 reply; 88+ messages in thread
From: Fco.J.Ballesteros @ 2002-06-27 12:06 UTC (permalink / raw)
  To: 9fans

:  Human beings aren't *that* different from
:  machines.

Which OS do you  run?

☺




* Re: [9fans] dumb question
  2002-06-27 11:13 ` Sam
@ 2002-06-27 11:15   ` Sam
  0 siblings, 0 replies; 88+ messages in thread
From: Sam @ 2002-06-27 11:15 UTC (permalink / raw)
  To: 9fans

s/rea/time rea/

On Thu, 27 Jun 2002, Sam wrote:

> On Thu, 27 Jun 2002, Fco.J.Ballesteros wrote:
>
> > :  Human beings aren't *that* different from
> > :  machines.
> >
> > Which OS do you  run?
>
> Model number / type is hard to come by since we're having a hard
> reaching the author.
>
> ;-P
>
> Sam
>
>




* Re: [9fans] dumb question
  2002-06-27 12:06 Fco.J.Ballesteros
@ 2002-06-27 11:13 ` Sam
  2002-06-27 11:15   ` Sam
  0 siblings, 1 reply; 88+ messages in thread
From: Sam @ 2002-06-27 11:13 UTC (permalink / raw)
  To: 9fans

On Thu, 27 Jun 2002, Fco.J.Ballesteros wrote:

> :  Human beings aren't *that* different from
> :  machines.
>
> Which OS do you  run?

Model number / type is hard to come by since we're having a hard
reaching the author.

;-P

Sam





* Re: [9fans] dumb question
@ 2002-06-27 10:29 nigel
  0 siblings, 0 replies; 88+ messages in thread
From: nigel @ 2002-06-27 10:29 UTC (permalink / raw)
  To: 9fans

> I'm arguing about the apparent acceptance of a solution which wastes
> resources. Sure, it's 30k, but you can do quite a bit with 30k, and if you
> actually have lots of users, things start to slow down. The point I'm trying
> to make: tar|tar uses more memory and resources than a recursive copy; I've
> thoroughly explained this many times. What's so hard to grasp here?

Perhaps you cannot grasp that the consensus agrees that this is not
the most efficient solution, but that it is "efficient enough" given
that recursive tree copy is not the most commonly used operation.
This is based on experience.

Members of the list have provided numerical evidence that it's
efficient enough.  But that's probably a mistake; it just started a
techno-pissing match from which there are no winners.  Telling the
list about disk sectors and DMA will not advance your argument.  The
majority know about those.  Really.

Plan 9 has a philosophy.  It has certain aims which are much more
important than the exact path by which data in files travel during a
tree copy.  One day, when everything else which is more important is
done, cp -r will get tuned till the magneto-optical domains squeak.
But that day will probably never come.

I think I'll close with the Presotto contention.  It runs like this:
"if you're unhappy with the way Plan 9 does X, just wait until
you see Y".

In this case X = "file tree copies", and Y is always "web browsing".

Positively my last word on the subject.

Nigel




* Re: [9fans] dumb question
  2002-06-27 10:05           ` Boyd Roberts
@ 2002-06-27 10:21             ` Lucio De Re
  0 siblings, 0 replies; 88+ messages in thread
From: Lucio De Re @ 2002-06-27 10:21 UTC (permalink / raw)
  To: 9fans

On Thu, Jun 27, 2002 at 12:05:52PM +0200, Boyd Roberts wrote:
>
> Lucio De Re wrote:
> > Using DMA at the user level sounds like a major burden to me, and
> > hardly portable.
>
> nah, physio() was your friend.

With no consideration for the underlying filesystem?  Not up my
alley at all :-)

To answer Boyd's implied question elsewhere, my first encounter
with a recursive "copy" command was in SCO Xenix.  It was called
"copy", not "cp" and optionally preserved file properties.

Hard to check on that bit of history, these days.

++L



* Re: [9fans] dumb question
  2002-06-27  9:48       ` Boyd Roberts
@ 2002-06-27 10:07         ` John Murdie
  2002-06-27 15:40         ` Douglas A. Gwyn
  1 sibling, 0 replies; 88+ messages in thread
From: John Murdie @ 2002-06-27 10:07 UTC (permalink / raw)
  To: 9fans; +Cc: John Murdie

On 27 Jun, Boyd Roberts wrote:
> Ralph Corderoy wrote:
>> tar did the job, more or less, and so no one could be bothered to extend
>> cp.  I'm sure this has come up some months ago.
>
> cp never did the job.  And a good thing too, until the -r option was
> added by BSD (I guess).  A back-to-back tar or a find | cpio -pdm
> (if you had cpio) was always the method.
>
> The point being that copying directory trees is a rare operation
> and it doesn't merit breaking cp for the small functionality gain.
>
> As rob said, the pipe doesn't cost that much and having a reader
> and a writer process probably increases the i/o throughput.
>
> As I look at debian now I see cp has 25 options, modified in
> turn by 2 environment variables, and it pretty much implements
> tar.  I don't call that progress.

Perhaps someone could explain why cp doesn't work recursively (without
an option) when given a directory as a first argument.
--

John A. Murdie
Experimental Officer (Software)
Department of Computer Science
University of York
England




* Re: [9fans] dumb question
  2002-06-27  9:59         ` Lucio De Re
@ 2002-06-27 10:05           ` Boyd Roberts
  2002-06-27 10:21             ` Lucio De Re
  2002-06-27 15:40           ` Douglas A. Gwyn
  1 sibling, 1 reply; 88+ messages in thread
From: Boyd Roberts @ 2002-06-27 10:05 UTC (permalink / raw)
  To: 9fans

Lucio De Re wrote:
> Using DMA at the user level sounds like a major burden to me, and
> hardly portable.

nah, physio() was your friend.



* Re: [9fans] dumb question
  2002-06-27  9:17       ` Andrew Stitt
  2002-06-27  9:33         ` Ish Rattan
@ 2002-06-27  9:59         ` Lucio De Re
  2002-06-27 10:05           ` Boyd Roberts
  2002-06-27 15:40           ` Douglas A. Gwyn
  1 sibling, 2 replies; 88+ messages in thread
From: Lucio De Re @ 2002-06-27  9:59 UTC (permalink / raw)
  To: 9fans

On Thu, Jun 27, 2002 at 09:17:01AM +0000, Andrew Stitt wrote:
> > ...
> im arguing about the apparent acceptance of a solution which wastes
> resources, sure its 30k, but you can do quite a bit with 30k. and if you
> actually have lots of users, things start to slow down. point im trying to
> make: tar|tar uses more memory and resources then a recursive copy, ive
> thoroughly explained this many times. whats so hard to grasp here?

The penalty each instance of cp would have to pay if it included
directory descent logic would be paid by every user, every time
they use cp, while being used only a fraction of the time.

Does it not make sense to leave the logic in tar, where it gets
used under suitable circumstances and omit it in cp (and mv, for
good measure and plenty other reasons)?

There are additional benefits in using the piped approach, as
mentioned, in that the two processes can run concurrently and
therefore exploit any idle cpu opportunity.  The two images will
share text memory, and the system will manage the pipe between the
two processes instead of dealing with possibly suboptimal memory
allocation in a single program.

Using DMA at the user level sounds like a major burden to me, and
hardly portable.  My understanding of memory mapped I/O is even
more frightening.

Has anyone ported cpio to Plan 9?  Its "p" mode would give us an
indication of costs.  I note, for Andrew's benefit, that recursive
operation of cp is recent, in relative terms, whereas find | cpio
-p goes back to machines with smaller memory footprints.  Something tells
me that recursive cp is more greedy of resources than its predecessors.
I could be mistaken, of course.

++L



* Re: [9fans] dumb question
  2002-06-27  9:17     ` Ralph Corderoy
@ 2002-06-27  9:48       ` Boyd Roberts
  2002-06-27 10:07         ` John Murdie
  2002-06-27 15:40         ` Douglas A. Gwyn
  0 siblings, 2 replies; 88+ messages in thread
From: Boyd Roberts @ 2002-06-27  9:48 UTC (permalink / raw)
  To: 9fans

Ralph Corderoy wrote:
> tar did the job, more or less, and so no one could be bothered to extend
> cp.  I'm sure this has come up some months ago.

cp never did the job.  And a good thing too, until the -r option was
added by BSD (I guess).  A back-to-back tar or a find | cpio -pdm
(if you had cpio) was always the method.

The point being that copying directory trees is a rare operation
and it doesn't merit breaking cp for the small functionality gain.

As rob said, the pipe doesn't cost that much and having a reader
and a writer process probably increases the i/o throughput.

As I look at debian now I see cp has 25 options, modified in
turn by 2 environment variables, and it pretty much implements
tar.  I don't call that progress.



* RE: [9fans] dumb question
@ 2002-06-27  9:44 Stephen Parker
  0 siblings, 0 replies; 88+ messages in thread
From: Stephen Parker @ 2002-06-27  9:44 UTC (permalink / raw)
  To: '9fans@cse.psu.edu'

most of us don't spend much time moving directories around.
maybe once per week.  maybe.
it's not worth optimising this.

stephen

--
Stephen Parker.  Pi Technology.  +44 (0)1223 203438.

> -----Original Message-----
> From: Andrew Stitt [mailto:astitt@cats.ucsc.edu]
> Sent: 27 June 2002 10:17
> To: 9fans@cse.psu.edu
> Subject: Re: [9fans] dumb question
>
>
> On Wed, 26 Jun 2002, Sam wrote:
>
> > > cp is about 60k in plan9 and tar is 80k in plan9. cp on 3 separate unix
> > > machines (linux, SCO unix, os-X) in its overcomplicated copying directory
> > > tree glory is under 30k, on sunOS it happens to be 17k. To tar then untar
> > > requires two processes using a shared 80k tar, plus some intermediate data
> > > to archive and process the data, then immediately reverse this process. cp
> > > _could_ be written to do all this in 17k but instead our 60k cp can't do
> > > it, and instead we need two entries in the process table and twice the
> > > number of user space pages.
> > I'm sorry, but there must be something wrong with my mail reader.  Are we
> > really arguing about 30k?  Am I dreaming?  Are you trying to run plan9
> > on a machine with 256k of memory?
> >
> > ...
> I'm arguing about the apparent acceptance of a solution which wastes
> resources. Sure, it's 30k, but you can do quite a bit with 30k, and if you
> actually have lots of users, things start to slow down. The point I'm trying
> to make: tar|tar uses more memory and resources than a recursive copy; I've
> thoroughly explained this many times. What's so hard to grasp here?
>


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [9fans] dumb question
@ 2002-06-27  9:43 Fco.J.Ballesteros
  0 siblings, 0 replies; 88+ messages in thread
From: Fco.J.Ballesteros @ 2002-06-27  9:43 UTC (permalink / raw)
  To: 9fans

:  resources, sure its 30k, but you can do quite a bit with 30k. and if you
:  actually have lots of users, things start to slow down. point im trying to

Answer these questions:

How many users are doing dircps at the same time?
Have you actually seen that happen?

I think it's not even worth discussing saving 30k here,
but that's just IMHO.



^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [9fans] dumb question
@ 2002-06-27  9:36 Fco.J.Ballesteros
  0 siblings, 0 replies; 88+ messages in thread
From: Fco.J.Ballesteros @ 2002-06-27  9:36 UTC (permalink / raw)
  To: 9fans

:  > The command
:  > 	tar c ... | tar x ...
:  > will have almost identical cost to any explicit recursive copy, since
:  > it does the same amount of work.  The only extra overhead is writing
:
:  it does not do the same amount of work, it runs two processes which use at
:  least two pages instead of one, it also runs all the data through some

I think the point is that if you don't notice the `extra work'
done by the tars, then you'd better forget about optimizing it
(Who cares when nobody notices it?).



^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [9fans] dumb question
  2002-06-27  9:17       ` Andrew Stitt
@ 2002-06-27  9:33         ` Ish Rattan
  2002-06-27  9:59         ` Lucio De Re
  1 sibling, 0 replies; 88+ messages in thread
From: Ish Rattan @ 2002-06-27  9:33 UTC (permalink / raw)
  To: 9fans

On Thu, 27 Jun 2002, Andrew Stitt wrote:

> On Wed, 26 Jun 2002, Sam wrote:
>
> > > cp is about 60k in plan9 and tar is 80k in plan9. cp on 3 seperate unix
> > > machines (linux, SCO unix, os-X) in its overcomplicated copying directory
> > > tree glory is under 30k, on sunOS it happens to be 17k. to tar then untar
> > > requires two processes using a shared 80k tar, plus some intermediate data
> > > to archive and process the data, then immediatly reverse this process. cp
> > > _could_ be written to do all this in 17k but instead our 60k cp cant do
> > > it, and instead we need two entries in the process table and twice the
> > > number ofuser space pages.
> > I'm sorry, but there must be something wrong with my mail reader.  Are we
> > really arguing about 30k?  Am I dreaming?  Are you trying to run plan9
> > on a machine with 256k of memory?
> >
> > ...
> im arguing about the apparent acceptance of a solution which wastes
> resources, sure its 30k, but you can do quite a bit with 30k. and if you
> actually have lots of users, things start to slow down. point im trying to
And they all use tar at the same time, too!
As I said before, keep it up :-)

-ishwar



^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [9fans] dumb question
  2002-06-26 18:27   ` Andrew Stitt
  2002-06-26 17:39     ` Sam
  2002-06-26 22:07     ` Micah Stetson
@ 2002-06-27  9:17     ` Ralph Corderoy
  2002-06-27  9:48       ` Boyd Roberts
  2 siblings, 1 reply; 88+ messages in thread
From: Ralph Corderoy @ 2002-06-27  9:17 UTC (permalink / raw)
  To: 9fans

Hi Andrew,

> Im sorry if im sounding like a troll, or if it seems like im flaming,
> but seriously, whats up with this?

tar did the job, more or less, and so no one could be bothered to extend
cp.  I'm sure this came up some months ago.

Cheers,


Ralph.


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [9fans] dumb question
  2002-06-26 17:39     ` Sam
@ 2002-06-27  9:17       ` Andrew Stitt
  2002-06-27  9:33         ` Ish Rattan
  2002-06-27  9:59         ` Lucio De Re
  2002-06-27 15:36       ` Douglas A. Gwyn
  1 sibling, 2 replies; 88+ messages in thread
From: Andrew Stitt @ 2002-06-27  9:17 UTC (permalink / raw)
  To: 9fans

On Wed, 26 Jun 2002, Sam wrote:

> > cp is about 60k in plan9 and tar is 80k in plan9. cp on 3 seperate unix
> > machines (linux, SCO unix, os-X) in its overcomplicated copying directory
> > tree glory is under 30k, on sunOS it happens to be 17k. to tar then untar
> > requires two processes using a shared 80k tar, plus some intermediate data
> > to archive and process the data, then immediatly reverse this process. cp
> > _could_ be written to do all this in 17k but instead our 60k cp cant do
> > it, and instead we need two entries in the process table and twice the
> > number ofuser space pages.
> I'm sorry, but there must be something wrong with my mail reader.  Are we
> really arguing about 30k?  Am I dreaming?  Are you trying to run plan9
> on a machine with 256k of memory?
>
> ...
I'm arguing about the apparent acceptance of a solution which wastes
resources.  Sure, it's 30k, but you can do quite a bit with 30k, and if
you actually have lots of users, things start to slow down.  The point
I'm trying to make: tar|tar uses more memory and resources than a
recursive copy; I've explained this thoroughly many times.  What's so
hard to grasp here?


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [9fans] dumb question
@ 2002-06-27  1:22 okamoto
  0 siblings, 0 replies; 88+ messages in thread
From: okamoto @ 2002-06-27  1:22 UTC (permalink / raw)
  To: 9fans

[-- Attachment #1: Type: text/plain, Size: 210 bytes --]

I forgot to write the essential point.

For cp -r, I suppose they thought "can't you use bind(1) instead?".
If that doesn't fit your case, it is probably a wrong usage of Plan 9,
and you can still do it with tar...

Kenji


[-- Attachment #2: Type: message/rfc822, Size: 5549 bytes --]

[-- Attachment #2.1.1: Type: text/plain, Size: 338 bytes --]

Please let me say one very important thing. ^_^

If you feel something is missing in Plan 9, you'd better think about why
they eliminated it, particularly when it's related to basic unix commands.
I don't think there is anything they eliminated carelessly in Plan 9.
I'm saying this from many such experiences of my own.  :-)

Kenji


[-- Attachment #2.1.2: Type: message/rfc822, Size: 3269 bytes --]

From: Andrew Stitt <astitt@cats.ucsc.edu>
To: 9fans@cse.psu.edu
Subject: Re: [9fans] dumb question
Date: Wed, 26 Jun 2002 17:41:06 GMT
Message-ID: <Pine.SOL.3.96.1020626101934.26964C-100000@teach.ic.ucsc.edu>

On Wed, 26 Jun 2002, Nigel Roles wrote:

> On Wed, 26 Jun 2002 08:41:06 GMT, Andrew Stitt wrote:
>
> >On Tue, 25 Jun 2002, Fco.J.Ballesteros wrote:
> >
> >> Again, from rsc tiny tools :-)
> >>
> >> ; cat /bin/dircp
> >> #!/bin/rc
> >>
> >> switch($#*){
> >> case 2
> >> 	@{cd $1 && tar c .}|@{cd $2 && tar x}
> >> case *
> >> 	echo usage: dircp from to >[1=2]
> >> }
> >>
> >>
> >why must i needlessly shove all the files into a tar, then unpack them
> >again? thats incredibly inefficient! that uses roughly twice the space
> >that should be required, it has to copy the files twice, and it has the
> >overhead of having to needless run the data through tar. Is there a better
> >solution to this?
> >      Andrew
>
> This does not use any more space. The tar commands are piped together.
> I doubt a specific cp -r type command would be particularly more efficient.
>
>
>
>
>
i beg to differ: tar uses memory, it uses system resources; I fail to see
how you think this is just as good as recursively copying files.  The
point is I shouldn't have to needlessly use this other program (for Tape
ARchives) to copy directories.  If this is on a fairly busy file server,
needlessly running tar twice is simply wasteful and unacceptable when you
could just follow the directory tree.

^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [9fans] dumb question
@ 2002-06-27  1:05 okamoto
  0 siblings, 0 replies; 88+ messages in thread
From: okamoto @ 2002-06-27  1:05 UTC (permalink / raw)
  To: 9fans

[-- Attachment #1: Type: text/plain, Size: 338 bytes --]

Please let me say one very important thing. ^_^

If you feel something is missing in Plan 9, you'd better think about why
they eliminated it, particularly when it's related to basic unix commands.
I don't think there is anything they eliminated carelessly in Plan 9.
I'm saying this from many such experiences of my own.  :-)

Kenji


[-- Attachment #2: Type: message/rfc822, Size: 3269 bytes --]

From: Andrew Stitt <astitt@cats.ucsc.edu>
To: 9fans@cse.psu.edu
Subject: Re: [9fans] dumb question
Date: Wed, 26 Jun 2002 17:41:06 GMT
Message-ID: <Pine.SOL.3.96.1020626101934.26964C-100000@teach.ic.ucsc.edu>

On Wed, 26 Jun 2002, Nigel Roles wrote:

> On Wed, 26 Jun 2002 08:41:06 GMT, Andrew Stitt wrote:
>
> >On Tue, 25 Jun 2002, Fco.J.Ballesteros wrote:
> >
> >> Again, from rsc tiny tools :-)
> >>
> >> ; cat /bin/dircp
> >> #!/bin/rc
> >>
> >> switch($#*){
> >> case 2
> >> 	@{cd $1 && tar c .}|@{cd $2 && tar x}
> >> case *
> >> 	echo usage: dircp from to >[1=2]
> >> }
> >>
> >>
> >why must i needlessly shove all the files into a tar, then unpack them
> >again? thats incredibly inefficient! that uses roughly twice the space
> >that should be required, it has to copy the files twice, and it has the
> >overhead of having to needless run the data through tar. Is there a better
> >solution to this?
> >      Andrew
>
> This does not use any more space. The tar commands are piped together.
> I doubt a specific cp -r type command would be particularly more efficient.
>
>
>
>
>
i beg to differ: tar uses memory, it uses system resources; I fail to see
how you think this is just as good as recursively copying files.  The
point is I shouldn't have to needlessly use this other program (for Tape
ARchives) to copy directories.  If this is on a fairly busy file server,
needlessly running tar twice is simply wasteful and unacceptable when you
could just follow the directory tree.

^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [9fans] dumb question
  2002-06-27  0:03         ` Micah Stetson
@ 2002-06-27  0:44           ` Micah Stetson
  0 siblings, 0 replies; 88+ messages in thread
From: Micah Stetson @ 2002-06-27  0:44 UTC (permalink / raw)
  To: 9fans

> Adding -z 8192 to the mkfs command line caused everything
> to work better.

I spoke too soon: it makes Remote->Remote work considerably
better, but it slows down Local->Local by around 50%
(making it slower than either cpdir or dircp) and slows
Local->Remote by around 10% (which is still a lot faster
than cpdir).  It would be nice if there were some magic
buffer size that was optimal for everything, but I suppose
that's just fantasy.

Micah



^ permalink raw reply	[flat|nested] 88+ messages in thread

* RE: [9fans] dumb question
  2002-06-26 19:00 Warrier, Sadanand (Sadanand)
@ 2002-06-27  0:13 ` Steve Arons
  0 siblings, 0 replies; 88+ messages in thread
From: Steve Arons @ 2002-06-27  0:13 UTC (permalink / raw)
  To: '9fans@cse.psu.edu'

On RH 7.1:

$ tar --version
tar (GNU tar) 1.13.19
[...]

$ file /bin/tar
/bin/tar: ELF 32-bit LSB executable, Intel 80386, version 1, dynamically
linked (uses shared libs), stripped

$ ls -l /bin/tar
-rwxr-xr-x    1 root	root	150908 Mar 6 2001 /bin/tar


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [9fans] dumb question
  2002-06-26 22:59       ` Chris Hollis-Locke
@ 2002-06-27  0:03         ` Micah Stetson
  2002-06-27  0:44           ` Micah Stetson
  0 siblings, 1 reply; 88+ messages in thread
From: Micah Stetson @ 2002-06-27  0:03 UTC (permalink / raw)
  To: 9fans

> When doing your remote -> remote copy
> try copying between two different mounts of the same
> fileserver so as reads and writes are on separate network
> connections.

The times don't differ significantly.

> you could see if iostats(4) offers a hint.
> quite apart from retaining Plan 9 modes and long names correctly,
> mkfs archives use a file header that's more compact than tar's (1 line per file not
> 512 bytes)

I noticed it was doing about 8x as many reads as writes.
Adding -z 8192 to the mkfs command line caused everything
to work better.  The new time is about 30% faster than
cpdir and seven or eight times faster than the tar version.
Question answered.  Thanks.
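
Concretely, the change amounts to this (reconstructed from the
mkfs/mkext script that appears further down this page; only the -z
flag is new):

	@{cd $1 && disk/mkfs -z 8192 -a -s . <{echo '+'} >[2]/dev/null}|@{cd $2 && disk/mkext -d . >[2]/dev/null}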

Micah



^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [9fans] dumb question
@ 2002-06-26 23:08 forsyth
       [not found] ` <171e27d5f2cfd12a9303117e58818e5b@plan9.bell-labs.com>
  0 siblings, 1 reply; 88+ messages in thread
From: forsyth @ 2002-06-26 23:08 UTC (permalink / raw)
  To: 9fans

you could see if iostats(4) offers a hint.
quite apart from retaining Plan 9 modes and long names correctly,
mkfs archives use a file header that's more compact than tar's
(1 line per file, not 512 bytes).



^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [9fans] dumb question
  2002-06-26 22:07     ` Micah Stetson
@ 2002-06-26 22:59       ` Chris Hollis-Locke
  2002-06-27  0:03         ` Micah Stetson
  0 siblings, 1 reply; 88+ messages in thread
From: Chris Hollis-Locke @ 2002-06-26 22:59 UTC (permalink / raw)
  To: 9fans

When doing your remote -> remote copy,
try copying between two different mounts of the same
fileserver, so that reads and writes are on separate network
connections.
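
One hypothetical way to set that up (assuming the server answers 9P on
the standard port; the machine and directory names are made up):

	# post two connections to the same server, mounted in two places
	srv tcp!cephas!564 cephas1 /n/cephas1
	srv tcp!cephas!564 cephas2 /n/cephas2
	dircp /n/cephas1/foo /n/cephas2/bar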




^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [9fans] dumb question
  2002-06-26 18:27   ` Andrew Stitt
  2002-06-26 17:39     ` Sam
@ 2002-06-26 22:07     ` Micah Stetson
  2002-06-26 22:59       ` Chris Hollis-Locke
  2002-06-27  9:17     ` Ralph Corderoy
  2 siblings, 1 reply; 88+ messages in thread
From: Micah Stetson @ 2002-06-26 22:07 UTC (permalink / raw)
  To: 9fans

> Im sorry if im sounding like a troll, or if it seems like im flaming, but
> seriously, whats up with this? ive no problem anymore with cp being as
> simple as it is, in fact I like that idea, but i hold efficiency high
> also. My soon to be plan 9 network isnt made of superfast zigahertz cpus.
> Alternative OS's are my hobby, that said im not a funded computer lab, i
> do this in my spare time with computers Ive salvaged and inherited. That
> said maybe you wont notice how long it takes on a superfast computer, but
> on older slower computers its unacceptable. Of course if we simply pass
> off the idea of efficiency with the excuse that computers are getting
> faster is no excuse, 300 mhz used to be undreamed of speed, but now we've
> given way to in-efficient, kludged together solutions, and computers
> appear no faster now then they did 10 years ago. anyways, im done ranting.
> tar |tar no matter how you want to argue it is not a particularly
> efficient or simple solution.

In an effort to debunk this paragraph, I ran a few tests.  I
used Kenji's cpdir and rsc's dircp to copy a directory
tree of about 100M both locally and remotely.  I've only run
the tests a couple of times, but the numbers didn't vary much.

Here are the numbers for the few tests I ran.  The local machine
is a 633MHz Celeron running 4th Ed.; the remote is a 120MHz
Pentium running u9fs on Linux (I don't have a real fileserver).
I don't think this network can be accused of using 'zigahertz'
equipment.  8.out is Kenji's cpdir.

Local kfs -> Local kfs
----------------------

term% time 8.out doc foo
0.56u 13.48s 338.82r 	 8.out doc foo  # status= main
term% time 8.out doc foo
0.50u 14.08s 338.56r 	 8.out doc foo  # status= main

term% time dircp doc foo
2.39u 37.37s 316.66r 	 dircp doc foo
term% time dircp doc foo
2.55u 37.41s 322.92r 	 dircp doc foo

Local kfs -> Remote u9fs
------------------------

term% time dircp doc /n/cephas/home/micah/foo
1.45u 27.48s 425.44r 	 dircp doc /n/cephas/home/micah/foo
term% time dircp doc /n/cephas/home/micah/foo
1.11u 27.33s 421.52r 	 dircp doc /n/cephas/home/micah/foo
term% time dircp doc /n/cephas/home/micah/foo
1.58u 26.71s 422.80r 	 dircp doc /n/cephas/home/micah/foo

term% time 8.out doc /n/cephas/home/micah/foo
0.10u 7.72s 119.46r 	 8.out doc /n/cephas/home/micah/foo  # status= main
term% time 8.out doc /n/cephas/home/micah/foo
0.12u 7.75s 117.92r 	 8.out doc /n/cephas/home/micah/foo  # status= main

Remote u9fs -> Remote u9fs
--------------------------

term% time /usr/micah/8.out foo bar
0.17u 6.58s 163.85r 	 /usr/micah/8.out foo bar  # status= main

term% time dircp foo bar
1.43u 33.91s 730.83r 	 dircp foo bar

I was floored.  I haven't any clue why dircp is so slow.  But I
knew the arguments against using archive programs connected with
a pipe were unjustifiable, so I rewrote dircp to use mkfs and
mkext:

#!/bin/rc

switch($#*){
case 2
	# mkfs is too chatty, is there a quiet mode?
	@{cd $1 && disk/mkfs -a -s . <{echo '+'} >[2]/dev/null}|@{cd $2 && disk/mkext -d . >[2]/dev/null}
case *
	echo usage: dircp from to >[1=2]
}

And here are my results with this (calling it mydircp):

Local kfs -> Local kfs
----------------------
term% time mydircp doc foo
3.44u 19.44s 240.70r 	 mydircp doc foo

That's about 30% faster than cpdir (real time).

Remote u9fs -> Remote u9fs
--------------------------
term% time mydircp foo bar
2.23u 13.54s 273.04r 	 mydircp foo bar

This one's 66% slower than cpdir (I don't know why), but it's a good
seven and a half MINUTES faster than the tar version.

Local kfs -> Remote u9fs
------------------------
term% time mydircp doc /n/cephas/home/micah/foo
2.41u 12.14s 68.08r 	 mydircp doc /n/cephas/home/micah/foo
term% time mydircp doc /n/cephas/home/micah/foo
2.70u 12.55s 67.56r 	 mydircp doc /n/cephas/home/micah/foo

Here's the gem.  I think this takes real advantage of having
two communicating processes.  It's nearly twice as fast as
cpdir and six or seven times as fast as dircp.

My question is, is there a reason to use tar over mkfs/mkext, and
why is tar so slow?  Also, does anyone have an explanation for
why Remote->Remote is slower with mydircp while Local->Remote is
so much faster?
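
(For anyone who wants to reproduce these runs, a hypothetical way to
install the script above, with the source path made up, would be

	term% cp /tmp/script /bin/mydircp
	term% chmod +x /bin/mydircp

after which it can be invoked as shown.)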

Micah



^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [9fans] dumb question
@ 2002-06-26 19:04 rog
  0 siblings, 0 replies; 88+ messages in thread
From: rog @ 2002-06-26 19:04 UTC (permalink / raw)
  To: 9fans

> cp is about 60k in plan9 and tar is 80k in plan9.  cp on 3 seperate
> unix machines (linux, SCO unix, os-X) in its overcomplicated copying
> directory tree glory is under 30k, on sunOS it happens to be 17k.

this is a little over-harsh on plan 9, i think.

to start with, none of the plan 9 binaries are stripped, which takes
cp and tar down to 40K and 50K respectively. also, all plan 9 binaries
are statically linked. a look at a nearby FreeBSD box gives cp at
65K statically linked and stripped, which seems like a fairer comparison.
(remembering that dynamic libraries impose their own overhead too,
when starting up a program).

mind you, a quick look at the 3rd edition cp gives 15K stripped,
and the cp source hasn't changed that much. i wonder what's using
that extra 20K of text. i suspect the new print library.
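
for anyone repeating the measurement, the sizes quoted above can be
checked directly (386 binaries assumed; substitute your architecture):

	; ls -l /386/bin/cp /386/bin/tar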

  cheers,
    rog.



^ permalink raw reply	[flat|nested] 88+ messages in thread

* RE: [9fans] dumb question
@ 2002-06-26 19:00 Warrier, Sadanand (Sadanand)
  2002-06-27  0:13 ` Steve Arons
  0 siblings, 1 reply; 88+ messages in thread
From: Warrier, Sadanand (Sadanand) @ 2002-06-26 19:00 UTC (permalink / raw)
  To: '9fans@cse.psu.edu'

On my solaris machine

ls -la /usr/sbin/tar

-r-xr-xr-x   1  bin   bin 63544 Nov 24 1998 /usr/sbin/tar*

ls -la /opt/exp/gnu/bin/tar

-rwxrwxr-x   1  exptools  software 161328  Jul  4  2000
/opt/exp/gnu/bin/tar*

S

-----Original Message-----
From: forsyth@caldo.demon.co.uk [mailto:forsyth@caldo.demon.co.uk]
Sent: Wednesday, June 26, 2002 2:41 PM
To: 9fans@cse.psu.edu
Subject: Re: [9fans] dumb question


> cp is about 60k in plan9 and tar is 80k in plan9. cp on 3 seperate unix
> machines (linux, SCO unix, os-X) in its overcomplicated copying directory
> tree glory is under 30k, on sunOS it happens to be 17k. to tar then untar
> requires two processes using a shared 80k tar, plus some intermediate data
> to archive and process the data, then immediatly reverse this process. cp
> _could_ be written to do all this in 17k but instead our 60k cp cant do
> it, and instead we need two entries in the process table and twice the
> number ofuser space pages.

there's a useful point to be made here with respect to other such
comparisons:
you're overlooking the cost on the other systems of the (usually)
large memory space consumed by the dynamically-linked shared libraries.
that's why their cp seems `small' and plan9's seems `large'.  with plan 9,
cp really does take 60k (code, plus data+stack), on the others, it's
anyone's guess: the 17k is the tip of the iceberg.


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [9fans] dumb question
@ 2002-06-26 18:41 forsyth
  0 siblings, 0 replies; 88+ messages in thread
From: forsyth @ 2002-06-26 18:41 UTC (permalink / raw)
  To: 9fans

> cp is about 60k in plan9 and tar is 80k in plan9. cp on 3 seperate unix
> machines (linux, SCO unix, os-X) in its overcomplicated copying directory
> tree glory is under 30k, on sunOS it happens to be 17k. to tar then untar
> requires two processes using a shared 80k tar, plus some intermediate data
> to archive and process the data, then immediatly reverse this process. cp
> _could_ be written to do all this in 17k but instead our 60k cp cant do
> it, and instead we need two entries in the process table and twice the
> number ofuser space pages.

there's a useful point to be made here with respect to other such comparisons:
you're overlooking the cost on the other systems of the (usually)
large memory space consumed by the dynamically-linked shared libraries.
that's why their cp seems `small' and plan9's seems `large'.  with plan 9,
cp really does take 60k (code, plus data+stack), on the others, it's
anyone's guess: the 17k is the tip of the iceberg.
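
on the unix side the hidden part is easy to glimpse, for example (the
library list and numbers will of course vary from machine to machine):

	$ ldd /bin/cp    # shared libraries cp pulls in at run time
	$ size /bin/cp   # text/data/bss of the binary itself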



^ permalink raw reply	[flat|nested] 88+ messages in thread

* RE: [9fans] dumb question
@ 2002-06-26 18:40 David Gordon Hogan
  0 siblings, 0 replies; 88+ messages in thread
From: David Gordon Hogan @ 2002-06-26 18:40 UTC (permalink / raw)
  To: 9fans

> do you plan on doing this a lot?

I hope not.



^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [9fans] dumb question
       [not found] ` <171e27d5f2cfd12a9303117e58818e5b@plan9.bell-labs.com>
@ 2002-06-26 18:27   ` Andrew Stitt
  2002-06-26 17:39     ` Sam
                       ` (2 more replies)
  0 siblings, 3 replies; 88+ messages in thread
From: Andrew Stitt @ 2002-06-26 18:27 UTC (permalink / raw)
  To: rob pike, esq.; +Cc: 9fans

On Wed, 26 Jun 2002, rob pike, esq. wrote:

> > i beg to differ, tar uses memory, it uses system resources, i fail to see
> > how you think this is just as good as just recursively copying files. The
> > point is I shouldnt have to needlessly use this other program (for Tape
> > ARchives) to copy directorys. If this is on a fairly busy file server
> > needlessly running tar twice is simply wasteful and unacceptable when you
> > could just follow the directory tree.
>
> The command
> 	tar c ... | tar x ...
> will have almost identical cost to any explicit recursive copy, since
> it does the same amount of work.  The only extra overhead is writing
> to and reading from the pipe, which is so cheap - and entirely local -
> that it is insignificant.
>
> Whether you cp or tar, you must read the files from one tree and write
> them to another.
>
> -rob
>

no, there is the extra overhead of running tar to archive, then dearchive,
the directory tree. that uses extra memory and cpu time. anyone who thinks
that archiving and dearchiving is just as efficient as a sector-for-sector
(or even byte-for-byte) copy is simply wrong. if _nothing_ else, EVEN if
tar were somehow optimized to work as fast as cp, tar is larger than cp,
and it wasn't designed for copying directory trees; it's an archiver.

cp is about 60k in plan9 and tar is 80k in plan9. cp on 3 separate unix
machines (linux, SCO unix, os-X), in its overcomplicated
directory-tree-copying glory, is under 30k; on sunOS it happens to be
17k. to tar then untar requires two processes sharing an 80k tar, plus
some intermediate data to archive and process, then immediately reverse
this process. cp _could_ be written to do all this in 17k, but instead
our 60k cp can't do it, and instead we need two entries in the process
table and twice the number of user-space pages.

I'm sorry if I'm sounding like a troll, or if it seems like I'm flaming,
but seriously, what's up with this? I've no problem anymore with cp being
as simple as it is; in fact I like that idea, but I hold efficiency high
also. My soon-to-be Plan 9 network isn't made of superfast zigahertz cpus.
Alternative OS's are my hobby; that said, I'm not a funded computer lab, I
do this in my spare time with computers I've salvaged and inherited. Maybe
you won't notice how long it takes on a superfast computer, but on older,
slower computers it's unacceptable. Brushing off efficiency with the
excuse that computers are getting faster is no excuse: 300 MHz used to be
undreamed-of speed, but we've since given way to inefficient,
kludged-together solutions, and computers appear no faster now than they
did 10 years ago. Anyway, I'm done ranting. tar|tar, no matter how you
want to argue it, is not a particularly efficient or simple solution.



^ permalink raw reply	[flat|nested] 88+ messages in thread

* RE: [9fans] dumb question
@ 2002-06-26 17:53 rog
  0 siblings, 0 replies; 88+ messages in thread
From: rog @ 2002-06-26 17:53 UTC (permalink / raw)
  To: 9fans

> possibly, keep in mind its a complete waste of file server resources to
> repeatedly tar/untar a directory tree, all i want is the ability to copy a
> directory tree without having to use a program that wasnt really designed
> for that anyways.

actually it's quite possible that the tar pipeline is *more* efficient
than a traditionally implemented cp -r, as it's doing reading and
writing in parallel.

i wouldn't be surprised to see the tar pipeline win out over cp -r
when copying between different disks.

the most valid objection is that tar doesn't cope with the new, longer
filenames in plan 9 (if that's true).  that could be annoying on
occasion.

  rog.



^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [9fans] dumb question
  2002-06-26  9:14     ` Nigel Roles
@ 2002-06-26 17:41       ` Andrew Stitt
  2002-06-26 16:08         ` Ish Rattan
  0 siblings, 1 reply; 88+ messages in thread
From: Andrew Stitt @ 2002-06-26 17:41 UTC (permalink / raw)
  To: 9fans

On Wed, 26 Jun 2002, Nigel Roles wrote:

> On Wed, 26 Jun 2002 08:41:06 GMT, Andrew Stitt wrote:
>
> >On Tue, 25 Jun 2002, Fco.J.Ballesteros wrote:
> >
> >> Again, from rsc tiny tools :-)
> >>
> >> ; cat /bin/dircp
> >> #!/bin/rc
> >>
> >> switch($#*){
> >> case 2
> >> 	@{cd $1 && tar c .}|@{cd $2 && tar x}
> >> case *
> >> 	echo usage: dircp from to >[1=2]
> >> }
> >>
> >>
> >why must i needlessly shove all the files into a tar, then unpack them
> >again? thats incredibly inefficient! that uses roughly twice the space
> >that should be required, it has to copy the files twice, and it has the
> >overhead of having to needless run the data through tar. Is there a better
> >solution to this?
> >      Andrew
>
> This does not use any more space. The tar commands are piped together.
> I doubt a specific cp -r type command would be particularly more efficient.
>
>
>
>
>
i beg to differ: tar uses memory, it uses system resources; I fail to see
how you think this is just as good as recursively copying files.  The
point is I shouldn't have to needlessly use this other program (for Tape
ARchives) to copy directories.  If this is on a fairly busy file server,
needlessly running tar twice is simply wasteful and unacceptable when you
could just follow the directory tree.


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [9fans] dumb question
  2002-06-26 18:27   ` Andrew Stitt
@ 2002-06-26 17:39     ` Sam
  2002-06-27  9:17       ` Andrew Stitt
  2002-06-27 15:36       ` Douglas A. Gwyn
  2002-06-26 22:07     ` Micah Stetson
  2002-06-27  9:17     ` Ralph Corderoy
  2 siblings, 2 replies; 88+ messages in thread
From: Sam @ 2002-06-26 17:39 UTC (permalink / raw)
  To: 9fans

> cp is about 60k in plan9 and tar is 80k in plan9. cp on 3 seperate unix
> machines (linux, SCO unix, os-X) in its overcomplicated copying directory
> tree glory is under 30k, on sunOS it happens to be 17k. to tar then untar
> requires two processes using a shared 80k tar, plus some intermediate data
> to archive and process the data, then immediatly reverse this process. cp
> _could_ be written to do all this in 17k but instead our 60k cp cant do
> it, and instead we need two entries in the process table and twice the
> number ofuser space pages.
I'm sorry, but there must be something wrong with my mail reader.  Are we
really arguing about 30k?  Am I dreaming?  Are you trying to run plan9
on a machine with 256k of memory?

...

Sam



^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [9fans] dumb question
  2002-06-26  8:45 ` arisawa
@ 2002-06-26 17:36   ` Andrew Stitt
  0 siblings, 0 replies; 88+ messages in thread
From: Andrew Stitt @ 2002-06-26 17:36 UTC (permalink / raw)
  To: 9fans

Thank you, this is very helpful.

On Wed, 26 Jun 2002 arisawa@ar.aichi-u.ac.jp wrote:

> Hello,
>
> >ok so im totally stumped on this: how do i copy a directory?
> >cp can copy files only, but has no nifty recursive option,
> >what do i do? thanks
> >                 Andrew
>
> look at http://plan9.aichi-u.ac.jp/netlib/cmd/cpdir/
>
> The following is a part of README file.
>
> Kenji Arisawa
> E-mail: arisawa@aichi-u.ac.jp
>
>
> Usage: cpdir [-vugRm]  [-l list] source destination [path ...]
> Options and arguments:
>         -v: verbose
>         -u: copy owner info
>         -g: copy group info
>         -R: remove redundant files and directories
>         -m: merge
>         -l list:
>                 path list to be processed. path list must be a path
> per line.
>         source:
>                 source dir from which copy is made.
>         destination:
>                 destination dir in which copy is made.
>         path ... : pathes in source to be processed.
>
> Function:
>         `cpdir' makes effort to make a copy of source/path
>         in destination/path.
>         `cpdir' copies only those files that are out of date.
>         `cpdir' tries to preserve the following info.:
>         - contents (judged from modification time)
>         - modification time
>         - permissions
>         - group (under -g option)
>         - owner if possible. (under -u option)
>         and under -R option, `cpdir' removes files and directories
>         in destination/path
>         that are non-existent in source/path.
>
>



^ permalink raw reply	[flat|nested] 88+ messages in thread

* RE: [9fans] dumb question
  2002-06-26  8:55 Stephen Parker
@ 2002-06-26 17:18 ` Andrew Stitt
  0 siblings, 0 replies; 88+ messages in thread
From: Andrew Stitt @ 2002-06-26 17:18 UTC (permalink / raw)
  To: 9fans

possibly.  keep in mind it's a complete waste of file server resources to
repeatedly tar/untar a directory tree; all I want is the ability to copy a
directory tree without having to use a program that wasn't really designed
for that anyway.

On Wed, 26 Jun 2002, Stephen Parker wrote:

> do you plan on doing this a lot?
>
> --
> Stephen Parker.  Pi Technology.  +44 (0)1223 203438.
>
> > why must i needlessly shove all the files into a tar, then unpack them
> > again? thats incredibly inefficient! that uses roughly twice the space
> > that should be required, it has to copy the files twice, and
> > it has the
> > overhead of having to needless run the data through tar. Is
> > there a better
> > solution to this?
>
>
>



^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [9fans] dumb question
  2002-06-26 17:41       ` Andrew Stitt
@ 2002-06-26 16:08         ` Ish Rattan
  0 siblings, 0 replies; 88+ messages in thread
From: Ish Rattan @ 2002-06-26 16:08 UTC (permalink / raw)
  To: 9fans

On Wed, 26 Jun 2002, Andrew Stitt wrote:

> On Wed, 26 Jun 2002, Nigel Roles wrote:
>
> > On Wed, 26 Jun 2002 08:41:06 GMT, Andrew Stitt wrote:
> >
> > >On Tue, 25 Jun 2002, Fco.J.Ballesteros wrote:
> > >
> > >> Again, from rsc tiny tools :-)
> > >>
> > >> ; cat /bin/dircp
> > >> #!/bin/rc
> > >>
> > >> switch($#*){
> > >> case 2
> > >> 	@{cd $1 && tar c .}|@{cd $2 && tar x}
> > >> case *
> > >> 	echo usage: dircp from to >[1=2]
> > >> }
> > >>
> > >>
> > >why must i needlessly shove all the files into a tar, then unpack them
> > >again? thats incredibly inefficient! that uses roughly twice the space
> > >that should be required, it has to copy the files twice, and it has the
> > >overhead of having to needless run the data through tar. Is there a better
> > >solution to this?
> > >      Andrew
> >
> > This does not use any more space. The tar commands are piped together.
> > I doubt a specific cp -r type command would be particularly more efficient.
> >
> >
> >
> >
> >
> i beg to differ, tar uses memory, it uses system resources, i fail to see
Keep it up :-)

-ishwar



^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [9fans] dumb question
  2002-06-26  8:41   ` Andrew Stitt
@ 2002-06-26  9:14     ` Nigel Roles
  2002-06-26 17:41       ` Andrew Stitt
  0 siblings, 1 reply; 88+ messages in thread
From: Nigel Roles @ 2002-06-26  9:14 UTC (permalink / raw)
  To: 9fans

On Wed, 26 Jun 2002 08:41:06 GMT, Andrew Stitt wrote:

>On Tue, 25 Jun 2002, Fco.J.Ballesteros wrote:
>
>> Again, from rsc tiny tools :-)
>>
>> ; cat /bin/dircp
>> #!/bin/rc
>>
>> switch($#*){
>> case 2
>> 	@{cd $1 && tar c .}|@{cd $2 && tar x}
>> case *
>> 	echo usage: dircp from to >[1=2]
>> }
>>
>>
>why must i needlessly shove all the files into a tar, then unpack them
>again? thats incredibly inefficient! that uses roughly twice the space
>that should be required, it has to copy the files twice, and it has the
>overhead of having to needless run the data through tar. Is there a better
>solution to this?
>      Andrew

This does not use any more space. The tar commands are piped together.
I doubt a specific cp -r type command would be particularly more efficient.





^ permalink raw reply	[flat|nested] 88+ messages in thread

* RE: [9fans] dumb question
@ 2002-06-26  8:55 Stephen Parker
  2002-06-26 17:18 ` Andrew Stitt
  0 siblings, 1 reply; 88+ messages in thread
From: Stephen Parker @ 2002-06-26  8:55 UTC (permalink / raw)
  To: '9fans@cse.psu.edu'

do you plan on doing this a lot?

--
Stephen Parker.  Pi Technology.  +44 (0)1223 203438.

> why must i needlessly shove all the files into a tar, then unpack them
> again? thats incredibly inefficient! that uses roughly twice the space
> that should be required, it has to copy the files twice, and
> it has the
> overhead of having to needless run the data through tar. Is
> there a better
> solution to this?



^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [9fans] dumb question
  2002-06-25 16:48 Andrew Stitt
@ 2002-06-26  8:45 ` arisawa
  2002-06-26 17:36   ` Andrew Stitt
  0 siblings, 1 reply; 88+ messages in thread
From: arisawa @ 2002-06-26  8:45 UTC (permalink / raw)
  To: 9fans

Hello,

>ok so im totally stumped on this: how do i copy a directory?
>cp can copy files only, but has no nifty recursive option,
>what do i do? thanks
>                 Andrew

look at http://plan9.aichi-u.ac.jp/netlib/cmd/cpdir/

The following is a part of README file.

Kenji Arisawa
E-mail: arisawa@aichi-u.ac.jp


Usage: cpdir [-vugRm]  [-l list] source destination [path ...]
Options and arguments:
        -v: verbose
        -u: copy owner info
        -g: copy group info
        -R: remove redundant files and directories
        -m: merge
        -l list:
                path list to be processed. path list must be a path
per line.
        source:
                source dir from which copy is made.
        destination:
                destination dir in which copy is made.
        path ... : paths in source to be processed.

Function:
        `cpdir' makes an effort to make a copy of source/path
        in destination/path.
        `cpdir' copies only those files that are out of date.
        `cpdir' tries to preserve the following info.:
        - contents (judged from modification time)
        - modification time
        - permissions
        - group (under -g option)
        - owner if possible. (under -u option)
        and under -R option, `cpdir' removes files and directories
        in destination/path
        that are non-existent in source/path.
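
A minimal session, with hypothetical directory names:

	term% cpdir -v -g fromdir todir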


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [9fans] dumb question
  2002-06-25 17:06 ` Fco.J.Ballesteros
  2002-06-25 18:01   ` Scott Schwartz
@ 2002-06-26  8:41   ` Andrew Stitt
  2002-06-26  9:14     ` Nigel Roles
  1 sibling, 1 reply; 88+ messages in thread
From: Andrew Stitt @ 2002-06-26  8:41 UTC (permalink / raw)
  To: 9fans

On Tue, 25 Jun 2002, Fco.J.Ballesteros wrote:

> Again, from rsc tiny tools :-)
>
> ; cat /bin/dircp
> #!/bin/rc
>
> switch($#*){
> case 2
> 	@{cd $1 && tar c .}|@{cd $2 && tar x}
> case *
> 	echo usage: dircp from to >[1=2]
> }
>
>
why must I needlessly shove all the files into a tar, then unpack them
again? that's incredibly inefficient! it uses roughly twice the space
that should be required, it has to copy the files twice, and it has the
overhead of needlessly running the data through tar. Is there a better
solution to this?
      Andrew


^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [9fans] dumb question
  2002-06-25 17:06 ` Fco.J.Ballesteros
@ 2002-06-25 18:01   ` Scott Schwartz
  2002-06-26  8:41   ` Andrew Stitt
  1 sibling, 0 replies; 88+ messages in thread
From: Scott Schwartz @ 2002-06-25 18:01 UTC (permalink / raw)
  To: 9fans

> 	@{cd $1 && tar c .}|@{cd $2 && tar x}

What about path length limits in tar?



^ permalink raw reply	[flat|nested] 88+ messages in thread

* Re: [9fans] dumb question
@ 2002-06-25 17:06 ` Fco.J.Ballesteros
  2002-06-25 18:01   ` Scott Schwartz
  2002-06-26  8:41   ` Andrew Stitt
  0 siblings, 2 replies; 88+ messages in thread
From: Fco.J.Ballesteros @ 2002-06-25 17:06 UTC (permalink / raw)
  To: 9fans

Again, from rsc tiny tools :-)

; cat /bin/dircp
#!/bin/rc

switch($#*){
case 2
	@{cd $1 && tar c .}|@{cd $2 && tar x}
case *
	echo usage: dircp from to >[1=2]
}
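
Usage is then simply (directory names hypothetical):

	; dircp fromdir todir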


^ permalink raw reply	[flat|nested] 88+ messages in thread

* [9fans] dumb question
@ 2002-06-25 16:48 Andrew Stitt
  2002-06-26  8:45 ` arisawa
  0 siblings, 1 reply; 88+ messages in thread
From: Andrew Stitt @ 2002-06-25 16:48 UTC (permalink / raw)
  To: 9fans

ok, so I'm totally stumped on this: how do I copy a directory? cp can copy
files only, but has no nifty recursive option.  what do I do? thanks
                 Andrew


^ permalink raw reply	[flat|nested] 88+ messages in thread

end of thread, other threads:[~2003-07-15  3:23 UTC | newest]

Thread overview: 88+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2002-06-26 17:45 [9fans] dumb question rob pike, esq.
2002-06-27  9:16 ` Andrew Stitt
2002-06-27  9:17 ` Douglas A. Gwyn
2002-06-26 19:53   ` arisawa
2002-06-27 11:43   ` Ralph Corderoy
2002-06-27 11:04     ` Sam
  -- strict thread matches above, loose matches on Subject: below --
2003-07-15  0:37 [9fans] Dumb question Dan Cross
2003-07-15  3:23 ` boyd, rounin
2002-07-05  9:43 [9fans] dumb question Geoff Collyer
2002-07-01 11:03 rog
2002-07-02  8:53 ` Douglas A. Gwyn
     [not found] <rob@plan9.bell-labs.com>
2002-06-28 16:49 ` rob pike, esq.
2002-06-29  2:23   ` Scott Schwartz
2002-06-28 16:39 rob pike, esq.
2002-06-28 16:31 rog
2002-06-28 16:14 rob pike, esq.
2002-06-28 14:10 rob pike, esq.
2002-06-28 15:42 ` Douglas A. Gwyn
2002-06-28 14:08 rog
2002-06-28  7:52 Fco.J.Ballesteros
2002-06-27 22:59 Geoff Collyer
2002-06-28  4:32 ` Lucio De Re
2002-06-27 18:38 forsyth
2002-06-28  8:45 ` Douglas A. Gwyn
2002-06-27 17:07 rog
2002-06-27 17:07 forsyth
2002-06-27 16:33 ` Sam
2002-06-27 14:43 Richard Miller
2002-06-27 14:33 forsyth
2002-06-27 13:20 rob pike, esq.
2002-07-05  9:07 ` Maarit Maliniemi
2002-06-27 13:19 rob pike, esq.
2002-06-27 13:21 ` david presotto
2002-06-27 15:40 ` Douglas A. Gwyn
2002-06-27 12:27 rog
2002-06-27 12:12 Fco.J.Ballesteros
2002-06-27 12:06 Fco.J.Ballesteros
2002-06-27 11:13 ` Sam
2002-06-27 11:15   ` Sam
2002-06-27 10:29 nigel
2002-06-27  9:44 Stephen Parker
2002-06-27  9:43 Fco.J.Ballesteros
2002-06-27  9:36 Fco.J.Ballesteros
2002-06-27  1:22 okamoto
2002-06-27  1:05 okamoto
2002-06-26 23:08 forsyth
     [not found] ` <171e27d5f2cfd12a9303117e58818e5b@plan9.bell-labs.com>
2002-06-26 18:27   ` Andrew Stitt
2002-06-26 17:39     ` Sam
2002-06-27  9:17       ` Andrew Stitt
2002-06-27  9:33         ` Ish Rattan
2002-06-27  9:59         ` Lucio De Re
2002-06-27 10:05           ` Boyd Roberts
2002-06-27 10:21             ` Lucio De Re
2002-06-27 15:40           ` Douglas A. Gwyn
2002-06-27 16:57             ` Lucio De Re
2002-06-28  8:45               ` Douglas A. Gwyn
2002-06-28  9:31               ` Boyd Roberts
2002-06-28 14:30                 ` Douglas A. Gwyn
2002-06-28 15:06                   ` Boyd Roberts
2002-07-01  9:46                   ` Ralph Corderoy
2002-06-27 15:36       ` Douglas A. Gwyn
2002-06-27 15:12         ` Sam
2002-06-28  8:44           ` Douglas A. Gwyn
2002-06-26 22:07     ` Micah Stetson
2002-06-26 22:59       ` Chris Hollis-Locke
2002-06-27  0:03         ` Micah Stetson
2002-06-27  0:44           ` Micah Stetson
2002-06-27  9:17     ` Ralph Corderoy
2002-06-27  9:48       ` Boyd Roberts
2002-06-27 10:07         ` John Murdie
2002-06-27 15:40         ` Douglas A. Gwyn
2002-06-26 19:04 rog
2002-06-26 19:00 Warrier, Sadanand (Sadanand)
2002-06-27  0:13 ` Steve Arons
2002-06-26 18:41 forsyth
2002-06-26 18:40 David Gordon Hogan
2002-06-26 17:53 rog
2002-06-26  8:55 Stephen Parker
2002-06-26 17:18 ` Andrew Stitt
     [not found] <nemo@plan9.escet.urjc.es>
2002-06-25 17:06 ` Fco.J.Ballesteros
2002-06-25 18:01   ` Scott Schwartz
2002-06-26  8:41   ` Andrew Stitt
2002-06-26  9:14     ` Nigel Roles
2002-06-26 17:41       ` Andrew Stitt
2002-06-26 16:08         ` Ish Rattan
2002-06-25 16:48 Andrew Stitt
2002-06-26  8:45 ` arisawa
2002-06-26 17:36   ` Andrew Stitt
