* RE: [9fans] dumb question
@ 2002-06-26 8:55 Stephen Parker
2002-06-26 17:18 ` Andrew Stitt
0 siblings, 1 reply; 88+ messages in thread
From: Stephen Parker @ 2002-06-26 8:55 UTC (permalink / raw)
To: '9fans@cse.psu.edu'
do you plan on doing this a lot?
--
Stephen Parker. Pi Technology. +44 (0)1223 203438.
> why must i needlessly shove all the files into a tar, then unpack them
> again? thats incredibly inefficient! that uses roughly twice the space
> that should be required, it has to copy the files twice, and
> it has the
> overhead of having to needless run the data through tar. Is
> there a better
> solution to this?
^ permalink raw reply [flat|nested] 88+ messages in thread
* RE: [9fans] dumb question
2002-06-26 8:55 [9fans] dumb question Stephen Parker
@ 2002-06-26 17:18 ` Andrew Stitt
0 siblings, 0 replies; 88+ messages in thread
From: Andrew Stitt @ 2002-06-26 17:18 UTC (permalink / raw)
To: 9fans
possibly. keep in mind it's a complete waste of file server resources to
repeatedly tar/untar a directory tree; all i want is the ability to copy a
directory tree without having to use a program that wasn't really designed
for that anyway.
On Wed, 26 Jun 2002, Stephen Parker wrote:
> do you plan on doing this a lot?
>
> --
> Stephen Parker. Pi Technology. +44 (0)1223 203438.
>
> > why must i needlessly shove all the files into a tar, then unpack them
> > again? thats incredibly inefficient! that uses roughly twice the space
> > that should be required, it has to copy the files twice, and
> > it has the
> > overhead of having to needless run the data through tar. Is
> > there a better
> > solution to this?
>
>
>
* [9fans] Dumb question.
@ 2003-07-15 0:37 Dan Cross
2003-07-15 3:23 ` boyd, rounin
0 siblings, 1 reply; 88+ messages in thread
From: Dan Cross @ 2003-07-15 0:37 UTC (permalink / raw)
To: 9fans
Don't shoot me.
Does a Java implementation of 9p2000 exist?
- Dan C.
(Don't ask.)
* Re: [9fans] dumb question
@ 2002-07-05 9:43 Geoff Collyer
0 siblings, 0 replies; 88+ messages in thread
From: Geoff Collyer @ 2002-07-05 9:43 UTC (permalink / raw)
To: 9fans
> i have also damaged whole file trees by doing
> 'tar cf adir - | {cd /missspelled/dir; tar xf -}'
I did too, a long time ago, and the lesson I drew is: you shouldn't
use "cd directory; tar", you should write
cd directory && tar
instead. In shell scripts, the shell will generally exit if cd fails, but
obviously interactive shells don't.
On Plan 9 at least, you can avoid destroying the archive (at least by
confusing "c" and "x") by not using the "f" key letter and letting tar
read standard input and write standard output, thus:
tar c >tarfile afile
tar x <tarfile afile
(Performance is a little poorer than using the "f" key letter, but
correctness seems more important here.) To destroy the archive, you
have to get "<" and ">" mixed up, which seems less likely. If it's
that much of a problem, protect your archives ("chmod a-w tarfile").
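A quick POSIX sh illustration of why the "&&" matters (a sketch; the same
reasoning applies in rc):

```shell
# With ';' the second command runs even when cd fails, so a typo in
# the directory name means tar runs in the wrong place.
# With '&&' the failed cd stops the whole command.
cd /no/such/dir 2>/dev/null; echo "after ';':  this still runs, in $(pwd)"
cd /no/such/dir 2>/dev/null && echo "after '&&': this never runs"
echo done
```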
* Re: [9fans] dumb question
@ 2002-07-01 11:03 rog
2002-07-02 8:53 ` Douglas A. Gwyn
0 siblings, 1 reply; 88+ messages in thread
From: rog @ 2002-07-01 11:03 UTC (permalink / raw)
To: 9fans
> It strikes me that the arguments people are making against having
> blanks in file names are the same arguments I heard against converting
> Plan 9 to Unicode and UTF
the benefits of converting to utf-8 throughout were obvious. does the
ability to type ' ' rather than (say) ALT-' ' really confer comparable
advantages?
* Re: [9fans] dumb question
2002-07-01 11:03 rog
@ 2002-07-02 8:53 ` Douglas A. Gwyn
0 siblings, 0 replies; 88+ messages in thread
From: Douglas A. Gwyn @ 2002-07-02 8:53 UTC (permalink / raw)
To: 9fans
rog@vitanuova.com wrote:
> the benefits of converting to utf-8 throughout were obvious. does the
> ability to type ' ' rather than (say) ALT-' ' really confer comparable
> advantages?
The benefits of any feature depend on whether you make use of them.
People living in an ASCII world get no particular benefit from UTF8,
and people with spaces in file names (e.g. on a Windows filesystem)
get substantial benefit from having their files handled properly.
[parent not found: <rob@plan9.bell-labs.com>]
* Re: [9fans] dumb question
@ 2002-06-28 16:49 ` rob pike, esq.
2002-06-29 2:23 ` Scott Schwartz
0 siblings, 1 reply; 88+ messages in thread
From: rob pike, esq. @ 2002-06-28 16:49 UTC (permalink / raw)
To: 9fans
It strikes me that the arguments people are making against having
blanks in file names are the same arguments I heard against converting
Plan 9 to Unicode and UTF: too hard, too much code to change, too many
symmetries broken. But there's no way this problem is as hard as that
conversion, and we handled that one just fine. All that's missing is
a concerted effort to push ahead, plus the will to do so. Neither
seems to be forthcoming, though. Oh well.
-rob
* Re: [9fans] dumb question
2002-06-28 16:49 ` rob pike, esq.
@ 2002-06-29 2:23 ` Scott Schwartz
0 siblings, 0 replies; 88+ messages in thread
From: Scott Schwartz @ 2002-06-29 2:23 UTC (permalink / raw)
To: 9fans
Rob says:
| conversion, and we handled that one just fine. All that's missing is
| a concerted effort to push ahead, plus the will to do so. Neither
| seems to be forthcoming, though. Oh well.
I suspect it's more the case that people need time to roll
the ideas around in their heads. You guys wrote a nice paper
that pretty clearly showed how to do utf8 and make it work,
but still it took a while to catch on elsewhere.
* Re: [9fans] dumb question
@ 2002-06-28 16:39 rob pike, esq.
0 siblings, 0 replies; 88+ messages in thread
From: rob pike, esq. @ 2002-06-28 16:39 UTC (permalink / raw)
To: 9fans
> diffs to dosfs? :-)
Putting your fingers in your ears and humming does not constitute
a viable solution to an enduring problem.
-rob
* Re: [9fans] dumb question
@ 2002-06-28 16:31 rog
0 siblings, 0 replies; 88+ messages in thread
From: rog @ 2002-06-28 16:31 UTC (permalink / raw)
To: 9fans
doug:
> The question is, what would be a similar mechanism for rc that you'd
> find acceptable?
the same mechanism applies in rc ($ifs rather than $IFS).
but just fixing it in the shell doesn't apply to the multitude of
little nasty things that still need tampering with to work in the
presence of quoted filenames.
whereas before it was possible to just manipulate text irrespective of
whether it was a filename or a word, now you have to unquote filenames
on input and quote on output, so many things that manipulate text will
need to understand rc-style quotes.
in 3e, the internal representation of a filename is the same as its
external representation, and a lot of tools gain simplicity from this
identity transformation.
there are so many things in the system that potentially or actually
break, i really don't see why we can't just transform spaces at the
boundaries of the system (dossrv, ftpfs, et al) and hence be assured
that *nothing* breaks, rather than adding hack after hack to existing
tools.
rob:
> it seems the shell can and should be fixed. i will entertain
> proposals that include code.
diffs to dosfs? :-)
cheers,
rog.
* Re: [9fans] dumb question
@ 2002-06-28 16:14 rob pike, esq.
0 siblings, 0 replies; 88+ messages in thread
From: rob pike, esq. @ 2002-06-28 16:14 UTC (permalink / raw)
To: 9fans
Well, rc has $ifs so you can always use the same familiar trick, but
it's not intellectually satisfying. I have been quietly pushing a
uniform quoting convention through Plan 9, with good library support
(tokenize, %q). The result is that if `{} used tokenize I think that
not only would this specific example work, but the need for hacks like
$ifs could be reduced. If programs consistently quoted their output,
consistency would improve.
I regret blanks in file names, but they're here to stay if you do any
interoperability with Windows. Windows being a royal irritation
doesn't mean that the problems it raises are not more generally
apparent. The quoting stuff in Plan 9 came in for other reasons
having to do with consistent output from commands and devices. If it
also leads to sensible handling of blanks in file names, good for us.
-rob
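The breakage rob demonstrates has a direct analogue in POSIX sh, where
unquoted command substitution splits file names on blanks; a minimal
sketch for illustration (rc's `{} splits the same way):

```shell
# Make a file whose name contains a blank, then watch unquoted
# command substitution break it into two words.
dir=$(mktemp -d)
cd "$dir"
touch 'a b'
for f in $(ls); do                 # unquoted substitution: splits
	echo "word: $f"
done
ls | while IFS= read -r f; do      # reading whole lines keeps it intact
	echo "file: $f"
done
cd / && rm -rf "$dir"
```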
* Re: [9fans] dumb question
@ 2002-06-28 14:10 rob pike, esq.
2002-06-28 15:42 ` Douglas A. Gwyn
0 siblings, 1 reply; 88+ messages in thread
From: rob pike, esq. @ 2002-06-28 14:10 UTC (permalink / raw)
To: 9fans
rm `{ls | grep -v precious}
(which is now broken).
ls quotes its output, so i didn't see why this would fail.
Then I tried it:
rob% date>'a b'
rob% echo `{ls | grep ' '}
'a b'
rob% rm `{ls | grep ' '}
rm: 'a: '''a' No such file or directory
rm: b': 'b''' No such file or directory
rob%
it seems the shell can and should be fixed. i will entertain
proposals that include code.
-rob
* Re: [9fans] dumb question
2002-06-28 14:10 rob pike, esq.
@ 2002-06-28 15:42 ` Douglas A. Gwyn
0 siblings, 0 replies; 88+ messages in thread
From: Douglas A. Gwyn @ 2002-06-28 15:42 UTC (permalink / raw)
To: 9fans
"rob pike, esq." wrote:
> ls quotes its output, ...
> it seems the shell can and should be fixed.
I don't see how to make it consistent with ls quoting.
The root problem seems to be that ` applies parsing
using white-space delimiters both when it is desired
and when it isn't desired. What I think will be
necessary is finer-grained control over "wordization".
In the Bourne shell, that is done by setting IFS.
It would probably suffice to consider newline reserved
as a delimiter; while I have often found spaces in
filenames to be useful, newline in filenames has never,
in my experience, occurred except by accident. So
"IFS='<newline>' sh whatever" should keep space-
containing words as units in a similar usage. The
question is, what would be a similar mechanism for rc
that you'd find acceptable?
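The IFS behaviour described above can be tried directly in a Bourne-style
shell; a minimal sketch (rc's $ifs behaves analogously):

```shell
# Two 'file names', each containing a blank, separated by a newline.
list='my file
your file'

set -- $list                # default IFS: blanks split too -> 4 words
echo "default IFS: $# words"

IFS='
'                           # newline-only IFS
set -- $list                # -> 2 words, blanks preserved
echo "newline IFS: $# words, first is '$1'"
```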
* Re: [9fans] dumb question
@ 2002-06-28 14:08 rog
0 siblings, 0 replies; 88+ messages in thread
From: rog @ 2002-06-28 14:08 UTC (permalink / raw)
To: 9fans
[-- Attachment #1: Type: text/plain, Size: 1813 bytes --]
> From time to time I toy with the idea of implementing a
> fully correct xargs (the Unix one doesn't quote arguments).
if you're guaranteed not to have spaces or newlines in filenames, then
it doesn't matter.
i have a small C xargs (attached) that is probably still correct. (i
think newlines are probably still banned in filenames).
> I wonder if structural r.e.s can express a
> complete and correct quoting algorithm.
well a normal r.e. can express the rc quoting algorithm. the problem is
that you can't just match it, you've got to translate it as well.
(([^ \t']+)|'([^']|\n|'')*')
the inferno shell provides primitives for quoting and unquoting lists,
which are useful at times. (e.g. variables in /env are stored in
quoted form rather than '\0' terminated).
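For illustration, the translation half (wrap in single quotes, double any
embedded quote, i.e. what the r.e. above matches) is easy to apply in
POSIX sh; rcquote here is a hypothetical helper, not an existing command:

```shell
# Quote a string rc-style: surround with '...' and double any
# embedded single quote.
rcquote() {
	printf "'%s'\n" "$(printf '%s' "$1" | sed "s/'/''/g")"
}

rcquote 'a b'       # -> 'a b'
rcquote "don't"     # -> 'don''t'
```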
> what we really need is a convenient way to feed macros to
> sam in a sed-like manner and somebody with the time and
> inclination to develop suitable macros and package it all up.
to be honest there's very little that beats the convenience of:
rm `{ls | grep -v precious}
(which is now broken).
while i'm mentioning the inferno shell, nemo's example works nicely,
as you can pass shell fragments around as words.
e.g.
walk /source {if {ftest -r $1} {echo $1}}
which lets the shell parse the argument command before passing it on
to the walk script:
#!/dis/sh
if {! ~ $#* 2} {
echo 'usage: walk file cmd' >[1=2]
exit usage
}
(file cmd)=$*
du -a $file | getlines {
(nil f) := ${split ' ' $line}
$cmd $f
}
if du produced a quoted word for the filename,
(nil f) := ${split ' ' $line}
would just change to:
(nil f) := ${unquote ' ' $line}
it's quite nice being able to do this sort of thing, but there are
tradeoffs, naturally.
cheers,
rog.
[-- Attachment #2: xargs.c.gz --]
[-- Type: application/octet-stream, Size: 765 bytes --]
* Re: [9fans] dumb question
@ 2002-06-28 7:52 Fco.J.Ballesteros
0 siblings, 0 replies; 88+ messages in thread
From: Fco.J.Ballesteros @ 2002-06-28 7:52 UTC (permalink / raw)
To: 9fans
> What seems to me to make sense would be to cleanly segregate out
> the file-tree walker from *all* those applications, and design
> the application interfaces so they could operate in conjunction
> with the walker. Sort of like find <criteria> -print | cpio ...
Do you mean something like this?
; cat walk
#!/bin/rc
# $f in cmd gets expanded to each file name.
rfork e
if (~ $#* 0 1) {
echo 'usage: walk file cmd_using_$f...' >[1=2]
exit usage
}
file=$1
shift
cd `{basename -d $file}
exec du -a $file | awk '{print $2}' | while (f=`{read}) { echo $* |rc }
I use it like in
walk /tmp 'test -r $f && echo $f'
But perhaps I could also
dst=/dst walk /source 'test -d $f && mkdir $dst/$f || cp $f $dst/$f'
although I'm just too used to tar for that purpose.
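A rough POSIX sh rendering of the same idea, offered only as a sketch
(the rc script above is the real thing; find stands in for du -a, and
walk is a hypothetical name):

```shell
# walk dir cmd [args...]: run cmd with each path under dir appended.
walk() {
	_dir=$1; shift
	find "$_dir" -print | while IFS= read -r _f; do
		"$@" "$_f"
	done
}

# e.g. print every readable file under /tmp:
# walk /tmp sh -c 'test -r "$1" && echo "$1"' sh
```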
* Re: [9fans] dumb question
@ 2002-06-27 22:59 Geoff Collyer
2002-06-28 4:32 ` Lucio De Re
0 siblings, 1 reply; 88+ messages in thread
From: Geoff Collyer @ 2002-06-27 22:59 UTC (permalink / raw)
To: 9fans
Lucio, try this implementation of xargs. It's what I use when very
long arg lists cause trouble.
#!/bin/rc
# xargs cmd [arg ...] - emulate Unix xargs
rfork ne
ramfs
split -n 500 -f /tmp/x
for (f in /tmp/x*)
$* `{cat $f}
* Re: [9fans] dumb question
2002-06-27 22:59 Geoff Collyer
@ 2002-06-28 4:32 ` Lucio De Re
0 siblings, 0 replies; 88+ messages in thread
From: Lucio De Re @ 2002-06-28 4:32 UTC (permalink / raw)
To: 9fans
On Thu, Jun 27, 2002 at 03:59:44PM -0700, Geoff Collyer wrote:
>
> Lucio, try this implementation of xargs. It's what I use when very
> long arg lists cause trouble.
>
Thank you, Geoff. Something like this certainly will come in handy.
We need some way of collecting these, kind of like in the permuted
index with the code as the "man" page - HTML, perhaps, or Wiki?
++L
* Re: [9fans] dumb question
@ 2002-06-27 18:38 forsyth
2002-06-28 8:45 ` Douglas A. Gwyn
0 siblings, 1 reply; 88+ messages in thread
From: forsyth @ 2002-06-27 18:38 UTC (permalink / raw)
To: 9fans
[-- Attachment #1: Type: text/plain, Size: 212 bytes --]
i do something like that with some software i use for mirroring in Inferno,
where i take advantage of a dynamically-loaded
module that traverses the structures and applies
another command or module as it goes.
[-- Attachment #2: Type: message/rfc822, Size: 2307 bytes --]
To: 9fans@cse.psu.edu
Subject: Re: [9fans] dumb question
Date: Thu, 27 Jun 2002 15:40:23 GMT
Message-ID: <3D1B2CCF.1FD49007@null.net>
Lucio De Re wrote:
> The penalty each instance of cp would have to pay if it included
> directory descent logic would be paid by every user, every time
> they use cp, while being used only a fraction of the time.
> Does it not make sense to leave the logic in tar, where it gets
> used under suitable circumstances and omit it in cp (and mv, for
> good measure and plenty other reasons)?
What seems to me to make sense would be to cleanly segregate out
the file-tree walker from *all* those applications, and design
the application interfaces so they could operate in conjunction
with the walker. Sort of like find <criteria> -print | cpio ...
* Re: [9fans] dumb question
2002-06-27 18:38 forsyth
@ 2002-06-28 8:45 ` Douglas A. Gwyn
0 siblings, 0 replies; 88+ messages in thread
From: Douglas A. Gwyn @ 2002-06-28 8:45 UTC (permalink / raw)
To: 9fans
forsyth@caldo.demon.co.uk wrote:
> i do something like that with some software i use for mirroring in Inferno,
> where i take advantage of dynamically-loaded
> module that traverses the structures and applies
> another command or module as it goes.
Ooh, "apply" (which I think was in 8th Ed. Unix) but at the
structure level. Nice.
* Re: [9fans] dumb question
@ 2002-06-27 17:07 rog
0 siblings, 0 replies; 88+ messages in thread
From: rog @ 2002-06-27 17:07 UTC (permalink / raw)
To: 9fans
> For the record, it doesn't have cp -R either.
ahem.
actually it does. i know 'cos i just cleaned up the source.
mind you at 247 lines and under 4K binary, it isn't huge.
* Re: [9fans] dumb question
@ 2002-06-27 17:07 forsyth
2002-06-27 16:33 ` Sam
0 siblings, 1 reply; 88+ messages in thread
From: forsyth @ 2002-06-27 17:07 UTC (permalink / raw)
To: 9fans
>>inferno. For the record, it doesn't have cp -R either.
no, it's called cp -r.
* Re: [9fans] dumb question
@ 2002-06-27 14:43 Richard Miller
0 siblings, 0 replies; 88+ messages in thread
From: Richard Miller @ 2002-06-27 14:43 UTC (permalink / raw)
To: 9fans
> For simplicity, I keep all my work in one huge file and use e.g.
> #ifdef ED
> ....
> #endif ED
> to extract the specific lines of C source that make up ed when
> i need to work on ed...
This is pretty much the file system structure advocated by
Jef Raskin in The Humane Interface:
"... Documents are sequences of pages separated by document characters,
each typable, searchable, and deletable, just as is any other character.
There can be higher delimiters, such as folder and volume characters,
even section and library delimiters, but the number of levels depends
on the size of the data... An individual who insists on having explicit
document names can adopt the personal convention of placing the desired
names immediately following document characters... If you wanted a
directory, there could be a command that would assemble a document
comprising all instances of strings consisting of a document character,
followed by any other characters, up to and including the first
Return or higher delimiter." (pp 120-121)
-- Richard
* Re: [9fans] dumb question
@ 2002-06-27 14:33 forsyth
0 siblings, 0 replies; 88+ messages in thread
From: forsyth @ 2002-06-27 14:33 UTC (permalink / raw)
To: 9fans
[-- Attachment #1: Type: text/plain, Size: 568 bytes --]
accident is one possible justification, and there's also a matter of
meaning. if i say `copy this file to that file' i expect to see a
strict copy of the file as a result. it's fairly clear cut. on most
systems that provide directory `copying', if i say cp dir1 target (or
cp -r) and dir1 exists in the target, i usually obtain a merge (it's
perhaps one of the reasons that options bristle as finer control over
the operation is provided). arguably, cp should instead remove files in the
existing destination directory in order to produce a strict copy.
[-- Attachment #2: Type: message/rfc822, Size: 1595 bytes --]
To: 9fans@cse.psu.edu
Subject: Re: [9fans] dumb question
Date: Thu, 27 Jun 2002 09:20:55 -0400
Message-ID: <3ba2a45af5c572e7280e465acb3a805a@plan9.bell-labs.com>
> Perhaps someone could explain why cp doesn't work recursively (without
> an option) when given a directory as a first argument.
Because running cp that way would probably happen more often by mistake
than by intention, and it could damage a lot of files very fast before the
user noticed.
-rob
* Re: [9fans] dumb question
@ 2002-06-27 13:20 rob pike, esq.
2002-07-05 9:07 ` Maarit Maliniemi
0 siblings, 1 reply; 88+ messages in thread
From: rob pike, esq. @ 2002-06-27 13:20 UTC (permalink / raw)
To: 9fans
> Perhaps someone could explain why cp doesn't work recursively (without
> an option) when given a directory as a first argument.
Because running cp that way would probably happen more often by mistake
than by intention, and it could damage a lot of files very fast before the
user noticed.
-rob
* Re: [9fans] dumb question
2002-06-27 13:20 rob pike, esq.
@ 2002-07-05 9:07 ` Maarit Maliniemi
0 siblings, 0 replies; 88+ messages in thread
From: Maarit Maliniemi @ 2002-07-05 9:07 UTC (permalink / raw)
To: 9fans
In article <3ba2a45af5c572e7280e465acb3a805a@plan9.bell-labs.com>,
9fans@cse.psu.edu wrote:
> > Perhaps someone could explain why cp doesn't work recursively (without
> > an option) when given a directory as a first argument.
>
> Because running cp that way would probably happen more often by mistake
> than by intention, and it could damage a lot of files very fast before the
> user noticed.
>
> -rob
since the subject of damage has been raised i would like to mention that i
have managed to damage tar files by using 'tar cf tarfile afile' (the
intention was 'tar xf tarfile afile').
tar would be less dangerous if split into two. see inferno.
i have also damaged whole file trees by doing
'tar cf adir - | {cd /missspelled/dir; tar xf -}'
IMHO, it is less dangerous using 'cp -R'.
bengt
* Re: [9fans] dumb question
@ 2002-06-27 13:19 rob pike, esq.
2002-06-27 13:21 ` david presotto
2002-06-27 15:40 ` Douglas A. Gwyn
0 siblings, 2 replies; 88+ messages in thread
From: rob pike, esq. @ 2002-06-27 13:19 UTC (permalink / raw)
To: 9fans
> Do you guys work mostly with flat file hierarchies or what?
For simplicity, I keep all my work in one huge file and use e.g.
#ifdef ED
....
#endif ED
to extract the specific lines of C source that make up ed when
i need to work on ed. (I'm usually working on ed when I should
be working on cp; old habits die hard). This style of use means
for example there is only one copy of
main(int argc, char *argv[])
on the system, which saves a lot of space.
I have some ideas for a similar technology to apply to 9fans.
-rob
* Re: [9fans] dumb question
2002-06-27 13:19 rob pike, esq.
@ 2002-06-27 13:21 ` david presotto
2002-06-27 15:40 ` Douglas A. Gwyn
1 sibling, 0 replies; 88+ messages in thread
From: david presotto @ 2002-06-27 13:21 UTC (permalink / raw)
To: 9fans
I forgot to tell you, I renamed ED to CP...
----- Original Message -----
From: "rob pike, esq." <rob@plan9.bell-labs.com>
To: <9fans@cse.psu.edu>
Sent: Thursday, June 27, 2002 9:19 AM
Subject: Re: [9fans] dumb question
> > Do you guys work mostly with flat file hierarchies or what?
>
> For simplicity, I keep all my work in one huge file and use e.g.
> #ifdef ED
> ....
> #endif ED
> to extract the specific lines of C source that make up ed when
> i need to work on ed. (I'm usually working on ed when I should
> be working on cp; old habits die hard). This style of use means
> for example there is only one copy of
> main(int argc, char *argv[])
> on the system, which saves a lot of space.
>
> I have some ideas for a similar technology to apply to 9fans.
>
> -rob
>
>
* Re: [9fans] dumb question
2002-06-27 13:19 rob pike, esq.
2002-06-27 13:21 ` david presotto
@ 2002-06-27 15:40 ` Douglas A. Gwyn
1 sibling, 0 replies; 88+ messages in thread
From: Douglas A. Gwyn @ 2002-06-27 15:40 UTC (permalink / raw)
To: 9fans
"rob pike, esq." wrote:
> #endif ED
?
* Re: [9fans] dumb question
@ 2002-06-27 12:27 rog
0 siblings, 0 replies; 88+ messages in thread
From: rog @ 2002-06-27 12:27 UTC (permalink / raw)
To: 9fans
some great quotes have come out of this little exchange:
"9ers do it with du"
"...techno-pissing contest..."
"human beings aren't that different from machines"
"you keep replacing speed with blind simplicity"
the latter seems like a fair summary of what plan 9 is about:
sacrifice a few CPU cycles here for some more generality there.
> im arguing about the apparent acceptance of a solution which wastes
> resources
we accept many solutions that, strictly speaking, waste resources.
e.g.
ls | wc
but do we care? not unless the overhead impacts significantly on our
ability to get the job done... and if it does, then measurement and
judicious optimisation is the way forward, as with any programming
task.
cheers,
rog.
* Re: [9fans] dumb question
@ 2002-06-27 12:12 Fco.J.Ballesteros
0 siblings, 0 replies; 88+ messages in thread
From: Fco.J.Ballesteros @ 2002-06-27 12:12 UTC (permalink / raw)
To: 9fans
[-- Attachment #1: Type: text/plain, Size: 71 bytes --]
I see, you'd need to sign an NDA just to tell me.
Back to work then.
[-- Attachment #2: Type: message/rfc822, Size: 1613 bytes --]
From: Sam <sah@softcardsystems.com>
To: <9fans@cse.psu.edu>
Subject: Re: [9fans] dumb question
Date: Thu, 27 Jun 2002 07:13:49 -0400 (EDT)
Message-ID: <Pine.LNX.4.30.0206270712440.24208-100000@athena>
On Thu, 27 Jun 2002, Fco.J.Ballesteros wrote:
> : Human beings aren't *that* different from
> : machines.
>
> Which OS do you run?
Model number / type is hard to come by since we're having a hard time
reaching the author.
;-P
Sam
* Re: [9fans] dumb question
@ 2002-06-27 12:06 Fco.J.Ballesteros
2002-06-27 11:13 ` Sam
0 siblings, 1 reply; 88+ messages in thread
From: Fco.J.Ballesteros @ 2002-06-27 12:06 UTC (permalink / raw)
To: 9fans
: Human beings aren't *that* different from
: machines.
Which OS do you run?
☺
* Re: [9fans] dumb question
@ 2002-06-27 10:29 nigel
0 siblings, 0 replies; 88+ messages in thread
From: nigel @ 2002-06-27 10:29 UTC (permalink / raw)
To: 9fans
> im arguing about the apparent acceptance of a solution which wastes
> resources, sure its 30k, but you can do quite a bit with 30k. and if you
> actually have lots of users, things start to slow down. point im trying to
> make: tar|tar uses more memory and resources then a recursive copy, ive
> thoroughly explained this many times. whats so hard to grasp here?
Perhaps you cannot grasp that the consensus agrees that this is not
the most efficient solution, but that it is "efficient enough" given
that recursive tree copy is not the most commonly used operation.
This is based on experience.
Members of the list have provided numerical evidence that it's
efficient enough. But that's probably a mistake; it just started a
techno-pissing match from which there are no winners. Telling the
list about disk sectors and DMA will not advance your argument. The
majority know about those. Really.
Plan 9 has a philosophy. It has certain aims which are much more
important than the exact path by which data in files travel during a
tree copy. One day, when everything else which is more important is
done, cp -r will get tuned till the magneto-optical domains squeak.
But that day will probably never come.
I think I'll close with the Presotto contention. It runs like this:
"if you're unhappy with the way Plan 9 does X, just wait until
you see Y".
In this case X = "file tree copies", and Y is always "web browsing".
Positively my last word on the subject.
Nigel
* RE: [9fans] dumb question
@ 2002-06-27 9:44 Stephen Parker
0 siblings, 0 replies; 88+ messages in thread
From: Stephen Parker @ 2002-06-27 9:44 UTC (permalink / raw)
To: '9fans@cse.psu.edu'
most of us don't spend much time moving directories around.
maybe once per week. maybe.
it's not worth optimising this.
stephen
--
Stephen Parker. Pi Technology. +44 (0)1223 203438.
> -----Original Message-----
> From: Andrew Stitt [mailto:astitt@cats.ucsc.edu]
> Sent: 27 June 2002 10:17
> To: 9fans@cse.psu.edu
> Subject: Re: [9fans] dumb question
>
>
> On Wed, 26 Jun 2002, Sam wrote:
>
> > > cp is about 60k in plan9 and tar is 80k in plan9. cp on 3
> seperate unix
> > > machines (linux, SCO unix, os-X) in its overcomplicated
> copying directory
> > > tree glory is under 30k, on sunOS it happens to be 17k.
> to tar then untar
> > > requires two processes using a shared 80k tar, plus some
> intermediate data
> > > to archive and process the data, then immediatly reverse
> this process. cp
> > > _could_ be written to do all this in 17k but instead our
> 60k cp cant do
> > > it, and instead we need two entries in the process table
> and twice the
> > > number ofuser space pages.
> > I'm sorry, but there must be something wrong with my mail
> reader. Are we
> > really arguing about 30k? Am I dreaming? Are you trying
> to run plan9
> > on a machine with 256k of memory?
> >
> > ...
> im arguing about the apparent acceptance of a solution which wastes
> resources, sure its 30k, but you can do quite a bit with 30k.
> and if you
> actually have lots of users, things start to slow down. point
> im trying to
> make: tar|tar uses more memory and resources then a recursive
> copy, ive
> thoroughly explained this many times. whats so hard to grasp here?
>
* Re: [9fans] dumb question
@ 2002-06-27 9:43 Fco.J.Ballesteros
0 siblings, 0 replies; 88+ messages in thread
From: Fco.J.Ballesteros @ 2002-06-27 9:43 UTC (permalink / raw)
To: 9fans
: resources, sure its 30k, but you can do quite a bit with 30k. and if you
: actually have lots of users, things start to slow down. point im trying to
Answer these questions:
How many users are doing dircps at the same time?
Have you actually seen that happen?
I think it's not even worth discussing saving 30k here,
but that's just IMHO.
* Re: [9fans] dumb question
@ 2002-06-27 9:36 Fco.J.Ballesteros
0 siblings, 0 replies; 88+ messages in thread
From: Fco.J.Ballesteros @ 2002-06-27 9:36 UTC (permalink / raw)
To: 9fans
: > The command
: > tar c ... | tar x ...
: > will have almost identical cost to any explicit recursive copy, since
: > it does the same amount of work. The only extra overhead is writing
:
: it does not do the same amount of work, it runs two processes which use at
: least two pages instead of one, it also runs all the data through some
I think the point is that if you don't notice the `extra work'
done by the tars, then you'd better forget about optimizing it
(Who cares when nobody notices it?).
* Re: [9fans] dumb question
@ 2002-06-27 1:22 okamoto
0 siblings, 0 replies; 88+ messages in thread
From: okamoto @ 2002-06-27 1:22 UTC (permalink / raw)
To: 9fans
[-- Attachment #1: Type: text/plain, Size: 210 bytes --]
I forgot the essential point I meant to write.
For cp -r, I suppose they thought "can't you use bind(1) instead?".
If that's not it, there must be something wrong with your usage of Plan 9,
and you can still do it with tar...
Kenji
[-- Attachment #2: Type: message/rfc822, Size: 5549 bytes --]
[-- Attachment #2.1.1: Type: text/plain, Size: 338 bytes --]
Please let me say one very important thing. ^_^
If you feel something is missing in Plan 9, you'd better think about why they
eliminated it, particularly when it's related to basic unix commands.
I don't think there is anything they eliminated carelessly in Plan 9.
I'm saying this from many such experiences of my own. :-)
Kenji
[-- Attachment #2.1.2: Type: message/rfc822, Size: 3269 bytes --]
From: Andrew Stitt <astitt@cats.ucsc.edu>
To: 9fans@cse.psu.edu
Subject: Re: [9fans] dumb question
Date: Wed, 26 Jun 2002 17:41:06 GMT
Message-ID: <Pine.SOL.3.96.1020626101934.26964C-100000@teach.ic.ucsc.edu>
On Wed, 26 Jun 2002, Nigel Roles wrote:
> On Wed, 26 Jun 2002 08:41:06 GMT, Andrew Stitt wrote:
>
> >On Tue, 25 Jun 2002, Fco.J.Ballesteros wrote:
> >
> >> Again, from rsc tiny tools :-)
> >>
> >> ; cat /bin/dircp
> >> #!/bin/rc
> >>
> >> switch($#*){
> >> case 2
> >> @{cd $1 && tar c .}|@{cd $2 && tar x}
> >> case *
> >> echo usage: dircp from to >[1=2]
> >> }
> >>
> >>
> >why must i needlessly shove all the files into a tar, then unpack them
> >again? thats incredibly inefficient! that uses roughly twice the space
> >that should be required, it has to copy the files twice, and it has the
> >overhead of having to needless run the data through tar. Is there a better
> >solution to this?
> > Andrew
>
> This does not use any more space. The tar commands are piped together.
> I doubt a specific cp -r type command would be particularly more efficient.
>
>
>
>
>
i beg to differ: tar uses memory, it uses system resources, and i fail to
see how you think this is just as good as recursively copying files. The
point is I shouldn't have to needlessly use this other program (for Tape
ARchives) to copy directories. If this is on a fairly busy file server,
needlessly running tar twice is simply wasteful and unacceptable when you
could just follow the directory tree.
* Re: [9fans] dumb question
@ 2002-06-27 1:05 okamoto
0 siblings, 0 replies; 88+ messages in thread
From: okamoto @ 2002-06-27 1:05 UTC (permalink / raw)
To: 9fans
[-- Attachment #1: Type: text/plain, Size: 338 bytes --]
Please let me say one very important thing. ^_^
If you feel something is missing in Plan 9, you'd better think about why
they eliminated it, particularly when it's related to basic unix commands.
I don't think there is anything they eliminated carelessly in Plan 9.
I'm saying this from many such experiences of my own. :-)
Kenji
[-- Attachment #2: Type: message/rfc822, Size: 3269 bytes --]
From: Andrew Stitt <astitt@cats.ucsc.edu>
To: 9fans@cse.psu.edu
Subject: Re: [9fans] dumb question
Date: Wed, 26 Jun 2002 17:41:06 GMT
Message-ID: <Pine.SOL.3.96.1020626101934.26964C-100000@teach.ic.ucsc.edu>
On Wed, 26 Jun 2002, Nigel Roles wrote:
> On Wed, 26 Jun 2002 08:41:06 GMT, Andrew Stitt wrote:
>
> >On Tue, 25 Jun 2002, Fco.J.Ballesteros wrote:
> >
> >> Again, from rsc tiny tools :-)
> >>
> >> ; cat /bin/dircp
> >> #!/bin/rc
> >>
> >> switch($#*){
> >> case 2
> >> @{cd $1 && tar c .}|@{cd $2 && tar x}
> >> case *
> >> echo usage: dircp from to >[1=2]
> >> }
> >>
> >>
> >why must i needlessly shove all the files into a tar, then unpack them
> >again? thats incredibly inefficient! that uses roughly twice the space
> >that should be required, it has to copy the files twice, and it has the
> >overhead of having to needless run the data through tar. Is there a better
> >solution to this?
> > Andrew
>
> This does not use any more space. The tar commands are piped together.
> I doubt a specific cp -r type command would be particularly more efficient.
>
>
>
>
>
i beg to differ: tar uses memory, it uses system resources, and i fail to
see how you think this is just as good as recursively copying files. The
point is I shouldn't have to needlessly use this other program (for Tape
ARchives) to copy directories. If this is on a fairly busy file server,
needlessly running tar twice is simply wasteful and unacceptable when you
could just follow the directory tree.
* Re: [9fans] dumb question
@ 2002-06-26 23:08 forsyth
[not found] ` <171e27d5f2cfd12a9303117e58818e5b@plan9.bell-labs.com>
0 siblings, 1 reply; 88+ messages in thread
From: forsyth @ 2002-06-26 23:08 UTC (permalink / raw)
To: 9fans
you could see if iostats(4) offers a hint.
quite apart from retaining Plan 9 modes and long names correctly,
mkfs archives use a file header that's more compact than tar's (one line
per file, not 512 bytes).
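forsyth's size comparison can be put in rough numbers. A back-of-envelope sketch (the 40-byte average proto line is an assumption for illustration, not a measured figure):

```shell
# Per-file metadata cost, roughly: tar writes one 512-byte header block
# per file, while an mkfs-style proto archive writes one text line per
# file (assumed ~40 bytes on average here, for illustration only).
files=10000
tar_overhead=$((files * 512))
proto_overhead=$((files * 40))
echo "tar headers:  $tar_overhead bytes"
echo "proto lines: ~$proto_overhead bytes"
```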
* Re: [9fans] dumb question
@ 2002-06-26 19:04 rog
0 siblings, 0 replies; 88+ messages in thread
From: rog @ 2002-06-26 19:04 UTC (permalink / raw)
To: 9fans
> cp is about 60k in plan9 and tar is 80k in plan9. cp on 3 seperate
> unix machines (linux, SCO unix, os-X) in its overcomplicated copying
> directory tree glory is under 30k, on sunOS it happens to be 17k.
this is a little over-harsh on plan 9, i think.
to start with none of the plan 9 binaries are stripped, which takes
cp and tar down to 40K and 50K respectively. also all plan 9 binaries
are statically linked. a look at a nearby FreeBSD box gives cp at
65K statically linked and stripped, which seems like a fairer comparison.
(remembering that dynamic libraries impose their own overhead too,
when starting up a program).
mind you, a quick look at the 3rd edition cp gives 15K stripped,
and the cp source hasn't changed that much. i wonder what's using
that extra 20K of text. i suspect the new print library.
cheers,
rog.
* RE: [9fans] dumb question
@ 2002-06-26 19:00 Warrier, Sadanand (Sadanand)
2002-06-27 0:13 ` Steve Arons
0 siblings, 1 reply; 88+ messages in thread
From: Warrier, Sadanand (Sadanand) @ 2002-06-26 19:00 UTC (permalink / raw)
To: '9fans@cse.psu.edu'
On my solaris machine
ls -la /usr/sbin/tar
-r-xr-xr-x 1 bin bin 63544 Nov 24 1998 /usr/sbin/tar*
ls -la /opt/exp/gnu/bin/tar
-rwxrwxr-x 1 exptools software 161328 Jul 4 2000
/opt/exp/gnu/bin/tar*
S
-----Original Message-----
From: forsyth@caldo.demon.co.uk [mailto:forsyth@caldo.demon.co.uk]
Sent: Wednesday, June 26, 2002 2:41 PM
To: 9fans@cse.psu.edu
Subject: Re: [9fans] dumb question
> cp is about 60k in plan9 and tar is 80k in plan9. cp on 3 seperate unix
> machines (linux, SCO unix, os-X) in its overcomplicated copying directory
> tree glory is under 30k, on sunOS it happens to be 17k. to tar then untar
> requires two processes using a shared 80k tar, plus some intermediate data
> to archive and process the data, then immediatly reverse this process. cp
> _could_ be written to do all this in 17k but instead our 60k cp cant do
> it, and instead we need two entries in the process table and twice the
> number of user-space pages.
there's a useful point to be made here with respect to other such
comparisons:
you're overlooking the cost on the other systems of the (usually)
large memory space consumed by the dynamically-linked shared libraries.
that's why their cp seems `small' and plan9's seems `large'. with plan 9,
cp really does take 60k (code, plus data+stack), on the others, it's
anyone's guess: the 17k is the tip of the iceberg.
* Re: [9fans] dumb question
@ 2002-06-26 18:41 forsyth
0 siblings, 0 replies; 88+ messages in thread
From: forsyth @ 2002-06-26 18:41 UTC (permalink / raw)
To: 9fans
> cp is about 60k in plan9 and tar is 80k in plan9. cp on 3 seperate unix
> machines (linux, SCO unix, os-X) in its overcomplicated copying directory
> tree glory is under 30k, on sunOS it happens to be 17k. to tar then untar
> requires two processes using a shared 80k tar, plus some intermediate data
> to archive and process the data, then immediatly reverse this process. cp
> _could_ be written to do all this in 17k but instead our 60k cp cant do
> it, and instead we need two entries in the process table and twice the
> number of user-space pages.
there's a useful point to be made here with respect to other such comparisons:
you're overlooking the cost on the other systems of the (usually)
large memory space consumed by the dynamically-linked shared libraries.
that's why their cp seems `small' and plan9's seems `large'. with plan 9,
cp really does take 60k (code, plus data+stack), on the others, it's
anyone's guess: the 17k is the tip of the iceberg.
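forsyth's caveat is easy to check on a dynamically-linked system. A sketch (paths and sizes are system-dependent; /bin/cp is assumed to exist):

```shell
# The on-disk size of a dynamically linked cp understates its real
# memory footprint: the shared libraries it maps are not counted.
size_on_disk=$(wc -c < /bin/cp)
echo "cp on disk: $size_on_disk bytes"
# Show the libraries a static Plan 9 cp would instead have linked in;
# ldd may be absent on some systems, so failure is tolerated.
ldd /bin/cp 2>/dev/null || true
```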
* RE: [9fans] dumb question
@ 2002-06-26 18:40 David Gordon Hogan
0 siblings, 0 replies; 88+ messages in thread
From: David Gordon Hogan @ 2002-06-26 18:40 UTC (permalink / raw)
To: 9fans
> do you plan on doing this a lot?
I hope not.
* RE: [9fans] dumb question
@ 2002-06-26 17:53 rog
0 siblings, 0 replies; 88+ messages in thread
From: rog @ 2002-06-26 17:53 UTC (permalink / raw)
To: 9fans
> possibly, keep in mind its a complete waste of file server resources to
> repeatedly tar/untar a directory tree, all i want is the ability to copy a
> directory tree without having to use a program that wasnt really designed
> for that anyways.
actually it's quite possible that the tar pipeline is *more* efficient
than a traditionally implemented cp -r, as it's doing reading and
writing in parallel.
i wouldn't be surprised to see the tar pipeline win out over cp -r
when copying between different disks.
the most valid objection is that tar doesn't cope with the new, longer
filenames in plan 9 (if that's true). that could be annoying on
occasion.
rog.
* Re: [9fans] dumb question
@ 2002-06-26 17:45 rob pike, esq.
2002-06-27 9:16 ` Andrew Stitt
2002-06-27 9:17 ` Douglas A. Gwyn
0 siblings, 2 replies; 88+ messages in thread
From: rob pike, esq. @ 2002-06-26 17:45 UTC (permalink / raw)
To: astitt, 9fans
> i beg to differ, tar uses memory, it uses system resources, i fail to see
> how you think this is just as good as just recursively copying files. The
> point is I shouldnt have to needlessly use this other program (for Tape
> ARchives) to copy directorys. If this is on a fairly busy file server
> needlessly running tar twice is simply wasteful and unacceptable when you
> could just follow the directory tree.
The command
tar c ... | tar x ...
will have almost identical cost to any explicit recursive copy, since
it does the same amount of work. The only extra overhead is writing
to and reading from the pipe, which is so cheap - and entirely local -
that it is insignificant.
Whether you cp or tar, you must read the files from one tree and write
them to another.
-rob
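A POSIX-shell sketch of the pipeline rob describes (the original dircp is Plan 9 rc; the tar flags here are the portable create/extract forms). The archive stream lives only in the pipe, so no temporary file or doubled disk space is involved:

```shell
# Copy a directory tree through a tar pipe and verify the result.
set -e
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/a/b"
echo hello > "$src/a/b/file.txt"
# Writer and reader run in parallel; data crosses via the pipe only.
(cd "$src" && tar cf - .) | (cd "$dst" && tar xf -)
diff -r "$src" "$dst" && echo "trees identical"
```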
* Re: [9fans] dumb question
2002-06-26 17:45 rob pike, esq.
@ 2002-06-27 9:16 ` Andrew Stitt
2002-06-27 9:17 ` Douglas A. Gwyn
1 sibling, 0 replies; 88+ messages in thread
From: Andrew Stitt @ 2002-06-27 9:16 UTC (permalink / raw)
To: 9fans
On Wed, 26 Jun 2002, rob pike, esq. wrote:
> > i beg to differ, tar uses memory, it uses system resources, i fail to see
> > how you think this is just as good as just recursively copying files. The
> > point is I shouldnt have to needlessly use this other program (for Tape
> > ARchives) to copy directorys. If this is on a fairly busy file server
> > needlessly running tar twice is simply wasteful and unacceptable when you
> > could just follow the directory tree.
>
> The command
> tar c ... | tar x ...
> will have almost identical cost to any explicit recursive copy, since
> it does the same amount of work. The only extra overhead is writing
it does not do the same amount of work: it runs two processes, which use
at least two pages instead of one, and it runs all the data through some
sort of algorithm to place it into an archive, then immediately reverses
the same algorithm to remove it. i don't see what's so complicated about
this concept: it needlessly processes the data.

i'm sure if you read and wrote in parallel using two processes it'd be
faster. guess what would be even faster? parallel cp. why? cp doesn't
archive and then dearchive, it's as simple as that. i can almost
guarantee you that tar c | tar x uses more clock cycles than cp, because
two processes are both simultaneously packing and unpacking data for a
_local_ transfer. in reality the work ought to be done mostly by a DMA
controller which simply copies sectors from one location to another,
while the other activities, adding inodes or table entries and so on,
are done in parallel. you have to remember that I/O is done in the
background because it is slow. with tar you have to at least move the
data into some pages in memory, process it, then run it through a
device, where it gets copied into another process's address space, then
processed, then written to disk. if i'm copying a large directory tree
i'm going to take a serious performance hit for that. if i can just have
a well-written cp program let the disk simply copy some sectors around,
hopefully with DMA, that's going to be faster.

maybe on your newer, faster computers the difference is so small that
"who cares", but that's what keeps processor manufacturers in business,
cutting corners like that. you keep replacing speed with blind
simplicity, and two months later your hardware is obsolete, so you buy
new, faster hardware which runs all your old kludges fast enough, and
then you cut corners on the new stuff. take microsoft for example: they
wrote windows 95, which iirc runs reasonably well on even a mere 386;
now we've got win xp, which has several hundred MHz as the minimum
required speed! yet how much does it really add to the mix? the UI is
nearly the same; sure, it's got some bells and whistles, but you can
take those away. i wouldn't dare try xp on a 386 (much less use the cd
as more than a coaster, but anyway). run win95 on a p4 and it's pretty
darn fast; compare that to winxp on a p4, and i doubt you're going to
get the same performance out of it.

i'm extreme, i know, but seriously, my points follow logically: tar |
tar is not a more efficient solution, nor is it in any way acceptable.
sure, i may not do cp -R a lot, but when you've got 50-100 people using
your network, maybe they will? either way, it's a complete waste of
resources that can be better used elsewhere. can someone tell me what's
so difficult to see about that?
> to and reading from the pipe, which is so cheap - and entirely local -
> that it is insignificant.
>
> Whether you cp or tar, you must read the files from one tree and write
> them to another.
>
> -rob
* Re: [9fans] dumb question
2002-06-26 17:45 rob pike, esq.
2002-06-27 9:16 ` Andrew Stitt
@ 2002-06-27 9:17 ` Douglas A. Gwyn
2002-06-26 19:53 ` arisawa
2002-06-27 11:43 ` Ralph Corderoy
1 sibling, 2 replies; 88+ messages in thread
From: Douglas A. Gwyn @ 2002-06-27 9:17 UTC (permalink / raw)
To: 9fans
"rob pike, esq." wrote:
> Whether you cp or tar, you must read the files from one tree and write
> them to another.
But with twin tars (y'all), there is encoding and decoding (aka
format translation) going on for the attribute info, which does
seem like a waste of cycles.
There is frequently, at least in my end of the universe, need for
directory hierarchy walking for a variety of purposes. The old
Unix tools "find" and "xargs" aren't very satisfactory, but at
least they exist. I guess "ls -R" would almost work as a file
tree walker (as input to some shell procedure), except for its
irregular output format and its apparent nonexistence under Plan 9.
Do you guys work mostly with flat file hierarchies or what?
* Re: [9fans] dumb question
2002-06-27 9:17 ` Douglas A. Gwyn
@ 2002-06-26 19:53 ` arisawa
2002-06-27 11:43 ` Ralph Corderoy
1 sibling, 0 replies; 88+ messages in thread
From: arisawa @ 2002-06-26 19:53 UTC (permalink / raw)
To: 9fans
Hello,
>I guess "ls -R" would almost work as a file
>tree walker (as input to some shell procedure), except for its
>irregular output format and its apparent nonexistence under Plan 9.
I agree.
I would like to have an -R option for ls.
But the output format should be simple, so that we can pass it
through a pipe.
By the way, the real value of my cpdir shows up in this case:
term% cpdir srcdir dstdir
term% touch some-files-under-srcdir
term% cpdir -m srcdir dstdir
Try it.
Kenji Arisawa
E-mail: arisawa@aichi-u.ac.jp
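The incremental behaviour arisawa describes ("copies only those files that are out of date") can be sketched with portable shell tools instead of cpdir itself; note the -nt file test is a widespread shell extension rather than strict POSIX:

```shell
# Merge-style copy: after an initial full copy, touch a source file and
# copy across only the files that are newer than their destination copy.
set -e
src=$(mktemp -d); dst=$(mktemp -d)
echo v1 > "$src/f"
cp -R "$src/." "$dst"      # first, a full copy
sleep 1                    # make the later write observably newer
echo v2 > "$src/f"         # "touch some-files-under-srcdir"
for f in "$src"/*; do      # second pass: only out-of-date files
  b=${f##*/}
  if [ "$f" -nt "$dst/$b" ]; then cp "$f" "$dst/$b"; fi
done
cat "$dst/f"               # the merged copy now holds v2
```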
* Re: [9fans] dumb question
2002-06-27 9:17 ` Douglas A. Gwyn
2002-06-26 19:53 ` arisawa
@ 2002-06-27 11:43 ` Ralph Corderoy
2002-06-27 11:04 ` Sam
1 sibling, 1 reply; 88+ messages in thread
From: Ralph Corderoy @ 2002-06-27 11:43 UTC (permalink / raw)
To: 9fans
Hi,
> I guess "ls -R" would almost work as a file tree walker (as input to
> some shell procedure), except for its irregular output format and its
> apparent nonexistence under Plan 9.
9ers do it with du, don't they?
Ralph.
* Re: [9fans] dumb question
2002-06-27 11:43 ` Ralph Corderoy
@ 2002-06-27 11:04 ` Sam
0 siblings, 0 replies; 88+ messages in thread
From: Sam @ 2002-06-27 11:04 UTC (permalink / raw)
To: 9fans
> Hi,
>
> > I guess "ls -R" would almost work as a file tree walker (as input to
> > some shell procedure), except for its irregular output format and its
> > apparent nonexistence under Plan 9.
>
> 9ers do it with du, don't they?
Indeed. I use du on linux systems now as well. It's simple, easy to
remember, and doesn't really take all that long. Friends laugh and
repeatedly ask, "why don't you use slocate?" Then, while they're waiting
ten minutes for slocate -u to update its database because they can't
find a file, I've long since gone back to work.
Simplicity is good. Human beings aren't *that* different from
machines. The more store you use maintaining lists of flags for
seldom-run programs, the less you can use for really complex tasks.
What's so hard to grasp here? (sorry, intentionally catty)
;)
Cheers,
Sam
[parent not found: <nemo@plan9.escet.urjc.es>]
* Re: [9fans] dumb question
@ 2002-06-25 17:06 ` Fco.J.Ballesteros
2002-06-25 18:01 ` Scott Schwartz
2002-06-26 8:41 ` Andrew Stitt
0 siblings, 2 replies; 88+ messages in thread
From: Fco.J.Ballesteros @ 2002-06-25 17:06 UTC (permalink / raw)
To: 9fans
Again, from rsc tiny tools :-)
; cat /bin/dircp
#!/bin/rc
switch($#*){
case 2
@{cd $1 && tar c .}|@{cd $2 && tar x}
case *
echo usage: dircp from to >[1=2]
}
* Re: [9fans] dumb question
2002-06-25 17:06 ` Fco.J.Ballesteros
@ 2002-06-25 18:01 ` Scott Schwartz
2002-06-26 8:41 ` Andrew Stitt
1 sibling, 0 replies; 88+ messages in thread
From: Scott Schwartz @ 2002-06-25 18:01 UTC (permalink / raw)
To: 9fans
> @{cd $1 && tar c .}|@{cd $2 && tar x}
What about path length limits in tar?
* Re: [9fans] dumb question
2002-06-25 17:06 ` Fco.J.Ballesteros
2002-06-25 18:01 ` Scott Schwartz
@ 2002-06-26 8:41 ` Andrew Stitt
2002-06-26 9:14 ` Nigel Roles
1 sibling, 1 reply; 88+ messages in thread
From: Andrew Stitt @ 2002-06-26 8:41 UTC (permalink / raw)
To: 9fans
On Tue, 25 Jun 2002, Fco.J.Ballesteros wrote:
> Again, from rsc tiny tools :-)
>
> ; cat /bin/dircp
> #!/bin/rc
>
> switch($#*){
> case 2
> @{cd $1 && tar c .}|@{cd $2 && tar x}
> case *
> echo usage: dircp from to >[1=2]
> }
>
>
why must i needlessly shove all the files into a tar, then unpack them
again? that's incredibly inefficient! it uses roughly twice the space
that should be required, it has to copy the files twice, and it has the
overhead of needlessly running the data through tar. Is there a better
solution to this?
Andrew
* Re: [9fans] dumb question
2002-06-26 8:41 ` Andrew Stitt
@ 2002-06-26 9:14 ` Nigel Roles
2002-06-26 17:41 ` Andrew Stitt
0 siblings, 1 reply; 88+ messages in thread
From: Nigel Roles @ 2002-06-26 9:14 UTC (permalink / raw)
To: 9fans
On Wed, 26 Jun 2002 08:41:06 GMT, Andrew Stitt wrote:
>On Tue, 25 Jun 2002, Fco.J.Ballesteros wrote:
>
>> Again, from rsc tiny tools :-)
>>
>> ; cat /bin/dircp
>> #!/bin/rc
>>
>> switch($#*){
>> case 2
>> @{cd $1 && tar c .}|@{cd $2 && tar x}
>> case *
>> echo usage: dircp from to >[1=2]
>> }
>>
>>
>why must i needlessly shove all the files into a tar, then unpack them
>again? thats incredibly inefficient! that uses roughly twice the space
>that should be required, it has to copy the files twice, and it has the
>overhead of having to needless run the data through tar. Is there a better
>solution to this?
> Andrew
This does not use any more space. The tar commands are piped together.
I doubt a specific cp -r type command would be particularly more efficient.
* Re: [9fans] dumb question
2002-06-26 9:14 ` Nigel Roles
@ 2002-06-26 17:41 ` Andrew Stitt
2002-06-26 16:08 ` Ish Rattan
0 siblings, 1 reply; 88+ messages in thread
From: Andrew Stitt @ 2002-06-26 17:41 UTC (permalink / raw)
To: 9fans
On Wed, 26 Jun 2002, Nigel Roles wrote:
> On Wed, 26 Jun 2002 08:41:06 GMT, Andrew Stitt wrote:
>
> >On Tue, 25 Jun 2002, Fco.J.Ballesteros wrote:
> >
> >> Again, from rsc tiny tools :-)
> >>
> >> ; cat /bin/dircp
> >> #!/bin/rc
> >>
> >> switch($#*){
> >> case 2
> >> @{cd $1 && tar c .}|@{cd $2 && tar x}
> >> case *
> >> echo usage: dircp from to >[1=2]
> >> }
> >>
> >>
> >why must i needlessly shove all the files into a tar, then unpack them
> >again? thats incredibly inefficient! that uses roughly twice the space
> >that should be required, it has to copy the files twice, and it has the
> >overhead of having to needless run the data through tar. Is there a better
> >solution to this?
> > Andrew
>
> This does not use any more space. The tar commands are piped together.
> I doubt a specific cp -r type command would be particularly more efficient.
>
>
>
>
>
i beg to differ: tar uses memory, it uses system resources, and i fail to
see how you think this is just as good as recursively copying files. The
point is I shouldn't have to needlessly use this other program (for Tape
ARchives) to copy directories. If this is on a fairly busy file server,
needlessly running tar twice is simply wasteful and unacceptable when you
could just follow the directory tree.
* Re: [9fans] dumb question
2002-06-26 17:41 ` Andrew Stitt
@ 2002-06-26 16:08 ` Ish Rattan
0 siblings, 0 replies; 88+ messages in thread
From: Ish Rattan @ 2002-06-26 16:08 UTC (permalink / raw)
To: 9fans
On Wed, 26 Jun 2002, Andrew Stitt wrote:
> On Wed, 26 Jun 2002, Nigel Roles wrote:
>
> > On Wed, 26 Jun 2002 08:41:06 GMT, Andrew Stitt wrote:
> >
> > >On Tue, 25 Jun 2002, Fco.J.Ballesteros wrote:
> > >
> > >> Again, from rsc tiny tools :-)
> > >>
> > >> ; cat /bin/dircp
> > >> #!/bin/rc
> > >>
> > >> switch($#*){
> > >> case 2
> > >> @{cd $1 && tar c .}|@{cd $2 && tar x}
> > >> case *
> > >> echo usage: dircp from to >[1=2]
> > >> }
> > >>
> > >>
> > >why must i needlessly shove all the files into a tar, then unpack them
> > >again? thats incredibly inefficient! that uses roughly twice the space
> > >that should be required, it has to copy the files twice, and it has the
> > >overhead of having to needless run the data through tar. Is there a better
> > >solution to this?
> > > Andrew
> >
> > This does not use any more space. The tar commands are piped together.
> > I doubt a specific cp -r type command would be particularly more efficient.
> >
> >
> >
> >
> >
> i beg to differ, tar uses memory, it uses system resources, i fail to see
Keep it up :-)
-ishwar
* [9fans] dumb question
@ 2002-06-25 16:48 Andrew Stitt
2002-06-26 8:45 ` arisawa
0 siblings, 1 reply; 88+ messages in thread
From: Andrew Stitt @ 2002-06-25 16:48 UTC (permalink / raw)
To: 9fans
ok, so i'm totally stumped on this: how do i copy a directory? cp can
copy files only, with no nifty recursive option. what do i do? thanks
Andrew
* Re: [9fans] dumb question
2002-06-25 16:48 Andrew Stitt
@ 2002-06-26 8:45 ` arisawa
2002-06-26 17:36 ` Andrew Stitt
0 siblings, 1 reply; 88+ messages in thread
From: arisawa @ 2002-06-26 8:45 UTC (permalink / raw)
To: 9fans
Hello,
>ok so im totally stumped on this: how do i copy a directory?
>cp can copy files only, but has no nifty recursive option,
>what do i do? thanks
> Andrew
look at http://plan9.aichi-u.ac.jp/netlib/cmd/cpdir/
The following is a part of README file.
Kenji Arisawa
E-mail: arisawa@aichi-u.ac.jp
Usage: cpdir [-vugRm] [-l list] source destination [path ...]
Options and arguments:
-v: verbose
-u: copy owner info
-g: copy group info
-R: remove redundant files and directories
-m: merge
-l list:
path list to be processed. path list must be a path
per line.
source:
source dir from which copy is made.
destination:
destination dir in which copy is made.
path ... : paths in source to be processed.
Function:
`cpdir' makes an effort to make a copy of source/path
in destination/path.
`cpdir' copies only those files that are out of date.
`cpdir' tries to preserve the following info.:
- contents (judged from modification time)
- modification time
- permissions
- group (under -g option)
- owner if possible. (under -u option)
and under -R option, `cpdir' removes files and directories
in destination/path
that are non-existent in source/path.
* Re: [9fans] dumb question
2002-06-26 8:45 ` arisawa
@ 2002-06-26 17:36 ` Andrew Stitt
0 siblings, 0 replies; 88+ messages in thread
From: Andrew Stitt @ 2002-06-26 17:36 UTC (permalink / raw)
To: 9fans
thank you this is very helpful.
On Wed, 26 Jun 2002 arisawa@ar.aichi-u.ac.jp wrote:
> Hello,
>
> >ok so im totally stumped on this: how do i copy a directory?
> >cp can copy files only, but has no nifty recursive option,
> >what do i do? thanks
> > Andrew
>
> look at http://plan9.aichi-u.ac.jp/netlib/cmd/cpdir/
>
> The following is a part of README file.
>
> Kenji Arisawa
> E-mail: arisawa@aichi-u.ac.jp
>
>
> Usage: cpdir [-vugRm] [-l list] source destination [path ...]
> Options and arguments:
> -v: verbose
> -u: copy owner info
> -g: copy group info
> -R: remove redundant files and directories
> -m: merge
> -l list:
> path list to be processed. path list must be a path
> per line.
> source:
> source dir from which copy is made.
> destination:
> destination dir in which copy is made.
> path ... : paths in source to be processed.
>
> Function:
> `cpdir' makes an effort to make a copy of source/path
> in destination/path.
> `cpdir' copies only those files that are out of date.
> `cpdir' tries to preserve the following info.:
> - contents (judged from modification time)
> - modification time
> - permissions
> - group (under -g option)
> - owner if possible. (under -u option)
> and under -R option, `cpdir' removes files and directories
> in destination/path
> that are non-existent in source/path.
>
>
end of thread, other threads:[~2003-07-15 3:23 UTC | newest]
Thread overview: 88+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2002-06-26 8:55 [9fans] dumb question Stephen Parker
2002-06-26 17:18 ` Andrew Stitt
-- strict thread matches above, loose matches on Subject: below --
2003-07-15 0:37 [9fans] Dumb question Dan Cross
2003-07-15 3:23 ` boyd, rounin
2002-07-05 9:43 [9fans] dumb question Geoff Collyer
2002-07-01 11:03 rog
2002-07-02 8:53 ` Douglas A. Gwyn
[not found] <rob@plan9.bell-labs.com>
2002-06-28 16:49 ` rob pike, esq.
2002-06-29 2:23 ` Scott Schwartz
2002-06-28 16:39 rob pike, esq.
2002-06-28 16:31 rog
2002-06-28 16:14 rob pike, esq.
2002-06-28 14:10 rob pike, esq.
2002-06-28 15:42 ` Douglas A. Gwyn
2002-06-28 14:08 rog
2002-06-28 7:52 Fco.J.Ballesteros
2002-06-27 22:59 Geoff Collyer
2002-06-28 4:32 ` Lucio De Re
2002-06-27 18:38 forsyth
2002-06-28 8:45 ` Douglas A. Gwyn
2002-06-27 17:07 rog
2002-06-27 17:07 forsyth
2002-06-27 16:33 ` Sam
2002-06-27 14:43 Richard Miller
2002-06-27 14:33 forsyth
2002-06-27 13:20 rob pike, esq.
2002-07-05 9:07 ` Maarit Maliniemi
2002-06-27 13:19 rob pike, esq.
2002-06-27 13:21 ` david presotto
2002-06-27 15:40 ` Douglas A. Gwyn
2002-06-27 12:27 rog
2002-06-27 12:12 Fco.J.Ballesteros
2002-06-27 12:06 Fco.J.Ballesteros
2002-06-27 11:13 ` Sam
2002-06-27 11:15 ` Sam
2002-06-27 10:29 nigel
2002-06-27 9:44 Stephen Parker
2002-06-27 9:43 Fco.J.Ballesteros
2002-06-27 9:36 Fco.J.Ballesteros
2002-06-27 1:22 okamoto
2002-06-27 1:05 okamoto
2002-06-26 23:08 forsyth
[not found] ` <171e27d5f2cfd12a9303117e58818e5b@plan9.bell-labs.com>
2002-06-26 18:27 ` Andrew Stitt
2002-06-26 17:39 ` Sam
2002-06-27 9:17 ` Andrew Stitt
2002-06-27 9:33 ` Ish Rattan
2002-06-27 9:59 ` Lucio De Re
2002-06-27 10:05 ` Boyd Roberts
2002-06-27 10:21 ` Lucio De Re
2002-06-27 15:40 ` Douglas A. Gwyn
2002-06-27 16:57 ` Lucio De Re
2002-06-28 8:45 ` Douglas A. Gwyn
2002-06-28 9:31 ` Boyd Roberts
2002-06-28 14:30 ` Douglas A. Gwyn
2002-06-28 15:06 ` Boyd Roberts
2002-07-01 9:46 ` Ralph Corderoy
2002-06-27 15:36 ` Douglas A. Gwyn
2002-06-27 15:12 ` Sam
2002-06-28 8:44 ` Douglas A. Gwyn
2002-06-26 22:07 ` Micah Stetson
2002-06-26 22:59 ` Chris Hollis-Locke
2002-06-27 0:03 ` Micah Stetson
2002-06-27 0:44 ` Micah Stetson
2002-06-27 9:17 ` Ralph Corderoy
2002-06-27 9:48 ` Boyd Roberts
2002-06-27 10:07 ` John Murdie
2002-06-27 15:40 ` Douglas A. Gwyn
2002-06-26 19:04 rog
2002-06-26 19:00 Warrier, Sadanand (Sadanand)
2002-06-27 0:13 ` Steve Arons
2002-06-26 18:41 forsyth
2002-06-26 18:40 David Gordon Hogan
2002-06-26 17:53 rog
2002-06-26 17:45 rob pike, esq.
2002-06-27 9:16 ` Andrew Stitt
2002-06-27 9:17 ` Douglas A. Gwyn
2002-06-26 19:53 ` arisawa
2002-06-27 11:43 ` Ralph Corderoy
2002-06-27 11:04 ` Sam
[not found] <nemo@plan9.escet.urjc.es>
2002-06-25 17:06 ` Fco.J.Ballesteros
2002-06-25 18:01 ` Scott Schwartz
2002-06-26 8:41 ` Andrew Stitt
2002-06-26 9:14 ` Nigel Roles
2002-06-26 17:41 ` Andrew Stitt
2002-06-26 16:08 ` Ish Rattan
2002-06-25 16:48 Andrew Stitt
2002-06-26 8:45 ` arisawa
2002-06-26 17:36 ` Andrew Stitt
This is a public inbox; see mirroring instructions for how to clone and
mirror all data and code used for this inbox, as well as URLs for NNTP
newsgroup(s).