9fans - fans of the OS Plan 9 from Bell Labs
* Re: [9fans] dumb question
@ 2002-06-26 23:08 forsyth
       [not found] ` <171e27d5f2cfd12a9303117e58818e5b@plan9.bell-labs.com>
  0 siblings, 1 reply; 88+ messages in thread
From: forsyth @ 2002-06-26 23:08 UTC (permalink / raw)
  To: 9fans

you could see if iostats(4) offers a hint.
quite apart from retaining Plan 9 modes and long names correctly,
mkfs archives use a file header that's more compact than tar's
(one line per file rather than 512 bytes).
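
for instance, something along these lines should copy a tree while
keeping Plan 9 modes and long names (a sketch from memory of the
mkfs(8) and mkext(8) flags, untested; the proto file myproto that
selects the tree is an assumption, see proto(2)):

	; @{cd /src && disk/mkfs -a -s . myproto} | @{cd /dst && disk/mkext -d .}

disk/mkfs -a writes the archive to standard output instead of to a
file server; disk/mkext unpacks it under the directory named by -d.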



^ permalink raw reply	[flat|nested] 88+ messages in thread
* [9fans] Dumb question.
@ 2003-07-15  0:37 Dan Cross
  2003-07-15  3:23 ` boyd, rounin
  0 siblings, 1 reply; 88+ messages in thread
From: Dan Cross @ 2003-07-15  0:37 UTC (permalink / raw)
  To: 9fans

Don't shoot me.

Does a Java implementation of 9p2000 exist?

	- Dan C.

(Don't ask.)


^ permalink raw reply	[flat|nested] 88+ messages in thread
* Re: [9fans] dumb question
@ 2002-07-05  9:43 Geoff Collyer
  0 siblings, 0 replies; 88+ messages in thread
From: Geoff Collyer @ 2002-07-05  9:43 UTC (permalink / raw)
  To: 9fans

> i have also damaged whole file trees by doing
> 'tar cf - adir | {cd /missspelled/dir; tar xf -}'

I did too, a long time ago, and the lesson I drew is: don't use
"cd directory; tar"; write
	cd directory && tar
instead.  In shell scripts the shell will generally exit if cd fails,
but interactive shells obviously don't.

On Plan 9 at least, you can avoid destroying the archive (at least by
confusing "c" and "x") by not using the "f" key letter and letting tar
read standard input and write standard output, thus:

	tar c >tarfile afile
	tar x <tarfile afile

(Performance is a little poorer than using the "f" key letter, but
correctness seems more important here.)  To destroy the archive, you
have to get "<" and ">" mixed up, which seems less likely.  If it's
that much of a problem, protect your archives ("chmod a-w tarfile").



^ permalink raw reply	[flat|nested] 88+ messages in thread
* Re: [9fans] dumb question
@ 2002-07-01 11:03 rog
  2002-07-02  8:53 ` Douglas A. Gwyn
  0 siblings, 1 reply; 88+ messages in thread
From: rog @ 2002-07-01 11:03 UTC (permalink / raw)
  To: 9fans

> It strikes me that the arguments people are making against having
> blanks in file names are the same arguments I heard against converting
> Plan 9 to Unicode and UTF

the benefits of converting to utf-8 throughout were obvious.  does the
ability to type ' ' rather than (say) ALT-' ' really confer comparable
advantages?



^ permalink raw reply	[flat|nested] 88+ messages in thread
[parent not found: <rob@plan9.bell-labs.com>]
* Re: [9fans] dumb question
@ 2002-06-28 16:39 rob pike, esq.
  0 siblings, 0 replies; 88+ messages in thread
From: rob pike, esq. @ 2002-06-28 16:39 UTC (permalink / raw)
  To: 9fans

> diffs to dosfs? :-)

Putting your fingers in your ears and humming does not constitute
a viable solution to an enduring problem.

-rob



^ permalink raw reply	[flat|nested] 88+ messages in thread
* Re: [9fans] dumb question
@ 2002-06-28 16:31 rog
  0 siblings, 0 replies; 88+ messages in thread
From: rog @ 2002-06-28 16:31 UTC (permalink / raw)
  To: 9fans

doug:
> The question is, what would be a similar mechanism for rc that you'd
> find acceptable?

the same mechanism applies in rc ($ifs rather than $IFS).
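
for instance (a sketch of the mechanism): rc splits `{} output on the
characters in $ifs, which is space, tab and newline by default, so to
split on colons instead (keeping newline in $ifs so the trailing
newline from echo is eaten):

	ifs=':
	'
	t=`{echo 12:34:56}
	echo $t(2)	# prints 34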

but just fixing it in the shell doesn't help the multitude of nasty
little things that still need tampering with to work in the presence
of quoted filenames.

whereas before it was possible to just manipulate text irrespective of
whether it was a filename or a word, now you have to unquote filenames
on input and quote on output, so many things that manipulate text will
need to understand rc-style quotes.

in 3e, the internal representation of a filename is the same as its
external representation, and a lot of tools gain simplicity from this
identity transformation.

there are so many things in the system that potentially or actually
break, i really don't see why we can't just transform spaces at the
boundaries of the system (dossrv, ftpfs, et al) and hence be assured
that *nothing* breaks, rather than adding hack after hack to existing
tools.

rob:
> it seems the shell can and should be fixed. i will entertain
> proposals that include code.

diffs to dosfs? :-)

  cheers,
    rog.



^ permalink raw reply	[flat|nested] 88+ messages in thread
* Re: [9fans] dumb question
@ 2002-06-28 16:14 rob pike, esq.
  0 siblings, 0 replies; 88+ messages in thread
From: rob pike, esq. @ 2002-06-28 16:14 UTC (permalink / raw)
  To: 9fans

Well, rc has $ifs so you can always use the same familiar trick, but
it's not intellectually satisfying.  I have been quietly pushing a
uniform quoting convention through Plan 9, with good library support
(tokenize, %q).  The result is that if `{} used tokenize I think that
not only would this specific example work, but the need for hacks like
$ifs could be reduced.  If programs consistently quoted their
output, the system as a whole would become more consistent.
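
A sketch of the round trip through that library support (Plan 9 C; see
quote(2) and getfields(2) for the details):

	#include <u.h>
	#include <libc.h>

	void
	main(void)
	{
		char buf[64], *fld[4];
		int i, n;

		quotefmtinstall();	/* enable the %q verb */
		snprint(buf, sizeof buf, "%q %q", "a b", "plain");
		print("%s\n", buf);	/* prints: 'a b' plain */
		n = tokenize(buf, fld, nelem(fld));	/* honours rc-style quotes */
		for(i = 0; i < n; i++)
			print("%d: %s\n", i, fld[i]);
		exits(nil);
	}

tokenize unquotes as it splits, so 'a b' comes back as the single
field a b, which is what the rm `{ls | grep -v precious} example
elsewhere in this thread needs.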

I regret blanks in file names, but they're here to stay if you want
any interoperability with Windows.  Windows being a royal irritation
doesn't mean that the problems it raises are not more generally
apparent.  The quoting stuff in Plan 9 came in for other reasons
having to do with consistent output from commands and devices.  If it
also leads to sensible handling of blanks in file names, good for us.

-rob



^ permalink raw reply	[flat|nested] 88+ messages in thread
* Re: [9fans] dumb question
@ 2002-06-28 14:10 rob pike, esq.
  2002-06-28 15:42 ` Douglas A. Gwyn
  0 siblings, 1 reply; 88+ messages in thread
From: rob pike, esq. @ 2002-06-28 14:10 UTC (permalink / raw)
  To: 9fans

	rm `{ls | grep -v precious}

	(which is now broken).

ls quotes its output, so i didn't see why this would fail.
Then I tried it:

rob%% date>'a b'
rob%% echo `{ls | grep ' '}
'a b'
rob%% rm `{ls | grep ' '}
rm: 'a: '''a' No such file or directory
rm: b': 'b''' No such file or directory
rob%%

it seems the shell can and should be fixed. i will entertain
proposals that include code.

-rob



^ permalink raw reply	[flat|nested] 88+ messages in thread
* Re: [9fans] dumb question
@ 2002-06-28 14:08 rog
  0 siblings, 0 replies; 88+ messages in thread
From: rog @ 2002-06-28 14:08 UTC (permalink / raw)
  To: 9fans

>  From time to time I toy with the idea of implementing a
> fully correct xargs (the Unix one doesn't quote arguments).

if you're guaranteed not to have spaces or newlines in filenames, then
it doesn't matter.

i have a small C xargs (attached) that is probably still correct.  (i
think newlines are probably still banned in filenames).

> I wonder if structural r.e.s can express a
> complete and correct quoting algorithm.

well, a normal r.e. can express the rc quoting algorithm.  the problem
is you can't just match it, you've got to translate it as well.

(([^ \t']+)|'([^']|\n|'')*')

the inferno shell provides primitives for quoting and unquoting lists,
which are useful at times.  (e.g.  variables in /env are stored in
quoted form rather than '\0' terminated).

> what we really need is a convenient way to feed macros to
> sam in a sed-like manner and somebody with the time and
> inclination to develop suitable macros and package it all up.

to be honest there's very little that beats the convenience of:

	rm `{ls | grep -v precious}

(which is now broken).

while i'm mentioning the inferno shell, nemo's example works nicely,
as you can pass shell fragments around as words.

e.g.
	walk /source {if {ftest -r $1} {echo $1}}

which lets the shell parse the argument command before passing it on
to the walk script:

	#!/dis/sh
	if {! ~ $#* 2} {
		echo 'usage: walk file cmd' >[1=2]
		exit usage
	}
	(file cmd)=$*
	du -a $file | getlines {
		(nil f) := ${split ' ' $line}
		$cmd $f
	}

if du produced a quoted word for the filename,

	(nil f) := ${split ' ' $line}

would just change to:

	(nil f) := ${unquote ' ' $line}

it's quite nice being able to do this sort of thing, but there are
tradeoffs, naturally.

  cheers,
    rog.


[-- Attachment #2: xargs.c.gz --]
[-- Type: application/octet-stream, Size: 765 bytes --]

^ permalink raw reply	[flat|nested] 88+ messages in thread
* Re: [9fans] dumb question
@ 2002-06-28  7:52 Fco.J.Ballesteros
  0 siblings, 0 replies; 88+ messages in thread
From: Fco.J.Ballesteros @ 2002-06-28  7:52 UTC (permalink / raw)
  To: 9fans

> What seems to me to make sense would be to cleanly segregate out
> the file-tree walker from *all* those applications, and design
> the application interfaces so they could operate in conjunction
> with the walker.  Sort of like find <criteria> -print | cpio ...

Do you mean something like this?


; cat walk
#!/bin/rc
# $f in cmd gets expanded to each file name.
rfork e
if (~ $#* 0 1) {
	echo 'usage: walk file cmd_using_$f...' >[1=2]
	exit usage
}
file=$1
shift
cd `{basename -d $file}
exec du -a $file | awk '{print $2}' | while (f=`{read}) { echo $* |rc }

I use it like in
	walk /tmp 'test -r $f && echo $f'

But perhaps I could also
	dst=/dst walk /source 'test -d $f && mkdir $dst/$f || cp $f $dst/$f'
although I'm just too used to tar for that purpose.



^ permalink raw reply	[flat|nested] 88+ messages in thread
* Re: [9fans] dumb question
@ 2002-06-27 22:59 Geoff Collyer
  2002-06-28  4:32 ` Lucio De Re
  0 siblings, 1 reply; 88+ messages in thread
From: Geoff Collyer @ 2002-06-27 22:59 UTC (permalink / raw)
  To: 9fans

Lucio, try this implementation of xargs.  It's what I use when very
long arg lists cause trouble.

#!/bin/rc
# xargs cmd [arg ...] - emulate Unix xargs
rfork ne
ramfs
split -n 500 -f /tmp/x
for (f in /tmp/x*)
	$* `{cat $f}



^ permalink raw reply	[flat|nested] 88+ messages in thread
* Re: [9fans] dumb question
@ 2002-06-27 18:38 forsyth
  2002-06-28  8:45 ` Douglas A. Gwyn
  0 siblings, 1 reply; 88+ messages in thread
From: forsyth @ 2002-06-27 18:38 UTC (permalink / raw)
  To: 9fans

i do something like that with some software i use for mirroring in
Inferno, where i take advantage of a dynamically-loaded module that
traverses the structures and applies another command or module as it
goes.

[-- Attachment #2: Type: message/rfc822, Size: 2307 bytes --]

To: 9fans@cse.psu.edu
Subject: Re: [9fans] dumb question
Date: Thu, 27 Jun 2002 15:40:23 GMT
Message-ID: <3D1B2CCF.1FD49007@null.net>

Lucio De Re wrote:
> The penalty each instance of cp would have to pay if it included
> directory descent logic would be paid by every user, every time
> they use cp, while being used only a fraction of the time.
> Does it not make sense to leave the logic in tar, where it gets
> used under suitable circumstances and omit it in cp (and mv, for
> good measure and plenty other reasons)?

What seems to me to make sense would be to cleanly segregate out
the file-tree walker from *all* those applications, and design
the application interfaces so they could operate in conjunction
with the walker.  Sort of like find <criteria> -print | cpio ...

^ permalink raw reply	[flat|nested] 88+ messages in thread
* Re: [9fans] dumb question
@ 2002-06-27 17:07 rog
  0 siblings, 0 replies; 88+ messages in thread
From: rog @ 2002-06-27 17:07 UTC (permalink / raw)
  To: 9fans

> For the record, it doesn't have cp -R either.

ahem.
actually it does. i know 'cos i just cleaned up the source.
mind you, at 247 lines and under 4K of binary, it isn't huge.



^ permalink raw reply	[flat|nested] 88+ messages in thread
* Re: [9fans] dumb question
@ 2002-06-27 17:07 forsyth
  2002-06-27 16:33 ` Sam
  0 siblings, 1 reply; 88+ messages in thread
From: forsyth @ 2002-06-27 17:07 UTC (permalink / raw)
  To: 9fans

>>inferno.  For the record, it doesn't have cp -R either.

no, it's called cp -r.



^ permalink raw reply	[flat|nested] 88+ messages in thread
* Re: [9fans] dumb question
@ 2002-06-27 14:43 Richard Miller
  0 siblings, 0 replies; 88+ messages in thread
From: Richard Miller @ 2002-06-27 14:43 UTC (permalink / raw)
  To: 9fans

> For simplicity, I keep all my work in one huge file and use e.g.
> 	#ifdef ED
> 	....
> 	#endif ED
> to extract the specific lines of C source that make up ed when
> i need to work on ed...

This is pretty much the file system structure advocated by
Jef Raskin in The Humane Interface:

"... Documents are sequences of pages separated by document characters,
each typable, searchable, and deletable, just as is any other character.
There can be higher delimiters, such as folder and volume characters,
even section and library delimiters, but the number of levels depends
on the size of the data... An individual who insists on having explicit
document names can adopt the personal convention of placing the desired
names immediately following document characters... If you wanted a
directory, there could be a command that would assemble a document
comprising all instances of strings consisting of a document character,
followed by any other characters, up to and including the first
Return or higher delimiter." (pp 120-121)

-- Richard




^ permalink raw reply	[flat|nested] 88+ messages in thread
* Re: [9fans] dumb question
@ 2002-06-27 14:33 forsyth
  0 siblings, 0 replies; 88+ messages in thread
From: forsyth @ 2002-06-27 14:33 UTC (permalink / raw)
  To: 9fans

accident is one possible justification, and there's also a matter of
meaning.  if i say `copy this file to that file' i expect to see a
strict copy of the file as a result.  it's fairly clear cut.  on most
systems that provide directory `copying', if i say cp dir1 target (or
cp -r) and dir1 exists in the target, i usually obtain a merge (it's
perhaps one of the reasons that options bristle as finer control over
the operation is provided).  arguably, cp should instead remove files in the
existing destination directory in order to produce a strict copy.



^ permalink raw reply	[flat|nested] 88+ messages in thread
* Re: [9fans] dumb question
@ 2002-06-27 13:20 rob pike, esq.
  2002-07-05  9:07 ` Maarit Maliniemi
  0 siblings, 1 reply; 88+ messages in thread
From: rob pike, esq. @ 2002-06-27 13:20 UTC (permalink / raw)
  To: 9fans

> Perhaps someone could explain why cp doesn't work recursively (without
> an option) when given a directory as a first argument.

Because running cp that way would probably happen more often by mistake
than by intention, and it could damage a lot of files very fast before the
user noticed.

-rob



^ permalink raw reply	[flat|nested] 88+ messages in thread
* Re: [9fans] dumb question
@ 2002-06-27 13:19 rob pike, esq.
  2002-06-27 13:21 ` david presotto
  2002-06-27 15:40 ` Douglas A. Gwyn
  0 siblings, 2 replies; 88+ messages in thread
From: rob pike, esq. @ 2002-06-27 13:19 UTC (permalink / raw)
  To: 9fans

> Do you guys work mostly with flat file hierarchies or what?

For simplicity, I keep all my work in one huge file and use e.g.
	#ifdef ED
	....
	#endif ED
to extract the specific lines of C source that make up ed when
i need to work on ed. (I'm usually working on ed when I should
be working on cp; old habits die hard). This style of use means
for example there is only one copy of
	main(int argc, char *argv[])
on the system, which saves a lot of space.

I have some ideas for a similar technology to apply to 9fans.

-rob



^ permalink raw reply	[flat|nested] 88+ messages in thread
* Re: [9fans] dumb question
@ 2002-06-27 12:27 rog
  0 siblings, 0 replies; 88+ messages in thread
From: rog @ 2002-06-27 12:27 UTC (permalink / raw)
  To: 9fans

some great quotes have come out of this little exchange:

"9ers do it with du"
"...techno-pissing contest..."
"human beings aren't that different from machines"
"you keep replacing speed with blind simplicity"

the latter seems like a fair summary of what plan 9 is about:
sacrifice a few CPU cycles here for some more generality there.

> im arguing about the apparent acceptance of a solution which wastes
> resources

we accept many solutions that, strictly speaking, waste resources.

e.g.
	ls | wc

but do we care?  not unless the overhead impacts significantly on our
ability to get the job done...  and if it does, then measurement and
judicious optimisation is the way forward, as with any programming
task.

  cheers,
    rog.



^ permalink raw reply	[flat|nested] 88+ messages in thread
* Re: [9fans] dumb question
@ 2002-06-27 12:12 Fco.J.Ballesteros
  0 siblings, 0 replies; 88+ messages in thread
From: Fco.J.Ballesteros @ 2002-06-27 12:12 UTC (permalink / raw)
  To: 9fans

I see, you'd need to sign an NDA just to tell me.
Back to work then.

[-- Attachment #2: Type: message/rfc822, Size: 1613 bytes --]

From: Sam <sah@softcardsystems.com>
To: <9fans@cse.psu.edu>
Subject: Re: [9fans] dumb question
Date: Thu, 27 Jun 2002 07:13:49 -0400 (EDT)
Message-ID: <Pine.LNX.4.30.0206270712440.24208-100000@athena>

On Thu, 27 Jun 2002, Fco.J.Ballesteros wrote:

> :  Human beings aren't *that* different from
> :  machines.
>
> Which OS do you run?

Model number / type is hard to come by since we're having a hard
time reaching the author.

;-P

Sam


^ permalink raw reply	[flat|nested] 88+ messages in thread
* Re: [9fans] dumb question
@ 2002-06-27 12:06 Fco.J.Ballesteros
  2002-06-27 11:13 ` Sam
  0 siblings, 1 reply; 88+ messages in thread
From: Fco.J.Ballesteros @ 2002-06-27 12:06 UTC (permalink / raw)
  To: 9fans

:  Human beings aren't *that* different from
:  machines.

Which OS do you run?

☺



^ permalink raw reply	[flat|nested] 88+ messages in thread
* Re: [9fans] dumb question
@ 2002-06-27 10:29 nigel
  0 siblings, 0 replies; 88+ messages in thread
From: nigel @ 2002-06-27 10:29 UTC (permalink / raw)
  To: 9fans

> i'm arguing about the apparent acceptance of a solution which wastes
> resources; sure it's 30k, but you can do quite a bit with 30k. and if you
> actually have lots of users, things start to slow down. the point i'm trying
> to make is: tar|tar uses more memory and resources than a recursive copy;
> i've thoroughly explained this many times. what's so hard to grasp here?

Perhaps you cannot grasp that the consensus agrees that this is not
the most efficient solution, but that it is "efficient enough" given
that recursive tree copy is not the most commonly used operation.
This is based on experience.

Members of the list have provided numerical evidence that it's
efficient enough.  But that's probably a mistake; it just started a
techno-pissing match from which there are no winners.  Telling the
list about disk sectors and DMA will not advance your argument.  The
majority know about those.  Really.

Plan 9 has a philosophy.  It has certain aims which are much more
important than the exact path by which data in files travel during a
tree copy.  One day, when everything else which is more important is
done, cp -r will get tuned till the magneto-optical domains squeak.
But that day will probably never come.

I think I'll close with the Presotto contention.  It runs like this:
"if you're unhappy with the way Plan 9 does X, just wait until
you see Y".

In this case X = "file tree copies", and Y is always "web browsing".

Positively my last word on the subject.

Nigel



^ permalink raw reply	[flat|nested] 88+ messages in thread
* RE: [9fans] dumb question
@ 2002-06-27  9:44 Stephen Parker
  0 siblings, 0 replies; 88+ messages in thread
From: Stephen Parker @ 2002-06-27  9:44 UTC (permalink / raw)
  To: '9fans@cse.psu.edu'

most of us don't spend much time moving directories around.
maybe once per week.  maybe.
it's not worth optimising this.

stephen

--
Stephen Parker.  Pi Technology.  +44 (0)1223 203438.

> -----Original Message-----
> From: Andrew Stitt [mailto:astitt@cats.ucsc.edu]
> Sent: 27 June 2002 10:17
> To: 9fans@cse.psu.edu
> Subject: Re: [9fans] dumb question
>
>
> On Wed, 26 Jun 2002, Sam wrote:
>
> > > cp is about 60k in plan9 and tar is 80k in plan9. cp on 3 separate unix
> > > machines (linux, SCO unix, os-X) in its overcomplicated copying directory
> > > tree glory is under 30k; on sunOS it happens to be 17k. to tar then untar
> > > requires two processes using a shared 80k tar, plus some intermediate data
> > > to archive and process the data, then immediately reverse this process. cp
> > > _could_ be written to do all this in 17k but instead our 60k cp can't do
> > > it, and instead we need two entries in the process table and twice the
> > > number of user space pages.
> > I'm sorry, but there must be something wrong with my mail reader.  Are we
> > really arguing about 30k?  Am I dreaming?  Are you trying to run plan9
> > on a machine with 256k of memory?
> >
> > ...
> i'm arguing about the apparent acceptance of a solution which wastes
> resources; sure it's 30k, but you can do quite a bit with 30k. and if you
> actually have lots of users, things start to slow down. the point i'm
> trying to make is: tar|tar uses more memory and resources than a recursive
> copy; i've thoroughly explained this many times. what's so hard to grasp
> here?


^ permalink raw reply	[flat|nested] 88+ messages in thread
* Re: [9fans] dumb question
@ 2002-06-27  9:43 Fco.J.Ballesteros
  0 siblings, 0 replies; 88+ messages in thread
From: Fco.J.Ballesteros @ 2002-06-27  9:43 UTC (permalink / raw)
  To: 9fans

:  resources; sure it's 30k, but you can do quite a bit with 30k. and if you
:  actually have lots of users, things start to slow down. the point i'm trying to

Answer these questions:

How many users are doing dircps at the same time?
Have you actually seen that happen?

I think it's not even worth discussing saving 30k here,
but that's just IMHO.



^ permalink raw reply	[flat|nested] 88+ messages in thread
* Re: [9fans] dumb question
@ 2002-06-27  9:36 Fco.J.Ballesteros
  0 siblings, 0 replies; 88+ messages in thread
From: Fco.J.Ballesteros @ 2002-06-27  9:36 UTC (permalink / raw)
  To: 9fans

:  > The command
:  > 	tar c ... | tar x ...
:  > will have almost identical cost to any explicit recursive copy, since
:  > it does the same amount of work.  The only extra overhead is writing
:
:  it does not do the same amount of work, it runs two processes which use at
:  least two pages instead of one, it also runs all the data through some

I think the point is that if you don't notice the `extra work'
done by the tars, then you'd better forget about optimizing it
(Who cares when nobody notices it?).



^ permalink raw reply	[flat|nested] 88+ messages in thread
* Re: [9fans] dumb question
@ 2002-06-27  1:22 okamoto
  0 siblings, 0 replies; 88+ messages in thread
From: okamoto @ 2002-06-27  1:22 UTC (permalink / raw)
  To: 9fans

I forgot to write the essential point.

For cp -r, I suppose they thought "can't you use bind(1) instead?".
If that's not enough, it's probably a wrong usage of Plan 9, and anyway
you can do it with tar...

Kenji


^ permalink raw reply	[flat|nested] 88+ messages in thread
* Re: [9fans] dumb question
@ 2002-06-27  1:05 okamoto
  0 siblings, 0 replies; 88+ messages in thread
From: okamoto @ 2002-06-27  1:05 UTC (permalink / raw)
  To: 9fans

Please let me say one very important thing. ^_^

If you feel something is missing in Plan 9, you'd better think about why
they eliminated it, particularly when it's related to basic unix commands.
I don't think they eliminated anything carelessly in Plan 9.
I'm saying this from many such experiences of my own.  :-)

Kenji


[-- Attachment #2: Type: message/rfc822, Size: 3269 bytes --]

From: Andrew Stitt <astitt@cats.ucsc.edu>
To: 9fans@cse.psu.edu
Subject: Re: [9fans] dumb question
Date: Wed, 26 Jun 2002 17:41:06 GMT
Message-ID: <Pine.SOL.3.96.1020626101934.26964C-100000@teach.ic.ucsc.edu>

On Wed, 26 Jun 2002, Nigel Roles wrote:

> On Wed, 26 Jun 2002 08:41:06 GMT, Andrew Stitt wrote:
>
> >On Tue, 25 Jun 2002, Fco.J.Ballesteros wrote:
> >
> >> Again, from rsc tiny tools :-)
> >>
> >> ; cat /bin/dircp
> >> #!/bin/rc
> >>
> >> switch($#*){
> >> case 2
> >> 	@{cd $1 && tar c .}|@{cd $2 && tar x}
> >> case *
> >> 	echo usage: dircp from to >[1=2]
> >> }
> >>
> >>
> >why must i needlessly shove all the files into a tar, then unpack them
> >again? that's incredibly inefficient! that uses roughly twice the space
> >that should be required, it has to copy the files twice, and it has the
> >overhead of having to needlessly run the data through tar. Is there a better
> >solution to this?
> >      Andrew
>
> This does not use any more space. The tar commands are piped together.
> I doubt a specific cp -r type command would be particularly more efficient.
>
>
>
>
>
i beg to differ, tar uses memory, it uses system resources, i fail to see
how you think this is just as good as just recursively copying files. The
point is I shouldn't have to needlessly use this other program (for Tape
ARchives) to copy directories. If this is on a fairly busy file server,
needlessly running tar twice is simply wasteful and unacceptable when you
could just follow the directory tree.

^ permalink raw reply	[flat|nested] 88+ messages in thread
* Re: [9fans] dumb question
@ 2002-06-26 19:04 rog
  0 siblings, 0 replies; 88+ messages in thread
From: rog @ 2002-06-26 19:04 UTC (permalink / raw)
  To: 9fans

> cp is about 60k in plan9 and tar is 80k in plan9.  cp on 3 separate
> unix machines (linux, SCO unix, os-X) in its overcomplicated copying
> directory tree glory is under 30k; on sunOS it happens to be 17k.

this is a little over-harsh on plan 9, i think.

to start with, none of the plan 9 binaries are stripped, which takes
cp and tar down to 40K and 50K respectively. also, all plan 9 binaries
are statically linked. a look at a nearby FreeBSD box gives cp at
65K statically linked and stripped, which seems like a fairer comparison.
(remembering that dynamic libraries impose their own overhead too,
when starting up a program).

mind you, a quick look at the 3rd edition cp gives 15K stripped,
and the cp source hasn't changed that much. i wonder what's using
that extra 20K of text. i suspect the new print library.
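
to check the sizes on your own system (the paths assume a 386
installation):

	; ls -l /386/bin/cp /386/bin/tar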

  cheers,
    rog.



^ permalink raw reply	[flat|nested] 88+ messages in thread
* RE: [9fans] dumb question
@ 2002-06-26 19:00 Warrier, Sadanand (Sadanand)
  2002-06-27  0:13 ` Steve Arons
  0 siblings, 1 reply; 88+ messages in thread
From: Warrier, Sadanand (Sadanand) @ 2002-06-26 19:00 UTC (permalink / raw)
  To: '9fans@cse.psu.edu'

On my solaris machine

ls -la /usr/sbin/tar

-r-xr-xr-x   1  bin   bin 63544 Nov 24 1998 /usr/sbin/tar*

ls -la /opt/exp/gnu/bin/tar

-rwxrwxr-x   1  exptools  software 161328  Jul  4  2000
/opt/exp/gnu/bin/tar*

S

-----Original Message-----
From: forsyth@caldo.demon.co.uk [mailto:forsyth@caldo.demon.co.uk]
Sent: Wednesday, June 26, 2002 2:41 PM
To: 9fans@cse.psu.edu
Subject: Re: [9fans] dumb question


> cp is about 60k in plan9 and tar is 80k in plan9. cp on 3 separate unix
> machines (linux, SCO unix, os-X) in its overcomplicated copying directory
> tree glory is under 30k; on sunOS it happens to be 17k. to tar then untar
> requires two processes using a shared 80k tar, plus some intermediate data
> to archive and process the data, then immediately reverse this process. cp
> _could_ be written to do all this in 17k but instead our 60k cp can't do
> it, and instead we need two entries in the process table and twice the
> number of user space pages.

there's a useful point to be made here with respect to other such
comparisons:
you're overlooking the cost on the other systems of the (usually)
large memory space consumed by the dynamically-linked shared libraries.
that's why their cp seems `small' and plan9's seems `large'.  with plan 9,
cp really does take 60k (code, plus data+stack), on the others, it's
anyone's guess: the 17k is the tip of the iceberg.


^ permalink raw reply	[flat|nested] 88+ messages in thread
* Re: [9fans] dumb question
@ 2002-06-26 18:41 forsyth
  0 siblings, 0 replies; 88+ messages in thread
From: forsyth @ 2002-06-26 18:41 UTC (permalink / raw)
  To: 9fans

> cp is about 60k in plan9 and tar is 80k in plan9. cp on 3 separate unix
> machines (linux, SCO unix, os-X) in its overcomplicated copying directory
> tree glory is under 30k; on sunOS it happens to be 17k. to tar then untar
> requires two processes using a shared 80k tar, plus some intermediate data
> to archive and process the data, then immediately reverse this process. cp
> _could_ be written to do all this in 17k but instead our 60k cp can't do
> it, and instead we need two entries in the process table and twice the
> number of user space pages.

there's a useful point to be made here with respect to other such comparisons:
you're overlooking the cost on the other systems of the (usually)
large memory space consumed by the dynamically-linked shared libraries.
that's why their cp seems `small' and plan9's seems `large'.  with plan 9,
cp really does take 60k (code, plus data+stack), on the others, it's
anyone's guess: the 17k is the tip of the iceberg.



^ permalink raw reply	[flat|nested] 88+ messages in thread
* RE: [9fans] dumb question
@ 2002-06-26 18:40 David Gordon Hogan
  0 siblings, 0 replies; 88+ messages in thread
From: David Gordon Hogan @ 2002-06-26 18:40 UTC (permalink / raw)
  To: 9fans

> do you plan on doing this a lot?

I hope not.



^ permalink raw reply	[flat|nested] 88+ messages in thread
* RE: [9fans] dumb question
@ 2002-06-26 17:53 rog
  0 siblings, 0 replies; 88+ messages in thread
From: rog @ 2002-06-26 17:53 UTC (permalink / raw)
  To: 9fans

> possibly, keep in mind it's a complete waste of file server resources to
> repeatedly tar/untar a directory tree; all i want is the ability to copy a
> directory tree without having to use a program that wasn't really designed
> for that anyway.

actually it's quite possible that the tar pipeline is *more* efficient
than a traditionally implemented cp -r, as it's doing reading and
writing in parallel.

i wouldn't be surprised to see the tar pipeline win out over cp -r
when copying between different disks.

the most valid objection is that tar doesn't cope with the new, longer
filenames in plan 9 (if that's true).  that could be annoying on
occasion.

  rog.



^ permalink raw reply	[flat|nested] 88+ messages in thread
* Re: [9fans] dumb question
@ 2002-06-26 17:45 rob pike, esq.
  2002-06-27  9:16 ` Andrew Stitt
  2002-06-27  9:17 ` Douglas A. Gwyn
  0 siblings, 2 replies; 88+ messages in thread
From: rob pike, esq. @ 2002-06-26 17:45 UTC (permalink / raw)
  To: astitt, 9fans

> i beg to differ, tar uses memory, it uses system resources, i fail to see
> how you think this is just as good as just recursively copying files. The
> point is I shouldn't have to needlessly use this other program (for Tape
> ARchives) to copy directories. If this is on a fairly busy file server,
> needlessly running tar twice is simply wasteful and unacceptable when you
> could just follow the directory tree.

The command
	tar c ... | tar x ...
will have almost identical cost to any explicit recursive copy, since
it does the same amount of work.  The only extra overhead is writing
to and reading from the pipe, which is so cheap - and entirely local -
that it is insignificant.

Whether you cp or tar, you must read the files from one tree and write
them to another.

-rob



^ permalink raw reply	[flat|nested] 88+ messages in thread
* RE: [9fans] dumb question
@ 2002-06-26  8:55 Stephen Parker
  2002-06-26 17:18 ` Andrew Stitt
  0 siblings, 1 reply; 88+ messages in thread
From: Stephen Parker @ 2002-06-26  8:55 UTC (permalink / raw)
  To: '9fans@cse.psu.edu'

do you plan on doing this a lot?

--
Stephen Parker.  Pi Technology.  +44 (0)1223 203438.

> why must i needlessly shove all the files into a tar, then unpack them
> again? that's incredibly inefficient! that uses roughly twice the space
> that should be required, it has to copy the files twice, and it has the
> overhead of having to needlessly run the data through tar. Is there a
> better solution to this?



^ permalink raw reply	[flat|nested] 88+ messages in thread
[parent not found: <nemo@plan9.escet.urjc.es>]
* [9fans] dumb question
@ 2002-06-25 16:48 Andrew Stitt
  2002-06-26  8:45 ` arisawa
  0 siblings, 1 reply; 88+ messages in thread
From: Andrew Stitt @ 2002-06-25 16:48 UTC (permalink / raw)
  To: 9fans

ok so i'm totally stumped on this: how do i copy a directory? cp can copy
files only, but has no nifty recursive option. what do i do? thanks
                 Andrew


^ permalink raw reply	[flat|nested] 88+ messages in thread

end of thread, other threads:[~2003-07-15  3:23 UTC | newest]

Thread overview: 88+ messages
2002-06-26 23:08 [9fans] dumb question forsyth
     [not found] ` <171e27d5f2cfd12a9303117e58818e5b@plan9.bell-labs.com>
2002-06-26 18:27   ` Andrew Stitt
2002-06-26 17:39     ` Sam
2002-06-27  9:17       ` Andrew Stitt
2002-06-27  9:33         ` Ish Rattan
2002-06-27  9:59         ` Lucio De Re
2002-06-27 10:05           ` Boyd Roberts
2002-06-27 10:21             ` Lucio De Re
2002-06-27 15:40           ` Douglas A. Gwyn
2002-06-27 16:57             ` Lucio De Re
2002-06-28  8:45               ` Douglas A. Gwyn
2002-06-28  9:31               ` Boyd Roberts
2002-06-28 14:30                 ` Douglas A. Gwyn
2002-06-28 15:06                   ` Boyd Roberts
2002-07-01  9:46                   ` Ralph Corderoy
2002-06-27 15:36       ` Douglas A. Gwyn
2002-06-27 15:12         ` Sam
2002-06-28  8:44           ` Douglas A. Gwyn
2002-06-26 22:07     ` Micah Stetson
2002-06-26 22:59       ` Chris Hollis-Locke
2002-06-27  0:03         ` Micah Stetson
2002-06-27  0:44           ` Micah Stetson
2002-06-27  9:17     ` Ralph Corderoy
2002-06-27  9:48       ` Boyd Roberts
2002-06-27 10:07         ` John Murdie
2002-06-27 15:40         ` Douglas A. Gwyn
  -- strict thread matches above, loose matches on Subject: below --
2003-07-15  0:37 [9fans] Dumb question Dan Cross
2003-07-15  3:23 ` boyd, rounin
2002-07-05  9:43 [9fans] dumb question Geoff Collyer
2002-07-01 11:03 rog
2002-07-02  8:53 ` Douglas A. Gwyn
     [not found] <rob@plan9.bell-labs.com>
2002-06-28 16:49 ` rob pike, esq.
2002-06-29  2:23   ` Scott Schwartz
2002-06-28 16:39 rob pike, esq.
2002-06-28 16:31 rog
2002-06-28 16:14 rob pike, esq.
2002-06-28 14:10 rob pike, esq.
2002-06-28 15:42 ` Douglas A. Gwyn
2002-06-28 14:08 rog
2002-06-28  7:52 Fco.J.Ballesteros
2002-06-27 22:59 Geoff Collyer
2002-06-28  4:32 ` Lucio De Re
2002-06-27 18:38 forsyth
2002-06-28  8:45 ` Douglas A. Gwyn
2002-06-27 17:07 rog
2002-06-27 17:07 forsyth
2002-06-27 16:33 ` Sam
2002-06-27 14:43 Richard Miller
2002-06-27 14:33 forsyth
2002-06-27 13:20 rob pike, esq.
2002-07-05  9:07 ` Maarit Maliniemi
2002-06-27 13:19 rob pike, esq.
2002-06-27 13:21 ` david presotto
2002-06-27 15:40 ` Douglas A. Gwyn
2002-06-27 12:27 rog
2002-06-27 12:12 Fco.J.Ballesteros
2002-06-27 12:06 Fco.J.Ballesteros
2002-06-27 11:13 ` Sam
2002-06-27 11:15   ` Sam
2002-06-27 10:29 nigel
2002-06-27  9:44 Stephen Parker
2002-06-27  9:43 Fco.J.Ballesteros
2002-06-27  9:36 Fco.J.Ballesteros
2002-06-27  1:22 okamoto
2002-06-27  1:05 okamoto
2002-06-26 19:04 rog
2002-06-26 19:00 Warrier, Sadanand (Sadanand)
2002-06-27  0:13 ` Steve Arons
2002-06-26 18:41 forsyth
2002-06-26 18:40 David Gordon Hogan
2002-06-26 17:53 rog
2002-06-26 17:45 rob pike, esq.
2002-06-27  9:16 ` Andrew Stitt
2002-06-27  9:17 ` Douglas A. Gwyn
2002-06-26 19:53   ` arisawa
2002-06-27 11:43   ` Ralph Corderoy
2002-06-27 11:04     ` Sam
2002-06-26  8:55 Stephen Parker
2002-06-26 17:18 ` Andrew Stitt
     [not found] <nemo@plan9.escet.urjc.es>
2002-06-25 17:06 ` Fco.J.Ballesteros
2002-06-25 18:01   ` Scott Schwartz
2002-06-26  8:41   ` Andrew Stitt
2002-06-26  9:14     ` Nigel Roles
2002-06-26 17:41       ` Andrew Stitt
2002-06-26 16:08         ` Ish Rattan
2002-06-25 16:48 Andrew Stitt
2002-06-26  8:45 ` arisawa
2002-06-26 17:36   ` Andrew Stitt
