From: presotto@plan9.bell-labs.com
To: 9fans@cse.psu.edu
Subject: Re: [9fans] env walk
Date: Thu, 10 May 2001 08:58:01 -0400
Message-ID: <20010510125808.B6A6E199FB@mail.cse.psu.edu>
We stuck your change into devenv.c. devproc.c was pretty bad too, since our
cpu servers often have upwards of 300 procs. Ps was n**2, which got
unwieldy for a tool rob is building, so we added a hash table to proc.c
to map pid to proctab slot. Time for ps on our cpu servers goes from
4 seconds to milliseconds. We'll include both changes in the next update
wrap.
We should probably rethink xxxgen, but for now we'll just speed up the
high runners; even the proc.c hash was only a few lines of code.
By the way, it's setting all the env's in your example that takes the
time, not the forking in rc. Rc doesn't reread the /env's. If you
run iostats on the example you'll see no opens of /env after
setting the vars:
% eval X^(`{seq 400})^('=1')
% echo /env/*|wc
1 448 4443
% iostats rc
% rfork e
% time echo `{echo x | wc -c}
2
0.00u 0.00s 0.17r echo 2
% ^D
read 157530 bytes, 9.022221 Kb/sec
write 95 bytes, 1.325316 Kb/sec
protocol 161425 bytes, 8.131304 Kb/sec
rpc 128 count
Message   Count     Low    High    Time   Averg      in     out
clone        17       0       0       0    0 ms     119      85 bytes
walk         30       0     297    1092   36 ms     990     338 bytes
open          9      32     469    1142  126 ms      54     117 bytes
read         50       0   13593   17051  341 ms     750  157930 bytes
write         5       1      62      70   14 ms     175      35 bytes
clunk        14       0      16      31    2 ms      70      70 bytes
stat          2       0       1       1    0 ms      10     242 bytes
attach        1       0       0       0    0 ms     146      26 bytes
Opens  Reads  (bytes)  Writes  (bytes)  File
    2      9    28100       0        0  /bin/echo
    1      3       36       0        0  (stdin)
    1      0        0       1        2  (stdout)
    1      0        0       4       93  (stderr)
    1      2      579       0        0  /rc/lib/rcmain
    1     23    87864       0        0  /bin/rc
    1      6    18680       0        0  /bin/time
    1      7    22271       0        0  /bin/wc
%
From: Peter Schay <pschay@pobox.com>
To: 9fans@cse.psu.edu
Subject: Re: [9fans] env walk
Date: Thu, 10 May 2001 08:33:48 GMT
Message-ID: <3AF9CB94.C507F463@pobox.com>
Thanks for looking into this! Sorry for beating it to death, but...
I still think more time is spent in envgen, because, as you said,
the algorithm is O(n**3).
Devdir and all the extra qlocking contribute as well
(see the kprof results below).
I changed your example to be triply nested,
	int i, j, k;
	struct X *p;

	for(i = 0; i < nelem(x)-1; i++)
		x[i].next = &x[i+1];

	for(i = 0; i < nelem(x); i++)
		for(j = 0; j < nelem(x); j++)
			for(k = 0, p = x; k < j && p != nil; p = p->next, k++)
				;
and it took around 11 seconds instead of .01. This is a good
example of what happens with the triply nested loops of
rc, devwalk, and envgen.
Sorry, the original example I gave was misleading for two reasons.
First, you are right: it does walk to all the /env entries,
so I should have said this was O(n**3). I was talking about the
time to walk to a *single* entry being O(n**2); the time to walk
to all the entries is indeed O(n**3), which is why it really
starts to get slow.
Second, the time taken is not because of ls /env. You can get a
similar delay from forking any process, because rc walks to all
the /env entries (I think).
The following example is also slow because of the O(n**3) effect,
and it doesn't use ls /env.
term% echo startclr > /dev/kpctl
term% time /fd/0 <<.
#!/bin/rc
rfork e
eval X^(`{seq 400})^('=1')
echo `{echo x | wc -c}
.
2
0.14u 3.14s 3.68r /fd/0
term% echo stop > /dev/kpctl
term% kprof /n/9fat/9pcdisk /dev/kpdata | sed 12q
total: 3684 in kernel text: 3528 outside kernel text: 156
KTZERO 80100000
  ms     %  sym
 732  20.7  envgen
 540  15.3  i8259isr
 348   9.8  sched
 348   9.8  runproc
 216   6.1  lock
 132   3.7  devdir
 132   3.7  cycletimer
 132   3.7  memmove
  96   2.7  qunlock
term%
-Pete
presotto@plan9.bell-labs.com wrote:
>
> The n**2 comes from the ls M* walking each entry in the environment
> with envwalk walking from the first to that entry every time. All the
> time is spent in the devdir+convM2D in procgen. The
>
> for(e = eg->entries; e && s; e = e->link)
> s--;
>
> is pretty much fleeting even for 1000 entries. Otherwise the algorithm for
> your example would be n**3.
>
> for example:
>
> #include <u.h>
> #include <libc.h>
>
> struct X
> {
> struct X *next;
> };
>
> struct X x[1000];
>
> void
> main(void)
> {
> struct X *p;
> int i, j;
>
> for(i = 0; i < nelem(x)-1; i++)
> x[i].next = &x[i+1];
>
> for(i = 0; i < nelem(x); i++)
> for(j = 0, p = x; j < i && p != nil; p = p->next, j++)
> ;
>
> }
>
> takes .01 seconds on my crufty old machine.
> The advantage of your loop is that you take all the useless stuff
> that devdir does out of the inner loop, not to mention the call
> itself.