9fans - fans of the OS Plan 9 from Bell Labs
* [9fans] netpipe results
@ 2002-03-09  2:44 andrey mirtchovski
  0 siblings, 0 replies; 3+ messages in thread
From: andrey mirtchovski @ 2002-03-09  2:44 UTC (permalink / raw)
  To: 9fans

[-- Attachment #1: Type: text/plain, Size: 986 bytes --]

We ported netpipe natively to Plan 9...  It has some nasty code in it
that has nothing to do with optimization; it's simply not written
following TPOP (The Practice of Programming).

Attached is a JPEG plot of the data produced by netpipe.  Measured
are Linux (RH 7.2), FreeBSD (4.4) and Plan 9, where Plan 9 is tested on
both 100baseT and Gigabit Ethernet (set up as before)...

If you need the netpipe code, mail me privately and I'll send it to you
Monday, but keep in mind that it's rather nasty.

Some strange behaviour should be noted -- in both cases (100 Mbit and
GigE) the test slowed to a crawl as the data size grew larger.  The
GigE interface showed very large variations at the end (it is plotted
with points simply because the lines were too discouraging).  Again,
this could be a consequence of the port.

Both Linux and FreeBSD completed all 127 (128?) iterations, but the data
at the end was removed from the plot (it continued the shape of the
curve without significant change).
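
In case it helps to picture what the test is doing before the real code
goes out: below is a minimal Plan 9 sketch of a NetPIPE-style ping-pong
loop.  It is not the ported netpipe source; it assumes a peer that
echoes every block back unchanged, and the block-size schedule, the
default service name and the error handling are all simplifications.

#include <u.h>
#include <libc.h>

/*
 * NetPIPE-style ping-pong sketch (illustration only, not netpipe).
 * Send a block of each size to an echoing peer, time the round trip,
 * and report throughput as 2*size bytes over the elapsed time.
 */
void
main(int argc, char *argv[])
{
	int fd, size;
	char *buf;
	vlong t0, t1;

	if(argc != 2)
		sysfatal("usage: pingpong netaddr");
	fd = dial(netmkaddr(argv[1], "tcp", "echo"), nil, nil, nil);
	if(fd < 0)
		sysfatal("dial %s: %r", argv[1]);
	for(size = 1; size <= 8*1024*1024; size *= 2){
		buf = malloc(size);	/* contents don't matter for a bandwidth test */
		if(buf == nil)
			sysfatal("malloc: %r");
		t0 = nsec();
		if(write(fd, buf, size) != size || readn(fd, buf, size) != size)
			sysfatal("i/o at size %d: %r", size);
		t1 = nsec();
		print("%8d bytes  %.3f Mbit/s\n",
			size, 2.0*size*8/((t1-t0)/1e9)/1e6);
		free(buf);
	}
	exits(nil);
}

readn is used on the receive side because TCP is free to return the
echoed block in pieces.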


andrey


[-- Attachment #2: plot.jpg --]
[-- Type: image/jpeg, Size: 33258 bytes --]


* Re: [9fans] netpipe results
  2002-03-09 13:52 presotto
@ 2002-03-13  1:00 ` Dean Prichard
  0 siblings, 0 replies; 3+ messages in thread
From: Dean Prichard @ 2002-03-13  1:00 UTC (permalink / raw)
  To: presotto; +Cc: 9fans


Andrey and I spent some time trying out different values
of QMAX in /sys/src/9/ip/tcp.c.  Increasing QMAX
seems to improve netpipe's large-packet behavior quite a bit.
All tests were done using two 800 MHz PIII machines with onboard
i82557 Ethernet.  The graphs are available from
http://www.acl.lanl.gov/plan9/netpipe/i82557.html
The curves still flatten out above a 32k data size, but it is much
better than before once QMAX >= 256k.
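
For reference, the change is just a constant bump.  The fragment below
is a sketch rather than a verbatim patch: it assumes QMAX is the enum
constant near the top of /sys/src/9/ip/tcp.c that limits the
per-conversation queues, and the exact declaration and stock value may
differ between kernel versions.

/* /sys/src/9/ip/tcp.c -- sketch only; declaration details are assumed */
enum
{
	QMAX	= 256*1024,	/* raised from the stock value (around 64k); >= 256k helped here */
};

After changing it, the kernel has to be rebuilt and rebooted for the
new queue limit to take effect.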

-dp

On Sat, 9 Mar 2002 presotto@plan9.bell-labs.com wrote:

> Date: Sat, 9 Mar 2002 08:52:00 -0500
> From: presotto@plan9.bell-labs.com
> Reply-To: 9fans@cse.psu.edu
> To: 9fans@cse.psu.edu
> Subject: Re: [9fans] netpipe results
>
> Looks good.  Is the 'data size' the size of a single write, i.e. what the
> NetPIPE paper calls 'block size'?  If so, I can guess at some
> explanations to start working on improving things.
>
> Looks like things start to fall off around 32K-byte writes.  Just looking
> at numerology, that's what Maxatomic is set to in port/qio.c.  It's
> the point where we start breaking writes into multiple blocks.  I assume
> that will have some effect, though not a huge one.  The drop we see at
> that point is more than I would expect.  The real nose dive happens
> twixt 64k and 100k.  64k is the size of the per-channel output queue.
> After that, the system has to wait for the queue to half empty before
> queuing any more.  Just as a wild guess, it looks like we're getting
> screwed by the interaction twixt that hysteresis and the process getting
> rescheduled.  There are similar problems with the size of the
> per-channel input queue (64k).  Perhaps we should have spent more time
> reading Kleinrock.
>
> These are just wild guesses.  I'll start playing to see if I can explain
> it, and preferably make it go away.  I welcome anyone else doing the
> same.
>
> Thanks much.  It'll make a fun distraction.
>




* Re: [9fans] netpipe results
@ 2002-03-09 13:52 presotto
  2002-03-13  1:00 ` Dean Prichard
  0 siblings, 1 reply; 3+ messages in thread
From: presotto @ 2002-03-09 13:52 UTC (permalink / raw)
  To: 9fans

Looks good.  Is the 'data size' the size of a single write, i.e. what the
NetPIPE paper calls 'block size'?  If so, I can guess at some
explanations to start working on improving things.

Looks like things start to fall off around 32K-byte writes.  Just looking
at numerology, that's what Maxatomic is set to in port/qio.c.  It's
the point where we start breaking writes into multiple blocks.  I assume
that will have some effect, though not a huge one.  The drop we see at
that point is more than I would expect.  The real nose dive happens
twixt 64k and 100k.  64k is the size of the per-channel output queue.
After that, the system has to wait for the queue to half empty before
queuing any more.  Just as a wild guess, it looks like we're getting
screwed by the interaction twixt that hysteresis and the process getting
rescheduled.  There are similar problems with the size of the
per-channel input queue (64k).  Perhaps we should have spent more time
reading Kleinrock.
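
To make the suspected interaction easier to see, here is a toy model of
that flow control -- emphatically not the port/qio.c code, and every
number in it is invented.  The writer may add to the queue only while it
is below the limit; once the queue fills, the writer stays blocked until
the reader has drained it below half the limit, so a 100k write against
a 64k queue spends most of its time waiting.

#include <u.h>
#include <libc.h>

enum {
	Limit	= 64*1024,	/* per-channel output queue limit (from above) */
	Block	= 100*1024,	/* size of the application write */
	Drain	= 1500,		/* bytes the "wire" removes per tick */
};

void
main(void)
{
	int queued, towrite, blocked, tick;

	queued = 0;
	towrite = Block;
	blocked = 0;
	for(tick = 0; towrite > 0 || queued > 0; tick++){
		if(towrite > 0){
			if(blocked && queued < Limit/2)
				blocked = 0;		/* half empty: writer restarted */
			if(!blocked && queued < Limit){
				int n = Limit - queued;

				if(n > towrite)
					n = towrite;
				queued += n;
				towrite -= n;
				if(queued >= Limit)
					blocked = 1;	/* queue full: writer put to sleep */
			}
		}
		if(queued > 0)
			queued -= queued < Drain ? queued : Drain;
		if(blocked)
			print("tick %d: writer asleep, %d bytes queued\n", tick, queued);
	}
	print("write drained after %d ticks\n", tick);
	exits(nil);
}

Add a scheduling delay before the writer notices the half-empty mark and
the idle stretches get longer still, which is roughly the effect guessed
at above.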

These are just wild guesses.  I'll start playing to see if I can explain
it, and preferably make it go away.  I welcome anyone else doing the
same.

Thanks much.  It'll make a fun distraction.



Thread overview: 3+ messages
2002-03-09  2:44 [9fans] netpipe results andrey mirtchovski
2002-03-09 13:52 presotto
2002-03-13  1:00 ` Dean Prichard
