9fans - fans of the OS Plan 9 from Bell Labs
* Re: [9fans] network throughput measurements
From: andrey mirtchovski @ 2002-03-08 22:19 UTC
  To: 9fans

> I'm not complaining; I like seeing numbers.  However, these two statements
> don't jibe.

that was an example of feature creep :) it started simple,
but then someone thought 'why not try with more data' and
then 'why not try with more iterations to make sure'...


> If all you want to do is improve the numbers for plan 9, we can do it
> trivially by just making sure we buffer all the 1-byte writes before
> sending anything out.  Then you're just measuring the time for 10000
> 1-byte writes, which is moderately quick.  However, I suspect that's not
> what you want to speed up.  I suspect that the BSD stack is doing
> significantly better than we are at ganging bytes/packet and hence causes
> fewer interrupts, runs up the IP stack, system calls, etc.

we discussed that here and came to pretty much the same conclusion...





* Re: [9fans] network throughput measurements
From: presotto @ 2002-03-08 22:27 UTC
  To: 9fans

That would be nice


* Re: [9fans] network throughput measurements
From: Ronald G Minnich @ 2002-03-08 22:14 UTC
  To: 9fans

On Fri, 8 Mar 2002 presotto@plan9.bell-labs.com wrote:

> I'm actually interested in the number of bytes/packet seen on the receiver.
> All you measured was the amount of time to queue the messages on the
> sender, not the time to send them, the average time for each byte to make
> the journey, or the time to get the first byte to the wire.

hence the need for a native netpipe port.

ron




* Re: [9fans] network throughput measurements
From: presotto @ 2002-03-08 21:46 UTC
  To: 9fans

>actually we were more interested in measuring the latency p9 imposes
>on transmission, i.e.  the time it takes for the first byte to start
>appearing on the network...

+

>	buf = mallocz(i, 1);
>	tbegin = nsec();
>	for(j = 0; j < 10000; j++)
>		write(fd, buf, i);
>	tend = nsec();

I'm not complaining; I like seeing numbers.  However, these two statements
don't jibe.

> at one point the receiver side was printing the total number of bytes
> received, which was in line with calculations, so no huge discrepancy
> there..

I'm actually interested in the number of bytes/packet seen on the receiver.
All you measured was the amount of time to queue the messages on the
sender, not the time to send them, the average time for each byte to make
the journey, or the time to get the first byte to the wire.  If all you
want to do is improve the numbers for plan 9, we can do it trivially by
just making sure we buffer all the 1-byte writes before sending anything
out.  Then you're just measuring the time for 10000 1-byte writes, which is
moderately quick.  However, I suspect that's not what you want to speed
up.  I suspect that the BSD stack is doing significantly better than we are
at ganging bytes/packet and hence causes fewer interrupts, runs up the IP
stack, system calls, etc.
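
A minimal sketch of that buffering, assuming hypothetical helpers bufwrite
and bufflush that stage small writes in a fixed 8K buffer so only large
writes reach the stack:

/* hypothetical user-level coalescing of small writes; assumes n <= sizeof stage */
char stage[8192];
int nstage;

void
bufwrite(int fd, char *p, int n)
{
	if(nstage + n > sizeof stage){
		write(fd, stage, nstage);	/* flush staged bytes as one large write */
		nstage = 0;
	}
	memmove(stage + nstage, p, n);
	nstage += n;
}

void
bufflush(int fd)
{
	if(nstage > 0)
		write(fd, stage, nstage);
	nstage = 0;
}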


* Re: [9fans] network throughput measurements
From: andrey mirtchovski @ 2002-03-08 21:29 UTC
  To: 9fans

actually we were more interested in measuring the latency p9 imposes
on transmission, i.e.  the time it takes for the first byte to start
appearing on the network...

yes, this is not the best way to measure performance, but it was a
good exercise before porting netpipe natively to p9.  the results
seemed interesting enough to "publish" :)

at one point the receiver side was printing the total number of bytes
received, which was in line with calculations, so no huge discrepancy
there..

andrey




* Re: [9fans] network throughput measurements
From: presotto @ 2002-03-08 21:14 UTC
  To: 9fans

You might want to send a byte back at the end to make sure the
receiver is done.  What you're measuring is the amount of time
to get the data into the pipe on the write side, not the amount
of time to complete the transmission.  I don't see that you are
measuring latency anywhere.

As is, I'd say you're measuring our inability to do Nagle's
algorithm.
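
A minimal sketch of that closing byte on the sender side, assuming the
receiver writes a single byte back after its final read:

/* sender: include delivery in the measured interval */
char ack;

tbegin = nsec();
for(j = 0; j < 10000; j++)
	write(fd, buf, i);
if(read(fd, &ack, 1) != 1)	/* blocks until the receiver has drained everything */
	sysfatal("no ack: %r");
tend = nsec();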


* [9fans] network throughput measurements
From: andrey mirtchovski @ 2002-03-08 21:07 UTC
  To: 9fans

Hello, 

Here are a few numbers with regard to the network throughput of p9 and
other operating systems.  No substantial discussion is presented, but
we would like to draw your attention to the small-data-size
transmission results.


The setup is as follows: EtherExpress 100 network cards on Pentium 800s
for Plan 9, Netgear GA620 cards for the GigE tests.

The test involves:


Sender:
uchar *buf;
vlong tbegin, tend;
int i, j;

for(i = 1; i < somesize; i = i<<1) {
	buf = mallocz(i, 1);
	tbegin = nsec();
	for(j = 0; j < 10000; j++)
		write(fd, buf, i);
	tend = nsec();

	/* calculate and print result */
	free(buf);
}
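
For reference, a sketch of what the calculate-and-print step works out to,
assuming nsec() counts nanoseconds and each round sends 10000 writes of i
bytes (the format mirrors the tables below):

/* hypothetical fill-in for the calculate-and-print step */
double secs, mbits;

secs = (tend - tbegin) / 1e9;		/* nanoseconds to seconds */
mbits = 10000.0 * i * 8 / 1e6 / secs;	/* payload megabits per second */
print("10000 iterations over data size %d: %8.2f mbits/s... %8.2f seconds\n",
	i, mbits, secs);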


Receiver:
char buf[8192];
int n, dnull;

dnull = open("/dev/null", OWRITE);
while((n = read(fd, buf, 8192)) > 0)
	write(dnull, buf, n);	/* sink the data so the connection drains */
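
The total-byte counting mentioned elsewhere in the thread needs only a
small change on the receiver; a sketch, noting that bytes/read is at best a
rough proxy for bytes/packet, since one read can drain several packets:

/* hypothetical counting receiver: tally bytes and reads */
vlong total, nreads;

total = 0;
nreads = 0;
while((n = read(fd, buf, 8192)) > 0){
	write(dnull, buf, n);
	total += n;
	nreads++;
}
if(nreads > 0)
	print("%lld bytes in %lld reads, %.1f bytes/read\n",
		total, nreads, (double)total/nreads);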


The same code was used for Linux/FreeBSD, but with all the headers and
crappy TCP initialization crud added in front of it.

Also, the Linux/FreeBSD code uses ftime(2) to calculate the elapsed
time, which gives resolution down to a millisecond.  We found no
better way to measure elapsed time.
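
A sketch of that timing on the Unix side, using the standard struct timeb
interface from <sys/timeb.h>:

#include <stdio.h>
#include <sys/timeb.h>

struct timeb t0, t1;
double elapsed;

ftime(&t0);
/* ... the 10000 writes ... */
ftime(&t1);
elapsed = (t1.time - t0.time) + (t1.millitm - t0.millitm) / 1000.0;
printf("%.2f seconds elapsed\n", elapsed);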

Note: we're currently looking at ways to reduce the latency penalty
for small packets, and suggestions are most welcome.


cheers, andrey



--- plan 9 ---

100BaseT:

10000 iterations over data size 1:     0.18 mbits/s...     0.43 seconds
10000 iterations over data size 2:     0.37 mbits/s...     0.43 seconds
10000 iterations over data size 4:     0.87 mbits/s...     0.37 seconds
10000 iterations over data size 8:     1.72 mbits/s...     0.37 seconds
10000 iterations over data size 16:     3.53 mbits/s...     0.36 seconds
10000 iterations over data size 32:     7.18 mbits/s...     0.36 seconds
10000 iterations over data size 64:    14.12 mbits/s...     0.36 seconds
10000 iterations over data size 128:    30.32 mbits/s...     0.34 seconds
10000 iterations over data size 256:    52.98 mbits/s...     0.39 seconds
10000 iterations over data size 512:    92.56 mbits/s...     0.44 seconds
10000 iterations over data size 1024:    94.13 mbits/s...     0.87 seconds
10000 iterations over data size 2048:    94.44 mbits/s...     1.73 seconds
10000 iterations over data size 4096:    94.58 mbits/s...     3.46 seconds
10000 iterations over data size 8192:    94.45 mbits/s...     6.94 seconds
10000 iterations over data size 16384:    93.68 mbits/s...    13.99 seconds
10000 iterations over data size 32768:    92.27 mbits/s...    28.41 seconds
10000 iterations over data size 65536:    92.15 mbits/s...    56.89 seconds


GigE:

10000 iterations over data size 1:     0.23 mbits/s...     0.35 seconds
10000 iterations over data size 2:     0.49 mbits/s...     0.33 seconds
10000 iterations over data size 4:     1.09 mbits/s...     0.29 seconds
10000 iterations over data size 8:     2.32 mbits/s...     0.28 seconds
10000 iterations over data size 16:     4.79 mbits/s...     0.27 seconds
10000 iterations over data size 32:    10.01 mbits/s...     0.26 seconds
10000 iterations over data size 64:    19.99 mbits/s...     0.26 seconds
10000 iterations over data size 128:    40.50 mbits/s...     0.25 seconds
10000 iterations over data size 256:    68.98 mbits/s...     0.30 seconds
10000 iterations over data size 512:   133.30 mbits/s...     0.31 seconds
10000 iterations over data size 1024:   160.94 mbits/s...     0.51 seconds
10000 iterations over data size 2048:   170.62 mbits/s...     0.96 seconds
10000 iterations over data size 4096:   171.85 mbits/s...     1.91 seconds
10000 iterations over data size 8192:   168.40 mbits/s...     3.89 seconds
10000 iterations over data size 16384:   158.23 mbits/s...     8.28 seconds
10000 iterations over data size 32768:   153.45 mbits/s...    17.08 seconds
10000 iterations over data size 65536:   152.23 mbits/s...    34.44 seconds




IL over 100BaseT:

10000 iterations over data size 1:     0.25 mbits/s...     0.32 seconds
10000 iterations over data size 2:     0.50 mbits/s...     0.32 seconds
10000 iterations over data size 4:     0.98 mbits/s...     0.33 seconds
10000 iterations over data size 8:     1.98 mbits/s...     0.32 seconds
10000 iterations over data size 16:     3.94 mbits/s...     0.32 seconds
10000 iterations over data size 32:     7.84 mbits/s...     0.33 seconds
10000 iterations over data size 64:    15.48 mbits/s...     0.33 seconds
10000 iterations over data size 128:    30.50 mbits/s...     0.34 seconds
10000 iterations over data size 256:    59.05 mbits/s...     0.35 seconds
10000 iterations over data size 512:    90.30 mbits/s...     0.45 seconds
10000 iterations over data size 1024:    94.99 mbits/s...     0.86 seconds
10000 iterations over data size 2048:    94.31 mbits/s...     1.74 seconds
10000 iterations over data size 4096:    95.91 mbits/s...     3.42 seconds
10000 iterations over data size 8192:    95.90 mbits/s...     6.83 seconds
10000 iterations over data size 16384:    95.81 mbits/s...    13.68 seconds


--- linux ---

GigE on Linux (Alpha):

10000 iterations over data size 1:     6.15 mbits/s...     0.01 seconds
10000 iterations over data size 2:    11.43 mbits/s...     0.01 seconds
10000 iterations over data size 4:    22.86 mbits/s...     0.01 seconds
10000 iterations over data size 8:    49.23 mbits/s...     0.01 seconds
10000 iterations over data size 16:    85.33 mbits/s...     0.01 seconds
10000 iterations over data size 32:   150.59 mbits/s...     0.02 seconds
10000 iterations over data size 64:   256.00 mbits/s...     0.02 seconds
10000 iterations over data size 128:   393.85 mbits/s...     0.03 seconds
10000 iterations over data size 256:   553.51 mbits/s...     0.04 seconds
10000 iterations over data size 512:   546.13 mbits/s...     0.07 seconds
10000 iterations over data size 1024:   549.80 mbits/s...     0.15 seconds
10000 iterations over data size 2048:   547.96 mbits/s...     0.30 seconds
10000 iterations over data size 4096:   549.80 mbits/s...     0.60 seconds
10000 iterations over data size 8192:   549.80 mbits/s...     1.19 seconds
10000 iterations over data size 16384:   549.80 mbits/s...     2.38 seconds
10000 iterations over data size 32768:   553.98 mbits/s...     4.73 seconds
10000 iterations over data size 65536:   554.39 mbits/s...     9.46 seconds
10000 iterations over data size 131072:   554.71 mbits/s...    18.90 seconds
10000 iterations over data size 262144:   556.79 mbits/s...    37.66 seconds




100BaseT on Linux (RH 7.2):

10000 iterations over data size 1:     2.96 mbits/s...     0.03 seconds
10000 iterations over data size 2:     5.93 mbits/s...     0.03 seconds
10000 iterations over data size 4:    12.80 mbits/s...     0.03 seconds
10000 iterations over data size 8:    27.83 mbits/s...     0.02 seconds
10000 iterations over data size 16:    51.20 mbits/s...     0.03 seconds
10000 iterations over data size 32:    91.43 mbits/s...     0.03 seconds
10000 iterations over data size 64:   111.30 mbits/s...     0.05 seconds
10000 iterations over data size 128:    93.94 mbits/s...     0.11 seconds
10000 iterations over data size 256:    93.94 mbits/s...     0.22 seconds
10000 iterations over data size 512:    93.94 mbits/s...     0.44 seconds
10000 iterations over data size 1024:    94.16 mbits/s...     0.87 seconds
10000 iterations over data size 2048:    94.27 mbits/s...     1.74 seconds
10000 iterations over data size 4096:    94.13 mbits/s...     3.48 seconds
10000 iterations over data size 8192:    94.12 mbits/s...     6.96 seconds
10000 iterations over data size 16384:    94.15 mbits/s...    13.92 seconds
10000 iterations over data size 32768:    94.15 mbits/s...    27.84 seconds
10000 iterations over data size 65536:    94.14 mbits/s...    55.69 seconds
10000 iterations over data size 131072:    94.12 mbits/s...   111.41 seconds


--- freebsd ---

100BaseT on FBSD (4.4):


10000 iterations over data size 1:     3.64 mbits/s...     0.02 seconds
10000 iterations over data size 2:     7.62 mbits/s...     0.02 seconds
10000 iterations over data size 4:    14.55 mbits/s...     0.02 seconds
10000 iterations over data size 8:    29.09 mbits/s...     0.02 seconds
10000 iterations over data size 16:    49.23 mbits/s...     0.03 seconds
10000 iterations over data size 32:    85.33 mbits/s...     0.03 seconds
10000 iterations over data size 64:    94.81 mbits/s...     0.05 seconds
10000 iterations over data size 128:    93.94 mbits/s...     0.11 seconds
10000 iterations over data size 256:    94.38 mbits/s...     0.22 seconds
10000 iterations over data size 512:    94.38 mbits/s...     0.43 seconds
10000 iterations over data size 1024:    94.16 mbits/s...     0.87 seconds
10000 iterations over data size 2048:    94.11 mbits/s...     1.74 seconds
10000 iterations over data size 4096:    94.08 mbits/s...     3.48 seconds
10000 iterations over data size 8192:    94.05 mbits/s...     6.97 seconds
10000 iterations over data size 16384:    94.05 mbits/s...    13.94 seconds
10000 iterations over data size 32768:    94.08 mbits/s...    27.86 seconds
10000 iterations over data size 65536:    94.08 mbits/s...    55.73 seconds


