9fans - fans of the OS Plan 9 from Bell Labs
* Re: [9fans] NOW it's like lightning
@ 2004-06-09  1:46 Matt Pidd-Cheshire
  2004-06-09  1:56 ` Geoff Collyer
  0 siblings, 1 reply; 3+ messages in thread
From: Matt Pidd-Cheshire @ 2004-06-09  1:46 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

Rob Pike perceptively observed:

> i wonder if you have DMA turned off on your disk.
> also, i've known CPUs to disable their caches
> sometimes. is the machine healthy when running
> other programs?

it seemed like a worthwhile wheeze ...

echo 'dma on' > /dev/sdC0/ctl
echo 'dma on' > /dev/sdD0/ctl

and Bling! Comparable speed!
My hat is raised to you, sir.
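For anyone wanting to put a number on the before/after difference, here is a rough sketch in portable shell. Nothing below is Plan 9-specific; the device paths in the comments are just the ones from the commands above, and the figures only approximate raw disk speed since the second read may be served from the buffer cache rather than the platter:

```shell
# Rough before/after throughput check.  (On Plan 9 the same idea works
# by timing dd against /dev/sdC0/data after toggling dma in the ctl
# file.)  Write a 16 MB test file, then read it back sequentially;
# dd reports the elapsed time and transfer rate on stderr.
f=/tmp/dmatest
dd if=/dev/zero of="$f" bs=1048576 count=16 2>/dev/null
dd if="$f" of=/dev/null bs=1048576
```

Run it once with DMA off and once with DMA on; the stderr rate line from the second dd is the number to compare.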

In fact that brings to mind a thought ... does this
DMA business also apply to the ether drivers? That
would explain the ~250Kb/sec transfer Plan 9 <-> BSD
versus the 2.5Mb/sec transfer WindazXP <-> BSD ...

and it raises the question: pray, why might these be off
by default?

and since this is a job-lot: can ftpfs operate in a mode
where it doesn't first make a local copy of a file to ship
across the link, but instead reads the file itself?

many thanks



* Re: [9fans] NOW it's like lightning
  2004-06-09  1:46 [9fans] NOW it's like lightning Matt Pidd-Cheshire
@ 2004-06-09  1:56 ` Geoff Collyer
  2004-06-09  2:18   ` Geoff Collyer
  0 siblings, 1 reply; 3+ messages in thread
From: Geoff Collyer @ 2004-06-09  1:56 UTC (permalink / raw)
  To: 9fans

Some IDE disks or controllers have buggy DMA implementations, so DMA
is off by default to err on the side of caution.
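For what it's worth, once a particular drive and controller pair has proven itself, the setting can be applied at boot. A hypothetical sketch of such an addition to a terminal's /rc/bin/termrc (the sd paths are the ones from the earlier message; the placement and paths are assumptions, adjust to your setup):

```rc
# hypothetical termrc addition: enable DMA on known-good drives at boot
for(d in /dev/sdC0 /dev/sdD0) echo dma on > $d/ctl
```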

The performance of your Ethernet interface probably depends mostly on
the quality of the card.  Worst case is probably the old NE2000, which
I think was all programmed I/O, no real DMA (the PC world has funny
definitions of `DMA').  The worst modern-day case is likely the
RTL8139, which is pretty dumb.  Best cases are probably the Intel and
National Semiconductor controllers, which maintain ring buffers of
incoming and outgoing packets and use DMA to transfer them.  Microsoft
may have information not available to the general public, or they may
just throw more CPU time at making dumb interfaces run fast.

Of course, you're probably not exercising the raw Ethernet interface,
so there are also TCP and IP speeds to consider.

How are you copying the data in question? Which IP version (4 or
6), which Ethernet interface on Plan 9, and at what link speed?




* Re: [9fans] NOW it's like lightning
  2004-06-09  1:56 ` Geoff Collyer
@ 2004-06-09  2:18   ` Geoff Collyer
  0 siblings, 0 replies; 3+ messages in thread
From: Geoff Collyer @ 2004-06-09  2:18 UTC (permalink / raw)
  To: 9fans

I just did a quick timing test, and it would appear that the slowness
is a local problem, not a generic Plan 9 Ethernet problem.  Reading
~4MB (/386/lib/*) from a Ken fs with client-side caching disabled took
17.89 seconds the first time (the Ken fs never uses DMA for IDE [for
historical reasons], though I might fix that).  That is still roughly
260KB/s (bytes), an order of magnitude faster than the ~250Kb/sec
(bits) cited.

Reading it a second time, with the data in RAM cache on the file
server, took 1.07 seconds, or 4.2MB/s (bytes), far faster than the
2.5Mb/sec (bits) cited between Windows and BSD.  And that's over slow 100Mb/s Ethernet
because I haven't got around to upgrading the file server to Intel's
gigabit card yet.
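Spelling the units out, since the thread mixes Kb (kilobits) and KB (kilobytes): 250 kilobits/s is only about 31 kilobytes/s, and the ~4.5MB total for /386/lib/* is inferred from the cached run (4.2MB/s x 1.07s). A quick shell-arithmetic check:

```shell
# 250 kilobits/s expressed in kilobytes/s:
echo $(( 250 / 8 ))       # 31 KB/s
# ~4.5 MB (4500 KB) read in ~18 s:
echo $(( 4500 / 18 ))     # 250 KB/s, about 8x the 31 KB/s above
```

So the uncached Plan 9 read really is close to an order of magnitude faster than the cited figure once the bits/bytes mismatch is accounted for.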



