Dan Kegel’s C10k site was 1999, wasn’t it? That’s probably a decent data point for c.2000
d
> On 10 Mar 2023, at 11:24, ron minnich <rminnich@gmail.com> wrote:
>
>
> Ca. 1981, if memory serves, having even small numbers of TCP connections was not common.
>
> I was told at some point that Sun used UDP for NFS for that reason. It was a reasonably big deal when we started to move to TCP for NFS ca 1990 (my memory of the date -- I know I did it on my own for SunOS as an experiment when I worked at the SRC -- it seemed to come into general use about that time).
>
> What kind of numbers for TCP connections would be reasonable in 1980, 90, 2000, 2010?
>
> I sort of think I know, but I sort of think I'm probably wrong.
On Thu, Mar 9, 2023, 5:23 PM ron minnich <rminnich@gmail.com> wrote:
> What kind of numbers for TCP connections would be reasonable in 1980, 90,
> 2000, 2010?

Normal systems: 10s, 100s, 1000s, 10ks. Depends on what is a reasonable
system... With the high end 10x or 100x that or a bit more. These days we
do 100ks of video streams at work on our high end boxes...

Warner
Sun chose UDP for NFS at a point when few if any people believed TCP could
go fast. I remember (early 80s) being told that one couldn't use TCP/IP in
LANs because they were WAN protocols. In the late 80s, WAN people were
saying you couldn't use TCP/IP because they were LAN protocols.

But UDP for NFS was more attractive because it was not byte stream
oriented, and didn't require copying to save for retransmissions. And
there was hope we'd be able to do zero copy transmissions from the servers
- also the reason for inventing Jumbo packets to match the 8K page size of
Sun3 systems.

I did get zero copy serving working with ND (network disk block protocol)
- but it was terribly specific to particular hardware components.

On Thu, Mar 9, 2023 at 4:24 PM ron minnich <rminnich@gmail.com> wrote:
> Ca. 1981, if memory serves, having even small numbers of TCP connections
> was not common.
I have one datapoint. In late 1994 or early 1995 at Internet 1.0 startup
Open Market we were working on web servers and IIRC had no trouble with
1000 (or 1024?) simultaneous clients using TCP. This was Alpha OSF/1 and
probably had 256 MB or some such ungodly amount of RAM.

The big change from the NCSA server was a single-process model rather than
one process per connection. I think there was a select(2) limit that was
smaller, so perhaps there were a few processes rather than just one. I
have the sources somewhere. Andy Payne was the lead engineer and I
remember him slapping my fingers when I changed the way logging worked.

Regarding TCP performance, one story about the turning point was Van
Jacobson tuning the most-likely path down to a very few VAX instructions
for packet reception. That was what, Reno maybe? One of the later BSDs.

-Larry

> On Mar 9, 2023, at 7:59 PM, Tom Lyon <pugs78@gmail.com> wrote:
>
> Sun chose UDP for NFS at a point when few if any people believed TCP
> could go fast.
SGI made TCP go very fast on 200 MHz MIPS processors. The tricks were to
mark the page copy-on-write on output so the driver could use the page
without copying, and to page flip on input. Only worked with multiples of
page-sized requests.

SGI had very fast sequential I/O through XFS (the trick there was big I/O
requests on files opened with O_DIRECT; the volume manager split the
request into chunks, sending each chunk to a disk controller; when you get
the stupid 200 MHz processors out of the way and spin up a boat load of
DMA engines, yeah, you can scale up until the bus is full). And they had
fast TCP over Hippi.

I showed up and asked why hasn't anyone plugged XFS into TCP? In a few
weeks I wrote a user-level server that demonstrated that it could work,
and that turned into this talk:

http://mcvoy.com/lm/papers/bds.pdf

If you don't go look: it made O_DIRECT work on NFS and gave much faster
NFS sequential I/O performance. Real NFS was 18MB/sec, my stuff was
67MB/sec for a single file, scaled to 100s of MB/sec.

Everyone thought I was smart for doing that but the reality was the
XFS/XLV folks and the TCP folks had done all the hard work, I just did
some plumbing to connect the two fat pipes. I did eventually push it into
the kernel because of stuff like the page cache coherency but even that
was easy. And I found the SGI writeup of it, it has more perf numbers:

http://mcvoy.com/lm/papers/bdspro.pdf

If anyone cares I can post a link to the user level server code.

On Thu, Mar 09, 2023 at 04:59:54PM -0800, Tom Lyon wrote:
> Sun chose UDP for NFS at a point when few if any people believed TCP
> could go fast.

--
---
Larry McVoy            Retired to fishing           http://www.mcvoy.com/lm/boat
Hi Larry,

> SGI made TCP go very fast on 200Mhz MIPS processors. The tricks were
> to mark the page copy on write on output so the driver could use the
> page without copying

I understand that bit.

> and to page flip on input.

Could you expand that to a sentence or two.

--
Cheers, Ralph.
From reading a lot of papers on the origins of TCP I can confirm that
people appear to have been thinking in terms of a dozen connections per
machine, maybe half that on 16-bit hardware, around 1980. Maybe their
expectations for PDP-10s were higher; I have not looked into that much.

> From: Tom Lyon <pugs78@gmail.com>
>
> Sun chose UDP for NFS at a point when few if any people believed TCP
> could go fast.
> I remember (early 80s) being told that one couldn't use TCP/IP in LANs
> because they were WAN protocols. In the late 80s, WAN people were saying
> you couldn't use TCP/IP because they were LAN protocols.

I'm not disputing the above, but there was a lot of focus on making TCP go
fast enough for LAN usage in 1981-1984. For example see this 1981 post by
Fabry/Joy in the TCP-IP mailing list:

https://www.rfc-editor.org/in-notes/museum/tcp-ip-digest/tcp-ip-digest.v1n6.1

There are a few other similar messages to the list from around that time.

An early issue was checksumming, with that initially taking 25% of CPU on
a VAX 750 when TCP was heavily used. Also ideas like having "trailing
headers" (so that the data was block aligned) were driven by a search for
LAN performance. Timeouts were reduced from 5s and 2s to 0.5s and 0.2s.
Using a software interrupt instead of a kernel thread was another thing
that made the stack more performant.

It always seemed to me that the BBN-CSRG controversy over TCP code spurred
both teams ahead, with BBN more focussed on WAN use cases and CSRG more on
LAN use cases. I would argue that no other contemporary network stack had
this dual optimisation, with the possible exception of Datakit.
On Fri, Mar 10, 2023 at 10:14:01AM +0000, Ralph Corderoy wrote:
> Hi Larry,
>
> > SGI made TCP go very fast on 200Mhz MIPS processors. The tricks were
> > to mark the page copy on write on output so the driver could use the
> > page without copying
>
> I understand that bit.
>
> > and to page flip on input.
>
> Could you expand that to a sentence or two.
Sure. If you are using something like Hippi where the data size is
at least as big as a page, the networking stack can arrange to send
data that is a multiple of a page size. When that data works its
way up the stack to where a process is waiting in a read(), if the
process has asked for a read of a size that is a multiple of a
page size, it can just diddle the page tables and take out the
pages that the process had and give them to the kernel and put
in the pages that the kernel had and give them to the process.
That seems like a lot but it's basically what happens in TLB
miss and that code is very small and fast.
As I was thinking about this there was another thing SGI did that
was new to me at the time. Interrupt coalescing. When a packet
came in and the driver pulled it off the wire, the driver could
look to see if there was going to be another interrupt because
another packet had arrived. Then the driver grabbed that packet
as well, saving all the overhead of an interrupt.
MIPS CPUs were not fast but SGI papered over that problem
by being very clever when they needed to be.