From mboxrd@z Thu Jan 1 00:00:00 1970
From: Dan Cross
Message-Id: <200111261911.OAA11962@augusta.math.psu.edu>
To: 9fans@cse.psu.edu
Subject: Re: [9fans] Nagle algorithm
In-Reply-To:
References: <3.0.6.32.20011122122050.00974e88@pop3.clear.net.nz>
Cc:
Date: Mon, 26 Nov 2001 14:11:04 -0500
Topicbox-Message-UUID: 2a25e638-eaca-11e9-9e20-41e7f4b1d025

In article you write:
>Moving Nagle into the application, as is suggested later
>in the thread, is awkward: it needs to know the segment
>size, which is affected by the receiver's advertised MSS,
>the local MTU and the path MTU.  The latter two of these can
>change during the lifetime of a TCP conversation.

Err, why does it need to know those things?  All it cares about is
sending as much data as possible in a single write() call, right?  It
seems it could pack something like a 64K buffer as full as possible,
then write that, and achieve the same effect.

>The usual complaint about Nagle is that it punishes
>applications which deliberately exchange short packets.
>However, it is almost always possible to turn Nagle off on
>a per-connection basis, for those (relatively rare)
>applications which are upset by the increased transmit delay.
>
>The problem with simply "eliminating" Nagle is that a
>bulk-transfer application which DELIBERATELY schedules
>lots of small writes that do not queue at the TCP send
>level, will generate lots of small packets.  This causes
>the TCP congestion window to inflate faster, and gives such
>applications an unfair advantage over applications which
>make larger writes that queue at the TCP send level, where
>they can be coalesced.  The inflated congestion window
>means that when congestion does happen (TCP deliberately
>provokes it), the small-write/small-segment application
>will get a larger share of the bottleneck's bandwidth.
>This is perverse behaviour, since the application in
>question is a less efficient bandwidth consumer, since it
>generates more header data per user datum.

Err, if you can turn off Nagle on a per-connection basis, and the
application you describe is being purposely malicious, what's
preventing that application from turning off Nagle and then attempting
the rather antisocial behavior you describe?

A far better solution for the application writer, if s/he really cares
so much about it, is to write into a buffer, and then send the entire
buffer when it's full.  This is what, e.g., stdio and bio do.  If
nothing else, this will cut down on the operating system overhead
incurred from making lots of system calls, moving data between user
and kernel space, and so on.  It seems like a good thing all around,
and it really does seem misguided to have the TCP trying to guess what
I'm doing at the application level.  I (presumably) know what I want
my application to do; I'd rather the OS's network implementation not
get in my way.

>The generation of large amounts of small packets should
>always be seen as antisocial.

It seems, then, that certain classes of applications are inherently
antisocial?

	- Dan C.
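
For reference, turning Nagle off for a single connection is a one-line
setsockopt() call on Berkeley-derived stacks.  A minimal sketch in C,
assuming a POSIX sockets environment rather than Plan 9's dial()
interface; the function name is only illustrative:

#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>

/* Disable Nagle on this one connection; every other socket is untouched. */
int
nodelay(int fd)
{
	int one = 1;

	if (setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof one) < 0) {
		perror("setsockopt TCP_NODELAY");
		return -1;
	}
	return 0;
}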
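
And a minimal sketch of the buffer-and-flush approach argued for above:
accumulate small writes in user space and hand the kernel one large
write() when the buffer fills.  The 64K size, the names, and the single
static buffer are illustrative only; stdio's fwrite()/fflush() or Plan
9's Bwrite()/Bflush() do the same job with more care.  The caller
flushes once more at end of stream.

#include <string.h>
#include <unistd.h>

enum { Bufsize = 64*1024 };

static char buf[Bufsize];
static int nbuf;

/* Push whatever has accumulated to the kernel in a single write(). */
int
bflush(int fd)
{
	if (nbuf > 0 && write(fd, buf, nbuf) != nbuf)
		return -1;
	nbuf = 0;
	return 0;
}

/* Queue data in user space; only flush when the buffer is full. */
int
bwrite(int fd, const char *p, int len)
{
	int n;

	while (len > 0) {
		if (nbuf == Bufsize && bflush(fd) < 0)
			return -1;
		n = Bufsize - nbuf;
		if (n > len)
			n = len;
		memcpy(buf+nbuf, p, n);
		nbuf += n;
		p += n;
		len -= n;
	}
	return 0;
}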