i think it's fundamentally misconceived and should at least never be on by default. dhog's description was ``[it] attempts to buffer up multiple writes and send them as a single TCP packet -- where the writes are small''. what that actually means is that if the writes are small, it typically delays sending anything at all (subject to specific conditions) until a given interval has elapsed or its idea of enough data has arrived, and that interval can be large. in other words, give it a few bytes you wish to send now and it demands more, delaying if that demand is not met. it's not just RPC that's affected.

i claim it's not the TCP/IP subsystem's responsibility to delay sending something so that it can buffer up writes to make larger packets. that's for stdio or bio (or the local OS equivalent). (an interesting variant appeared in a version of the Unix streams subsystem, where a buffering module could be pushed onto a tcp/ip stream.)

it's fine for TCP/IP to adapt to the network: to adjust the rate at which bytes are shuffled in and out, to adapt retransmission efforts to apparent error rates, and, since TCP/IP doesn't preserve write delimiters, to coalesce new data (from writes large or small) into already queued data to make larger output packets. those are all within its responsibility, i think, and any delays that result are arguably inherent in the network. if applications are stuttering bytes needlessly, deal with it there.
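
to make ``deal with it there'' concrete, here's a minimal C sketch (mine, not from the post): the application coalesces its own small writes with stdio before they ever reach the socket, and, for the rare case where each small write really must go on the wire at once, turns the delaying behaviour off per socket with TCP_NODELAY. the function names, the 4k buffer size, and the message framing are illustrative assumptions; sock is assumed to be an already connected TCP socket.

#include <stdio.h>
#include <unistd.h>          /* dup */
#include <sys/socket.h>      /* setsockopt */
#include <netinet/in.h>      /* IPPROTO_TCP */
#include <netinet/tcp.h>     /* TCP_NODELAY */

/* coalesce header, body, and trailer into one buffered stream,
 * so the kernel sees a single large write rather than three
 * stuttered small ones */
static void
send_request(int sock, const char *hdr, const char *body)
{
	FILE *out = fdopen(dup(sock), "w");	/* stdio owns the buffering */
	if(out == NULL){
		perror("fdopen");
		return;
	}
	setvbuf(out, NULL, _IOFBF, 4096);	/* fully buffered, 4k */
	fputs(hdr, out);
	fputs(body, out);
	fputs("\r\n", out);
	fflush(out);				/* one write(2), hence one packet */
	fclose(out);				/* closes only the dup'd fd */
}

/* if each small write genuinely must be sent immediately,
 * opt out of the kernel's delay-and-coalesce behaviour */
static void
no_delay(int sock)
{
	int one = 1;

	if(setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &one, sizeof one) < 0)
		perror("setsockopt(TCP_NODELAY)");
}

the point of the first routine is that the buffering lives where i say it belongs, in the application's I/O library; the second is the escape hatch for code that can't or won't batch its writes.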