From mboxrd@z Thu Jan 1 00:00:00 1970
From: erik quanstrom
Date: Wed, 27 Oct 2010 00:23:38 -0400
To: 9fans@9fans.net
Message-ID: <259892ffa260c0013beefbfd66dde57e@plug.quanstro.net>
In-Reply-To:
References:
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: 7bit
Subject: Re: [9fans] A little more ado about async Tclunk
Topicbox-Message-UUID: 6e90dca4-ead6-11e9-9d60-3106f5b1d025

> I just committed a very simple implementation of asynchronous TClunk to
> inferno-npe.  The implementation will only defer sending clunks when MCACHE
> is specified on the mount (via the new -j option to Inferno's mount,
> analogous to Plan 9's mount -C) and when the file is not marked ORCLOSE.
> The implementation is very simple -- I queue the RPCs devmnt would issue
> synchronously and send them from a single kproc, one-after-another.  There
> is no attempt to overlap the TClunks.  There is no logic to handle flushing
> deferred clunks on an unmount yet.  The diff is not as clean as I would
> like, but it works.

how do you flow control the clunking processes?  are you using qio?
if so, what's the size of the buffer?

also, do you have any test results for different latencies?  how about
a local network with ~0.5ms rtt and a long-distance connection at
~150ms?

have you tried this in user space?  one could imagine passing the fds
to a separate proc that does the closes without involving the kernel.

- erik