From: Bruce Ellis
To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
Date: Wed, 27 Oct 2010 17:27:10 +1100
Subject: Re: [9fans] A little more ado about async Tclunk

I explained to VS that Research Inferno has been doing it for 10 years.
A much simpler approach. Solves your problems. Sorry that wing-commander
can't package it for today.

brucee

On Wed, Oct 27, 2010 at 3:23 PM, erik quanstrom wrote:
>> I just committed a very simple implementation of asynchronous Tclunk to
>> inferno-npe. The implementation only defers sending clunks when MCACHE
>> is specified on the mount (via the new -j option to Inferno's mount,
>> analogous to Plan 9's mount -C) and when the file is not marked ORCLOSE.
>> The implementation is very simple -- I queue the RPCs devmnt would issue
>> synchronously and send them from a single kproc, one after another. There
>> is no attempt to overlap the Tclunks. There is no logic yet to handle
>> flushing deferred clunks on an unmount. The diff is not as clean as I
>> would like, but it works.
>
> how do you flow control the clunking processes?  are you using
> qio?  if so, what's the size of the buffer?
>
> also, do you have any test results for different latencies?  how
> about a local network with ~0.5ms rtt and a long-distance connection
> at ~150ms?
>
> have you tried this in user space?  one could imagine passing the
> fds to a separate proc that does the closes without involving the kernel.
>
> - erik