To: 9front@9front.org
Date: Tue, 18 Jan 2022 21:37:46 -0500
From: ori@eigenstate.org
Subject: Re: [9front] Re: devfs performance

Quoth Martijn Heil:
> In the meantime I've started writing an (experimental) patch for
> devfs, which should at least work for my use if I succeed. I'll gladly
> share it if I consider it anywhere near worthy, but given my
> inexperience with kernel development I have no expectations.
> Nonetheless I'll try my best.
>
> In case anyone is interested in the technical details:
> block-interleaved I/O is handled by interio() at /sys/src/9/port/devfs:1134.
> This in turn synchronously calls the blocking io() located at
> /sys/src/9/port/devfs:1031 for each sequential block to read from the
> inner devices.
> My idea so far is to have one kproc worker per inner device, which will
> communicate with the interio() function through two Qio queues:
> one for sending read/write requests to the worker and the other for
> sending results back from the worker to interio(). The workers will
> then call io() as requested, which of course will block the worker.
> I'm debating whether those workers should live for the lifetime of the
> fsdev device, or whether I should spin them up on demand.
> I think the best option is to have them live for the duration of the
> devfs device, so that the overhead of spinning up several kprocs for
> every little read and write is avoided, at the cost of the slight(?)
> extra memory the idle kprocs will take up.
>
> Again, I have no experience in this and no real expectations either.
> Maybe I'm trying something really stupid.
> Please feel free to correct me.
> It's certainly fun and educational.
>
> I'm really interested in any other suggestions on how to do this properly.
>
> Regards,
> Martijn Heil
> aka
> ninjoh
>

Yes, this sounds pretty plausible -- though I suspect we can get a
bigger speedup on writes, since we can enqueue the write and return
immediately (if we're willing to delay error reporting to the next
interaction with the device). For reads, we may need to do work
elsewhere in the system to issue larger, or at least more, reads,
though doing them in parallel should still give a nice speedup
(e.g. with fcp or very parallel builds).
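
For anyone who wants a more concrete picture of the worker scheme
described above, here is a rough, untested sketch; treat it as an
illustration of the shape, not working code. Srvio, Worker, workproc(),
mkworker(), and innerio() are made-up names, and innerio() only stands
in for the real blocking io() at /sys/src/9/port/devfs:1031, whose
arguments are elided here. kproc(), qopen(), qread(), and qwrite() are
the usual port primitives; passing request pointers through a pair of
qio queues is just one possible hand-off mechanism.

#include "u.h"
#include "../port/lib.h"
#include "mem.h"
#include "dat.h"
#include "fns.h"
#include "../port/error.h"

enum {
	Qdepth	= 32*sizeof(void*),	/* room for a few queued request pointers */
};

typedef struct Srvio Srvio;
typedef struct Worker Worker;

struct Srvio {			/* one read or write handed to a worker */
	int	isread;
	void	*buf;
	long	len;
	vlong	off;
	long	rc;		/* byte count or -1, filled in by the worker */
	char	err[ERRMAX];	/* error string, empty on success */
};

struct Worker {			/* one kproc per inner device */
	Queue	*req;		/* interio() -> worker: Srvio pointers */
	Queue	*rep;		/* worker -> interio(): completed Srvio pointers */
};

/* placeholder for the real blocking io() call; its arguments are elided */
static long
innerio(Srvio *r)
{
	USED(r);
	return -1;
}

static void
workproc(void *a)
{
	Worker *w;
	Srvio *r;

	w = a;
	for(;;){
		/* block until interio() hands us a request pointer */
		if(qread(w->req, &r, sizeof r) != sizeof r)
			break;
		if(waserror()){
			/* the inner device raised an error; pass it back */
			snprint(r->err, sizeof r->err, "%s", up->errstr);
			r->rc = -1;
		}else{
			r->rc = innerio(r);
			r->err[0] = 0;
			poperror();
		}
		qwrite(w->rep, &r, sizeof r);	/* hand the result back */
	}
	pexit("shutdown", 1);
}

static Worker*
mkworker(char *name)
{
	Worker *w;

	w = smalloc(sizeof *w);
	/* message-mode queues so each pointer is read back whole */
	w->req = qopen(Qdepth, Qmsg, nil, nil);
	w->rep = qopen(Qdepth, Qmsg, nil, nil);
	kproc(name, workproc, w);
	return w;
}

interio() would then queue one Srvio per inner device on the matching
req queues and collect the completions from the rep queues; for the
write case above, it could return as soon as the requests are queued
and report any saved error on the next operation on the device.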