9front - general discussion about 9front
From: ori@eigenstate.org
To: 9front@9front.org
Subject: Re: [9front] Re: devfs performance
Date: Tue, 18 Jan 2022 21:37:46 -0500
Message-ID: <F2DBFF8C0B9C52A996AECFE4930B3898@eigenstate.org>
In-Reply-To: <CA+61k5-7+QdG3zaee5+cMuyqqymMsmy+7zTxG+y+Cg9xkE2OFw@mail.gmail.com>

Quoth Martijn Heil <m.heil375@gmail.com>:
> In the meantime I've started writing an (experimental) patch for
> devfs that should at least work for my use if I succeed. I'll gladly
> share it if I consider it anywhere near worthy, but given my
> inexperience with kernel development I have no expectations.
> Nonetheless I'll try my best.
> 
> In case anyone is interested in the technical details;
> Block-interleaved I/O is handled by interio() at /sys/src/9/port/devfs:1134.
> This in turn synchronously calls the blocking io() located at
> /sys/src/9/port/devfs:1031 for each sequential block to read from the
> inner devices.
> My idea so far is to have one kproc worker per inner device, which
> will communicate with the interio() function through two Qio queues:
> one for sending read/write requests to the worker and the other for
> sending results back from the worker to interio(). The workers will
> then call io() as requested, which, of course, will block the worker.
> I'm debating whether the workers should live for the lifetime of the
> fsdev device, or whether I should spin them up on demand.
> I think the best option is to have them live for the lifetime of the
> fsdev device, so that the overhead of spinning up several kprocs for
> every little read and write is avoided, in exchange for the slight(?)
> extra memory usage of the idle kprocs.
> 
> Again, I have no experience in this and no real expectations either.
> Maybe I'm trying something really stupid.
> Please feel free to correct me.
> It's certainly fun and educational.
> 
> I'm really interested in any other suggestions on how to do this properly.
> 
> Regards,
> Martijn Heil
> aka
> ninjoh
> 
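
For concreteness, here is roughly what that scheme could look
like in port-style C. Every name below (Iorq, Worker,
fsdevworker, mkworker) is made up; the io() call is only a
stand-in for the real blocking io() in devfs.c, whose argument
list differs, and Inner stands for devfs's per-inner-device
record, whatever the real struct looks like:

/* hypothetical request record; none of these names are from devfs.c */
typedef struct Iorq Iorq;
struct Iorq
{
	Inner	*in;		/* inner device this request targets */
	int	write;		/* 0 = read, 1 = write */
	void	*a;		/* buffer for this block */
	long	len;
	vlong	off;
	long	ret;		/* bytes moved, or -1 on error */
	char	err[ERRMAX];	/* error string for interio() to raise */
};

/* one per inner device, created when the fsdev is configured */
typedef struct Worker Worker;
struct Worker
{
	Queue	*req;		/* interio() -> worker: Iorq pointers */
	Queue	*done;		/* worker -> interio(): Iorq pointers */
};

static void
fsdevworker(void *arg)
{
	Worker *w;
	Iorq *r;

	w = arg;
	/* runs for the life of the fsdev; a real patch also needs
	 * a way to stop the worker when the fsdev is deconfigured */
	for(;;){
		/* blocks until interio() queues a request */
		if(qread(w->req, &r, sizeof r) != sizeof r)
			continue;
		if(waserror()){
			r->ret = -1;
			kstrcpy(r->err, up->errstr, sizeof r->err);
		}else{
			/* stand-in for the existing blocking io() */
			r->ret = io(r->in, r->write, r->a, r->len, r->off);
			poperror();
		}
		qwrite(w->done, &r, sizeof r);
	}
}

static Worker*
mkworker(void)
{
	Worker *w;

	w = smalloc(sizeof *w);
	w->req = qopen(32*sizeof(Iorq*), Qmsg, nil, nil);
	w->done = qopen(32*sizeof(Iorq*), Qmsg, nil, nil);
	kproc("fsdevio", fsdevworker, w);
	return w;
}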

Yes, this sounds pretty plausible -- though
I suspect we can get a bigger speedup when
doing writes, since we can enqueue the write
and return immediately (if we are willing to
delay error reporting to the next interaction
with the device).
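
Sketching that with the same made-up Iorq/Worker shapes as
above: the write path copies the data into an allocated
request, queues it, and returns; the worker variant for this
path would free the request when io() completes and, on
failure, record the error somewhere the next operation checks.
The bare deferred buffer and its locking are hand-waved here:

/* hypothetical: per-fsdev deferred error, set by the worker,
 * protected in a real patch by the fsdev's lock */
static char deferred[ERRMAX];

static long
asyncwrite(Worker *w, Inner *in, void *a, long len, vlong off)
{
	Iorq *r;
	char e[ERRMAX];

	if(deferred[0] != '\0'){
		/* report an earlier write failure now */
		kstrcpy(e, deferred, sizeof e);
		deferred[0] = '\0';
		error(e);
	}
	r = smalloc(sizeof *r + len);
	r->in = in;
	r->write = 1;
	r->a = r+1;		/* data travels with the request */
	memmove(r->a, a, len);
	r->len = len;
	r->off = off;
	r->ret = 0;
	qwrite(w->req, &r, sizeof r);
	return len;		/* optimistic: errors surface on a later call */
}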

For reads, we may need to do work elsewhere
in the system to handle larger, or at least
more, reads, though doing them in parallel
should get a nice speedup (e.g., using fcp or
very parallel builds).
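
The read side of interio() could then fan one request out to
each worker and wait for all of them, so the per-device io()
calls overlap instead of running back to back. A rough sketch,
again with the made-up names from above, assuming rq[0..nrq)
has already been filled in with each inner device's slice of
the caller's buffer, offset, and length:

static void
parallelread(Worker **w, Iorq *rq, int nrq)
{
	int i;
	Iorq *r;

	/* hand each inner device's slice to its worker */
	for(i = 0; i < nrq; i++){
		r = &rq[i];
		qwrite(w[i]->req, &r, sizeof r);
	}
	/* wait for every worker before touching rq again */
	for(i = 0; i < nrq; i++)
		qread(w[i]->done, &r, sizeof r);
	for(i = 0; i < nrq; i++)
		if(rq[i].ret < 0)
			error(rq[i].err);
}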



Thread overview: 4+ messages
2022-01-16 15:27 [9front] " Martijn Heil
2022-01-17 15:27 ` [9front] " Martijn Heil
2022-01-19  2:37   ` ori [this message]
2022-01-19  7:51   ` Roberto E. Vargas Caballero
