9front - general discussion about 9front
From: ori@eigenstate.org
To: plan9fullfrontal@qs.co.nz, 9front@9front.org
Subject: Re: [9front] How is smp handled in file services?
Date: Tue, 10 Mar 2020 19:28:01 -0700
Message-ID: <A54D7D7F98CD4432C351CDCF65C9F170@eigenstate.org>
In-Reply-To: <dd287bc4-34e3-104a-1bcb-fcd064d67699@qs.co.nz>

> On 03/11/2020 02:03 PM, ori@eigenstate.org wrote:
> 
> I don't mean to offend, but that seems more than a bit sucky.
> 
> Listening to Rob Pike wax lyrical about the benefits of CSP over
> parallelism seems to ring a bit hollow in the face of all the
> evidence to the contrary.
> (I must admit his metaphor of 'book burning' to explain CSP now looks 
> quite prescient given the current climate of intolerance to free speech).
> 
> So 15 of my 16 mk tasks come to a shit-shattering halt whenever I/O
> needs to be done. This explains to me why ramfs does not provide much
> of a speed increase when building.
> 
> This will probably translate into very bad performance when file
> systems are stacked on top of each other. Rather than achieving
> high concurrency, it will degenerate into a linear flow through the
> stack and back.
> 
> Probably also explains why 9p over the interweb is so slow.
> 
> What I don't see in the 'Plan' is how to introduce a pipeline into the
> fileserver processing to take advantage of SMP. Even if you
> break each I/O request into 10 steps, there is no way to separate this
> out into 10 programs all working concurrently. So, as you say,
> each file service has to make its own determination on how many tasks
> to run concurrently, which is also sub-optimal.
> 
> The design of Plan 9 is to have file systems in userspace, not in the
> kernel, so kernel optimizations are not going to help.

Disk file systems like cwfs tend to be fairly parallel. Code can't
be automatically parallelized; you need to write it that way, using
CSP if you like.

As for latency -- this is due to design choices in 9p that will
need to be fixed at some point, and adding parallelism in the file
server won't help.
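For a rough sense of why server-side parallelism can't fix this: a 9p
client typically keeps one request outstanding per file, so each message
payload costs a full round trip and throughput is bounded by msize/RTT
no matter how fat the pipe is. A back-of-the-envelope sketch (the 100ms
round trip and 8K message size are assumptions for illustration, not
measurements):

```go
package main

import "fmt"

func main() {
	const (
		msize = 8192 // assumed bytes carried per Tread/Rread pair
		rttMs = 100  // assumed wide-area round trip, in milliseconds
	)
	// With strictly serialized reads, each payload waits out a full
	// round trip before the next request can be sent.
	bytesPerSec := msize * 1000 / rttMs
	fmt.Printf("max throughput: %d bytes/s (~%d KB/s)\n",
		bytesPerSec, bytesPerSec/1024)
	// prints: max throughput: 81920 bytes/s (~80 KB/s)
}
```

Fixing that means letting more requests be in flight at once, which is a
protocol and client change, not something a more parallel server loop
can provide on its own.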




Thread overview: 5+ messages
2020-03-10 23:49 Trevor Higgins
2020-03-11  1:03 ` [9front] " ori
2020-03-11  2:22   ` Trevor Higgins
2020-03-11  2:28     ` ori [this message]
2020-03-13 16:25       ` hiro
