9fans - fans of the OS Plan 9 from Bell Labs
From: erik quanstrom <quanstro@quanstro.net>
To: 9fans@9fans.net
Subject: Re: [9fans] [GSOC] plan9 kernel scheduler
Date: Sun, 22 Jun 2014 08:10:51 -0400	[thread overview]
Message-ID: <65984946d00c8ebe35122418a4f83cb1@mikro.quanstro.net> (raw)
In-Reply-To: <CACebkGPwBHsHwcshzNqhMrxjn4f8XJQ9gwAYdE26vE-vmovDhA@mail.gmail.com>

On Sat Jun 21 21:47:37 EDT 2014, jyu@cowsay.org wrote:
> +1 what Yoann said. :-) On SMP systems, all maches share a global run
> queue, and maches tend to try grabbing procs that have run on them
> before (affinity). Take a look at port/proc.c in particular, where a
> lot of the scheduling logic is implemented.

the (base) plan 9 scheduler uses a global, priority-based round-robin
scheduler.  priority is absolute; a process of priority n+1
will always run before a process of priority n.  processes within
a priority are run in round-robin fashion.  recent cpu usage lowers
a process's priority.  there is soft affinity that tends to rerun a
process on the same cpu when practical, though this soft affinity is
too strong in the standard distribution (see item 3 below).  there is
also hard affinity; one can wire a proc to a mach.
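
to make the affinity rules concrete, here's a rough sketch of how a
single run-queue scan might apply them.  the Proc/Schedq/Mach shapes
and the lastmach and wired fields are illustrative stand-ins, not the
actual port/proc.c code:

typedef struct Proc Proc;
typedef struct Mach Mach;
typedef struct Schedq Schedq;

struct Proc
{
	Proc	*rnext;		/* next proc on this run queue */
	Mach	*lastmach;	/* mach this proc last ran on (soft affinity) */
	Mach	*wired;		/* if non-nil, only this mach may run it (hard affinity) */
};

struct Schedq
{
	Proc	*head;		/* procs of one priority, in round-robin order */
};

/* illustrative only: pick a proc from one run queue, honoring affinity */
Proc*
pickproc(Schedq *rq, Mach *mach)
{
	Proc *p, *any;

	any = nil;
	for(p = rq->head; p != nil; p = p->rnext){
		if(p->wired != nil && p->wired != mach)
			continue;	/* hard affinity: someone else's proc */
		if(p->lastmach == mach)
			return p;	/* soft affinity: last ran here, take it */
		if(any == nil)
			any = p;	/* otherwise remember the first eligible proc */
	}
	return any;
}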

a key detail of the scheduler is that the set of priorities with runnable
processes is kept in a bitvector named "runvec".  significantly fewer
than 32 priorities are used, so the whole set fits in a single machine
word and finding the highest runnable priority is cheap.
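
a sketch of what that buys: picking the highest non-empty priority is a
scan of one word.  Nrq and runvec here are stand-ins for the kernel's
own definitions:

enum { Nrq = 22 };	/* illustrative count of priority levels; well under 32 */

ulong runvec;		/* bit i set iff run queue i has a runnable proc */

/* illustrative only: highest priority with a runnable proc, or -1 */
int
highestpri(ulong rv)
{
	int i;

	for(i = Nrq-1; i >= 0; i--)
		if(rv & (1UL<<i))
			return i;
	return -1;
}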

there is also an edf scheduler.  i can't find the iwp9 paper at the
moment (other than /n/atom/plan9/sys/doc/real-time.ms), but i think
that's beyond the scope of the question.

the scheduler is inefficient for many cpus, and there are some bugs
that cause expectations to diverge strongly from reality.  so, to make
life easier for a gsoc student who is working on proper changes to
the scheduler, i spent two days on some cheap improvements that
make the scheduler predictable and reasonable, if not terribly efficient.
they depend on having a scalable lock.
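
for context, here's a minimal sketch of a ticket lock, the simplest
lock with the queueing behaviour item 1 below relies on: waiters take
tickets and are served in fifo order.  a fully scalable lock such as
mcs goes further and has each waiter spin on its own cache line;
barriers and cpu-relax hints are omitted here.  ainc() stands in for
an atomic increment that returns the new value; this is a sketch, not
the lock actually used:

typedef struct Ticketlock Ticketlock;
struct Ticketlock
{
	long	next;		/* next ticket to hand out */
	long	serving;	/* ticket currently being served */
};

void
tlock(Ticketlock *l)
{
	long t;

	t = ainc(&l->next) - 1;		/* take a ticket */
	while(l->serving != t)
		;			/* wait in fifo order for our turn */
}

void
tunlock(Ticketlock *l)
{
	l->serving++;			/* admit the next waiter */
}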

1.  given a scalable lock, putting a monitor around the run-queue
scan results in one cacheline contention event rather than many.
this also has the effect of building a queue of processors waiting in line
for their chance at the run queues (cf. the ticket-lock sketch above).

2.  replacing idlehands with a monmwait on the runvec (sketched after
this list).

3.  reducing the soft affinity window to n microseconds, rather than letting
all local-affinity processes come before non-local processes, regardless
of time spent on the run queues.  (i'm currently working with n=500µs;
a sketch follows this list.)
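
the idle change in item 2 amounts to replacing the busy idle loop with
a wait on the runvec word; monmwait() below is only a stand-in for a
monitor/mwait-style primitive that returns once the watched word
changes from the given value:

/* illustrative only: idle until some run queue becomes non-empty */
static void
idlewait(void)
{
	while(runvec == 0)
		monmwait(&runvec, 0);	/* sleep until runvec is no longer 0 */
}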
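
and the affinity change in item 3 is roughly this: a ready proc is
reserved for the mach it last ran on only for its first n microseconds
on the run queue, after which any mach may take it.  usec()
(microseconds since boot) and the readytime field are made up for the
example, on top of the illustrative Proc above:

enum { Affinityus = 500 };	/* n = 500µs */

/* illustrative only: may this mach run p right now?
 * p->readytime is the usec() time at which p was put on the run queue. */
static int
canrun(Proc *p, Mach *mach)
{
	if(p->wired != nil)
		return p->wired == mach;	/* hard affinity */
	if(p->lastmach == mach)
		return 1;			/* the affine mach may always take it */
	return usec() - p->readytime >= Affinityus;	/* others wait out the window */
}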

these improvements have the net effect, on my 8-thread machine,
of lowering the nix kernel compile time by about 40%.  i'm not sure what
to expect on larger machines.

i did not expect such large improvements.  so unfortunately this has
raised the bar quite a bit for the gsoc project.  nonetheless, i'm happy
to report that the new scheduler is already within 7% of the old!

- erik


