9fans - fans of the OS Plan 9 from Bell Labs
From: Gorka Guardiola <paurea@gmail.com>
To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
Subject: Re: [9fans] 9front pegs CPU on VMware
Date: Thu, 19 Dec 2013 16:57:01 +0100	[thread overview]
Message-ID: <CACm3i_h=eTBNsUarY3MMupKveP=goX4JKu3okXi5EVAn7CACLQ@mail.gmail.com> (raw)
In-Reply-To: <389ed08237f11e30c2f310f5847e34c1@brasstown.quanstro.net>

On Thu, Dec 19, 2013 at 4:19 PM, erik quanstrom <quanstro@quanstro.net> wrote:

> for those without much mwait experience, mwait is a kernel-only primitive
> (as per the instructions) that pauses the processor until a change has been
> made in some range of memory.  the size is determined by probing the h/w,
> but think cacheline.  so the discussion of locking is kernel specific as
> well.
>
>
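The "probing" is cpuid leaf 5, which reports the smallest and largest
monitor-line granularity. A quick user-space check with gcc's <cpuid.h>
wrapper, just to illustrate:

#include <stdio.h>
#include <cpuid.h>	/* __get_cpuid */

int
main(void)
{
	unsigned eax, ebx, ecx, edx;

	/* leaf 5 describes monitor/mwait; fails if the leaf is absent */
	if(__get_cpuid(5, &eax, &ebx, &ecx, &edx) == 0){
		printf("no monitor/mwait leaf\n");
		return 1;
	}
	printf("smallest monitor line: %u bytes\n", eax & 0xffff);
	printf("largest monitor line:  %u bytes\n", ebx & 0xffff);
	return 0;
}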
The original discussion started with the runq spin lock, but I think
the scope of the problem is more general and the solution can be
applied in both user and kernel space. In user space you would do
sleep(0), in the kernel you would sched(), and if you are in the
scheduler itself you would loop doing mwait (see my last email).
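
Something like this is what I mean by looping on mwait in the scheduler.
A sketch using the sse3 intrinsics rather than the real kernel code
(nrdy here is a stand-in for a per-runq ready count, and executing
mwait still needs ring 0 on most hardware, as per erik's point):

#include <pmmintrin.h>	/* _mm_monitor, _mm_mwait */

static volatile int nrdy;	/* stand-in for the scheduler's ready count */

static void
idlewait(void)
{
	while(nrdy == 0){
		/* arm the monitor on the cacheline holding nrdy */
		_mm_monitor((const void*)&nrdy, 0, 0);
		/* re-check after arming, or a wakeup that raced the
		 * monitor would be lost */
		if(nrdy != 0)
			break;
		/* sleep until that line is written (or an interrupt
		 * arrives); no coherency traffic while we wait */
		_mm_mwait(0, 0);
	}
}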

The manual I have available says:

"The MWAIT instruction can be executed at any privilege level. The MONITOR
CPUID feature flag (ECX[bit 3] when CPUID is executed with EAX = 1)
indicates the availability of the MONITOR and MWAIT instruction in a
processor. When set, the unconditional execution of MWAIT is supported at
privilege level 0 and conditional execution is supported at privilege
levels 1 through 3 (software should test for the appropriate support of
these instructions before unconditional use)."
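
The flag check the manual describes is cpuid leaf 1, ECX bit 3; the
same pattern as the size probe above, for example:

#include <stdio.h>
#include <cpuid.h>	/* __get_cpuid */

int
main(void)
{
	unsigned eax, ebx, ecx, edx;

	/* leaf 1 feature flags; MONITOR/MWAIT is ECX bit 3 */
	if(__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1<<3)))
		printf("monitor/mwait supported\n");
	else
		printf("monitor/mwait not supported\n");
	return 0;
}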

There are also other extensions, which I have not tried.
I think the ideas can be used in the kernel or in user space, though I
have only tried them in the kernel, and the implementation is only in
the kernel right now.


> > > On 17 Dec 2013, at 12:00, cinap_lenrek@felloff.net wrote:
> > >
> > > thats a surprising result. by dog pile lock you mean the runq spinlock
> no?
> > >
> >
> > I guess it depends on the HW, but I don't find that so surprising.
> > You are looping sending messages to the coherency fabric, which gets
> > congested as a result. I have seen that happen.
>
> i assume you mean that there is contention on the cacheline holding
> the runq lock?  i don't think there's classical congestion, as i
> believe cachelines not involved in the mwait would experience no
> holdup.
>

I mean congestion in the classical network sense. There are switches
and links that exchange messages for the coherency protocol, and some
of them get congested. What I was seeing was the counter of coherency
messages growing very fast and performance degrading, which I interpret
as something getting congested.
I think that when possession of the lock ping-pongs around (not
necessarily contended, just many changes in who is holding the lock, or
maybe contention too), many messages are generated, and then the
problem occurs. I certainly saw the HW counters for messages go up by
orders of magnitude when I was not using mwait.
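
To make the ping-pong concrete, compare a naive tas loop with a
test-and-test-and-set one. A user-space sketch with gcc builtins (Lock
and tas here are stand-ins, not the Plan 9 libc ones):

typedef struct {
	volatile int key;
} Lock;

static int
tas(volatile int *p)
{
	return __sync_lock_test_and_set(p, 1);	/* atomic xchg */
}

void
lock_naive(Lock *l)
{
	/* every failed xchg still takes the cacheline exclusive, so n
	 * spinners keep the line bouncing between caches: this is the
	 * message storm */
	while(tas(&l->key))
		;
}

void
lock_ttas(Lock *l)
{
	/* test-and-test-and-set: wait on a plain read, served locally
	 * from a shared copy of the line, and only retry the atomic
	 * once the holder has written it */
	while(tas(&l->key))
		while(l->key)
			__builtin_ia32_pause();	/* x86 spin-wait hint */
}

void
unlock(Lock *l)
{
	__sync_lock_release(&l->key);	/* store 0 with release semantics */
}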

>
> mwait() does improve things and one would expect the latency to always
> be better than spinning*.  but as it turns out the current scheduler is
> pretty hopeless in its locking anyway.  simply grabbing the lock with
> lock rather than canlock makes more sense to me.
>

These kinds of things are subtle. I spent a lot of time measuring, and
it is difficult to always know for sure what is happening; some of the
results are counterintuitive (at least to me) and depend on the
concrete hardware/benchmark/test. So take my conclusions with a pinch
of salt :-).

I think the latency of mwait (this is what I remember from the opterons
I was measuring; it is probably different on intel and on other amd
models) is actually worse (bigger) than with spinning, but if you have
enough processors spinning (not necessarily on the same locks, just
generating traffic), at some point it reverses because of the traffic
in the coherency fabric (or thermal effects: I do remember that without
mwait all the fans would spin up, and with mwait they would turn off
and the machine would be noticeably cooler). Measure it on your own
hardware anyway:
a) it will probably behave differently with a better monitor/mwait, and
b) you can be sure it works for the loads you are interested in.


> also, using ticket locks (see 9atom nix kernel) will provide automatic
> backoff within the lock.
> ticket locks are a poor solution as they're not really scalable but they
> will scale to 24 cpus
> much better than tas locks.
>
> mcs locks or some other queueing-style lock is clearly the long-term
> solution.  but as charles points out one would really prefer to figure
> out a way to fit them to the lock api.  i have some test code, but
> testing queueing locks in user space is ... interesting.  i need a new
> approach.
>
>
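For anyone who has not seen one, a ticket lock is just a fetch-and-add
handing out turns. A rough sketch with gcc builtins (not the 9atom
code, and eliding the unlock barrier a real one needs):

typedef struct {
	volatile unsigned next;		/* next ticket to hand out */
	volatile unsigned owner;	/* ticket currently being served */
} Ticketlock;

void
tlock(Ticketlock *l)
{
	/* take a ticket; the fetch-and-add orders the waiters FIFO */
	unsigned t = __sync_fetch_and_add(&l->next, 1);
	unsigned d, i;

	/* the distance to our turn is known, so each waiter can back
	 * off in proportion to its queue position (the automatic
	 * backoff erik mentions) */
	while((d = t - l->owner) != 0)
		for(i = 0; i < 50*d; i++)
			__builtin_ia32_pause();	/* x86 spin-wait hint */
}

void
tunlock(Ticketlock *l)
{
	l->owner++;	/* pass the lock to the next ticket holder */
}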
Let us know what your conclusions are after you implement and measure them
:-).

G.

Thread overview: 18+ messages
2013-12-16  1:49 Blake McBride
2013-12-16  1:52 ` erik quanstrom
2013-12-16  2:35   ` Blake McBride
2013-12-16  2:58     ` andrey mirtchovski
2013-12-16  3:05       ` Blake McBride
2013-12-16 10:17     ` cinap_lenrek
2013-12-16 14:57       ` Blake McBride
2013-12-16 15:34         ` erik quanstrom
2013-12-16 16:25           ` Matthew Veety
2013-12-16 16:59             ` erik quanstrom
2013-12-17 11:00           ` cinap_lenrek
2013-12-17 13:38             ` erik quanstrom
2013-12-17 14:14               ` erik quanstrom
2013-12-19  9:01             ` Gorka Guardiola Muzquiz
2013-12-19 14:16               ` Gorka Guardiola
2013-12-19 15:19               ` erik quanstrom
2013-12-19 15:57                 ` Gorka Guardiola [this message]
2013-12-19 16:15                   ` erik quanstrom
