On Thu, Dec 19, 2013 at 4:19 PM, erik quanstrom <quanstro@quanstro.net> wrote:
for those without much mwait experience, mwait is a kernel-only primitive
(as per the instructions) that pauses the processor until a change has been
made in some range of memory.  the size is determined by probing the h/w,
but think cacheline.  so the discussion of locking is kernel specific as well.

 
The original discussion started about the runq spin lock, but I think the scope of the
problem is more general and the solution can be applied in both user and kernel space.
While in user space you would do sleep(0), in the kernel you would call sched(), or, if you
are in the scheduler itself, you would loop doing mwait (see my last email).
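
To make that concrete, here is a rough sketch of the kind of scheduler
idle loop I mean (this is not the code in the kernel; the flag name and
the _mm_monitor/_mm_mwait intrinsics are only for illustration, and the
re-check after arming the monitor is what keeps a wakeup from being lost):

#include <pmmintrin.h>		/* _mm_monitor, _mm_mwait (SSE3) */
#include <stdatomic.h>

/* hypothetical flag written by whoever makes a proc runnable */
extern atomic_int runq_touched;

static void
idlewait(void)
{
	while(!atomic_load(&runq_touched)){
		/* arm the monitor on the line holding the flag */
		_mm_monitor((const void*)&runq_touched, 0, 0);
		/* re-check: a write between the check and mwait must not be missed */
		if(atomic_load(&runq_touched))
			break;
		/* doze until that line is written (or an interrupt arrives) */
		_mm_mwait(0, 0);
	}
	atomic_store(&runq_touched, 0);
}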

The manual I have available says:

"The MWAIT instruction can be executed at any privilege level. The MONITOR CPUID feature flag (ECX[bit 3] when CPUID is executed with EAX = 1) indicates the availability of the MONITOR and MWAIT instruction in a processor. When set, the unconditional execution of MWAIT is supported at privilege level 0 and conditional execution is supported at privilege levels 1 through 3 (software should test for the appropriate support of these instructions before unconditional use)."

There are also other extensions, which I have not tried.
I think the ideas can be used in the kernel or in user space, though I have only tried
it in the kernel and the implementation is only in the kernel right now.
 
> > On 17 Dec 2013, at 12:00, cinap_lenrek@felloff.net wrote:
> >
> > thats a surprising result. by dog pile lock you mean the runq spinlock no?
> >
>
> I guess it depends on the HW, but I don't find that so surprising. You are looping
> sending messages to the coherency fabric, which gets congested as a result.
> I have seen that happen.

i assume you mean that there is contention on the cacheline holding the runq lock?
i don't think there's classical congestion, as i believe cachelines not involved in the
mwait would experience no holdup.

I mean congestion in the classical network sense. There are switches and links to
exchange messages for the coherency protocol, and some of them get congested.
What I was seeing was the counter of messages growing very fast and the performance
degrading, which I interpret as something getting congested.
I think when possession of the lock is ping-ponged around (not necessarily contended,
but many changes in who is holding the lock, or maybe contention) many messages are
generated and then the problem occurs. I certainly saw the HW counters for messages go up
orders of magnitude when I was not using mwait.

mwait() does improve things and one would expect the latency to always be better
than spinning.  but as it turns out the current scheduler is pretty hopeless in its locking
anyway.  simply grabbing the lock with lock rather than canlock makes more sense to me.
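
(For anyone not fluent in the lock(2) API: canlock is the non-blocking try
and lock blocks until it has the lock, so the contrast is roughly between
the two shapes below; rql is just a stand-in name for the Lock guarding the runq.)

	Lock rql;	/* stand-in for the runq lock */

	/* try-and-retry: every failed canlock() is another trip to the contended line */
	while(!canlock(&rql)){
		/* ... do something else, come back and try again ... */
	}

	/* versus simply blocking, letting lock() spin, back off or mwait internally */
	lock(&rql);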

These kinds of things are subtle. I spent a lot of time measuring, and it is difficult to always
know for sure what is happening; some of the results are counterintuitive (at least to me)
and depend on the concrete hardware/benchmark/test. So take my conclusions with a pinch of
salt :-).

I think the latency of mwait (this is what I remember
for the opterons I was measuring, probably different in intel and in other amd models)
is actually worse (bigger) than with spinning,
but if you have enough processors doing the spinning (not necessarily on the same locks, but
generating traffic) then at some point it reverses because
of the traffic in the coherency fabric (or thermal effects; I do remember that
without the mwait all the fans would be up, and with the mwait they would turn off and the
machine would be noticeably cooler). Measure it on your hardware anyway:
a) it will probably be different with a better monitor/mwait, and
b) you can be sure it works for the loads you are interested in.


also, using ticket locks (see the 9atom nix kernel) will provide automatic backoff within the lock.
ticket locks are a poor solution as they're not really scalable, but they will scale to 24 cpus
much better than tas locks.
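
A minimal ticket lock sketch (C11 atomics, not the 9atom/nix code) for anyone
who hasn't seen one; the FIFO hand-off is where the "automatic backoff" comes
from, since each waiter only proceeds when its ticket comes up:

#include <stdatomic.h>

typedef struct {
	atomic_uint next;	/* next ticket to hand out */
	atomic_uint owner;	/* ticket currently being served */
} Ticketlock;			/* both fields start at 0 */

static void
tlock(Ticketlock *l)
{
	unsigned t = atomic_fetch_add(&l->next, 1);	/* take a ticket */

	while(atomic_load(&l->owner) != t)
		;					/* wait for our turn */
}

static void
tunlock(Ticketlock *l)
{
	atomic_fetch_add(&l->owner, 1);			/* serve the next waiter */
}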

mcs locks or some other queueing-style lock is clearly the long-term solution.  but as
charles points out one would really prefer to figure out a way to fit them to the lock
api.  i have some test code, but testing queueing locks in user space is ... interesting.
i need a new approach.
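
For reference, the shape of an MCS lock (a sketch with C11 atomics, not erik's
test code). The per-caller queue node is exactly what makes it awkward to hide
behind a plain lock(Lock*)/unlock(Lock*) interface:

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct MCSnode MCSnode;
struct MCSnode {
	_Atomic(MCSnode*)	next;
	atomic_bool		locked;
};

typedef struct {
	_Atomic(MCSnode*)	tail;	/* NULL when the lock is free */
} MCSlock;

static void
mcslock(MCSlock *l, MCSnode *n)
{
	MCSnode *prev;

	atomic_store(&n->next, NULL);
	atomic_store(&n->locked, true);
	prev = atomic_exchange(&l->tail, n);		/* join the queue */
	if(prev != NULL){
		atomic_store(&prev->next, n);
		while(atomic_load(&n->locked))
			;				/* spin only on our own node */
	}
}

static void
mcsunlock(MCSlock *l, MCSnode *n)
{
	MCSnode *succ = atomic_load(&n->next);

	if(succ == NULL){
		MCSnode *expect = n;
		/* no successor visible: try to mark the lock free */
		if(atomic_compare_exchange_strong(&l->tail, &expect, NULL))
			return;
		/* a successor is between the exchange and setting next; wait for it */
		while((succ = atomic_load(&n->next)) == NULL)
			;
	}
	atomic_store(&succ->locked, false);		/* hand over the lock */
}

Each acquire/release needs the same node threaded through, and the existing
lock(Lock*) signature has nowhere to put it, which seems to be the api-fitting
problem mentioned above.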


Let us know what your conclusions are after you implement and measure them :-).

G.