mailing list of musl libc
* [musl] Possible unfair scenarios in pthread_mutex_unlock
From: JinCheng Li @ 2024-07-26  6:22 UTC
  To: musl


Hi

I have two questions about pthread_mutex_unlock.

  1. Why do we need a vmlock only for pshared mutexes, and not for private
mutexes?
  2. For a pshared mutex, after swapping _m_lock to the unlocked state, we
have to wait for __vm_unlock() before waking the other waiters. During this
window, if someone trylocks the pshared mutex, it can jump the queue. Isn't
this unfair to the previous waiters? They may have to wait longer to be
woken up, which can cause performance issues. Do we have any good way to
avoid this?

int __pthread_mutex_unlock(pthread_mutex_t *m)
{
    pthread_t self;
    int waiters = m->_m_waiters;
    int cont;
    int type = m->_m_type & 15;
    int priv = (m->_m_type & 128) ^ 128;
    int new = 0;
    int old;

    if (type != PTHREAD_MUTEX_NORMAL) {
        self = __pthread_self();
        old = m->_m_lock;
        int own = old & 0x3fffffff;
        if (own != self->tid)
            return EPERM;
        if ((type&3) == PTHREAD_MUTEX_RECURSIVE && m->_m_count)
            return m->_m_count--, 0;
        if ((type&4) && (old&0x40000000))
            new = 0x7fffffff;
        if (!priv) {
            self->robust_list.pending = &m->_m_next;
            __vm_lock(); // why is this lock needed only here?
        }
        volatile void *prev = m->_m_prev;
        volatile void *next = m->_m_next;
        *(volatile void *volatile *)prev = next;
        if (next != &self->robust_list.head) *(volatile void *volatile *)
            ((char *)next - sizeof(void *)) = prev;
    }
    if (type&8) {
        if (old<0 || a_cas(&m->_m_lock, old, new)!=old) {
            if (new) a_store(&m->_m_waiters, -1);
            __syscall(SYS_futex, &m->_m_lock, FUTEX_UNLOCK_PI|priv);
        }
        cont = 0;
        waiters = 0;
    } else {
        cont = a_swap(&m->_m_lock, new);
    }
    if (type != PTHREAD_MUTEX_NORMAL && !priv) {
        self->robust_list.pending = 0;
        __vm_unlock(); // queue jumping caused by trylock
    }
    if (waiters || cont<0)
        __wake(&m->_m_lock, 1, priv);
    return 0;
}

Li



* Re: [musl] Possible unfair scenarios in pthread_mutex_unlock
From: Markus Wichmann @ 2024-07-26 19:52 UTC
  To: musl; +Cc: JinCheng Li

On Fri, Jul 26, 2024 at 06:22:50AM +0000, JinCheng Li wrote:
> Hi
>
> I have two questions about pthread_mutex_unlock.
>
>   1. Why do we need a vmlock only for pshared mutexes, and not for
> private mutexes?

Because only in the pshared case is it valid to have the mutex in shared
memory, and to unmap that shared memory immediately after an unlock.
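
To make this concrete, here is a rough sketch of the pattern (illustrative
code written for this reply, not taken from musl; the error-checking type
is used so the mutex takes the robust-list path shown in the unlock code
above):

#include <pthread.h>
#include <sys/mman.h>

int use_pshared_region(void)
{
    /* The mutex lives in a MAP_SHARED mapping (anonymous here, so it is
     * shared with children after fork(); a shm object works the same). */
    pthread_mutex_t *m = mmap(0, sizeof *m, PROT_READ|PROT_WRITE,
                              MAP_SHARED|MAP_ANONYMOUS, -1, 0);
    if (m == MAP_FAILED) return -1;

    pthread_mutexattr_t a;
    pthread_mutexattr_init(&a);
    pthread_mutexattr_setpshared(&a, PTHREAD_PROCESS_SHARED);
    pthread_mutexattr_settype(&a, PTHREAD_MUTEX_ERRORCHECK);
    pthread_mutex_init(m, &a);
    pthread_mutexattr_destroy(&a);

    pthread_mutex_lock(m);
    /* ... work on the state protected by the mutex ... */
    pthread_mutex_unlock(m);

    /* The critical part: for a pshared mutex it is valid to drop the
     * local mapping as soon as the unlock returns; other processes (or
     * threads) keep using the mutex through their own mappings. */
    return munmap(m, sizeof *m);
}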

>   2. For a pshared mutex, after swapping _m_lock to the unlocked state,
> we have to wait for __vm_unlock() before waking the other waiters.
> During this window, if someone trylocks the pshared mutex, it can jump
> the queue. Isn't this unfair to the previous waiters? They may have to
> wait longer to be woken up, which can cause performance issues. Do we
> have any good way to avoid this?

Fairness is not guaranteed for mutexes. It is not guaranteed for anything:
any synchronization primitive can be sniped. In a situation where new
waiters keep arriving at the mutex, there is no guarantee that any given
waiter won't starve.
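
As an illustration (a toy program written for this reply, not a musl test
case), a thread that snipes the mutex with trylock can keep taking it ahead
of a thread blocked in pthread_mutex_lock(); whether the blocked thread
actually starves depends entirely on the scheduler:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static long snipes;             /* protected by m */

static void *sniper(void *arg)
{
    for (;;) {
        /* Never queues up: just grabs the mutex whenever it is free. */
        if (pthread_mutex_trylock(&m) == 0) {
            snipes++;
            pthread_mutex_unlock(&m);
        }
    }
    return 0;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, 0, sniper, 0);
    for (int i = 0; i < 10; i++) {
        /* May be woken after an unlock, find the mutex already re-taken
         * by the sniper, and have to go back to sleep. */
        pthread_mutex_lock(&m);
        printf("waiter got the lock; sniper had taken it %ld times\n", snipes);
        pthread_mutex_unlock(&m);
    }
    return 0;
}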

Ciao,
Markus


* Re: [musl] Possible unfair scenarios in pthread_mutex_unlock
From: JinCheng Li @ 2024-07-27 13:38 UTC
  To: musl; +Cc: Markus Wichmann


Hi

> > I have two questions about pthread_mutex_unlock.
> >
> >   1. Why do we need a vmlock only for pshared mutexes, and not for
> > private mutexes?

> Because only in the pshared case is it valid to have the mutex in shared
> memory, and to unmap that shared memory immediately after an unlock.

Sorry, I still don't fully understand.

  1. The vmlock is a process-private lock. How does it work for a pshared
mutex (memory shared across processes)? What is its real role?
  2. Why does munmap have to call __vm_wait()? It looks like we do the
vm_wait even when the memory being unmapped is not shared. And if I'm
releasing a pshared lock, are all munmap calls blocked until the mutex
has been unlocked?
  3. Can you provide an example of this vmlock in action (a case where
the lock must exist for things to work)?


int __munmap(void *start, size_t len)
{
    __vm_wait();
    return syscall(SYS_munmap, start, len);
}



Best
Li



* Re: [musl] Possible unfair scenarios in pthread_mutex_unlock
From: Rich Felker @ 2024-07-27 15:56 UTC
  To: JinCheng Li; +Cc: musl, Markus Wichmann

On Sat, Jul 27, 2024 at 01:38:55PM +0000, JinCheng Li wrote:
> Hi
> 
> > > I have two questions about pthread_mutex_unlock.
> > >
> > >   1. Why do we need a vmlock only for pshared mutexes, and not for
> > > private mutexes?
> 
> > Because only in the pshared case is it valid to have the mutex in shared
> > memory, and to unmap that shared memory immediately after an unlock.
> 
> Sorry, I still don't fully understand.
> 
>   1. The vmlock is a process-private lock. How does it work for a pshared
> mutex (memory shared across processes)? What is its real role?
>   2. Why does munmap have to call __vm_wait()? It looks like we do the
> vm_wait even when the memory being unmapped is not shared. And if I'm
> releasing a pshared lock, are all munmap calls blocked until the mutex
> has been unlocked?
>   3. Can you provide an example of this vmlock in action (a case where
> the lock must exist for things to work)?

Mutex M, threads A and B. Initially A holds lock.

A: Stores pointer to M on robust list pending slot
A: Writes a 0 to lock word of M
B: Succeeds acquiring M
B: Does something with state protected by M
B: Unlocks and unmaps M
B: Maps a file with MAP_SHARED, happens to get same address M had.

At this point, suppose the process gets terminated by a signal. Now
the kernel walks the robust list and pending slot fields to find
mutexes it needs to put into OWNERDEAD state. It finds the address of
M in A's pending slot. Now the kernel inspects and possibly writes to
the memory at this address, but it's not writing to the mutex. Instead
it's overwriting contents of an unrelated file.
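
For reference, the per-thread structure the kernel walks looks like this
(declarations as in the Linux uapi <linux/futex.h>; the helper is only a
sketch of the address computation, not kernel code):

#include <stdint.h>

struct robust_list { struct robust_list *next; };

struct robust_list_head {
    struct robust_list list;              /* held robust/pshared mutexes */
    long futex_offset;                    /* list entry -> futex word */
    struct robust_list *list_op_pending;  /* the "pending slot" above */
};

/* Roughly what the kernel does per entry at thread exit: turn a
 * user-space list pointer into the address of the futex word, then
 * read it and possibly set the owner-died bit. If list_op_pending
 * still points into a mapping that was unmapped and replaced, that
 * write lands in the unrelated new mapping -- the file in the
 * scenario above. */
static uint32_t *entry_to_futex(struct robust_list *e, long futex_offset)
{
    return (uint32_t *)((char *)e + futex_offset);
}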

This is a fundamental design flaw in Linux's robust list handling, and
the only fix we could find is precluding munmap or mmap with MAP_FIXED
until the robust list pending slot is cleared.
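
Schematically, that interlock amounts to something like this (a simplified
sketch of the idea, not the literal musl source):

#include <stdatomic.h>
#include <limits.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/futex.h>

static atomic_int vmcount;      /* open "pending slot is set" windows */

static void vm_lock(void)       /* entered around the pending-slot span */
{
    atomic_fetch_add(&vmcount, 1);
}

static void vm_unlock(void)     /* pending slot has been cleared again */
{
    if (atomic_fetch_sub(&vmcount, 1) == 1)
        syscall(SYS_futex, &vmcount, FUTEX_WAKE|FUTEX_PRIVATE_FLAG,
                INT_MAX, 0, 0, 0);
}

static void vm_wait(void)       /* called before munmap/mmap(MAP_FIXED) */
{
    int c;
    while ((c = atomic_load(&vmcount)))
        syscall(SYS_futex, &vmcount, FUTEX_WAIT|FUTEX_PRIVATE_FLAG,
                c, 0, 0, 0);
}

Note that the window only covers the span from setting robust_list.pending
to clearing it in the lock/unlock paths (see the __pthread_mutex_unlock
code at the start of the thread), so munmap is delayed only for that brief
interval, not for as long as the mutex is held.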

Rich

