mailing list of musl libc
* [musl] What's the purpose of the __vm_lock?
@ 2022-01-13  3:09 zhaohang (F)
  2022-01-13  3:43 ` Rich Felker
  0 siblings, 1 reply; 2+ messages in thread
From: zhaohang (F) @ 2022-01-13  3:09 UTC (permalink / raw)
  To: musl; +Cc: zhangwentao (M)


Hello,

I'm a little confused about the purpose of __vm_lock. It seems that __vm_lock was originally introduced to prevent a data race between pthread_barrier_wait and changes to virtual memory, but it cannot ensure that the virtual memory backing the barrier is not changed before pthread_barrier_wait. So what is the point of introducing __vm_lock to prevent this data race?


Thank you.




* Re: [musl] What's the purpose of the __vm_lock?
  2022-01-13  3:09 [musl] What's the purpose of the __vm_lock? zhaohang (F)
@ 2022-01-13  3:43 ` Rich Felker
  0 siblings, 0 replies; 2+ messages in thread
From: Rich Felker @ 2022-01-13  3:43 UTC (permalink / raw)
  To: zhaohang (F); +Cc: musl, zhangwentao (M)

On Thu, Jan 13, 2022 at 03:09:55AM +0000, zhaohang (F) wrote:
> Hello,
> 
> I'm a little confused about the purpose of __vm_lock. It seems that
> __vm_lock was originally introduced to prevent a data race between
> pthread_barrier_wait and changes to virtual memory, but it cannot
> ensure that the virtual memory backing the barrier is not changed
> before pthread_barrier_wait. So what is the point of introducing
> __vm_lock to prevent this data race?

In a couple places, it's necessary to hold a lock that prevents
virtual addresses from being freed and possibly reused by something
else. Aside from the pthread_barrier_wait thing (which might be buggy;
there's a recent post to the list about that) the big user is
process-shared robust mutexes:

The futex address is put on a pending slot in the robust list
structure shared with the kernel while it's being unlocked. At that
moment, it's possible for another thread to obtain the mutex, release
it, destroy it, free the memory, and map some other shared memory in
its place. If the process is killed at the same time, and the
newly-mapped shared memory happens to contain the tid of the previous
owner thread at that location, the kernel will clobber that as part of
the exiting process cleanup. This could be a map of a file on disk,
meaning it's a potential disk corruption bug.

This is fundamentally a design flaw in the whole way Linux robust
mutex protocol works, but it's unfixable. So all we can do is take a
very heavy lock to prevent the situation from arising.

Note that, for musl, this applies to non-robust process-shared mutexes
too since we use the robust list for implementing permanently
unobtainable state when the owner exits without unlocking the mutex.
(glibc doesn't do this because it ignores the POSIX requirement and
lets a future thread that happens to get the same TID obtain false
ownership.)

The operations which are blocked by the vm lock are anything which
unmaps memory (including MAP_FIXED over other memory), as well as 
the pthread_mutex_destroy operation (since it might allow the memory
in a shared map to be reused for something else too).

Does this explanation help?

Rich



Code repositories for project(s) associated with this inbox:

	https://git.vuxu.org/mirror/musl/
