mailing list of musl libc
* [musl] Status report and MT fork
@ 2020-10-26  0:50 Rich Felker
  2020-10-26  0:59 ` Rich Felker
From: Rich Felker @ 2020-10-26  0:50 UTC
  To: musl

I just pushed a series of changes (up through 0b87551bdf) I've had
queued for a while now, some of which had minor issues that I think
have all been resolved now. They cover a range of bugs found in the
process of reviewing the possibility of making fork provide a
consistent execution environment for the child of a multithreaded
parent, and a couple unrelated fixes.

Based on distro experience with musl 1.2.1, I'm working on getting the
improved fork into 1.2.2. Despite the fact that 1.2.1 did not break
anything that wasn't already broken (apps invoking UB in MT-forked
children), prior to it most of the active breakage was hit with very
low probability, so there were a lot of packages people *thought* were
working, that weren't, and feedback from distros seems to be that
getting everything working as reliably as before (even if it was
imperfect and dangerous before) is not tractable in any reasonable
time frame. And in particular, I'm concerned about language runtimes
like Ruby that seem to have a contract with applications they host to
support MT-forked children. Fixing these is not a matter of fixing a
finite set of bugs but fixing a contract, which is likely not
tractable.

Assuming it goes through, the change here will be far more complete
than glibc's handling of MT-forked children, where most things other
than malloc don't actually work, but fail sufficiently infrequently
that they seem to work. While there are a lot of things I dislike
about this path, one major thing I do like is that it really makes
internal use of threads by library code (including third party libs)
transparent to the application, rather than "transparent, until you
use fork".

Will follow up with draft patch for testing.

Rich

* Re: [musl] Status report and MT fork
  2020-10-26  0:50 [musl] Status report and MT fork Rich Felker
@ 2020-10-26  0:59 ` Rich Felker
  2020-10-26  3:29   ` Rich Felker
  2020-10-27 21:17   ` Rich Felker
From: Rich Felker @ 2020-10-26  0:59 UTC
  To: musl

On Sun, Oct 25, 2020 at 08:50:29PM -0400, Rich Felker wrote:
> I just pushed a series of changes (up through 0b87551bdf) I've had
> queued for a while now, some of which had minor issues that I think
> have all been resolved now. They cover a range of bugs found in the
> process of reviewing the possibility of making fork provide a
> consistent execution environment for the child of a multithreaded
> parent, and a couple unrelated fixes.
> 
> Based on distro experience with musl 1.2.1, I'm working on getting the
> improved fork into 1.2.2. Despite the fact that 1.2.1 did not break
> anything that wasn't already broken (apps invoking UB in MT-forked
> children), prior to it most of the active breakage was hit with very
> low probability, so there were a lot of packages people *thought* were
> working, that weren't, and feedback from distros seems to be that
> getting everything working as reliably as before (even if it was
> imperfect and dangerous before) is not tractable in any reasonable
> time frame. And in particular, I'm concerned about language runtimes
> like Ruby that seem to have a contract with applications they host to
> support MT-forked children. Fixing these is not a matter of fixing a
> finite set of bugs but fixing a contract, which is likely not
> tractable.
> 
> Assuming it goes through, the change here will be far more complete
> than glibc's handling of MT-forked children, where most things other
> than malloc don't actually work, but fail sufficiently infrequently
> that they seem to work. While there are a lot of things I dislike
> about this path, one major thing I do like is that it really makes
> internal use of threads by library code (including third party libs)
> transparent to the application, rather than "transparent, until you
> use fork".
> 
> Will follow up with draft patch for testing.

Patch attached. It should suffice for testing and review of whether
there are any locks/state I overlooked. It could possibly be made less
ugly too.

Note that this does not strictly conform to past and current POSIX
that specify fork as AS-safe. POSIX-future has removed fork from the
AS-safe list, and seems to have considered its original inclusion
erroneous due to pthread_atfork and existing implementation practice.
The patch as written takes care to skip all locking in single-threaded
parents, so it does not break the AS-safety property in single-threaded
programs that may have made use of it. -Dfork=_Fork can also be used
to get an AS-safe fork, but it's not equivalent to the old semantics;
_Fork does not run atfork handlers. It's also possible to static-link
an alternate fork implementation that provides its own pthread_atfork
and calls _Fork, if really needed for a particular application.
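
As a sketch of that last option (hypothetical and incomplete: the
run_*_handlers functions stand in for the handler lists a real
replacement's own pthread_atfork would maintain, and _Fork is assumed
to be declared via unistd.h with _GNU_SOURCE):

#define _GNU_SOURCE
#include <unistd.h>

void run_prepare_handlers(void); /* app-defined, fed by the app's own */
void run_parent_handlers(void);  /* pthread_atfork replacement        */
void run_child_handlers(void);

/* static-linked override of libc's fork */
pid_t fork(void)
{
	run_prepare_handlers();
	pid_t ret = _Fork(); /* raw AS-safe fork; runs no atfork handlers */
	if (!ret) run_child_handlers();
	else run_parent_handlers();
	return ret;
}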

Feedback from distro folks would be very helpful -- does this fix all
the packages that 1.2.1 "broke"?

Rich

[-- Attachment #2: mt-fork.diff --]
[-- Type: text/plain, Size: 11177 bytes --]

diff --git a/ldso/dynlink.c b/ldso/dynlink.c
index af983692..32e88508 100644
--- a/ldso/dynlink.c
+++ b/ldso/dynlink.c
@@ -21,6 +21,7 @@
 #include <semaphore.h>
 #include <sys/membarrier.h>
 #include "pthread_impl.h"
+#include "fork_impl.h"
 #include "libc.h"
 #include "dynlink.h"
 
@@ -1404,6 +1405,17 @@ void __libc_exit_fini()
 	}
 }
 
+void __ldso_atfork(int who)
+{
+	if (who<0) {
+		pthread_rwlock_wrlock(&lock);
+		pthread_mutex_lock(&init_fini_lock);
+	} else {
+		pthread_mutex_unlock(&init_fini_lock);
+		pthread_rwlock_unlock(&lock);
+	}
+}
+
 static struct dso **queue_ctors(struct dso *dso)
 {
 	size_t cnt, qpos, spos, i;
@@ -1462,6 +1474,12 @@ static struct dso **queue_ctors(struct dso *dso)
 	}
 	queue[qpos] = 0;
 	for (i=0; i<qpos; i++) queue[i]->mark = 0;
+	for (i=0; i<qpos; i++) if (queue[i]->ctor_visitor->tid < 0) {
+		error("State of %s is inconsistent due to multithreaded fork\n",
+			queue[i]->name);
+		free(queue);
+		if (runtime) longjmp(*rtld_fail, 1);
+	}
 
 	return queue;
 }
diff --git a/src/exit/at_quick_exit.c b/src/exit/at_quick_exit.c
index d3ce6522..e4b5d78d 100644
--- a/src/exit/at_quick_exit.c
+++ b/src/exit/at_quick_exit.c
@@ -1,12 +1,14 @@
 #include <stdlib.h>
 #include "libc.h"
 #include "lock.h"
+#include "fork_impl.h"
 
 #define COUNT 32
 
 static void (*funcs[COUNT])(void);
 static int count;
 static volatile int lock[1];
+volatile int *const __at_quick_exit_lockptr = lock;
 
 void __funcs_on_quick_exit()
 {
diff --git a/src/exit/atexit.c b/src/exit/atexit.c
index 160d277a..2804a1d7 100644
--- a/src/exit/atexit.c
+++ b/src/exit/atexit.c
@@ -2,6 +2,7 @@
 #include <stdint.h>
 #include "libc.h"
 #include "lock.h"
+#include "fork_impl.h"
 
 /* Ensure that at least 32 atexit handlers can be registered without malloc */
 #define COUNT 32
@@ -15,6 +16,7 @@ static struct fl
 
 static int slot;
 static volatile int lock[1];
+volatile int *const __atexit_lockptr = lock;
 
 void __funcs_on_exit()
 {
diff --git a/src/internal/fork_impl.h b/src/internal/fork_impl.h
new file mode 100644
index 00000000..0e116aae
--- /dev/null
+++ b/src/internal/fork_impl.h
@@ -0,0 +1,18 @@
+#include <features.h>
+
+extern hidden volatile int *const __at_quick_exit_lockptr;
+extern hidden volatile int *const __atexit_lockptr;
+extern hidden volatile int *const __dlerror_lockptr;
+extern hidden volatile int *const __gettext_lockptr;
+extern hidden volatile int *const __random_lockptr;
+extern hidden volatile int *const __sem_open_lockptr;
+extern hidden volatile int *const __stdio_ofl_lockptr;
+extern hidden volatile int *const __syslog_lockptr;
+extern hidden volatile int *const __timezone_lockptr;
+
+extern hidden volatile int *const __bump_lockptr;
+
+extern hidden volatile int *const __vmlock_lockptr;
+
+hidden void __malloc_atfork(int);
+hidden void __ldso_atfork(int);
diff --git a/src/ldso/dlerror.c b/src/ldso/dlerror.c
index 3fcc7779..009beecb 100644
--- a/src/ldso/dlerror.c
+++ b/src/ldso/dlerror.c
@@ -4,6 +4,7 @@
 #include "pthread_impl.h"
 #include "dynlink.h"
 #include "lock.h"
+#include "fork_impl.h"
 
 char *dlerror()
 {
@@ -19,6 +20,7 @@ char *dlerror()
 
 static volatile int freebuf_queue_lock[1];
 static void **freebuf_queue;
+volatile int *const __dlerror_lockptr = freebuf_queue_lock;
 
 void __dl_thread_cleanup(void)
 {
diff --git a/src/locale/dcngettext.c b/src/locale/dcngettext.c
index 4c304393..805fe83b 100644
--- a/src/locale/dcngettext.c
+++ b/src/locale/dcngettext.c
@@ -10,6 +10,7 @@
 #include "atomic.h"
 #include "pleval.h"
 #include "lock.h"
+#include "fork_impl.h"
 
 struct binding {
 	struct binding *next;
@@ -34,9 +35,11 @@ static char *gettextdir(const char *domainname, size_t *dirlen)
 	return 0;
 }
 
+static volatile int lock[1];
+volatile int *const __gettext_lockptr = lock;
+
 char *bindtextdomain(const char *domainname, const char *dirname)
 {
-	static volatile int lock[1];
 	struct binding *p, *q;
 
 	if (!domainname) return 0;
diff --git a/src/malloc/lite_malloc.c b/src/malloc/lite_malloc.c
index f8931ba5..f6736ec4 100644
--- a/src/malloc/lite_malloc.c
+++ b/src/malloc/lite_malloc.c
@@ -6,6 +6,7 @@
 #include "libc.h"
 #include "lock.h"
 #include "syscall.h"
+#include "fork_impl.h"
 
 #define ALIGN 16
 
@@ -31,10 +32,12 @@ static int traverses_stack_p(uintptr_t old, uintptr_t new)
 	return 0;
 }
 
+static volatile int lock[1];
+volatile int *const __bump_lockptr = lock;
+
 static void *__simple_malloc(size_t n)
 {
 	static uintptr_t brk, cur, end;
-	static volatile int lock[1];
 	static unsigned mmap_step;
 	size_t align=1;
 	void *p;
diff --git a/src/malloc/mallocng/glue.h b/src/malloc/mallocng/glue.h
index 16acd1ea..980ce396 100644
--- a/src/malloc/mallocng/glue.h
+++ b/src/malloc/mallocng/glue.h
@@ -56,7 +56,8 @@ __attribute__((__visibility__("hidden")))
 extern int __malloc_lock[1];
 
 #define LOCK_OBJ_DEF \
-int __malloc_lock[1];
+int __malloc_lock[1]; \
+void __malloc_atfork(int who) { malloc_atfork(who); }
 
 static inline void rdlock()
 {
@@ -73,5 +74,16 @@ static inline void unlock()
 static inline void upgradelock()
 {
 }
+static inline void resetlock()
+{
+	__malloc_lock[0] = 0;
+}
+
+static inline void malloc_atfork(int who)
+{
+	if (who<0) rdlock();
+	else if (who>0) resetlock();
+	else unlock();
+}
 
 #endif
diff --git a/src/malloc/oldmalloc/malloc.c b/src/malloc/oldmalloc/malloc.c
index c0997ad8..8ac68a24 100644
--- a/src/malloc/oldmalloc/malloc.c
+++ b/src/malloc/oldmalloc/malloc.c
@@ -9,6 +9,7 @@
 #include "atomic.h"
 #include "pthread_impl.h"
 #include "malloc_impl.h"
+#include "fork_impl.h"
 
 #if defined(__GNUC__) && defined(__PIC__)
 #define inline inline __attribute__((always_inline))
@@ -527,3 +528,21 @@ void __malloc_donate(char *start, char *end)
 	c->csize = n->psize = C_INUSE | (end-start);
 	__bin_chunk(c);
 }
+
+void __malloc_atfork(int who)
+{
+	if (who<0) {
+		lock(mal.split_merge_lock);
+		for (int i=0; i<64; i++)
+			lock(mal.bins[i].lock);
+	} else if (!who) {
+		for (int i=0; i<64; i++)
+			unlock(mal.bins[i].lock);
+		unlock(mal.split_merge_lock);
+	} else {
+		for (int i=0; i<64; i++)
+			mal.bins[i].lock[0] = mal.bins[i].lock[1] = 0;
+		mal.split_merge_lock[1] = 0;
+		mal.split_merge_lock[0] = 0;
+	}
+}
diff --git a/src/misc/syslog.c b/src/misc/syslog.c
index 13d4b0a6..7dc0c1be 100644
--- a/src/misc/syslog.c
+++ b/src/misc/syslog.c
@@ -10,6 +10,7 @@
 #include <errno.h>
 #include <fcntl.h>
 #include "lock.h"
+#include "fork_impl.h"
 
 static volatile int lock[1];
 static char log_ident[32];
@@ -17,6 +18,7 @@ static int log_opt;
 static int log_facility = LOG_USER;
 static int log_mask = 0xff;
 static int log_fd = -1;
+volatile int *const __syslog_lockptr = lock;
 
 int setlogmask(int maskpri)
 {
diff --git a/src/prng/random.c b/src/prng/random.c
index 633a17f6..d3780fa7 100644
--- a/src/prng/random.c
+++ b/src/prng/random.c
@@ -1,6 +1,7 @@
 #include <stdlib.h>
 #include <stdint.h>
 #include "lock.h"
+#include "fork_impl.h"
 
 /*
 this code uses the same lagged fibonacci generator as the
@@ -23,6 +24,7 @@ static int i = 3;
 static int j = 0;
 static uint32_t *x = init+1;
 static volatile int lock[1];
+volatile int *const __random_lockptr = lock;
 
 static uint32_t lcg31(uint32_t x) {
 	return (1103515245*x + 12345) & 0x7fffffff;
diff --git a/src/process/fork.c b/src/process/fork.c
index a12da01a..ecf7f376 100644
--- a/src/process/fork.c
+++ b/src/process/fork.c
@@ -1,13 +1,81 @@
 #include <unistd.h>
 #include "libc.h"
+#include "lock.h"
+#include "pthread_impl.h"
+#include "fork_impl.h"
+
+static volatile int *const dummy_lockptr = 0;
+
+weak_alias(dummy_lockptr, __at_quick_exit_lockptr);
+weak_alias(dummy_lockptr, __atexit_lockptr);
+weak_alias(dummy_lockptr, __dlerror_lockptr);
+weak_alias(dummy_lockptr, __gettext_lockptr);
+weak_alias(dummy_lockptr, __random_lockptr);
+weak_alias(dummy_lockptr, __sem_open_lockptr);
+weak_alias(dummy_lockptr, __stdio_ofl_lockptr);
+weak_alias(dummy_lockptr, __syslog_lockptr);
+weak_alias(dummy_lockptr, __timezone_lockptr);
+weak_alias(dummy_lockptr, __bump_lockptr);
+
+weak_alias(dummy_lockptr, __vmlock_lockptr);
+
+static volatile int *const *const atfork_locks[] = {
+	&__at_quick_exit_lockptr,
+	&__atexit_lockptr,
+	&__dlerror_lockptr,
+	&__gettext_lockptr,
+	&__random_lockptr,
+	&__sem_open_lockptr,
+	&__stdio_ofl_lockptr,
+	&__syslog_lockptr,
+	&__timezone_lockptr,
+	&__bump_lockptr,
+};
 
 static void dummy(int x) { }
 weak_alias(dummy, __fork_handler);
+weak_alias(dummy, __malloc_atfork);
+weak_alias(dummy, __ldso_atfork);
+
+static void dummy_0(void) { }
+weak_alias(dummy_0, __tl_lock);
+weak_alias(dummy_0, __tl_unlock);
 
 pid_t fork(void)
 {
+	sigset_t set;
 	__fork_handler(-1);
+	__block_app_sigs(&set);
+	int need_locks = libc.need_locks > 0;
+	if (need_locks) {
+		__ldso_atfork(-1);
+		__inhibit_ptc();
+		for (int i=0; i<sizeof atfork_locks/sizeof *atfork_locks; i++)
+			if (atfork_locks[i]) LOCK(*atfork_locks[i]);
+		__malloc_atfork(-1);
+		__tl_lock();
+	}
+	pthread_t self=__pthread_self(), next=self->next;
 	pid_t ret = _Fork();
+	if (need_locks) {
+		if (!ret) {
+			for (pthread_t td=next; td!=self; td=td->next)
+				td->tid = -1;
+			if (__vmlock_lockptr) {
+				__vmlock_lockptr[0] = 0;
+				__vmlock_lockptr[1] = 0;
+			}
+		}
+		__tl_unlock();
+		__malloc_atfork(!ret);
+		for (int i=0; i<sizeof atfork_locks/sizeof *atfork_locks; i++)
+			if (atfork_locks[i])
+				if (ret) UNLOCK(*atfork_locks[i]);
+				else **atfork_locks[i] = 0;
+		__release_ptc();
+		__ldso_atfork(!ret);
+	}
+	__restore_sigs(&set);
 	__fork_handler(!ret);
 	return ret;
 }
diff --git a/src/stdio/ofl.c b/src/stdio/ofl.c
index f2d3215a..aad3d171 100644
--- a/src/stdio/ofl.c
+++ b/src/stdio/ofl.c
@@ -1,8 +1,10 @@
 #include "stdio_impl.h"
 #include "lock.h"
+#include "fork_impl.h"
 
 static FILE *ofl_head;
 static volatile int ofl_lock[1];
+volatile int *const __stdio_ofl_lockptr = ofl_lock;
 
 FILE **__ofl_lock()
 {
diff --git a/src/thread/sem_open.c b/src/thread/sem_open.c
index de8555c5..f198056e 100644
--- a/src/thread/sem_open.c
+++ b/src/thread/sem_open.c
@@ -12,6 +12,7 @@
 #include <stdlib.h>
 #include <pthread.h>
 #include "lock.h"
+#include "fork_impl.h"
 
 static struct {
 	ino_t ino;
@@ -19,6 +20,7 @@ static struct {
 	int refcnt;
 } *semtab;
 static volatile int lock[1];
+volatile int *const __sem_open_lockptr = lock;
 
 #define FLAGS (O_RDWR|O_NOFOLLOW|O_CLOEXEC|O_NONBLOCK)
 
diff --git a/src/thread/vmlock.c b/src/thread/vmlock.c
index 75f3cb76..fa0a8e3c 100644
--- a/src/thread/vmlock.c
+++ b/src/thread/vmlock.c
@@ -1,6 +1,8 @@
 #include "pthread_impl.h"
+#include "fork_impl.h"
 
 static volatile int vmlock[2];
+volatile int *const __vmlock_lockptr = vmlock;
 
 void __vm_wait()
 {
diff --git a/src/time/__tz.c b/src/time/__tz.c
index 49a7371e..72c0d58b 100644
--- a/src/time/__tz.c
+++ b/src/time/__tz.c
@@ -6,6 +6,7 @@
 #include <sys/mman.h>
 #include "libc.h"
 #include "lock.h"
+#include "fork_impl.h"
 
 long  __timezone = 0;
 int   __daylight = 0;
@@ -30,6 +31,7 @@ static char *old_tz = old_tz_buf;
 static size_t old_tz_size = sizeof old_tz_buf;
 
 static volatile int lock[1];
+volatile int *const __timezone_lockptr = lock;
 
 static int getint(const char **p)
 {

* Re: [musl] Status report and MT fork
  2020-10-26  0:59 ` Rich Felker
@ 2020-10-26  3:29   ` Rich Felker
  2020-10-26 18:44     ` Érico Nogueira
  2020-10-27 21:17   ` Rich Felker
From: Rich Felker @ 2020-10-26  3:29 UTC
  To: musl

On Sun, Oct 25, 2020 at 08:59:20PM -0400, Rich Felker wrote:
> diff --git a/ldso/dynlink.c b/ldso/dynlink.c
> index af983692..32e88508 100644
> --- a/ldso/dynlink.c
> +++ b/ldso/dynlink.c
> @@ -21,6 +21,7 @@
>  #include <semaphore.h>
>  #include <sys/membarrier.h>
>  #include "pthread_impl.h"
> +#include "fork_impl.h"
>  #include "libc.h"
>  #include "dynlink.h"
>  
> @@ -1404,6 +1405,17 @@ void __libc_exit_fini()
>  	}
>  }
>  
> +void __ldso_atfork(int who)
> +{
> +	if (who<0) {
> +		pthread_rwlock_wrlock(&lock);
> +		pthread_mutex_lock(&init_fini_lock);
> +	} else {
> +		pthread_mutex_unlock(&init_fini_lock);
> +		pthread_rwlock_unlock(&lock);
> +	}
> +}
> +
>  static struct dso **queue_ctors(struct dso *dso)
>  {
>  	size_t cnt, qpos, spos, i;
> @@ -1462,6 +1474,12 @@ static struct dso **queue_ctors(struct dso *dso)
>  	}
>  	queue[qpos] = 0;
>  	for (i=0; i<qpos; i++) queue[i]->mark = 0;
> +	for (i=0; i<qpos; i++) if (queue[i]->ctor_visitor->tid < 0) {
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Invalid access as-is, should be queue[i]->ctor_visitor && ...
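
For reference, the corrected hunk as it appears in the v2 patch later
in this thread:

	for (i=0; i<qpos; i++)
		if (queue[i]->ctor_visitor && queue[i]->ctor_visitor->tid < 0) {
			error("State of %s is inconsistent due to multithreaded fork\n",
				queue[i]->name);
			free(queue);
			if (runtime) longjmp(*rtld_fail, 1);
		}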

> +		error("State of %s is inconsistent due to multithreaded fork\n",
> +			queue[i]->name);
> +		free(queue);
> +		if (runtime) longjmp(*rtld_fail, 1);
> +	}
>  
>  	return queue;
>  }

Rich

* Re: [musl] Status report and MT fork
  2020-10-26  3:29   ` Rich Felker
@ 2020-10-26 18:44     ` Érico Nogueira
  2020-10-26 19:52       ` Rich Felker
From: Érico Nogueira @ 2020-10-26 18:44 UTC
  To: musl, musl

On Sun Oct 25, 2020 at 8:29 PM -03, Rich Felker wrote:
> > +	for (i=0; i<qpos; i++) if (queue[i]->ctor_visitor->tid < 0) {
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>
> Invalid access as-is, should be queue[i]->ctor_visitor && ...
>
> > +		error("State of %s is inconsistent due to multithreaded fork\n",
> > +			queue[i]->name);
> > +		free(queue);
> > +		if (runtime) longjmp(*rtld_fail, 1);
> > +	}
> >  
> >  	return queue;
> >  }
>
> Rich

As a warning, don't install the resulting libc.so on your system without
the above fix! It segfaulted even with simple applications here.

Re. the patches, I am now able to import an image into gscan2pdf (a Perl
GTK application) - though it required building Perl with a bigger thread
stack size. With musl 1.2.1 it simply hung on a futex syscall.

Cheers,
Érico

* Re: [musl] Status report and MT fork
  2020-10-26 18:44     ` Érico Nogueira
@ 2020-10-26 19:52       ` Rich Felker
  2020-10-26 20:11         ` Érico Nogueira
From: Rich Felker @ 2020-10-26 19:52 UTC
  To: musl

On Mon, Oct 26, 2020 at 03:44:51PM -0300, Érico Nogueira wrote:
> On Sun Oct 25, 2020 at 8:29 PM -03, Rich Felker wrote:
> > > +	for (i=0; i<qpos; i++) if (queue[i]->ctor_visitor->tid < 0) {
> > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> >
> > Invalid access as-is, should be queue[i]->ctor_visitor && ...
> >
> > > +		error("State of %s is inconsistent due to multithreaded fork\n",
> > > +			queue[i]->name);
> > > +		free(queue);
> > > +		if (runtime) longjmp(*rtld_fail, 1);
> > > +	}
> > >  
> > >  	return queue;
> > >  }
> 
> As a warning, don't install the resulting libc.so on your system without
> the above fix! It segfaulted even with simple applications here.

Yep, sorry about that. I'd rebased out an older version with
queue[i]->ctor_visitor being the tid itself rather than the pthread_t,
and hadn't retested it since adding the ->tid.

> Re. the patches, I am now able to import an image into gscan2pdf (a Perl
> GTK application) - though it required building Perl with a bigger thread
> stack size. With musl 1.2.1 it simply hung on a futex syscall.

Great to hear! Was it "working" in earlier musl though? The crash (as
discussed on irc) was clearly a stack overflow but it seems odd that
it would be newly introduced, unless it was just really borderline on
fitting before.

Rich

* Re: [musl] Status report and MT fork
  2020-10-26 19:52       ` Rich Felker
@ 2020-10-26 20:11         ` Érico Nogueira
From: Érico Nogueira @ 2020-10-26 20:11 UTC
  To: musl, musl

On Mon Oct 26, 2020 at 12:52 PM -03, Rich Felker wrote:
> On Mon, Oct 26, 2020 at 03:44:51PM -0300, Érico Nogueira wrote:
> > Re. the patches, I am now able to import an image into gscan2pdf (a Perl
> > GTK application) - though it required building Perl with a bigger thread
> > stack size. With musl 1.2.1 it simply hung on a futex syscall.
>
> Great to hear! Was it "working" in earlier musl though? The crash (as
> discussed on irc) was clearly a stack overflow but it seems odd that
> it would be newly introduced, unless it was just really borderline on
> fitting before.
>
> Rich

I hadn't tested with any musl version but 1.2.1. A quick test with
1.1.24 shows that it didn't hang then, but hit the same segfault if
using perl without a fixed thread stack size.

The only reason the segfault seemed new to me was that the program
had never gotten past the hang in the futex syscall :)

I will see if I can find the time to write some programs to torture-test
the impl further, assuming that would help.

Cheers,
Érico

* Re: [musl] Status report and MT fork
  2020-10-26  0:59 ` Rich Felker
  2020-10-26  3:29   ` Rich Felker
@ 2020-10-27 21:17   ` Rich Felker
  2020-10-28 18:56     ` [musl] [PATCH v2] " Rich Felker
From: Rich Felker @ 2020-10-27 21:17 UTC
  To: musl

On Sun, Oct 25, 2020 at 08:59:20PM -0400, Rich Felker wrote:
> On Sun, Oct 25, 2020 at 08:50:29PM -0400, Rich Felker wrote:
> > I just pushed a series of changes (up through 0b87551bdf) I've had
> > queued for a while now, some of which had minor issues that I think
> > have all been resolved now. They cover a range of bugs found in the
> > process of reviewing the possibility of making fork provide a
> > consistent execution environment for the child of a multithreaded
> > parent, and a couple unrelated fixes.
> > 
> > Based on distro experience with musl 1.2.1, I'm working on getting the
> > improved fork into 1.2.2. Despite the fact that 1.2.1 did not break
> > anything that wasn't already broken (apps invoking UB in MT-forked
> > children), prior to it most of the active breakage was hit with very
> > low probability, so there were a lot of packages people *thought* were
> > working, that weren't, and feedback from distros seems to be that
> > getting everything working as reliably as before (even if it was
> > imperfect and dangerous before) is not tractable in any reasonable
> > time frame. And in particular, I'm concerned about language runtimes
> > like Ruby that seem to have a contract with applications they host to
> > support MT-forked children. Fixing these is not a matter of fixing a
> > finite set of bugs but fixing a contract, which is likely not
> > tractable.
> > 
> > Assuming it goes through, the change here will be far more complete
> > than glibc's handling of MT-forked children, where most things other
> > than malloc don't actually work, but fail sufficiently infrequently
> > that they seem to work. While there are a lot of things I dislike
> > about this path, one major thing I do like is that it really makes
> > internal use of threads by library code (including third party libs)
> > transparent to the application, rather than "transparent, until you
> > use fork".
> > 
> > Will follow up with draft patch for testing.
> 
> Patch attached. It should suffice for testing and review of whether
> there are any locks/state I overlooked. It could possibly be made less
> ugly too.
> 
> Note that this does not strictly conform to past and current POSIX
> that specify fork as AS-safe. POSIX-future has removed fork from the
> AS-safe list, and seems to have considered its original inclusion
> erroneous due to pthread_atfork and existing implementation practice.
> The patch as written takes care to skip all locking in single-threaded
> parents, so it does not break the AS-safety property in single-threaded
> programs that may have made use of it. -Dfork=_Fork can also be used
> to get an AS-safe fork, but it's not equivalent to the old semantics;
> _Fork does not run atfork handlers. It's also possible to static-link
> an alternate fork implementation that provides its own pthread_atfork
> and calls _Fork, if really needed for a particular application.
> 
> Feedback from distro folks would be very helpful -- does this fix all
> the packages that 1.2.1 "broke"?

Another bug:

> diff --git a/src/process/fork.c b/src/process/fork.c
> index a12da01a..ecf7f376 100644
> --- a/src/process/fork.c
> +++ b/src/process/fork.c
> @@ -1,13 +1,81 @@
>  #include <unistd.h>
>  #include "libc.h"
> +#include "lock.h"
> +#include "pthread_impl.h"
> +#include "fork_impl.h"
> +
> +static volatile int *const dummy_lockptr = 0;
> +
> +weak_alias(dummy_lockptr, __at_quick_exit_lockptr);
> +weak_alias(dummy_lockptr, __atexit_lockptr);
> +weak_alias(dummy_lockptr, __dlerror_lockptr);
> +weak_alias(dummy_lockptr, __gettext_lockptr);
> +weak_alias(dummy_lockptr, __random_lockptr);
> +weak_alias(dummy_lockptr, __sem_open_lockptr);
> +weak_alias(dummy_lockptr, __stdio_ofl_lockptr);
> +weak_alias(dummy_lockptr, __syslog_lockptr);
> +weak_alias(dummy_lockptr, __timezone_lockptr);
> +weak_alias(dummy_lockptr, __bump_lockptr);
> +
> +weak_alias(dummy_lockptr, __vmlock_lockptr);
> +
> +static volatile int *const *const atfork_locks[] = {
> +	&__at_quick_exit_lockptr,
> +	&__atexit_lockptr,
> +	&__dlerror_lockptr,
> +	&__gettext_lockptr,
> +	&__random_lockptr,
> +	&__sem_open_lockptr,
> +	&__stdio_ofl_lockptr,
> +	&__syslog_lockptr,
> +	&__timezone_lockptr,
> +	&__bump_lockptr,
> +};
>  
>  static void dummy(int x) { }
>  weak_alias(dummy, __fork_handler);
> +weak_alias(dummy, __malloc_atfork);
> +weak_alias(dummy, __ldso_atfork);
> +
> +static void dummy_0(void) { }
> +weak_alias(dummy_0, __tl_lock);
> +weak_alias(dummy_0, __tl_unlock);
>  
>  pid_t fork(void)
>  {
> +	sigset_t set;
>  	__fork_handler(-1);
> +	__block_app_sigs(&set);
> +	int need_locks = libc.need_locks > 0;
> +	if (need_locks) {
> +		__ldso_atfork(-1);
> +		__inhibit_ptc();
> +		for (int i=0; i<sizeof atfork_locks/sizeof *atfork_locks; i++)
> +			if (atfork_locks[i]) LOCK(*atfork_locks[i]);
                            ^^^^^^^^^^^^^^^

Always non-null because it's missing a level of indirection; this causes
static-linked programs to crash. Should be if (*atfork_locks[i]).
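
For reference, the corrected loop as it appears in the v2 patch later
in this thread:

	for (int i=0; i<sizeof atfork_locks/sizeof *atfork_locks; i++)
		if (*atfork_locks[i]) LOCK(*atfork_locks[i]);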

> +		__malloc_atfork(-1);
> +		__tl_lock();
> +	}
> +	pthread_t self=__pthread_self(), next=self->next;
>  	pid_t ret = _Fork();
> +	if (need_locks) {
> +		if (!ret) {
> +			for (pthread_t td=next; td!=self; td=td->next)
> +				td->tid = -1;
> +			if (__vmlock_lockptr) {
> +				__vmlock_lockptr[0] = 0;
> +				__vmlock_lockptr[1] = 0;
> +			}
> +		}
> +		__tl_unlock();
> +		__malloc_atfork(!ret);
> +		for (int i=0; i<sizeof atfork_locks/sizeof *atfork_locks; i++)
> +			if (atfork_locks[i])
                            ^^^^^^^^^^^^^^^

And same here.

> +				if (ret) UNLOCK(*atfork_locks[i]);
> +				else **atfork_locks[i] = 0;
> +		__release_ptc();
> +		__ldso_atfork(!ret);
> +	}
> +	__restore_sigs(&set);
>  	__fork_handler(!ret);
>  	return ret;
>  }

* [musl] [PATCH v2] MT fork
  2020-10-27 21:17   ` Rich Felker
@ 2020-10-28 18:56     ` Rich Felker
  2020-10-28 23:06       ` Milan P. Stanić
From: Rich Felker @ 2020-10-28 18:56 UTC
  To: musl

On Tue, Oct 27, 2020 at 05:17:35PM -0400, Rich Felker wrote:
> > > 
> > > Will follow up with draft patch for testing.
> > 
> > Patch attached. It should suffice for testing and review of whether
> > there are any locks/state I overlooked. It could possibly be made less
> > ugly too.
> > [...]
> 
> Another bug:
> [...]

And an updated version of the patch with both previously reported bugs
fixed, for users/distros who want to test without manually fixing up
the patch. Attached.


[-- Attachment #2: mt-fork-v2.diff --]
[-- Type: text/plain, Size: 11289 bytes --]

diff --git a/ldso/dynlink.c b/ldso/dynlink.c
index f9ac0100..acd8e59b 100644
--- a/ldso/dynlink.c
+++ b/ldso/dynlink.c
@@ -21,6 +21,7 @@
 #include <semaphore.h>
 #include <sys/membarrier.h>
 #include "pthread_impl.h"
+#include "fork_impl.h"
 #include "libc.h"
 #include "dynlink.h"
 
@@ -1404,6 +1405,17 @@ void __libc_exit_fini()
 	}
 }
 
+void __ldso_atfork(int who)
+{
+	if (who<0) {
+		pthread_rwlock_wrlock(&lock);
+		pthread_mutex_lock(&init_fini_lock);
+	} else {
+		pthread_mutex_unlock(&init_fini_lock);
+		pthread_rwlock_unlock(&lock);
+	}
+}
+
 static struct dso **queue_ctors(struct dso *dso)
 {
 	size_t cnt, qpos, spos, i;
@@ -1462,6 +1474,13 @@ static struct dso **queue_ctors(struct dso *dso)
 	}
 	queue[qpos] = 0;
 	for (i=0; i<qpos; i++) queue[i]->mark = 0;
+	for (i=0; i<qpos; i++)
+		if (queue[i]->ctor_visitor && queue[i]->ctor_visitor->tid < 0) {
+			error("State of %s is inconsistent due to multithreaded fork\n",
+				queue[i]->name);
+			free(queue);
+			if (runtime) longjmp(*rtld_fail, 1);
+		}
 
 	return queue;
 }
diff --git a/src/exit/at_quick_exit.c b/src/exit/at_quick_exit.c
index d3ce6522..e4b5d78d 100644
--- a/src/exit/at_quick_exit.c
+++ b/src/exit/at_quick_exit.c
@@ -1,12 +1,14 @@
 #include <stdlib.h>
 #include "libc.h"
 #include "lock.h"
+#include "fork_impl.h"
 
 #define COUNT 32
 
 static void (*funcs[COUNT])(void);
 static int count;
 static volatile int lock[1];
+volatile int *const __at_quick_exit_lockptr = lock;
 
 void __funcs_on_quick_exit()
 {
diff --git a/src/exit/atexit.c b/src/exit/atexit.c
index 160d277a..2804a1d7 100644
--- a/src/exit/atexit.c
+++ b/src/exit/atexit.c
@@ -2,6 +2,7 @@
 #include <stdint.h>
 #include "libc.h"
 #include "lock.h"
+#include "fork_impl.h"
 
 /* Ensure that at least 32 atexit handlers can be registered without malloc */
 #define COUNT 32
@@ -15,6 +16,7 @@ static struct fl
 
 static int slot;
 static volatile int lock[1];
+volatile int *const __atexit_lockptr = lock;
 
 void __funcs_on_exit()
 {
diff --git a/src/internal/fork_impl.h b/src/internal/fork_impl.h
new file mode 100644
index 00000000..0e116aae
--- /dev/null
+++ b/src/internal/fork_impl.h
@@ -0,0 +1,18 @@
+#include <features.h>
+
+extern hidden volatile int *const __at_quick_exit_lockptr;
+extern hidden volatile int *const __atexit_lockptr;
+extern hidden volatile int *const __dlerror_lockptr;
+extern hidden volatile int *const __gettext_lockptr;
+extern hidden volatile int *const __random_lockptr;
+extern hidden volatile int *const __sem_open_lockptr;
+extern hidden volatile int *const __stdio_ofl_lockptr;
+extern hidden volatile int *const __syslog_lockptr;
+extern hidden volatile int *const __timezone_lockptr;
+
+extern hidden volatile int *const __bump_lockptr;
+
+extern hidden volatile int *const __vmlock_lockptr;
+
+hidden void __malloc_atfork(int);
+hidden void __ldso_atfork(int);
diff --git a/src/ldso/dlerror.c b/src/ldso/dlerror.c
index 3fcc7779..009beecb 100644
--- a/src/ldso/dlerror.c
+++ b/src/ldso/dlerror.c
@@ -4,6 +4,7 @@
 #include "pthread_impl.h"
 #include "dynlink.h"
 #include "lock.h"
+#include "fork_impl.h"
 
 char *dlerror()
 {
@@ -19,6 +20,7 @@ char *dlerror()
 
 static volatile int freebuf_queue_lock[1];
 static void **freebuf_queue;
+volatile int *const __dlerror_lockptr = freebuf_queue_lock;
 
 void __dl_thread_cleanup(void)
 {
diff --git a/src/locale/dcngettext.c b/src/locale/dcngettext.c
index 4c304393..805fe83b 100644
--- a/src/locale/dcngettext.c
+++ b/src/locale/dcngettext.c
@@ -10,6 +10,7 @@
 #include "atomic.h"
 #include "pleval.h"
 #include "lock.h"
+#include "fork_impl.h"
 
 struct binding {
 	struct binding *next;
@@ -34,9 +35,11 @@ static char *gettextdir(const char *domainname, size_t *dirlen)
 	return 0;
 }
 
+static volatile int lock[1];
+volatile int *const __gettext_lockptr = lock;
+
 char *bindtextdomain(const char *domainname, const char *dirname)
 {
-	static volatile int lock[1];
 	struct binding *p, *q;
 
 	if (!domainname) return 0;
diff --git a/src/malloc/lite_malloc.c b/src/malloc/lite_malloc.c
index f8931ba5..f6736ec4 100644
--- a/src/malloc/lite_malloc.c
+++ b/src/malloc/lite_malloc.c
@@ -6,6 +6,7 @@
 #include "libc.h"
 #include "lock.h"
 #include "syscall.h"
+#include "fork_impl.h"
 
 #define ALIGN 16
 
@@ -31,10 +32,12 @@ static int traverses_stack_p(uintptr_t old, uintptr_t new)
 	return 0;
 }
 
+static volatile int lock[1];
+volatile int *const __bump_lockptr = lock;
+
 static void *__simple_malloc(size_t n)
 {
 	static uintptr_t brk, cur, end;
-	static volatile int lock[1];
 	static unsigned mmap_step;
 	size_t align=1;
 	void *p;
diff --git a/src/malloc/mallocng/glue.h b/src/malloc/mallocng/glue.h
index 16acd1ea..980ce396 100644
--- a/src/malloc/mallocng/glue.h
+++ b/src/malloc/mallocng/glue.h
@@ -56,7 +56,8 @@ __attribute__((__visibility__("hidden")))
 extern int __malloc_lock[1];
 
 #define LOCK_OBJ_DEF \
-int __malloc_lock[1];
+int __malloc_lock[1]; \
+void __malloc_atfork(int who) { malloc_atfork(who); }
 
 static inline void rdlock()
 {
@@ -73,5 +74,16 @@ static inline void unlock()
 static inline void upgradelock()
 {
 }
+static inline void resetlock()
+{
+	__malloc_lock[0] = 0;
+}
+
+static inline void malloc_atfork(int who)
+{
+	if (who<0) rdlock();
+	else if (who>0) resetlock();
+	else unlock();
+}
 
 #endif
diff --git a/src/malloc/oldmalloc/malloc.c b/src/malloc/oldmalloc/malloc.c
index c0997ad8..8ac68a24 100644
--- a/src/malloc/oldmalloc/malloc.c
+++ b/src/malloc/oldmalloc/malloc.c
@@ -9,6 +9,7 @@
 #include "atomic.h"
 #include "pthread_impl.h"
 #include "malloc_impl.h"
+#include "fork_impl.h"
 
 #if defined(__GNUC__) && defined(__PIC__)
 #define inline inline __attribute__((always_inline))
@@ -527,3 +528,21 @@ void __malloc_donate(char *start, char *end)
 	c->csize = n->psize = C_INUSE | (end-start);
 	__bin_chunk(c);
 }
+
+void __malloc_atfork(int who)
+{
+	if (who<0) {
+		lock(mal.split_merge_lock);
+		for (int i=0; i<64; i++)
+			lock(mal.bins[i].lock);
+	} else if (!who) {
+		for (int i=0; i<64; i++)
+			unlock(mal.bins[i].lock);
+		unlock(mal.split_merge_lock);
+	} else {
+		for (int i=0; i<64; i++)
+			mal.bins[i].lock[0] = mal.bins[i].lock[1] = 0;
+		mal.split_merge_lock[1] = 0;
+		mal.split_merge_lock[0] = 0;
+	}
+}
diff --git a/src/misc/syslog.c b/src/misc/syslog.c
index 13d4b0a6..7dc0c1be 100644
--- a/src/misc/syslog.c
+++ b/src/misc/syslog.c
@@ -10,6 +10,7 @@
 #include <errno.h>
 #include <fcntl.h>
 #include "lock.h"
+#include "fork_impl.h"
 
 static volatile int lock[1];
 static char log_ident[32];
@@ -17,6 +18,7 @@ static int log_opt;
 static int log_facility = LOG_USER;
 static int log_mask = 0xff;
 static int log_fd = -1;
+volatile int *const __syslog_lockptr = lock;
 
 int setlogmask(int maskpri)
 {
diff --git a/src/prng/random.c b/src/prng/random.c
index 633a17f6..d3780fa7 100644
--- a/src/prng/random.c
+++ b/src/prng/random.c
@@ -1,6 +1,7 @@
 #include <stdlib.h>
 #include <stdint.h>
 #include "lock.h"
+#include "fork_impl.h"
 
 /*
 this code uses the same lagged fibonacci generator as the
@@ -23,6 +24,7 @@ static int i = 3;
 static int j = 0;
 static uint32_t *x = init+1;
 static volatile int lock[1];
+volatile int *const __random_lockptr = lock;
 
 static uint32_t lcg31(uint32_t x) {
 	return (1103515245*x + 12345) & 0x7fffffff;
diff --git a/src/process/fork.c b/src/process/fork.c
index 8d34a9c4..25808d85 100644
--- a/src/process/fork.c
+++ b/src/process/fork.c
@@ -1,15 +1,83 @@
 #include <unistd.h>
 #include <errno.h>
 #include "libc.h"
+#include "lock.h"
+#include "pthread_impl.h"
+#include "fork_impl.h"
+
+static volatile int *const dummy_lockptr = 0;
+
+weak_alias(dummy_lockptr, __at_quick_exit_lockptr);
+weak_alias(dummy_lockptr, __atexit_lockptr);
+weak_alias(dummy_lockptr, __dlerror_lockptr);
+weak_alias(dummy_lockptr, __gettext_lockptr);
+weak_alias(dummy_lockptr, __random_lockptr);
+weak_alias(dummy_lockptr, __sem_open_lockptr);
+weak_alias(dummy_lockptr, __stdio_ofl_lockptr);
+weak_alias(dummy_lockptr, __syslog_lockptr);
+weak_alias(dummy_lockptr, __timezone_lockptr);
+weak_alias(dummy_lockptr, __bump_lockptr);
+
+weak_alias(dummy_lockptr, __vmlock_lockptr);
+
+static volatile int *const *const atfork_locks[] = {
+	&__at_quick_exit_lockptr,
+	&__atexit_lockptr,
+	&__dlerror_lockptr,
+	&__gettext_lockptr,
+	&__random_lockptr,
+	&__sem_open_lockptr,
+	&__stdio_ofl_lockptr,
+	&__syslog_lockptr,
+	&__timezone_lockptr,
+	&__bump_lockptr,
+};
 
 static void dummy(int x) { }
 weak_alias(dummy, __fork_handler);
+weak_alias(dummy, __malloc_atfork);
+weak_alias(dummy, __ldso_atfork);
+
+static void dummy_0(void) { }
+weak_alias(dummy_0, __tl_lock);
+weak_alias(dummy_0, __tl_unlock);
 
 pid_t fork(void)
 {
+	sigset_t set;
 	__fork_handler(-1);
+	__block_app_sigs(&set);
+	int need_locks = libc.need_locks > 0;
+	if (need_locks) {
+		__ldso_atfork(-1);
+		__inhibit_ptc();
+		for (int i=0; i<sizeof atfork_locks/sizeof *atfork_locks; i++)
+			if (*atfork_locks[i]) LOCK(*atfork_locks[i]);
+		__malloc_atfork(-1);
+		__tl_lock();
+	}
+	pthread_t self=__pthread_self(), next=self->next;
 	pid_t ret = _Fork();
 	int errno_save = errno;
+	if (need_locks) {
+		if (!ret) {
+			for (pthread_t td=next; td!=self; td=td->next)
+				td->tid = -1;
+			if (__vmlock_lockptr) {
+				__vmlock_lockptr[0] = 0;
+				__vmlock_lockptr[1] = 0;
+			}
+		}
+		__tl_unlock();
+		__malloc_atfork(!ret);
+		for (int i=0; i<sizeof atfork_locks/sizeof *atfork_locks; i++)
+			if (*atfork_locks[i])
+				if (ret) UNLOCK(*atfork_locks[i]);
+				else **atfork_locks[i] = 0;
+		__release_ptc();
+		__ldso_atfork(!ret);
+	}
+	__restore_sigs(&set);
 	__fork_handler(!ret);
 	if (ret<0) errno = errno_save;
 	return ret;
diff --git a/src/stdio/ofl.c b/src/stdio/ofl.c
index f2d3215a..aad3d171 100644
--- a/src/stdio/ofl.c
+++ b/src/stdio/ofl.c
@@ -1,8 +1,10 @@
 #include "stdio_impl.h"
 #include "lock.h"
+#include "fork_impl.h"
 
 static FILE *ofl_head;
 static volatile int ofl_lock[1];
+volatile int *const __stdio_ofl_lockptr = ofl_lock;
 
 FILE **__ofl_lock()
 {
diff --git a/src/thread/sem_open.c b/src/thread/sem_open.c
index de8555c5..f198056e 100644
--- a/src/thread/sem_open.c
+++ b/src/thread/sem_open.c
@@ -12,6 +12,7 @@
 #include <stdlib.h>
 #include <pthread.h>
 #include "lock.h"
+#include "fork_impl.h"
 
 static struct {
 	ino_t ino;
@@ -19,6 +20,7 @@ static struct {
 	int refcnt;
 } *semtab;
 static volatile int lock[1];
+volatile int *const __sem_open_lockptr = lock;
 
 #define FLAGS (O_RDWR|O_NOFOLLOW|O_CLOEXEC|O_NONBLOCK)
 
diff --git a/src/thread/vmlock.c b/src/thread/vmlock.c
index 75f3cb76..fa0a8e3c 100644
--- a/src/thread/vmlock.c
+++ b/src/thread/vmlock.c
@@ -1,6 +1,8 @@
 #include "pthread_impl.h"
+#include "fork_impl.h"
 
 static volatile int vmlock[2];
+volatile int *const __vmlock_lockptr = vmlock;
 
 void __vm_wait()
 {
diff --git a/src/time/__tz.c b/src/time/__tz.c
index 49a7371e..72c0d58b 100644
--- a/src/time/__tz.c
+++ b/src/time/__tz.c
@@ -6,6 +6,7 @@
 #include <sys/mman.h>
 #include "libc.h"
 #include "lock.h"
+#include "fork_impl.h"
 
 long  __timezone = 0;
 int   __daylight = 0;
@@ -30,6 +31,7 @@ static char *old_tz = old_tz_buf;
 static size_t old_tz_size = sizeof old_tz_buf;
 
 static volatile int lock[1];
+volatile int *const __timezone_lockptr = lock;
 
 static int getint(const char **p)
 {

* Re: [musl] [PATCH v2] MT fork
  2020-10-28 18:56     ` [musl] [PATCH v2] " Rich Felker
@ 2020-10-28 23:06       ` Milan P. Stanić
  2020-10-29 16:13         ` Szabolcs Nagy
From: Milan P. Stanić @ 2020-10-28 23:06 UTC
  To: musl

On Wed, 2020-10-28 at 14:56, Rich Felker wrote:
> On Tue, Oct 27, 2020 at 05:17:35PM -0400, Rich Felker wrote:
> > > > 
> > > > Will follow up with draft patch for testing.
> > > 
> > > Patch attached. It should suffice for testing and review of whether
> > > there are any locks/state I overlooked. It could possibly be made less
> > > ugly too.
> > > [...]
> > 
> > Another bug:
> > [...]
> 
> And an updated version of the patch with both previously reported bugs
> fixed, for users/distros who want to test without manually fixing up
> the patch. Attached.
 
Applied this patch on top of current musl master, built it on Alpine, and
installed it.

Tested by building ruby lang. Works fine.
Also tested building zig lang; works fine.
Crystal lang also builds fine, but running it hangs. strace shows:
-------------
[pid  5573] futex(0x7efc50fba9e4, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>
[pid  5568] futex(0x7efc5118f984, FUTEX_REQUEUE_PRIVATE, 0, 1, 0x7efc514b67a4) = 1
[pid  5568] futex(0x7efc514b67a4, FUTEX_WAKE_PRIVATE, 1) = 1
[pid  5571] <... futex resumed>)        = 0
[pid  5568] futex(0x7efc511099e4, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>
[pid  5571] futex(0x7efc510409e4, FUTEX_WAIT_PRIVATE, 2, NULL
-------------
where it hangs.

-- 
Regards

[...]

* Re: [musl] [PATCH v2] MT fork
  2020-10-28 23:06       ` Milan P. Stanić
@ 2020-10-29 16:13         ` Szabolcs Nagy
  2020-10-29 16:20           ` Rich Felker
  2020-10-29 20:55           ` Milan P. Stanić
From: Szabolcs Nagy @ 2020-10-29 16:13 UTC
  To: Milan P. Stanić; +Cc: musl

* Milan P. Stanić <mps@arvanta.net> [2020-10-29 00:06:10 +0100]:
> On Wed, 2020-10-28 at 14:56, Rich Felker wrote:
> > On Tue, Oct 27, 2020 at 05:17:35PM -0400, Rich Felker wrote:
> > > > > 
> > > > > Will follow up with draft patch for testing.
> > > > 
> > > > Patch attached. It should suffice for testing and review of whether
> > > > there are any locks/state I overlooked. It could possibly be made less
> > > > ugly too.
> > > > [...]
> > > 
> > > Another bug:
> > > [...]
> > 
> > And an updated version of the patch with both previously reported bugs
> > fixed, for users/distros who want to test without manually fixing up
> > the patch. Attached.
>  
> Applied this patch on top of current musl master, built it on Alpine, and
> installed it.
> 
> Tested by building ruby lang. Works fine.
> Also tested building zig lang; works fine.
> Crystal lang also builds fine, but running it hangs. strace shows:
> -------------
> [pid  5573] futex(0x7efc50fba9e4, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>
> [pid  5568] futex(0x7efc5118f984, FUTEX_REQUEUE_PRIVATE, 0, 1, 0x7efc514b67a4) = 1
> [pid  5568] futex(0x7efc514b67a4, FUTEX_WAKE_PRIVATE, 1) = 1
> [pid  5571] <... futex resumed>)        = 0
> [pid  5568] futex(0x7efc511099e4, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>
> [pid  5571] futex(0x7efc510409e4, FUTEX_WAIT_PRIVATE, 2, NULL
> -------------
> where it hangs.

try to attach gdb to the process that hangs and do

thread apply all bt

(make sure musl-dbg is installed)

* Re: [musl] [PATCH v2] MT fork
  2020-10-29 16:13         ` Szabolcs Nagy
@ 2020-10-29 16:20           ` Rich Felker
  2020-10-29 20:55           ` Milan P. Stanić
From: Rich Felker @ 2020-10-29 16:20 UTC
  To: musl; +Cc: Milan P. Stanić

On Thu, Oct 29, 2020 at 05:13:48PM +0100, Szabolcs Nagy wrote:
> * Milan P. Stanić <mps@arvanta.net> [2020-10-29 00:06:10 +0100]:
> > On Wed, 2020-10-28 at 14:56, Rich Felker wrote:
> > > On Tue, Oct 27, 2020 at 05:17:35PM -0400, Rich Felker wrote:
> > > > > > 
> > > > > > Will follow up with draft patch for testing.
> > > > > 
> > > > > Patch attached. It should suffice for testing and review of whether
> > > > > there are any locks/state I overlooked. It could possibly be made less
> > > > > ugly too.
> > > > > [...]
> > > > 
> > > > Another bug:
> > > > [...]
> > > 
> > > And an updated version of the patch with both previously reported bugs
> > > fixed, for users/distros who want to test without manually fixing up
> > > the patch. Attached.
> >  
> > Applied this patch on top of current musl master, built it on Alpine, and
> > installed it.
> > 
> > Tested by building ruby lang. Works fine.
> > Also tested building zig lang; works fine.
> > Crystal lang also builds fine, but running it hangs. strace shows:
> > -------------
> > [pid  5573] futex(0x7efc50fba9e4, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>
> > [pid  5568] futex(0x7efc5118f984, FUTEX_REQUEUE_PRIVATE, 0, 1, 0x7efc514b67a4) = 1
> > [pid  5568] futex(0x7efc514b67a4, FUTEX_WAKE_PRIVATE, 1) = 1
> > [pid  5571] <... futex resumed>)        = 0
> > [pid  5568] futex(0x7efc511099e4, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>
> > [pid  5571] futex(0x7efc510409e4, FUTEX_WAIT_PRIVATE, 2, NULL
> > -------------
> > where it hangs.
> 
> try to attach gdb to the process that hangs and do
> 
> thread apply all bt

Thanks -- I meant to reply to this earlier but forgot to.

> (make sure musl-dbg is installed)

If using a replacement musl, I think you'd just ensure it's built with
-g rather than relying on the distro musl-dbg package, which will be for
a mismatched build/version.

Rich

* Re: [musl] [PATCH v2] MT fork
  2020-10-29 16:13         ` Szabolcs Nagy
  2020-10-29 16:20           ` Rich Felker
@ 2020-10-29 20:55           ` Milan P. Stanić
  2020-10-29 22:21             ` Szabolcs Nagy
From: Milan P. Stanić @ 2020-10-29 20:55 UTC
  To: musl

On Thu, 2020-10-29 at 17:13, Szabolcs Nagy wrote:
> * Milan P. Stanić <mps@arvanta.net> [2020-10-29 00:06:10 +0100]:
> > On Wed, 2020-10-28 at 14:56, Rich Felker wrote:
> > > On Tue, Oct 27, 2020 at 05:17:35PM -0400, Rich Felker wrote:
> > > > > > 
> > > > > > Will follow up with draft patch for testing.
> > > > > 
> > > > > Patch attached. It should suffice for testing and review of whether
> > > > > there are any locks/state I overlooked. It could possibly be made less
> > > > > ugly too.
> > > > > [...]
> > > > 
> > > > Another bug:
> > > > [...]
> > > 
> > > And an updated version of the patch with both previously reported bugs
> > > fixed, for users/distros who want to test without manually fixing up
> > > the patch. Attached.
> >  
> > Applied this patch on top of current musl master, built it on Alpine, and
> > installed it.
> > 
> > Tested by building ruby lang. Works fine.
> > Also tested building zig lang; works fine.
> > Crystal lang also builds fine, but running it hangs. strace shows:
> > -------------
> > [pid  5573] futex(0x7efc50fba9e4, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>
> > [pid  5568] futex(0x7efc5118f984, FUTEX_REQUEUE_PRIVATE, 0, 1, 0x7efc514b67a4) = 1
> > [pid  5568] futex(0x7efc514b67a4, FUTEX_WAKE_PRIVATE, 1) = 1
> > [pid  5571] <... futex resumed>)        = 0
> > [pid  5568] futex(0x7efc511099e4, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>
> > [pid  5571] futex(0x7efc510409e4, FUTEX_WAIT_PRIVATE, 2, NULL
> > -------------
> > where it hangs.
> 
> try to attach gdb to the process that hangs and do
> 
> thread apply all bt
> 
> (make sure musl-dbg is installed)

I cannot attach gdb to a running process because my Alpine development
boxes are lxc containers where CAP_SYS_PTRACE is not enabled, afaik.

I installed musl-dbg and ran the program with 'gdb .build/crystal'; the result is:
----------
Reading symbols from .build/crystal...
(gdb) run
Starting program: /home/mps/aports/community/crystal/src/crystal-0.35.1/.build/crystal

Program received signal SIGSEGV, Segmentation fault.
0x00007ffff401c463 in GC_find_limit_with_bound () from /usr/lib/libgc.so.1
(gdb) bt
#0  0x00007ffff401c463 in GC_find_limit_with_bound () from /usr/lib/libgc.so.1
#1  0x00007ffff401c522 in GC_init_linux_data_start () from /usr/lib/libgc.so.1
#2  0x00007ffff401b2e0 in GC_init () from /usr/lib/libgc.so.1
#3  0x00005555555668d8 in init () at
/home/mps/aports/community/crystal/src/crystal-0.35.1/src/gc/boehm.cr:127
#4  main () at /home/mps/aports/community/crystal/src/crystal-0.35.1/src/crystal/main.cr:35
#5  main () at /home/mps/aports/community/crystal/src/crystal-0.35.1/src/crystal/main.cr:114
(gdb)
----------

Same with 'thread apply all bt'.

gc lib (garbage collector, https://hboehm.info/gc/) version is
gc-dev-8.0.4-r1 on Alpine.

I can't test this on a native machine because currently I don't have any
x86_64 box with enough resources to build crystal. On aarch64 it can't
even be built with the mt-fork patch; it always hangs during the build.

-- 
Regards

* Re: [musl] [PATCH v2] MT fork
  2020-10-29 20:55           ` Milan P. Stanić
@ 2020-10-29 22:21             ` Szabolcs Nagy
  2020-10-29 23:00               ` Milan P. Stanić
From: Szabolcs Nagy @ 2020-10-29 22:21 UTC
  To: Milan P. Stanić; +Cc: musl

* Milan P. Stanić <mps@arvanta.net> [2020-10-29 21:55:41 +0100]:
> On Thu, 2020-10-29 at 17:13, Szabolcs Nagy wrote:
> > * Milan P. Stanić <mps@arvanta.net> [2020-10-29 00:06:10 +0100]:
> > >  
> > > Applied this patch on top of current musl master, built it on Alpine, and
> > > installed it.
> > > 
> > > Tested by building ruby lang. Works fine.
> > > Also tested building zig lang; works fine.
> > > Crystal lang also builds fine, but running it hangs. strace shows:
> > > -------------
> > > [pid  5573] futex(0x7efc50fba9e4, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>
> > > [pid  5568] futex(0x7efc5118f984, FUTEX_REQUEUE_PRIVATE, 0, 1, 0x7efc514b67a4) = 1
> > > [pid  5568] futex(0x7efc514b67a4, FUTEX_WAKE_PRIVATE, 1) = 1
> > > [pid  5571] <... futex resumed>)        = 0
> > > [pid  5568] futex(0x7efc511099e4, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>
> > > [pid  5571] futex(0x7efc510409e4, FUTEX_WAIT_PRIVATE, 2, NULL
> > > -------------
> > > where it hangs.
> > 
> > try to attach gdb to the process that hangs and do
> > 
> > thread apply all bt
> > 
> > (make sure musl-dbg is installed)
> 
> I cannot attach gdb to a running process because my Alpine development
> boxes are lxc containers where CAP_SYS_PTRACE is not enabled, afaik.

there should be a way to config lxc to allow ptrace.

> I installed musl-dbg and ran the program with 'gdb .build/crystal'; the result is:
> ----------
> Reading symbols from .build/crystal...
> (gdb) run
> Starting program: /home/mps/aports/community/crystal/src/crystal-0.35.1/.build/crystal
> 
> Program received signal SIGSEGV, Segmentation fault.

hm segfault is not a hang..

libgc tries to find stack bounds by registering a segfault handler
and walking stack pages until it triggers.
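
roughly, the idea (a sketch of the general technique, not libgc's
actual code) is:

#include <setjmp.h>
#include <signal.h>
#include <stddef.h>

static sigjmp_buf probe_env;

static void probe_handler(int sig)
{
	siglongjmp(probe_env, 1);
}

/* probe one page at a time starting at p; the first read to fault
 * marks the limit */
static char *find_limit(char *p, size_t pagesz)
{
	struct sigaction sa = {0}, old;
	sa.sa_handler = probe_handler;
	sigemptyset(&sa.sa_mask);
	sigaction(SIGSEGV, &sa, &old);
	if (!sigsetjmp(probe_env, 1)) {
		for (;;) {
			(void)*(volatile char *)p; /* may fault */
			p += pagesz;
		}
	}
	sigaction(SIGSEGV, &old, 0);
	return p;
}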

gdb stops at signals; this one is harmless. you should just continue
until you see a hang, then interrupt and bt.

> 0x00007ffff401c463 in GC_find_limit_with_bound () from /usr/lib/libgc.so.1
> (gdb) bt
> #0  0x00007ffff401c463 in GC_find_limit_with_bound () from /usr/lib/libgc.so.1
> #1  0x00007ffff401c522 in GC_init_linux_data_start () from /usr/lib/libgc.so.1
> #2  0x00007ffff401b2e0 in GC_init () from /usr/lib/libgc.so.1
> #3  0x00005555555668d8 in init () at
> /home/mps/aports/community/crystal/src/crystal-0.35.1/src/gc/boehm.cr:127
> #4  main () at /home/mps/aports/community/crystal/src/crystal-0.35.1/src/crystal/main.cr:35
> #5  main () at /home/mps/aports/community/crystal/src/crystal-0.35.1/src/crystal/main.cr:114
> (gdb)
> ----------
> 
> Same with 'thread apply all bt'.
> 
> gc lib (garbage collector, https://hboehm.info/gc/) version is
> gc-dev-8.0.4-r1 on Alpine.
> 
> I can't test this on a native machine because currently I don't have any
> x86_64 box with enough resources to build crystal. On aarch64 it can't
> even be built with the mt-fork patch; it always hangs during the build.
> 
> -- 
> Regards

* Re: [musl] [PATCH v2] MT fork
  2020-10-29 22:21             ` Szabolcs Nagy
@ 2020-10-29 23:00               ` Milan P. Stanić
  2020-10-29 23:27                 ` Rich Felker
  2020-10-29 23:32                 ` Szabolcs Nagy
From: Milan P. Stanić @ 2020-10-29 23:00 UTC
  To: musl

On Thu, 2020-10-29 at 23:21, Szabolcs Nagy wrote:
> * Milan P. Stanić <mps@arvanta.net> [2020-10-29 21:55:41 +0100]:
> > On Thu, 2020-10-29 at 17:13, Szabolcs Nagy wrote:
> > > * Milan P. Stanić <mps@arvanta.net> [2020-10-29 00:06:10 +0100]:
> > > >  
> > > > Applied this patch on top of current musl master, built it on Alpine, and
> > > > installed it.
> > > > 
> > > > Tested by building ruby lang. Works fine.
> > > > Also tested building zig lang; works fine.
> > > > Crystal lang also builds fine, but running it hangs. strace shows:
> > > > -------------
> > > > [pid  5573] futex(0x7efc50fba9e4, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>
> > > > [pid  5568] futex(0x7efc5118f984, FUTEX_REQUEUE_PRIVATE, 0, 1, 0x7efc514b67a4) = 1
> > > > [pid  5568] futex(0x7efc514b67a4, FUTEX_WAKE_PRIVATE, 1) = 1
> > > > [pid  5571] <... futex resumed>)        = 0
> > > > [pid  5568] futex(0x7efc511099e4, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>
> > > > [pid  5571] futex(0x7efc510409e4, FUTEX_WAIT_PRIVATE, 2, NULL
> > > > -------------
> > > > where it hangs.
> > > 
> > > try to attach gdb to the process that hangs and do
> > > 
> > > thread apply all bt
> > > 
> > > (make sure musl-dbg is installed)
> > 
> > I cannot attach gdb to a running process because my Alpine development
> > boxes are lxc containers where CAP_SYS_PTRACE is not enabled, afaik.
> 
> there should be a way to config lxc to allow ptrace.
> 
> > I installed musl-dbg and ran the program with 'gdb .build/crystal'; the result is:
> > ----------
> > Reading symbols from .build/crystal...
> > (gdb) run
> > Starting program: /home/mps/aports/community/crystal/src/crystal-0.35.1/.build/crystal
> > 
> > Program received signal SIGSEGV, Segmentation fault.
> 
> hm segfault is not a hang..
> 
> libgc tries to find stack bounds by registering a segfault handler
> and walking stack pages until it triggers.
> 
> gdb stops at signals; this one is harmless. you should just continue
> until you see a hang, then interrupt and bt.
 
I did continue now and stopped it when it hung. Attached is the gdb log
produced by 'thread apply all bt'.

> > 0x00007ffff401c463 in GC_find_limit_with_bound () from /usr/lib/libgc.so.1
> > (gdb) bt
> > #0  0x00007ffff401c463 in GC_find_limit_with_bound () from /usr/lib/libgc.so.1
> > #1  0x00007ffff401c522 in GC_init_linux_data_start () from /usr/lib/libgc.so.1
> > #2  0x00007ffff401b2e0 in GC_init () from /usr/lib/libgc.so.1
> > #3  0x00005555555668d8 in init () at
> > /home/mps/aports/community/crystal/src/crystal-0.35.1/src/gc/boehm.cr:127
> > #4  main () at /home/mps/aports/community/crystal/src/crystal-0.35.1/src/crystal/main.cr:35
> > #5  main () at /home/mps/aports/community/crystal/src/crystal-0.35.1/src/crystal/main.cr:114
> > (gdb)
> > ----------
> > 
> > Same with 'thread apply all bt'.
> > 
> > gc lib (garbage collector, https://hboehm.info/gc/) version is
> > gc-dev-8.0.4-r1 on Alpine.
> > 
> > I can't test this on a native machine because currently I don't have any
> > x86_64 box with enough resources to build crystal. On aarch64 it can't
> > even be built with the mt-fork patch; it always hangs during the build.
> > 
> > -- 
> > Regards

[-- Attachment #2: gdb.txt.gz --]
[-- Type: application/gzip, Size: 1322 bytes --]

* Re: [musl] [PATCH v2] MT fork
  2020-10-29 23:00               ` Milan P. Stanić
@ 2020-10-29 23:27                 ` Rich Felker
  2020-10-30  0:13                   ` Rich Felker
  2020-10-30  7:47                   ` Milan P. Stanić
  2020-10-29 23:32                 ` Szabolcs Nagy
  1 sibling, 2 replies; 42+ messages in thread
From: Rich Felker @ 2020-10-29 23:27 UTC (permalink / raw)
  To: Milan P. Stanić; +Cc: musl

On Fri, Oct 30, 2020 at 12:00:07AM +0100, Milan P. Stanić wrote:
> [...]
>  
> I continued now and stopped it when it hung. Attached is the gdb log
> produced by 'thread apply all bt'.

Next time please don't gzip the log, so that it's readable and can be
replied to inline. I've expanded it out here, though:

> Starting program: /home/mps/aports/community/crystal/src/crystal-0.35.1/.build/crystal 
> 
> Program received signal SIGSEGV, Segmentation fault.
> 0x00007ffff401c463 in GC_find_limit_with_bound () from /usr/lib/libgc.so.1
> Continuing.
> [New LWP 18867]
> [New LWP 18868]
> [New LWP 18869]
> [New LWP 18870]
> [New LWP 18871]
> [New LWP 18872]
> [New LWP 18873]
> [New LWP 18874]
> [New LWP 18875]
> [New LWP 18876]
> [New LWP 18877]
> [New LWP 18878]
> [New LWP 18879]
> [New LWP 18880]
> [New LWP 18881]
> 
> Thread 1 "crystal" received signal SIGINT, Interrupt.
> __syscall_cp_c (nr=202, u=140737488349316, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> 61	./arch/x86_64/syscall_arch.h: No such file or directory.
> 
> Thread 16 (LWP 18881):
> #0  __syscall_cp_c (nr=202, u=140737280690660, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff39f49e4) at src/thread/__timedwait.c:52
> #2  __timedwait_cp (addr=addr@entry=0x7ffff39f49e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> #7  0x00007ffff7fb9df7 in start (p=0x7ffff3a04aa0) at src/thread/pthread_create.c:196
> #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> Backtrace stopped: frame did not save the PC
> 
> Thread 15 (LWP 18880):
> #0  __syscall_cp_c (nr=202, u=140737280965092, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3a379e4) at src/thread/__timedwait.c:52
> #2  __timedwait_cp (addr=addr@entry=0x7ffff3a379e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> #7  0x00007ffff7fb9df7 in start (p=0x7ffff3a47aa0) at src/thread/pthread_create.c:196
> #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> Backtrace stopped: frame did not save the PC
> 
> Thread 14 (LWP 18879):
> #0  __syscall_cp_c (nr=202, u=140737281239524, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3a7a9e4) at src/thread/__timedwait.c:52
> #2  __timedwait_cp (addr=addr@entry=0x7ffff3a7a9e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> #7  0x00007ffff7fb9df7 in start (p=0x7ffff3a8aaa0) at src/thread/pthread_create.c:196
> #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> Backtrace stopped: frame did not save the PC
> 
> Thread 13 (LWP 18878):
> #0  __syscall_cp_c (nr=202, u=140737281513956, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3abd9e4) at src/thread/__timedwait.c:52
> #2  __timedwait_cp (addr=addr@entry=0x7ffff3abd9e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> #7  0x00007ffff7fb9df7 in start (p=0x7ffff3acdaa0) at src/thread/pthread_create.c:196
> #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> Backtrace stopped: frame did not save the PC
> 
> Thread 12 (LWP 18877):
> #0  __syscall_cp_c (nr=202, u=140737281788388, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3b009e4) at src/thread/__timedwait.c:52
> #2  __timedwait_cp (addr=addr@entry=0x7ffff3b009e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> #7  0x00007ffff7fb9df7 in start (p=0x7ffff3b10aa0) at src/thread/pthread_create.c:196
> #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> Backtrace stopped: frame did not save the PC
> 
> Thread 11 (LWP 18876):
> #0  __syscall_cp_c (nr=202, u=140737282062820, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3b439e4) at src/thread/__timedwait.c:52
> #2  __timedwait_cp (addr=addr@entry=0x7ffff3b439e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> #7  0x00007ffff7fb9df7 in start (p=0x7ffff3b53aa0) at src/thread/pthread_create.c:196
> #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> Backtrace stopped: frame did not save the PC
> 
> Thread 10 (LWP 18875):
> #0  __syscall_cp_c (nr=202, u=140737282337252, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3b869e4) at src/thread/__timedwait.c:52
> #2  __timedwait_cp (addr=addr@entry=0x7ffff3b869e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> #7  0x00007ffff7fb9df7 in start (p=0x7ffff3b96aa0) at src/thread/pthread_create.c:196
> #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> Backtrace stopped: frame did not save the PC
> 
> Thread 9 (LWP 18874):
> #0  __syscall_cp_c (nr=202, u=140737282611684, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3bc99e4) at src/thread/__timedwait.c:52
> #2  __timedwait_cp (addr=addr@entry=0x7ffff3bc99e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> #7  0x00007ffff7fb9df7 in start (p=0x7ffff3bd9aa0) at src/thread/pthread_create.c:196
> #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> Backtrace stopped: frame did not save the PC
> 
> Thread 8 (LWP 18873):
> #0  __syscall_cp_c (nr=202, u=140737282886020, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3c0c984) at src/thread/__timedwait.c:52
> #2  __timedwait_cp (addr=addr@entry=0x7ffff3c0c984, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> #5  0x00007ffff40186ca in GC_mark_local () from /usr/lib/libgc.so.1
> #6  0x00007ffff40188f4 in GC_help_marker () from /usr/lib/libgc.so.1
> #7  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> #8  0x00007ffff7fb9df7 in start (p=0x7ffff3c1caa0) at src/thread/pthread_create.c:196
> #9  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> Backtrace stopped: frame did not save the PC
> 
> Thread 7 (LWP 18872):
> #0  __syscall_cp_c (nr=202, u=140737283160548, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3c4f9e4) at src/thread/__timedwait.c:52
> #2  __timedwait_cp (addr=addr@entry=0x7ffff3c4f9e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> #7  0x00007ffff7fb9df7 in start (p=0x7ffff3c5faa0) at src/thread/pthread_create.c:196
> #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> Backtrace stopped: frame did not save the PC
> 
> Thread 6 (LWP 18871):
> #0  __syscall_cp_c (nr=202, u=140737283434980, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3c929e4) at src/thread/__timedwait.c:52
> #2  __timedwait_cp (addr=addr@entry=0x7ffff3c929e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> #7  0x00007ffff7fb9df7 in start (p=0x7ffff3ca2aa0) at src/thread/pthread_create.c:196
> #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> Backtrace stopped: frame did not save the PC
> 
> Thread 5 (LWP 18870):
> #0  __syscall_cp_c (nr=202, u=140737283709412, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3cd59e4) at src/thread/__timedwait.c:52
> #2  __timedwait_cp (addr=addr@entry=0x7ffff3cd59e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> #7  0x00007ffff7fb9df7 in start (p=0x7ffff3ce5aa0) at src/thread/pthread_create.c:196
> #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> Backtrace stopped: frame did not save the PC
> 
> Thread 4 (LWP 18869):
> #0  __syscall_cp_c (nr=202, u=140737283983844, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3d189e4) at src/thread/__timedwait.c:52
> #2  __timedwait_cp (addr=addr@entry=0x7ffff3d189e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> #7  0x00007ffff7fb9df7 in start (p=0x7ffff3d28aa0) at src/thread/pthread_create.c:196
> #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> Backtrace stopped: frame did not save the PC
> 
> Thread 3 (LWP 18868):
> #0  __syscall_cp_c (nr=202, u=140737284258276, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3d5b9e4) at src/thread/__timedwait.c:52
> #2  __timedwait_cp (addr=addr@entry=0x7ffff3d5b9e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> #7  0x00007ffff7fb9df7 in start (p=0x7ffff3d6baa0) at src/thread/pthread_create.c:196
> #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> Backtrace stopped: frame did not save the PC
> 
> Thread 2 (LWP 18867):
> #0  __syscall_cp_c (nr=202, u=140737284532708, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3d9e9e4) at src/thread/__timedwait.c:52
> #2  __timedwait_cp (addr=addr@entry=0x7ffff3d9e9e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> #7  0x00007ffff7fb9df7 in start (p=0x7ffff3daeaa0) at src/thread/pthread_create.c:196
> #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> Backtrace stopped: frame did not save the PC
> 
> Thread 1 (LWP 18859):
> #0  __syscall_cp_c (nr=202, u=140737488349316, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7fffffffe884) at src/thread/__timedwait.c:52
> #2  __timedwait_cp (addr=addr@entry=0x7fffffffe884, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> #5  0x00007ffff4018852 in GC_do_parallel_mark () from /usr/lib/libgc.so.1
> #6  0x00007ffff401937c in GC_mark_some () from /usr/lib/libgc.so.1
> #7  0x00007ffff4010e5a in GC_stopped_mark () from /usr/lib/libgc.so.1
> #8  0x00007ffff40114d4 in GC_try_to_collect_inner () from /usr/lib/libgc.so.1
> #9  0x00007ffff401b47f in GC_init () from /usr/lib/libgc.so.1
> #10 0x00005555555668d8 in init () at /home/mps/aports/community/crystal/src/crystal-0.35.1/src/gc/boehm.cr:127
> #11 main () at /home/mps/aports/community/crystal/src/crystal-0.35.1/src/crystal/main.cr:35
> #12 main () at /home/mps/aports/community/crystal/src/crystal-0.35.1/src/crystal/main.cr:114

OK, I'm looking at Boehm-GC's GC_wait_marker and this function is
utterly bogus. It's using a condition variable without a predicate,
and the corresponding broadcast operation runs without the lock held,
so it's inherently subject both to spurious wakes and missed wakes.
Here's the source:

https://github.com/ivmai/bdwgc/blob/57b97be07c514fcc4b608b13768fd2bf637a5899/pthread_support.c#L2374
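
For contrast, the pattern the interface is designed around is a
predicate checked with the mutex held on both sides; a minimal sketch
(names are illustrative, nothing to do with bdwgc's internals):

#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static int work_available;              /* the predicate, protected by lock */

void waiter(void)
{
	pthread_mutex_lock(&lock);
	while (!work_available)         /* loop absorbs spurious wakes */
		pthread_cond_wait(&cond, &lock);
	work_available = 0;
	pthread_mutex_unlock(&lock);
}

void wake_all(void)
{
	pthread_mutex_lock(&lock);
	work_available = 1;             /* predicate changes under the lock... */
	pthread_cond_broadcast(&cond);  /* ...so the wake can't be missed */
	pthread_mutex_unlock(&lock);
}

Waiting with no predicate, or updating the state and broadcasting
outside the lock, is exactly what opens the missed-wake window.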

This code is only used if built with PARALLEL_MARK defined, and
obviously does not and cannot work, so I think the best solution is
just figuring out how to turn off PARALLEL_MARK.
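
If it helps with testing: as far as I know libgc's configure has an
--enable-parallel-mark switch, so passing --disable-parallel-mark at
build time should drop this code entirely, and builds that do enable it
are supposed to honor a GC_MARKERS environment variable, so GC_MARKERS=1
may sidestep it at runtime. Whether the Alpine package exposes either is
an assumption that needs checking.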

Do you have reason to believe the problem here has anything to do with
the MT-fork patch? Was it maybe just hanging somewhere earlier without
the patch, and now hanging on an independent bug?

Rich

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [musl] [PATCH v2] MT fork
  2020-10-29 23:00               ` Milan P. Stanić
  2020-10-29 23:27                 ` Rich Felker
@ 2020-10-29 23:32                 ` Szabolcs Nagy
  1 sibling, 0 replies; 42+ messages in thread
From: Szabolcs Nagy @ 2020-10-29 23:32 UTC (permalink / raw)
  To: Milan P. Stanić; +Cc: musl

* Milan P. Stanić <mps@arvanta.net> [2020-10-30 00:00:07 +0100]:
...
> I continued now and stopped it when it hung. Attached is the gdb log
> produced by 'thread apply all bt'.

unfortunately it's hard to tell what's going on.
all threads wait on the same cond var in libgc.
but it does not look like a fork issue to me.

to get further i think we would need to look at the
libgc logic, to see how it uses those cond vars.

15 of the 16 threads look like

#0  __syscall_cp_c (nr=202, u=140737284532708, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
#1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3d9e9e4) at src/thread/__timedwait.c:52
#2  __timedwait_cp (addr=addr@entry=0x7ffff3d9e9e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
#3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
#4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
#5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
#6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
...

and the main thread looks like

#0  __syscall_cp_c (nr=202, u=140737488349316, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
#1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7fffffffe884) at src/thread/__timedwait.c:52
#2  __timedwait_cp (addr=addr@entry=0x7fffffffe884, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
#3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
#4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
#5  0x00007ffff4018852 in GC_do_parallel_mark () from /usr/lib/libgc.so.1
#6  0x00007ffff401937c in GC_mark_some () from /usr/lib/libgc.so.1
...

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [musl] [PATCH v2] MT fork
  2020-10-29 23:27                 ` Rich Felker
@ 2020-10-30  0:13                   ` Rich Felker
  2020-10-30  7:47                   ` Milan P. Stanić
  1 sibling, 0 replies; 42+ messages in thread
From: Rich Felker @ 2020-10-30  0:13 UTC (permalink / raw)
  To: Milan P. Stanić; +Cc: musl

On Thu, Oct 29, 2020 at 07:27:44PM -0400, Rich Felker wrote:
> [...]
> 
> OK, I'm looking at Boehm-GC's GC_wait_marker and this function is
> utterly bogus. It's using a condition variable without a predicate,
> and the corresponding broadcast operation runs without the lock held,
> so it's inherently subject both to spurious wakes and missed wakes.
> Here's the source:
> 
> https://github.com/ivmai/bdwgc/blob/57b97be07c514fcc4b608b13768fd2bf637a5899/pthread_support.c#L2374
> 
> This code is only used if built with PARALLEL_MARK defined, and
> obviously does not and cannot work, so I think the best solution is
> just figuring out how to turn off PARALLEL_MARK.
> 
> Do you have reason to believe the problem here has anything to do with
> the MT-fork patch? Was it maybe just hanging somewhere earlier without
> the patch, and now hanging on an independent bug?

Hmm, I'm no longer 100% sure the above is correct. The caller of
GC_wait_marker, in:

https://github.com/ivmai/bdwgc/blob/57b97be07c514fcc4b608b13768fd2bf637a5899/mark.c#L1130

calls it in a loop that checks a condition, and it can call
GC_notify_all_marker (the broadcast function) with the lock held.
There's another place where GC_notify_all_marker is called without the
lock held, but I don't see any problem there.

This definitely calls for more debugging, but turning off
PARALLEL_MARK might solve the problem for now. I do still want to know
whether this has anything to do with fork; it sounds like not (or gdb
would have needed to be set up to follow fork).
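
For anyone retrying this across the fork under gdb, note that gdb stays
with the parent unless told otherwise, e.g.:

(gdb) set follow-fork-mode child
(gdb) set detach-on-fork off

so a hang in a forked child would not have shown up in the log above.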

Rich

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [musl] [PATCH v2] MT fork
  2020-10-29 23:27                 ` Rich Felker
  2020-10-30  0:13                   ` Rich Felker
@ 2020-10-30  7:47                   ` Milan P. Stanić
  2020-10-30 18:52                     ` Milan P. Stanić
  1 sibling, 1 reply; 42+ messages in thread
From: Milan P. Stanić @ 2020-10-30  7:47 UTC (permalink / raw)
  To: musl

On Thu, 2020-10-29 at 19:27, Rich Felker wrote:
> On Fri, Oct 30, 2020 at 12:00:07AM +0100, Milan P. Stanić wrote:
> > [...]
> >  
> > I continued now and stopped it when it hung. Attached is the gdb log
> > produced by 'thread apply all bt'.
> 
> Next time please don't gzip the log, so that it's readable and can be
> replied to inline. I've expanded it out here, though:
> 
> > Starting program: /home/mps/aports/community/crystal/src/crystal-0.35.1/.build/crystal 
> > 
> > Program received signal SIGSEGV, Segmentation fault.
> > 0x00007ffff401c463 in GC_find_limit_with_bound () from /usr/lib/libgc.so.1
> > Continuing.
> > [New LWP 18867]
> > [New LWP 18868]
> > [New LWP 18869]
> > [New LWP 18870]
> > [New LWP 18871]
> > [New LWP 18872]
> > [New LWP 18873]
> > [New LWP 18874]
> > [New LWP 18875]
> > [New LWP 18876]
> > [New LWP 18877]
> > [New LWP 18878]
> > [New LWP 18879]
> > [New LWP 18880]
> > [New LWP 18881]
> > 
> > Thread 1 "crystal" received signal SIGINT, Interrupt.
> > __syscall_cp_c (nr=202, u=140737488349316, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > 61	./arch/x86_64/syscall_arch.h: No such file or directory.
> > 
> > Thread 16 (LWP 18881):
> > #0  __syscall_cp_c (nr=202, u=140737280690660, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff39f49e4) at src/thread/__timedwait.c:52
> > #2  __timedwait_cp (addr=addr@entry=0x7ffff39f49e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> > #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> > #7  0x00007ffff7fb9df7 in start (p=0x7ffff3a04aa0) at src/thread/pthread_create.c:196
> > #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> > Backtrace stopped: frame did not save the PC
> > 
> > Thread 15 (LWP 18880):
> > #0  __syscall_cp_c (nr=202, u=140737280965092, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3a379e4) at src/thread/__timedwait.c:52
> > #2  __timedwait_cp (addr=addr@entry=0x7ffff3a379e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> > #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> > #7  0x00007ffff7fb9df7 in start (p=0x7ffff3a47aa0) at src/thread/pthread_create.c:196
> > #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> > Backtrace stopped: frame did not save the PC
> > 
> > Thread 14 (LWP 18879):
> > #0  __syscall_cp_c (nr=202, u=140737281239524, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3a7a9e4) at src/thread/__timedwait.c:52
> > #2  __timedwait_cp (addr=addr@entry=0x7ffff3a7a9e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> > #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> > #7  0x00007ffff7fb9df7 in start (p=0x7ffff3a8aaa0) at src/thread/pthread_create.c:196
> > #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> > Backtrace stopped: frame did not save the PC
> > 
> > Thread 13 (LWP 18878):
> > #0  __syscall_cp_c (nr=202, u=140737281513956, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3abd9e4) at src/thread/__timedwait.c:52
> > #2  __timedwait_cp (addr=addr@entry=0x7ffff3abd9e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> > #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> > #7  0x00007ffff7fb9df7 in start (p=0x7ffff3acdaa0) at src/thread/pthread_create.c:196
> > #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> > Backtrace stopped: frame did not save the PC
> > 
> > Thread 12 (LWP 18877):
> > #0  __syscall_cp_c (nr=202, u=140737281788388, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3b009e4) at src/thread/__timedwait.c:52
> > #2  __timedwait_cp (addr=addr@entry=0x7ffff3b009e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> > #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> > #7  0x00007ffff7fb9df7 in start (p=0x7ffff3b10aa0) at src/thread/pthread_create.c:196
> > #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> > Backtrace stopped: frame did not save the PC
> > 
> > Thread 11 (LWP 18876):
> > #0  __syscall_cp_c (nr=202, u=140737282062820, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3b439e4) at src/thread/__timedwait.c:52
> > #2  __timedwait_cp (addr=addr@entry=0x7ffff3b439e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> > #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> > #7  0x00007ffff7fb9df7 in start (p=0x7ffff3b53aa0) at src/thread/pthread_create.c:196
> > #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> > Backtrace stopped: frame did not save the PC
> > 
> > Thread 10 (LWP 18875):
> > #0  __syscall_cp_c (nr=202, u=140737282337252, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3b869e4) at src/thread/__timedwait.c:52
> > #2  __timedwait_cp (addr=addr@entry=0x7ffff3b869e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> > #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> > #7  0x00007ffff7fb9df7 in start (p=0x7ffff3b96aa0) at src/thread/pthread_create.c:196
> > #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> > Backtrace stopped: frame did not save the PC
> > 
> > Thread 9 (LWP 18874):
> > #0  __syscall_cp_c (nr=202, u=140737282611684, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3bc99e4) at src/thread/__timedwait.c:52
> > #2  __timedwait_cp (addr=addr@entry=0x7ffff3bc99e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> > #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> > #7  0x00007ffff7fb9df7 in start (p=0x7ffff3bd9aa0) at src/thread/pthread_create.c:196
> > #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> > Backtrace stopped: frame did not save the PC
> > 
> > Thread 8 (LWP 18873):
> > #0  __syscall_cp_c (nr=202, u=140737282886020, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3c0c984) at src/thread/__timedwait.c:52
> > #2  __timedwait_cp (addr=addr@entry=0x7ffff3c0c984, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > #5  0x00007ffff40186ca in GC_mark_local () from /usr/lib/libgc.so.1
> > #6  0x00007ffff40188f4 in GC_help_marker () from /usr/lib/libgc.so.1
> > #7  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> > #8  0x00007ffff7fb9df7 in start (p=0x7ffff3c1caa0) at src/thread/pthread_create.c:196
> > #9  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> > Backtrace stopped: frame did not save the PC
> > 
> > Thread 7 (LWP 18872):
> > #0  __syscall_cp_c (nr=202, u=140737283160548, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3c4f9e4) at src/thread/__timedwait.c:52
> > #2  __timedwait_cp (addr=addr@entry=0x7ffff3c4f9e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> > #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> > #7  0x00007ffff7fb9df7 in start (p=0x7ffff3c5faa0) at src/thread/pthread_create.c:196
> > #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> > Backtrace stopped: frame did not save the PC
> > 
> > Thread 6 (LWP 18871):
> > #0  __syscall_cp_c (nr=202, u=140737283434980, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3c929e4) at src/thread/__timedwait.c:52
> > #2  __timedwait_cp (addr=addr@entry=0x7ffff3c929e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> > #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> > #7  0x00007ffff7fb9df7 in start (p=0x7ffff3ca2aa0) at src/thread/pthread_create.c:196
> > #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> > Backtrace stopped: frame did not save the PC
> > 
> > Thread 5 (LWP 18870):
> > #0  __syscall_cp_c (nr=202, u=140737283709412, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3cd59e4) at src/thread/__timedwait.c:52
> > #2  __timedwait_cp (addr=addr@entry=0x7ffff3cd59e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> > #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> > #7  0x00007ffff7fb9df7 in start (p=0x7ffff3ce5aa0) at src/thread/pthread_create.c:196
> > #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> > Backtrace stopped: frame did not save the PC
> > 
> > Thread 4 (LWP 18869):
> > #0  __syscall_cp_c (nr=202, u=140737283983844, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3d189e4) at src/thread/__timedwait.c:52
> > #2  __timedwait_cp (addr=addr@entry=0x7ffff3d189e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> > #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> > #7  0x00007ffff7fb9df7 in start (p=0x7ffff3d28aa0) at src/thread/pthread_create.c:196
> > #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> > Backtrace stopped: frame did not save the PC
> > 
> > Thread 3 (LWP 18868):
> > #0  __syscall_cp_c (nr=202, u=140737284258276, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3d5b9e4) at src/thread/__timedwait.c:52
> > #2  __timedwait_cp (addr=addr@entry=0x7ffff3d5b9e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> > #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> > #7  0x00007ffff7fb9df7 in start (p=0x7ffff3d6baa0) at src/thread/pthread_create.c:196
> > #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> > Backtrace stopped: frame did not save the PC
> > 
> > Thread 2 (LWP 18867):
> > #0  __syscall_cp_c (nr=202, u=140737284532708, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3d9e9e4) at src/thread/__timedwait.c:52
> > #2  __timedwait_cp (addr=addr@entry=0x7ffff3d9e9e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> > #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> > #7  0x00007ffff7fb9df7 in start (p=0x7ffff3daeaa0) at src/thread/pthread_create.c:196
> > #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> > Backtrace stopped: frame did not save the PC
> > 
> > Thread 1 (LWP 18859):
> > #0  __syscall_cp_c (nr=202, u=140737488349316, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7fffffffe884) at src/thread/__timedwait.c:52
> > #2  __timedwait_cp (addr=addr@entry=0x7fffffffe884, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > #5  0x00007ffff4018852 in GC_do_parallel_mark () from /usr/lib/libgc.so.1
> > #6  0x00007ffff401937c in GC_mark_some () from /usr/lib/libgc.so.1
> > #7  0x00007ffff4010e5a in GC_stopped_mark () from /usr/lib/libgc.so.1
> > #8  0x00007ffff40114d4 in GC_try_to_collect_inner () from /usr/lib/libgc.so.1
> > #9  0x00007ffff401b47f in GC_init () from /usr/lib/libgc.so.1
> > #10 0x00005555555668d8 in init () at /home/mps/aports/community/crystal/src/crystal-0.35.1/src/gc/boehm.cr:127
> > #11 main () at /home/mps/aports/community/crystal/src/crystal-0.35.1/src/crystal/main.cr:35
> > #12 main () at /home/mps/aports/community/crystal/src/crystal-0.35.1/src/crystal/main.cr:114
> 
> OK, I'm looking at Boehm-GC's GC_wait_marker and this function is
> utterly bogus. It's using a condition variable without a predicate,
> and the corresponding broadcast operation runs without the lock held,
> so it's inherently subject both to spurious wakes and missed wakes.
> Here's the source:
> 
> https://github.com/ivmai/bdwgc/blob/57b97be07c514fcc4b608b13768fd2bf637a5899/pthread_support.c#L2374
> 
> This code is only used if built with PARALLEL_MARK defined, and
> obviously does not and cannot work, so I think the best solution is
> just figuring out how to turn off PARALLEL_MARK.
> 
> Do you have reason to believe the problem here has anything to do with
> the MT-fork patch? Was it maybe just hanging somewhere earlier without
> the patch, and now hanging on an independent bug?

It doesn't hang with musl 1.2.1, but it hangs on the master branch
with the MT-fork patch applied.

I also tested with the w3m browser and I see the same behavior and
the same (similar) backtrace.

The interesting thing is that it doesn't hang on every invocation of
programs linked with libgc; sometimes programs work, sometimes they
hang.
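
For reference, the broken pattern described above -- waiting on a
condition variable with no predicate -- differs from the textbook
usage roughly like this (a minimal sketch for illustration only, not
the actual bdwgc code):
-------------
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static int work_ready;  /* the predicate */

/* Broken: nothing is re-checked, so a spurious wake returns too
 * early, and a wake that arrives before the wait starts is lost. */
void wait_broken(void)
{
	pthread_mutex_lock(&lock);
	pthread_cond_wait(&cond, &lock);
	pthread_mutex_unlock(&lock);
}

/* Correct: loop on the predicate; spurious wakes are then harmless
 * and a wake that came early is still observed via work_ready. */
void wait_correct(void)
{
	pthread_mutex_lock(&lock);
	while (!work_ready)
		pthread_cond_wait(&cond, &lock);
	work_ready = 0;
	pthread_mutex_unlock(&lock);
}

/* Correct waker: update the predicate under the same lock, then
 * broadcast. */
void wake_correct(void)
{
	pthread_mutex_lock(&lock);
	work_ready = 1;
	pthread_mutex_unlock(&lock);
	pthread_cond_broadcast(&cond);
}
-------------
Whether the missed wake actually bites depends on thread timing,
which would also explain why the hang is intermittent.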

-- 
Regards

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [musl] [PATCH v2] MT fork
  2020-10-30  7:47                   ` Milan P. Stanić
@ 2020-10-30 18:52                     ` Milan P. Stanić
  2020-10-30 18:57                       ` Rich Felker
  0 siblings, 1 reply; 42+ messages in thread
From: Milan P. Stanić @ 2020-10-30 18:52 UTC (permalink / raw)
  To: musl

On Fri, 2020-10-30 at 08:47, Milan P. Stanić wrote:
> On Thu, 2020-10-29 at 19:27, Rich Felker wrote:
> > On Fri, Oct 30, 2020 at 12:00:07AM +0100, Milan P. Stanić wrote:
> > > On Thu, 2020-10-29 at 23:21, Szabolcs Nagy wrote:
> > > > * Milan P. Stanić <mps@arvanta.net> [2020-10-29 21:55:41 +0100]:
> > > > > On Thu, 2020-10-29 at 17:13, Szabolcs Nagy wrote:
> > > > > > * Milan P. Stanić <mps@arvanta.net> [2020-10-29 00:06:10 +0100]:
> > > > > > >  
> > > > > > > > Applied this patch on top of current musl master, built it on Alpine, and
> > > > > > > > installed it.
> > > > > > > >
> > > > > > > > Tested by building ruby lang; works fine.
> > > > > > > > Also tested building zig lang; works fine.
> > > > > > > > Crystal lang also builds fine, but running it hangs. strace shows:
> > > > > > > -------------
> > > > > > > [pid  5573] futex(0x7efc50fba9e4, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>
> > > > > > > [pid  5568] futex(0x7efc5118f984, FUTEX_REQUEUE_PRIVATE, 0, 1, 0x7efc514b67a4) = 1
> > > > > > > [pid  5568] futex(0x7efc514b67a4, FUTEX_WAKE_PRIVATE, 1) = 1
> > > > > > > [pid  5571] <... futex resumed>)        = 0
> > > > > > > [pid  5568] futex(0x7efc511099e4, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>
> > > > > > > [pid  5571] futex(0x7efc510409e4, FUTEX_WAIT_PRIVATE, 2, NULL
> > > > > > > -------------
> > > > > > > where it hangs.
> > > > > > 
> > > > > > try to attach gdb to the process that hang and do
> > > > > > 
> > > > > > thread apply all bt
> > > > > > 
> > > > > > (make sure musl-dbg is installed)
> > > > > 
> > > > > I cannot attach gdb to the running process because my Alpine development
> > > > > boxes are lxc containers where CAP_SYS_PTRACE is not enabled, afaik.
> > > > 
> > > > there should be a way to configure lxc to allow ptrace.
> > > > 
> > > > > I installed musl-dbg and run program with 'gdb .build/crystal' and result is:
> > > > > ----------
> > > > > Reading symbols from .build/crystal...
> > > > > (gdb) run
> > > > > Starting program: /home/mps/aports/community/crystal/src/crystal-0.35.1/.build/crystal
> > > > > 
> > > > > Program received signal SIGSEGV, Segmentation fault.
> > > > 
> > > > hm segfault is not a hang..
> > > > 
> > > > libgc tries to find stack bounds by registering a segfault handler
> > > > and walking stack pages until it triggers.
> > > > 
> > > > gdb stops at signals, this one is harmless, you should just continue
> > > > until you see a hang, then interrupt and bt.
> > >  
> > > I did continue now and stopped it when it hung. Attached is the gdb
> > > log produced by 'thread apply all bt'.
> > 
> > Next time please don't gzip so it's readable and repliable. I've
> > expanded it out here though:
> > 
> > > Starting program: /home/mps/aports/community/crystal/src/crystal-0.35.1/.build/crystal 
> > > 
> > > Program received signal SIGSEGV, Segmentation fault.
> > > 0x00007ffff401c463 in GC_find_limit_with_bound () from /usr/lib/libgc.so.1
> > > Continuing.
> > > [New LWP 18867]
> > > [New LWP 18868]
> > > [New LWP 18869]
> > > [New LWP 18870]
> > > [New LWP 18871]
> > > [New LWP 18872]
> > > [New LWP 18873]
> > > [New LWP 18874]
> > > [New LWP 18875]
> > > [New LWP 18876]
> > > [New LWP 18877]
> > > [New LWP 18878]
> > > [New LWP 18879]
> > > [New LWP 18880]
> > > [New LWP 18881]
> > > 
> > > Thread 1 "crystal" received signal SIGINT, Interrupt.
> > > __syscall_cp_c (nr=202, u=140737488349316, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > > 61	./arch/x86_64/syscall_arch.h: No such file or directory.
> > > 
> > > Thread 16 (LWP 18881):
> > > #0  __syscall_cp_c (nr=202, u=140737280690660, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff39f49e4) at src/thread/__timedwait.c:52
> > > #2  __timedwait_cp (addr=addr@entry=0x7ffff39f49e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > > #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> > > #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> > > #7  0x00007ffff7fb9df7 in start (p=0x7ffff3a04aa0) at src/thread/pthread_create.c:196
> > > #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> > > Backtrace stopped: frame did not save the PC
> > > 
> > > Thread 15 (LWP 18880):
> > > #0  __syscall_cp_c (nr=202, u=140737280965092, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3a379e4) at src/thread/__timedwait.c:52
> > > #2  __timedwait_cp (addr=addr@entry=0x7ffff3a379e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > > #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> > > #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> > > #7  0x00007ffff7fb9df7 in start (p=0x7ffff3a47aa0) at src/thread/pthread_create.c:196
> > > #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> > > Backtrace stopped: frame did not save the PC
> > > 
> > > Thread 14 (LWP 18879):
> > > #0  __syscall_cp_c (nr=202, u=140737281239524, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3a7a9e4) at src/thread/__timedwait.c:52
> > > #2  __timedwait_cp (addr=addr@entry=0x7ffff3a7a9e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > > #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> > > #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> > > #7  0x00007ffff7fb9df7 in start (p=0x7ffff3a8aaa0) at src/thread/pthread_create.c:196
> > > #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> > > Backtrace stopped: frame did not save the PC
> > > 
> > > Thread 13 (LWP 18878):
> > > #0  __syscall_cp_c (nr=202, u=140737281513956, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3abd9e4) at src/thread/__timedwait.c:52
> > > #2  __timedwait_cp (addr=addr@entry=0x7ffff3abd9e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > > #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> > > #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> > > #7  0x00007ffff7fb9df7 in start (p=0x7ffff3acdaa0) at src/thread/pthread_create.c:196
> > > #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> > > Backtrace stopped: frame did not save the PC
> > > 
> > > Thread 12 (LWP 18877):
> > > #0  __syscall_cp_c (nr=202, u=140737281788388, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3b009e4) at src/thread/__timedwait.c:52
> > > #2  __timedwait_cp (addr=addr@entry=0x7ffff3b009e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > > #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> > > #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> > > #7  0x00007ffff7fb9df7 in start (p=0x7ffff3b10aa0) at src/thread/pthread_create.c:196
> > > #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> > > Backtrace stopped: frame did not save the PC
> > > 
> > > Thread 11 (LWP 18876):
> > > #0  __syscall_cp_c (nr=202, u=140737282062820, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3b439e4) at src/thread/__timedwait.c:52
> > > #2  __timedwait_cp (addr=addr@entry=0x7ffff3b439e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > > #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> > > #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> > > #7  0x00007ffff7fb9df7 in start (p=0x7ffff3b53aa0) at src/thread/pthread_create.c:196
> > > #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> > > Backtrace stopped: frame did not save the PC
> > > 
> > > Thread 10 (LWP 18875):
> > > #0  __syscall_cp_c (nr=202, u=140737282337252, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3b869e4) at src/thread/__timedwait.c:52
> > > #2  __timedwait_cp (addr=addr@entry=0x7ffff3b869e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > > #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> > > #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> > > #7  0x00007ffff7fb9df7 in start (p=0x7ffff3b96aa0) at src/thread/pthread_create.c:196
> > > #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> > > Backtrace stopped: frame did not save the PC
> > > 
> > > Thread 9 (LWP 18874):
> > > #0  __syscall_cp_c (nr=202, u=140737282611684, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3bc99e4) at src/thread/__timedwait.c:52
> > > #2  __timedwait_cp (addr=addr@entry=0x7ffff3bc99e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > > #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> > > #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> > > #7  0x00007ffff7fb9df7 in start (p=0x7ffff3bd9aa0) at src/thread/pthread_create.c:196
> > > #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> > > Backtrace stopped: frame did not save the PC
> > > 
> > > Thread 8 (LWP 18873):
> > > #0  __syscall_cp_c (nr=202, u=140737282886020, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3c0c984) at src/thread/__timedwait.c:52
> > > #2  __timedwait_cp (addr=addr@entry=0x7ffff3c0c984, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > > #5  0x00007ffff40186ca in GC_mark_local () from /usr/lib/libgc.so.1
> > > #6  0x00007ffff40188f4 in GC_help_marker () from /usr/lib/libgc.so.1
> > > #7  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> > > #8  0x00007ffff7fb9df7 in start (p=0x7ffff3c1caa0) at src/thread/pthread_create.c:196
> > > #9  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> > > Backtrace stopped: frame did not save the PC
> > > 
> > > Thread 7 (LWP 18872):
> > > #0  __syscall_cp_c (nr=202, u=140737283160548, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3c4f9e4) at src/thread/__timedwait.c:52
> > > #2  __timedwait_cp (addr=addr@entry=0x7ffff3c4f9e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > > #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> > > #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> > > #7  0x00007ffff7fb9df7 in start (p=0x7ffff3c5faa0) at src/thread/pthread_create.c:196
> > > #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> > > Backtrace stopped: frame did not save the PC
> > > 
> > > Thread 6 (LWP 18871):
> > > #0  __syscall_cp_c (nr=202, u=140737283434980, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3c929e4) at src/thread/__timedwait.c:52
> > > #2  __timedwait_cp (addr=addr@entry=0x7ffff3c929e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > > #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> > > #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> > > #7  0x00007ffff7fb9df7 in start (p=0x7ffff3ca2aa0) at src/thread/pthread_create.c:196
> > > #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> > > Backtrace stopped: frame did not save the PC
> > > 
> > > Thread 5 (LWP 18870):
> > > #0  __syscall_cp_c (nr=202, u=140737283709412, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3cd59e4) at src/thread/__timedwait.c:52
> > > #2  __timedwait_cp (addr=addr@entry=0x7ffff3cd59e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > > #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> > > #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> > > #7  0x00007ffff7fb9df7 in start (p=0x7ffff3ce5aa0) at src/thread/pthread_create.c:196
> > > #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> > > Backtrace stopped: frame did not save the PC
> > > 
> > > Thread 4 (LWP 18869):
> > > #0  __syscall_cp_c (nr=202, u=140737283983844, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3d189e4) at src/thread/__timedwait.c:52
> > > #2  __timedwait_cp (addr=addr@entry=0x7ffff3d189e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > > #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> > > #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> > > #7  0x00007ffff7fb9df7 in start (p=0x7ffff3d28aa0) at src/thread/pthread_create.c:196
> > > #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> > > Backtrace stopped: frame did not save the PC
> > > 
> > > Thread 3 (LWP 18868):
> > > #0  __syscall_cp_c (nr=202, u=140737284258276, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3d5b9e4) at src/thread/__timedwait.c:52
> > > #2  __timedwait_cp (addr=addr@entry=0x7ffff3d5b9e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > > #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> > > #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> > > #7  0x00007ffff7fb9df7 in start (p=0x7ffff3d6baa0) at src/thread/pthread_create.c:196
> > > #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> > > Backtrace stopped: frame did not save the PC
> > > 
> > > Thread 2 (LWP 18867):
> > > #0  __syscall_cp_c (nr=202, u=140737284532708, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3d9e9e4) at src/thread/__timedwait.c:52
> > > #2  __timedwait_cp (addr=addr@entry=0x7ffff3d9e9e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > > #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> > > #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> > > #7  0x00007ffff7fb9df7 in start (p=0x7ffff3daeaa0) at src/thread/pthread_create.c:196
> > > #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> > > Backtrace stopped: frame did not save the PC
> > > 
> > > Thread 1 (LWP 18859):
> > > #0  __syscall_cp_c (nr=202, u=140737488349316, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7fffffffe884) at src/thread/__timedwait.c:52
> > > #2  __timedwait_cp (addr=addr@entry=0x7fffffffe884, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > > #5  0x00007ffff4018852 in GC_do_parallel_mark () from /usr/lib/libgc.so.1
> > > #6  0x00007ffff401937c in GC_mark_some () from /usr/lib/libgc.so.1
> > > #7  0x00007ffff4010e5a in GC_stopped_mark () from /usr/lib/libgc.so.1
> > > #8  0x00007ffff40114d4 in GC_try_to_collect_inner () from /usr/lib/libgc.so.1
> > > #9  0x00007ffff401b47f in GC_init () from /usr/lib/libgc.so.1
> > > #10 0x00005555555668d8 in init () at /home/mps/aports/community/crystal/src/crystal-0.35.1/src/gc/boehm.cr:127
> > > #11 main () at /home/mps/aports/community/crystal/src/crystal-0.35.1/src/crystal/main.cr:35
> > > #12 main () at /home/mps/aports/community/crystal/src/crystal-0.35.1/src/crystal/main.cr:114
> > 
> > OK, I'm looking at Boehm-GC's GC_wait_marker and this function is
> > utterly bogus. It's using a condition variable without a predicate,
> > and the corresponding broadcast operation runs without the lock held,
> > so it's inherently subject both to spurious wakes and missed wakes.
> > Here's the source:
> > 
> > https://github.com/ivmai/bdwgc/blob/57b97be07c514fcc4b608b13768fd2bf637a5899/pthread_support.c#L2374
> > 
> > This code is only used if built with PARALLEL_MARK defined, and
> > obviously does not and cannot work, so I think the best solution is
> > just figuring out how to turn off PARALLEL_MARK.
> > 
> > Do you have reason to believe the problem here has anything to do with
> > the MT-fork patch? Was it maybe just hanging somewhere earlier without
> > the patch, and now hanging on an independent bug?
> 
> It doesn't hang with musl 1.2.1, but it hangs on the master branch
> with the MT-fork patch applied.
>
> I also tested with the w3m browser and I see the same behavior and
> the same (similar) backtrace.
>
> The interesting thing is that it doesn't hang on every invocation of
> programs linked with libgc; sometimes programs work, sometimes they
> hang.

Rebuilding 'gc' with '--disable-parallel-mark' fixes the hangs.
Not sure that is the proper fix, but for now it's good enough not to
block the next Alpine release.

-- 
Regards

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [musl] [PATCH v2] MT fork
  2020-10-30 18:52                     ` Milan P. Stanić
@ 2020-10-30 18:57                       ` Rich Felker
  2020-10-30 21:31                         ` Ariadne Conill
  0 siblings, 1 reply; 42+ messages in thread
From: Rich Felker @ 2020-10-30 18:57 UTC (permalink / raw)
  To: Milan P. Stanić; +Cc: musl

On Fri, Oct 30, 2020 at 07:52:05PM +0100, Milan P. Stanić wrote:
> On Fri, 2020-10-30 at 08:47, Milan P. Stanić wrote:
> > On Thu, 2020-10-29 at 19:27, Rich Felker wrote:
> > > On Fri, Oct 30, 2020 at 12:00:07AM +0100, Milan P. Stanić wrote:
> > > > On Thu, 2020-10-29 at 23:21, Szabolcs Nagy wrote:
> > > > > * Milan P. Stanić <mps@arvanta.net> [2020-10-29 21:55:41 +0100]:
> > > > > > On Thu, 2020-10-29 at 17:13, Szabolcs Nagy wrote:
> > > > > > > * Milan P. Stanić <mps@arvanta.net> [2020-10-29 00:06:10 +0100]:
> > > > > > > >  
> > > > > > > > > Applied this patch on top of current musl master, built it on Alpine, and
> > > > > > > > > installed it.
> > > > > > > > >
> > > > > > > > > Tested by building ruby lang; works fine.
> > > > > > > > > Also tested building zig lang; works fine.
> > > > > > > > > Crystal lang also builds fine, but running it hangs. strace shows:
> > > > > > > > -------------
> > > > > > > > [pid  5573] futex(0x7efc50fba9e4, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>
> > > > > > > > [pid  5568] futex(0x7efc5118f984, FUTEX_REQUEUE_PRIVATE, 0, 1, 0x7efc514b67a4) = 1
> > > > > > > > [pid  5568] futex(0x7efc514b67a4, FUTEX_WAKE_PRIVATE, 1) = 1
> > > > > > > > [pid  5571] <... futex resumed>)        = 0
> > > > > > > > [pid  5568] futex(0x7efc511099e4, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>
> > > > > > > > [pid  5571] futex(0x7efc510409e4, FUTEX_WAIT_PRIVATE, 2, NULL
> > > > > > > > -------------
> > > > > > > > where it hangs.
> > > > > > > 
> > > > > > > try to attach gdb to the process that hang and do
> > > > > > > 
> > > > > > > thread apply all bt
> > > > > > > 
> > > > > > > (make sure musl-dbg is installed)
> > > > > > 
> > > > > > I cannot attach gdb to the running process because my Alpine development
> > > > > > boxes are lxc containers where CAP_SYS_PTRACE is not enabled, afaik.
> > > > > 
> > > > > there should be a way to configure lxc to allow ptrace.
> > > > > 
> > > > > > I installed musl-dbg and run program with 'gdb .build/crystal' and result is:
> > > > > > ----------
> > > > > > Reading symbols from .build/crystal...
> > > > > > (gdb) run
> > > > > > Starting program: /home/mps/aports/community/crystal/src/crystal-0.35.1/.build/crystal
> > > > > > 
> > > > > > Program received signal SIGSEGV, Segmentation fault.
> > > > > 
> > > > > hm segfault is not a hang..
> > > > > 
> > > > > libgc tries to find stack bounds by registering a segfault handler
> > > > > and walking stack pages until it triggers.
> > > > > 
> > > > > gdb stops at signals, this one is harmless, you should just continue
> > > > > until you see a hang, then interrupt and bt.
> > > >  
> > > > I did continue now and stopped it when it hung. Attached is the gdb
> > > > log produced by 'thread apply all bt'.
> > > 
> > > Next time please don't gzip so it's readable and repliable. I've
> > > expanded it out here though:
> > > 
> > > > Starting program: /home/mps/aports/community/crystal/src/crystal-0.35.1/.build/crystal 
> > > > 
> > > > Program received signal SIGSEGV, Segmentation fault.
> > > > 0x00007ffff401c463 in GC_find_limit_with_bound () from /usr/lib/libgc.so.1
> > > > Continuing.
> > > > [New LWP 18867]
> > > > [New LWP 18868]
> > > > [New LWP 18869]
> > > > [New LWP 18870]
> > > > [New LWP 18871]
> > > > [New LWP 18872]
> > > > [New LWP 18873]
> > > > [New LWP 18874]
> > > > [New LWP 18875]
> > > > [New LWP 18876]
> > > > [New LWP 18877]
> > > > [New LWP 18878]
> > > > [New LWP 18879]
> > > > [New LWP 18880]
> > > > [New LWP 18881]
> > > > 
> > > > Thread 1 "crystal" received signal SIGINT, Interrupt.
> > > > __syscall_cp_c (nr=202, u=140737488349316, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > > > 61	./arch/x86_64/syscall_arch.h: No such file or directory.
> > > > 
> > > > Thread 16 (LWP 18881):
> > > > #0  __syscall_cp_c (nr=202, u=140737280690660, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > > > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff39f49e4) at src/thread/__timedwait.c:52
> > > > #2  __timedwait_cp (addr=addr@entry=0x7ffff39f49e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > > > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > > > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > > > #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> > > > #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> > > > #7  0x00007ffff7fb9df7 in start (p=0x7ffff3a04aa0) at src/thread/pthread_create.c:196
> > > > #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> > > > Backtrace stopped: frame did not save the PC
> > > > 
> > > > Thread 15 (LWP 18880):
> > > > #0  __syscall_cp_c (nr=202, u=140737280965092, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > > > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3a379e4) at src/thread/__timedwait.c:52
> > > > #2  __timedwait_cp (addr=addr@entry=0x7ffff3a379e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > > > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > > > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > > > #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> > > > #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> > > > #7  0x00007ffff7fb9df7 in start (p=0x7ffff3a47aa0) at src/thread/pthread_create.c:196
> > > > #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> > > > Backtrace stopped: frame did not save the PC
> > > > 
> > > > Thread 14 (LWP 18879):
> > > > #0  __syscall_cp_c (nr=202, u=140737281239524, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > > > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3a7a9e4) at src/thread/__timedwait.c:52
> > > > #2  __timedwait_cp (addr=addr@entry=0x7ffff3a7a9e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > > > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > > > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > > > #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> > > > #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> > > > #7  0x00007ffff7fb9df7 in start (p=0x7ffff3a8aaa0) at src/thread/pthread_create.c:196
> > > > #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> > > > Backtrace stopped: frame did not save the PC
> > > > 
> > > > Thread 13 (LWP 18878):
> > > > #0  __syscall_cp_c (nr=202, u=140737281513956, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > > > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3abd9e4) at src/thread/__timedwait.c:52
> > > > #2  __timedwait_cp (addr=addr@entry=0x7ffff3abd9e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > > > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > > > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > > > #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> > > > #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> > > > #7  0x00007ffff7fb9df7 in start (p=0x7ffff3acdaa0) at src/thread/pthread_create.c:196
> > > > #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> > > > Backtrace stopped: frame did not save the PC
> > > > 
> > > > Thread 12 (LWP 18877):
> > > > #0  __syscall_cp_c (nr=202, u=140737281788388, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > > > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3b009e4) at src/thread/__timedwait.c:52
> > > > #2  __timedwait_cp (addr=addr@entry=0x7ffff3b009e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > > > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > > > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > > > #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> > > > #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> > > > #7  0x00007ffff7fb9df7 in start (p=0x7ffff3b10aa0) at src/thread/pthread_create.c:196
> > > > #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> > > > Backtrace stopped: frame did not save the PC
> > > > 
> > > > Thread 11 (LWP 18876):
> > > > #0  __syscall_cp_c (nr=202, u=140737282062820, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > > > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3b439e4) at src/thread/__timedwait.c:52
> > > > #2  __timedwait_cp (addr=addr@entry=0x7ffff3b439e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > > > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > > > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > > > #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> > > > #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> > > > #7  0x00007ffff7fb9df7 in start (p=0x7ffff3b53aa0) at src/thread/pthread_create.c:196
> > > > #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> > > > Backtrace stopped: frame did not save the PC
> > > > 
> > > > Thread 10 (LWP 18875):
> > > > #0  __syscall_cp_c (nr=202, u=140737282337252, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > > > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3b869e4) at src/thread/__timedwait.c:52
> > > > #2  __timedwait_cp (addr=addr@entry=0x7ffff3b869e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > > > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > > > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > > > #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> > > > #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> > > > #7  0x00007ffff7fb9df7 in start (p=0x7ffff3b96aa0) at src/thread/pthread_create.c:196
> > > > #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> > > > Backtrace stopped: frame did not save the PC
> > > > 
> > > > Thread 9 (LWP 18874):
> > > > #0  __syscall_cp_c (nr=202, u=140737282611684, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > > > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3bc99e4) at src/thread/__timedwait.c:52
> > > > #2  __timedwait_cp (addr=addr@entry=0x7ffff3bc99e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > > > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > > > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > > > #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> > > > #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> > > > #7  0x00007ffff7fb9df7 in start (p=0x7ffff3bd9aa0) at src/thread/pthread_create.c:196
> > > > #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> > > > Backtrace stopped: frame did not save the PC
> > > > 
> > > > Thread 8 (LWP 18873):
> > > > #0  __syscall_cp_c (nr=202, u=140737282886020, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > > > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3c0c984) at src/thread/__timedwait.c:52
> > > > #2  __timedwait_cp (addr=addr@entry=0x7ffff3c0c984, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > > > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > > > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > > > #5  0x00007ffff40186ca in GC_mark_local () from /usr/lib/libgc.so.1
> > > > #6  0x00007ffff40188f4 in GC_help_marker () from /usr/lib/libgc.so.1
> > > > #7  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> > > > #8  0x00007ffff7fb9df7 in start (p=0x7ffff3c1caa0) at src/thread/pthread_create.c:196
> > > > #9  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> > > > Backtrace stopped: frame did not save the PC
> > > > 
> > > > Thread 7 (LWP 18872):
> > > > #0  __syscall_cp_c (nr=202, u=140737283160548, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > > > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3c4f9e4) at src/thread/__timedwait.c:52
> > > > #2  __timedwait_cp (addr=addr@entry=0x7ffff3c4f9e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > > > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > > > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > > > #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> > > > #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> > > > #7  0x00007ffff7fb9df7 in start (p=0x7ffff3c5faa0) at src/thread/pthread_create.c:196
> > > > #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> > > > Backtrace stopped: frame did not save the PC
> > > > 
> > > > Thread 6 (LWP 18871):
> > > > #0  __syscall_cp_c (nr=202, u=140737283434980, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > > > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3c929e4) at src/thread/__timedwait.c:52
> > > > #2  __timedwait_cp (addr=addr@entry=0x7ffff3c929e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > > > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > > > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > > > #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> > > > #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> > > > #7  0x00007ffff7fb9df7 in start (p=0x7ffff3ca2aa0) at src/thread/pthread_create.c:196
> > > > #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> > > > Backtrace stopped: frame did not save the PC
> > > > 
> > > > Thread 5 (LWP 18870):
> > > > #0  __syscall_cp_c (nr=202, u=140737283709412, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > > > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3cd59e4) at src/thread/__timedwait.c:52
> > > > #2  __timedwait_cp (addr=addr@entry=0x7ffff3cd59e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > > > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > > > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > > > #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> > > > #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> > > > #7  0x00007ffff7fb9df7 in start (p=0x7ffff3ce5aa0) at src/thread/pthread_create.c:196
> > > > #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> > > > Backtrace stopped: frame did not save the PC
> > > > 
> > > > Thread 4 (LWP 18869):
> > > > #0  __syscall_cp_c (nr=202, u=140737283983844, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > > > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3d189e4) at src/thread/__timedwait.c:52
> > > > #2  __timedwait_cp (addr=addr@entry=0x7ffff3d189e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > > > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > > > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > > > #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> > > > #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> > > > #7  0x00007ffff7fb9df7 in start (p=0x7ffff3d28aa0) at src/thread/pthread_create.c:196
> > > > #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> > > > Backtrace stopped: frame did not save the PC
> > > > 
> > > > Thread 3 (LWP 18868):
> > > > #0  __syscall_cp_c (nr=202, u=140737284258276, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > > > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3d5b9e4) at src/thread/__timedwait.c:52
> > > > #2  __timedwait_cp (addr=addr@entry=0x7ffff3d5b9e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > > > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > > > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > > > #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> > > > #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> > > > #7  0x00007ffff7fb9df7 in start (p=0x7ffff3d6baa0) at src/thread/pthread_create.c:196
> > > > #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> > > > Backtrace stopped: frame did not save the PC
> > > > 
> > > > Thread 2 (LWP 18867):
> > > > #0  __syscall_cp_c (nr=202, u=140737284532708, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > > > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7ffff3d9e9e4) at src/thread/__timedwait.c:52
> > > > #2  __timedwait_cp (addr=addr@entry=0x7ffff3d9e9e4, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > > > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > > > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > > > #5  0x00007ffff40188b9 in GC_help_marker () from /usr/lib/libgc.so.1
> > > > #6  0x00007ffff40206db in GC_mark_thread () from /usr/lib/libgc.so.1
> > > > #7  0x00007ffff7fb9df7 in start (p=0x7ffff3daeaa0) at src/thread/pthread_create.c:196
> > > > #8  0x00007ffff7fbbf93 in __clone () at src/thread/x86_64/clone.s:22
> > > > Backtrace stopped: frame did not save the PC
> > > > 
> > > > Thread 1 (LWP 18859):
> > > > #0  __syscall_cp_c (nr=202, u=140737488349316, v=128, w=2, x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
> > > > #1  0x00007ffff7fb8937 in __futex4_cp (to=0x0, val=2, op=128, addr=0x7fffffffe884) at src/thread/__timedwait.c:52
> > > > #2  __timedwait_cp (addr=addr@entry=0x7fffffffe884, val=val@entry=2, clk=clk@entry=0, at=at@entry=0x0, priv=128, priv@entry=1) at src/thread/__timedwait.c:52
> > > > #3  0x00007ffff7fb9740 in __pthread_cond_timedwait (c=0x7ffff403fba0, m=0x7ffff403f7a0, ts=0x0) at src/thread/pthread_cond_timedwait.c:100
> > > > #4  0x00007ffff40206fd in GC_wait_marker () from /usr/lib/libgc.so.1
> > > > #5  0x00007ffff4018852 in GC_do_parallel_mark () from /usr/lib/libgc.so.1
> > > > #6  0x00007ffff401937c in GC_mark_some () from /usr/lib/libgc.so.1
> > > > #7  0x00007ffff4010e5a in GC_stopped_mark () from /usr/lib/libgc.so.1
> > > > #8  0x00007ffff40114d4 in GC_try_to_collect_inner () from /usr/lib/libgc.so.1
> > > > #9  0x00007ffff401b47f in GC_init () from /usr/lib/libgc.so.1
> > > > #10 0x00005555555668d8 in init () at /home/mps/aports/community/crystal/src/crystal-0.35.1/src/gc/boehm.cr:127
> > > > #11 main () at /home/mps/aports/community/crystal/src/crystal-0.35.1/src/crystal/main.cr:35
> > > > #12 main () at /home/mps/aports/community/crystal/src/crystal-0.35.1/src/crystal/main.cr:114
> > > 
> > > OK, I'm looking at Boehm-GC's GC_wait_marker and this function is
> > > utterly bogus. It's using a condition variable without a predicate,
> > > and the corresponding broadcast operation runs without the lock held,
> > > so it's inherently subject both to spurious wakes and missed wakes.
> > > Here's the source:
> > > 
> > > https://github.com/ivmai/bdwgc/blob/57b97be07c514fcc4b608b13768fd2bf637a5899/pthread_support.c#L2374
> > > 
> > > This code is only used if built with PARALLEL_MARK defined, and
> > > obviously does not and cannot work, so I think the best solution is
> > > just figuring out how to turn off PARALLEL_MARK.
> > > 
> > > Do you have reason to believe the problem here has anything to do with
> > > the MT-fork patch? Was it maybe just hanging somewhere earlier without
> > > the patch, and now hanging on an independent bug?
> > 
> > It doesn't hang with musl 1.2.1, but it hangs on the master branch
> > with the MT-fork patch applied.
> >
> > I also tested with the w3m browser and I see the same behavior and
> > the same (similar) backtrace.
> >
> > The interesting thing is that it doesn't hang on every invocation of
> > programs linked with libgc; sometimes programs work, sometimes they
> > hang.
> 
> Rebuilding 'gc' with '--disable-parallel-mark' fixes the hangs.
> Not sure that is the proper fix, but for now it's good enough not to
> block the next Alpine release.

There was a regression in musl too, I think. With
27b2fc9d6db956359727a66c262f1e69995660aa you should be able to
re-enable parallel mark. If you get a chance to test, let us know if
it works for you.

Rich

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [musl] [PATCH v2] MT fork
  2020-10-30 18:57                       ` Rich Felker
@ 2020-10-30 21:31                         ` Ariadne Conill
  2020-10-31  3:31                           ` Rich Felker
  2020-10-31  7:22                           ` Timo Teras
  0 siblings, 2 replies; 42+ messages in thread
From: Ariadne Conill @ 2020-10-30 21:31 UTC (permalink / raw)
  To: Milan P. Stanić, musl; +Cc: musl, Rich Felker

Hello,

On Friday, October 30, 2020 12:57:17 PM MDT Rich Felker wrote:
> There was a regression in musl too, I think. With
> 27b2fc9d6db956359727a66c262f1e69995660aa you should be able to
> re-enable parallel mark. If you get a chance to test, let us know if
> it works for you.

I have pushed current musl git plus the MT fork patch to Alpine edge as Alpine 
musl 1.2.2_pre0, and reenabling parallel mark has worked fine.

It would be nice to have a musl 1.2.2 release that I can use for the source 
tarball instead of a git snapshot, but this will do for now.

Ariadne




^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [musl] [PATCH v2] MT fork
  2020-10-30 21:31                         ` Ariadne Conill
@ 2020-10-31  3:31                           ` Rich Felker
  2020-11-06  3:36                             ` Rich Felker
  2020-10-31  7:22                           ` Timo Teras
  1 sibling, 1 reply; 42+ messages in thread
From: Rich Felker @ 2020-10-31  3:31 UTC (permalink / raw)
  To: musl

On Fri, Oct 30, 2020 at 03:31:54PM -0600, Ariadne Conill wrote:
> Hello,
> 
> On Friday, October 30, 2020 12:57:17 PM MDT Rich Felker wrote:
> > There was a regression in musl too, I think. With
> > 27b2fc9d6db956359727a66c262f1e69995660aa you should be able to
> > re-enable parallel mark. If you get a chance to test, let us know if
> > it works for you.
> 
> I have pushed current musl git plus the MT fork patch to Alpine edge as Alpine 
> musl 1.2.2_pre0, and reenabling parallel mark has worked fine.
> 
> It would be nice to have a musl 1.2.2 release that I can use for the source 
> tarball instead of a git snapshot, but this will do for now.

Thanks for the feedback. I'll try to wrap up this release cycle pretty
quickly now, since I know this has been a big stress for distros, but
I want to make sure the MT-fork doesn't introduce other breakage.

One thing I know is potentially problematic is interaction with malloc
replacement -- locking of any of the subsystems locked at fork time
necessarily takes place after application atfork handlers, so if the
malloc replacement registers atfork handlers (as many do), it could
deadlock. I'm exploring whether malloc use in these systems can be
eliminated. A few are almost-surely better just using direct mmap
anyway, but for some it's borderline. I'll have a better idea sometime
in the next few days.

Rich

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [musl] [PATCH v2] MT fork
  2020-10-30 21:31                         ` Ariadne Conill
  2020-10-31  3:31                           ` Rich Felker
@ 2020-10-31  7:22                           ` Timo Teras
  2020-10-31 13:29                             ` Szabolcs Nagy
  2020-10-31 14:47                             ` Rich Felker
  1 sibling, 2 replies; 42+ messages in thread
From: Timo Teras @ 2020-10-31  7:22 UTC (permalink / raw)
  To: Ariadne Conill; +Cc: musl, Milan P. Stanić, Rich Felker

Hi

On Fri, 30 Oct 2020 15:31:54 -0600
Ariadne Conill <ariadne@dereferenced.org> wrote:

> On Friday, October 30, 2020 12:57:17 PM MDT Rich Felker wrote:
> > There was a regression in musl too, I think. With
> > 27b2fc9d6db956359727a66c262f1e69995660aa you should be able to
> > re-enable parallel mark. If you get a chance to test, let us know if
> > it works for you.  
> 
> I have pushed current musl git plus the MT fork patch to Alpine edge
> as Alpine musl 1.2.2_pre0, and reenabling parallel mark has worked
> fine.
> 
> It would be nice to have a musl 1.2.2 release that I can use for the
> source tarball instead of a git snapshot, but this will do for now.

And now firefox is utterly broken, though it seems unrelated to the MT
fork patch.

Bisected it down to commit b8b729bd22c28c9116c2fce65dce207a35299c26
"fix missing O_LARGEFILE values on x86_64, x32, and mips64"

I think this breaks the seccomp filter because e.g. fopen() calls now
have this bit set in the syscall's flags argument, and the filter
rejects it.

Wondering whether to fix the firefox seccomp filter to ignore this bit,
or whether this commit needs reconsideration?
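
For reference, masking the bit out in the filter would look something
like this with libseccomp (a minimal sketch, not the actual firefox
sandbox code; install_filter, the openat/O_RDONLY rule, and the blanket
EPERM default are just illustrative, and a real filter would allow far
more):

#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <seccomp.h>	/* libseccomp; link with -lseccomp */

/* allow openat(..., O_RDONLY) whether or not libc sets O_LARGEFILE,
 * by masking that bit off the flags argument before comparing */
int install_filter(void)
{
	scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_ERRNO(EPERM));
	if (!ctx) return -1;
	if (seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(openat), 1,
	        SCMP_A2(SCMP_CMP_MASKED_EQ,
	                ~(scmp_datum_t)O_LARGEFILE, O_RDONLY)) < 0)
		goto fail;
	if (seccomp_load(ctx) < 0) goto fail;
	seccomp_release(ctx);
	return 0;
fail:
	seccomp_release(ctx);
	return -1;
}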

Timo

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [musl] [PATCH v2] MT fork
  2020-10-31  7:22                           ` Timo Teras
@ 2020-10-31 13:29                             ` Szabolcs Nagy
  2020-10-31 13:35                               ` Timo Teras
  2020-10-31 14:47                             ` Rich Felker
  1 sibling, 1 reply; 42+ messages in thread
From: Szabolcs Nagy @ 2020-10-31 13:29 UTC (permalink / raw)
  To: Timo Teras; +Cc: Ariadne Conill, musl, Milan P. Stanić, Rich Felker

* Timo Teras <timo.teras@iki.fi> [2020-10-31 09:22:04 +0200]:

> Hi
> 
> On Fri, 30 Oct 2020 15:31:54 -0600
> Ariadne Conill <ariadne@dereferenced.org> wrote:
> 
> > On Friday, October 30, 2020 12:57:17 PM MDT Rich Felker wrote:
> > > There was a regression in musl too, I think. With
> > > 27b2fc9d6db956359727a66c262f1e69995660aa you should be able to
> > > re-enable parallel mark. If you get a chance to test, let us know if
> > > it works for you.  
> > 
> > I have pushed current musl git plus the MT fork patch to Alpine edge
> > as Alpine musl 1.2.2_pre0, and reenabling parallel mark has worked
> > fine.
> > 
> > It would be nice to have a musl 1.2.2 release that I can use for the
> > source tarball instead of a git snapshot, but this will do for now.
> 
> And now firefox is utterly broken, though it seems unrelated to the MT
> fork patch.
> 
> Bisected it down to commit b8b729bd22c28c9116c2fce65dce207a35299c26
> "fix missing O_LARGEFILE values on x86_64, x32, and mips64"
> 
> I think this breaks the seccomp filter because e.g. fopen() calls now
> have this bit set in the syscall's flags argument, and the filter
> rejects it.
> 
> Wondering whether to fix the firefox seccomp filter to ignore this bit,
> or whether this commit needs reconsideration?

please report it to firefox while we work out what's best.

this is something they should be aware of.

> 
> Timo

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [musl] [PATCH v2] MT fork
  2020-10-31 13:29                             ` Szabolcs Nagy
@ 2020-10-31 13:35                               ` Timo Teras
  2020-10-31 14:41                                 ` Ariadne Conill
  2020-10-31 14:48                                 ` Rich Felker
  0 siblings, 2 replies; 42+ messages in thread
From: Timo Teras @ 2020-10-31 13:35 UTC (permalink / raw)
  To: Szabolcs Nagy; +Cc: Ariadne Conill, musl, Milan P. Stanić, Rich Felker

On Sat, 31 Oct 2020 14:29:32 +0100
Szabolcs Nagy <nsz@port70.net> wrote:

> * Timo Teras <timo.teras@iki.fi> [2020-10-31 09:22:04 +0200]:
> 
> > On Fri, 30 Oct 2020 15:31:54 -0600
> > Ariadne Conill <ariadne@dereferenced.org> wrote:
> >   
> > > I have pushed current musl git plus the MT fork patch to Alpine
> > > edge as Alpine musl 1.2.2_pre0, and reenabling parallel mark has
> > > worked fine.
> > > 
> > > It would be nice to have a musl 1.2.2 release that I can use for
> > > the source tarball instead of a git snapshot, but this will do
> > > for now.  
> > 
> > And now firefox is utterly broken, though it seems unrelated to the
> > MT fork patch.
> > 
> > Bisected it down to commit b8b729bd22c28c9116c2fce65dce207a35299c26
> > "fix missing O_LARGEFILE values on x86_64, x32, and mips64"
> > 
> > I think this breaks the seccomp filter because e.g. fopen() calls
> > now have this bit set in the syscall's flags argument, and the
> > filter rejects it.
> > 
> > Wondering whether to fix the firefox seccomp filter to ignore this
> > bit, or whether this commit needs reconsideration?
> 
> please report it to firefox while we work out what's best.
> 
> this is something they should be aware of.

Turns out rebuilding firefox was enough. They allow O_LARGEFILE there,
but when built against an earlier musl version the bit was zero...

So basically the change requires rebuilding certain applications. Even
if the bit makes no big difference from the kernel side, it does from
the seccomp side.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [musl] [PATCH v2] MT fork
  2020-10-31 13:35                               ` Timo Teras
@ 2020-10-31 14:41                                 ` Ariadne Conill
  2020-10-31 14:49                                   ` Rich Felker
  2020-10-31 14:48                                 ` Rich Felker
  1 sibling, 1 reply; 42+ messages in thread
From: Ariadne Conill @ 2020-10-31 14:41 UTC (permalink / raw)
  To: Szabolcs Nagy, Timo Teras; +Cc: musl, Milan P. Stanić, Rich Felker

Hello,

On Saturday, October 31, 2020 7:35:57 AM MDT Timo Teras wrote:
> On Sat, 31 Oct 2020 14:29:32 +0100
> 
> Szabolcs Nagy <nsz@port70.net> wrote:
> > * Timo Teras <timo.teras@iki.fi> [2020-10-31 09:22:04 +0200]:
> > > On Fri, 30 Oct 2020 15:31:54 -0600
> > > 
> > > Ariadne Conill <ariadne@dereferenced.org> wrote:
> > > > I have pushed current musl git plus the MT fork patch to Alpine
> > > > edge as Alpine musl 1.2.2_pre0, and reenabling parallel mark has
> > > > worked fine.
> > > > 
> > > > It would be nice to have a musl 1.2.2 release that I can use for
> > > > the source tarball instead of a git snapshot, but this will do
> > > > for now.
> > > 
> > > And now firefox is utterly broken, though it seems unrelated to
> > > the MT fork patch.
> > > 
> > > Bisected it down to commit b8b729bd22c28c9116c2fce65dce207a35299c26
> > > "fix missing O_LARGEFILE values on x86_64, x32, and mips64"
> > > 
> > > I think this breaks the seccomp filter because e.g. fopen() calls
> > > now have this bit set in the syscall's flags argument, and the
> > > filter rejects it.
> > > 
> > > Wondering whether to fix the firefox seccomp filter to ignore this
> > > bit, or whether this commit needs reconsideration?
> > 
> > please report it to firefox while we work out what's best.
> > 
> > this is something they should be aware of.
> 
> Turns out rebuilding firefox was enough. They allow O_LARGEFILE there,
> but when built against an earlier musl version the bit was zero...
> 
> So basically the change requires rebuilding certain applications. Even
> if the bit makes no big difference from the kernel side, it does from
> the seccomp side.

There is additional fallout from the new terminal window size ioctls as well.  
But fixing the seccomp policy there seems straightforward, and I think firefox 
82.0-r2 will have the fix.

Ariadne



^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [musl] [PATCH v2] MT fork
  2020-10-31  7:22                           ` Timo Teras
  2020-10-31 13:29                             ` Szabolcs Nagy
@ 2020-10-31 14:47                             ` Rich Felker
  1 sibling, 0 replies; 42+ messages in thread
From: Rich Felker @ 2020-10-31 14:47 UTC (permalink / raw)
  To: musl

On Sat, Oct 31, 2020 at 09:22:04AM +0200, Timo Teras wrote:
> Hi
> 
> On Fri, 30 Oct 2020 15:31:54 -0600
> Ariadne Conill <ariadne@dereferenced.org> wrote:
> 
> > On Friday, October 30, 2020 12:57:17 PM MDT Rich Felker wrote:
> > > There was a regression in musl too, I think. With
> > > 27b2fc9d6db956359727a66c262f1e69995660aa you should be able to
> > > re-enable parallel mark. If you get a chance to test, let us know if
> > > it works for you.  
> > 
> > I have pushed current musl git plus the MT fork patch to Alpine edge
> > as Alpine musl 1.2.2_pre0, and reenabling parallel mark has worked
> > fine.
> > 
> > It would be nice to have a musl 1.2.2 release that I can use for the
> > source tarball instead of a git snapshot, but this will do for now.
> 
> And now firefox is utterly broken, though it seems unrelated to the MT
> fork patch.
> 
> Bisected it down to commit b8b729bd22c28c9116c2fce65dce207a35299c26
> "fix missing O_LARGEFILE values on x86_64, x32, and mips64"
> 
> I think this breaks the seccomp filter because e.g. fopen() calls now
> have this bit set in the syscall's flags argument, and the filter
> rejects it.
> 
> Wondering whether to fix the firefox seccomp filter to ignore this bit,
> or whether this commit needs reconsideration?

Firefox needs to be fixed. A seccomp filter error is *always* the
filter being wrong.

Further, there should be a real audit of these filters (I'm willing to
do it myself if someone can dig up the code I need to look at) to
prevent this kind of thing preemptively. Any reasonable review would
have shown this code was wrong if blocking the bit was specific to
x86_64 and a few other archs, and if it's not specific, all the other
archs were already broken. I'm pretty sure we'll find a lot of other
issues that need to be fixed, some of them probably breaking
less-popular archs right now.

Rich

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [musl] [PATCH v2] MT fork
  2020-10-31 13:35                               ` Timo Teras
  2020-10-31 14:41                                 ` Ariadne Conill
@ 2020-10-31 14:48                                 ` Rich Felker
  1 sibling, 0 replies; 42+ messages in thread
From: Rich Felker @ 2020-10-31 14:48 UTC (permalink / raw)
  To: musl

On Sat, Oct 31, 2020 at 03:35:57PM +0200, Timo Teras wrote:
> On Sat, 31 Oct 2020 14:29:32 +0100
> Szabolcs Nagy <nsz@port70.net> wrote:
> 
> > * Timo Teras <timo.teras@iki.fi> [2020-10-31 09:22:04 +0200]:
> > 
> > > On Fri, 30 Oct 2020 15:31:54 -0600
> > > Ariadne Conill <ariadne@dereferenced.org> wrote:
> > >   
> > > > I have pushed current musl git plus the MT fork patch to Alpine
> > > > edge as Alpine musl 1.2.2_pre0, and reenabling parallel mark has
> > > > worked fine.
> > > > 
> > > > It would be nice to have a musl 1.2.2 release that I can use for
> > > > the source tarball instead of a git snapshot, but this will do
> > > > for now.  
> > > 
> > > And now firefox is utterly broken, though it seems unrelated to
> > > the MT fork patch.
> > > 
> > > Bisected it down to commit b8b729bd22c28c9116c2fce65dce207a35299c26
> > > "fix missing O_LARGEFILE values on x86_64, x32, and mips64"
> > > 
> > > I think this breaks the seccomp filter because e.g. fopen() calls
> > > now have this bit set in the syscall's flags argument, and the
> > > filter rejects it.
> > > 
> > > Wondering whether to fix the firefox seccomp filter to ignore this
> > > bit, or whether this commit needs reconsideration?
> > 
> > please report it to firefox while we work out what's best.
> > 
> > this is something they should be aware of.
> 
> Turns out rebuilding firefox was enough. They allow O_LARGEFILE there,
> but when built against an earlier musl version the bit was zero...
> 
> So basically the change requires rebuilding certain applications. Even
> if the bit makes no big difference from the kernel side, it does from
> the seccomp side.

It does make a difference from the kernel side. If a file is opened
without it, passing the fd to 32-bit processes is unsafe.

Rich

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [musl] [PATCH v2] MT fork
  2020-10-31 14:41                                 ` Ariadne Conill
@ 2020-10-31 14:49                                   ` Rich Felker
  0 siblings, 0 replies; 42+ messages in thread
From: Rich Felker @ 2020-10-31 14:49 UTC (permalink / raw)
  To: musl

On Sat, Oct 31, 2020 at 08:41:40AM -0600, Ariadne Conill wrote:
> Hello,
> 
> On Saturday, October 31, 2020 7:35:57 AM MDT Timo Teras wrote:
> > On Sat, 31 Oct 2020 14:29:32 +0100
> > 
> > Szabolcs Nagy <nsz@port70.net> wrote:
> > > * Timo Teras <timo.teras@iki.fi> [2020-10-31 09:22:04 +0200]:
> > > > On Fri, 30 Oct 2020 15:31:54 -0600
> > > > 
> > > > Ariadne Conill <ariadne@dereferenced.org> wrote:
> > > > > I have pushed current musl git plus the MT fork patch to Alpine
> > > > > edge as Alpine musl 1.2.2_pre0, and reenabling parallel mark has
> > > > > worked fine.
> > > > > 
> > > > > It would be nice to have a musl 1.2.2 release that I can use for
> > > > > the source tarball instead of a git snapshot, but this will do
> > > > > for now.
> > > > 
> > > > And now firefox is utterly broken, though it seems unrelated to
> > > > the MT fork patch.
> > > > 
> > > > Bisected it down to commit b8b729bd22c28c9116c2fce65dce207a35299c26
> > > > "fix missing O_LARGEFILE values on x86_64, x32, and mips64"
> > > > 
> > > > I think this breaks the seccomp filter because e.g. fopen() calls
> > > > now have this bit set in the syscall's flags argument, and the
> > > > filter rejects it.
> > > > 
> > > > Wondering whether to fix the firefox seccomp filter to ignore
> > > > this bit, or whether this commit needs reconsideration?
> > > 
> > > please report it to firefox while we work out what's best.
> > > 
> > > this is something they should be aware of.
> > 
> > Turns out rebuilding firefox was enough. They allow O_LARGEFILE
> > there, but when built against an earlier musl version the bit was
> > zero...
> > 
> > So basically the change requires rebuilding certain applications.
> > Even if the bit makes no big difference from the kernel side, it
> > does from the seccomp side.
> 
> There is additional fallout from the new terminal window size ioctls as well.  
> But fixing the seccomp policy there seems straightforward, and I think firefox 
> 82.0-r2 will have the fix.

What new terminal window size ioctls? There are new functions for the
POSIX-future interfaces, but no new use of the ioctl that wasn't
already there in existing code paths. And this was already a problem
in firefox.

Rich

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [musl] [PATCH v2] MT fork
  2020-10-31  3:31                           ` Rich Felker
@ 2020-11-06  3:36                             ` Rich Felker
  2020-11-08 16:12                               ` Szabolcs Nagy
  0 siblings, 1 reply; 42+ messages in thread
From: Rich Felker @ 2020-11-06  3:36 UTC (permalink / raw)
  To: musl

On Fri, Oct 30, 2020 at 11:31:17PM -0400, Rich Felker wrote:
> On Fri, Oct 30, 2020 at 03:31:54PM -0600, Ariadne Conill wrote:
> > Hello,
> > 
> > On Friday, October 30, 2020 12:57:17 PM MDT Rich Felker wrote:
> > > There was a regression in musl too, I think. With
> > > 27b2fc9d6db956359727a66c262f1e69995660aa you should be able to
> > > re-enable parallel mark. If you get a chance to test, let us know if
> > > it works for you.
> > 
> > I have pushed current musl git plus the MT fork patch to Alpine edge as Alpine 
> > musl 1.2.2_pre0, and reenabling parallel mark has worked fine.
> > 
> > It would be nice to have a musl 1.2.2 release that I can use for the source 
> > tarball instead of a git snapshot, but this will do for now.
> 
> Thanks for the feedback. I'll try to wrap up this release cycle pretty
> quickly now, since I know this has been a big stress for distros, but
> I want to make sure the MT-fork doesn't introduce other breakage.
> 
> One thing I know is potentially problematic is interaction with malloc
> replacement -- locking of any of the subsystems locked at fork time
> necessarily takes place after application atfork handlers, so if the
> malloc replacement registers atfork handlers (as many do), it could
> deadlock. I'm exploring whether malloc use in these systems can be
> eliminated. A few are almost-surely better just using direct mmap
> anyway, but for some it's borderline. I'll have a better idea sometime
> in the next few days.

OK, here's a summary of the affected locks (where there's a lock order
conflict between them and application-replaced malloc):

- atexit: uses calloc to allocate more handler slots if the builtin 32
  are exhausted. Could reasonably be changed to just mmap a whole page
  of slots in this case.

- dlerror: the lock is just for a queue of buffers to be freed on
  future calls, since they can't be freed at thread exit time because
  the calling context (thread that's "already exited") is not valid to
  call application code, and malloc might be replaced. one plausible
  solution here is getting rid of the free queue hack (and thus the
  lock) entirely and instead calling libc's malloc/free via dlsym
  rather than using the potentially-replaced symbol. but this would
  not work for static linking (same dlerror is used even though dlopen
  always fails; in future it may work) so it's probably not a good
  approach. mmap is really not a good option here because it's
  excessive mem usage. It's probably possible to just repeatedly
  unlock/relock around performing each free so that only one lock is
  held at once.

- gettext: bindtextdomain calls calloc while holding the lock on list
  of bindings. It could drop the lock, allocate, retake it, recheck
  for an existing binding, and free in that case, but this is
  undesirable because it introduces a dependency on free in
  static-linked programs. Otherwise all memory gettext allocates is
  permanent. Because of this we could just mmap an area and bump
  allocate it, but that's wasteful because most programs will only use
  one tiny binding. We could also just leak on the rare possibility of
  concurrent binding allocations; the number of such leaks is bounded
  by nthreads*ndomains, and we could make it just nthreads by keeping
  and reusing abandoned ones.

- sem_open: a one-time calloc of global semtab takes place with the
  lock held. On 32-bit archs this table is exactly 4k; on 64-bit it's
  6k. So it seems very reasonable to just mmap instead of calloc.

- timezone: The tz core allocates memory to remember the last-seen
  value of TZ env var to react if it changes. Normally it's small, so
  perhaps we could just use a small (e.g. 32 byte) static buffer and
  replace it with a whole mmapped page if a value too large for that
  is seen.

Also, somehow I failed to find one of the important locks MT-fork
needs to be taking: locale_map.c has a lock for the records of mapped
locales. Allocation also takes place with it held, and for the same
reason as gettext it really shouldn't be changed to allocate
differently. It could possibly do the allocation without the lock held
though and leak it (or save it for reuse later if needed) when another
thread races to load the locale.
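
In pseudocode form, the leak-on-race idea looks something like this (a
sketch with made-up names, and a pthread mutex standing in for the
libc-internal lock just so the example is self-contained):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct map { struct map *next; char name[32]; };
static struct map *maps, *spares;
static pthread_mutex_t maplock = PTHREAD_MUTEX_INITIALIZER;

static struct map *lookup(const char *name)
{
	struct map *p;
	for (p = maps; p; p = p->next)
		if (!strcmp(p->name, name)) return p;
	return 0;
}

struct map *get_map(const char *name)
{
	/* allocate before taking the lock, so a replaced malloc can
	 * never run while a libc-internal lock is held */
	struct map *new = malloc(sizeof *new);
	struct map *p;
	pthread_mutex_lock(&maplock);
	p = lookup(name);
	if (!p && new) {
		/* we won the race: publish our allocation */
		snprintf(new->name, sizeof new->name, "%s", name);
		new->next = maps;
		p = maps = new;
		new = 0;
	} else if (new) {
		/* lost the race (or entry already existed): stash the
		 * block for reuse instead of calling free, bounding the
		 * leak at one block per racing thread */
		new->next = spares;
		spares = new;
	}
	pthread_mutex_unlock(&maplock);
	return p;
}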

So anyway, it looks like there's some nontrivial work to be done here
in order to make the MT-fork not be a regression for replaced-malloc
usage... :( Ideas for the above, and for keeping the solutions
non-invasive, will be very much welcome!

Rich

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [musl] [PATCH v2] MT fork
  2020-11-06  3:36                             ` Rich Felker
@ 2020-11-08 16:12                               ` Szabolcs Nagy
  2020-11-09 17:07                                 ` Rich Felker
  0 siblings, 1 reply; 42+ messages in thread
From: Szabolcs Nagy @ 2020-11-08 16:12 UTC (permalink / raw)
  To: Rich Felker; +Cc: musl

* Rich Felker <dalias@libc.org> [2020-11-05 22:36:17 -0500]:
> On Fri, Oct 30, 2020 at 11:31:17PM -0400, Rich Felker wrote:
> > One thing I know is potentially problematic is interaction with malloc
> > replacement -- locking of any of the subsystems locked at fork time
> > necessarily takes place after application atfork handlers, so if the
> > malloc replacement registers atfork handlers (as many do), it could
> > deadlock. I'm exploring whether malloc use in these systems can be
> > eliminated. A few are almost-surely better just using direct mmap
> > anyway, but for some it's borderline. I'll have a better idea sometime
> > in the next few days.
> 
> OK, here's a summary of the affected locks (where there's a lock order
> conflict between them and application-replaced malloc):

if malloc replacements take internal locks in atfork
handlers then it's not just libc internal locks that
can cause problems but locks taken by other atfork
handlers that were registered before the malloc one.

i don't think there is a clean solution to this. i
think using mmap is ugly (unless there is some reason
to prefer that other than the atfork issue).

> 
> - atexit: uses calloc to allocate more handler slots if the builtin 32
>   are exhausted. Could reasonably be changed to just mmap a whole page
>   of slots in this case.
> 
> - dlerror: the lock is just for a queue of buffers to be freed on
>   future calls, since they can't be freed at thread exit time because
>   the calling context (thread that's "already exited") is not valid to
>   call application code, and malloc might be replaced. one plausible
>   solution here is getting rid of the free queue hack (and thus the
>   lock) entirely and instead calling libc's malloc/free via dlsym
>   rather than using the potentially-replaced symbol. but this would
>   not work for static linking (same dlerror is used even though dlopen
>   always fails; in future it may work) so it's probably not a good
>   approach. mmap is really not a good option here because it's
>   excessive mem usage. It's probably possible to just repeatedly
>   unlock/relock around performing each free so that only one lock is
>   held at once.
> 
> - gettext: bindtextdomain calls calloc while holding the lock on list
>   of bindings. It could drop the lock, allocate, retake it, recheck
>   for an existing binding, and free in that case, but this is
>   undesirable because it introduces a dependency on free in
>   static-linked programs. Otherwise all memory gettext allocates is
>   permanent. Because of this we could just mmap an area and bump
>   allocate it, but that's wasteful because most programs will only use
>   one tiny binding. We could also just leak on the rare possibility of
>   concurrent binding allocations; the number of such leaks is bounded
>   by nthreads*ndomains, and we could make it just nthreads by keeping
>   and reusing abandoned ones.
> 
> - sem_open: a one-time calloc of global semtab takes place with the
>   lock held. On 32-bit archs this table is exactly 4k; on 64-bit it's
>   6k. So it seems very reasonable to just mmap instead of calloc.
> 
> - timezone: The tz core allocates memory to remember the last-seen
>   value of TZ env var to react if it changes. Normally it's small, so
>   perhaps we could just use a small (e.g. 32 byte) static buffer and
>   replace it with a whole mmapped page if a value too large for that
>   is seen.
> 
> Also, somehow I failed to find one of the important locks MT-fork
> needs to be taking: locale_map.c has a lock for the records of mapped
> locales. Allocation also takes place with it held, and for the same
> reason as gettext it really shouldn't be changed to allocate
> differently. It could possibly do the allocation without the lock held
> though and leak it (or save it for reuse later if needed) when another
> thread races to load the locale.

yeah this sounds problematic.

if malloc interposers want to do something around fork
then libc may need to expose some better api than atfork.

> 
> So anyway, it looks like there's some nontrivial work to be done here
> in order to make the MT-fork not be a regression for replaced-malloc
> usage... :( Ideas for the above, and for keeping the solutions
> non-invasive, will be very much welcome!
> 
> Rich

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [musl] [PATCH v2] MT fork
  2020-11-08 16:12                               ` Szabolcs Nagy
@ 2020-11-09 17:07                                 ` Rich Felker
  2020-11-09 18:01                                   ` Érico Nogueira
  2020-11-09 21:59                                   ` Szabolcs Nagy
  0 siblings, 2 replies; 42+ messages in thread
From: Rich Felker @ 2020-11-09 17:07 UTC (permalink / raw)
  To: musl

On Sun, Nov 08, 2020 at 05:12:15PM +0100, Szabolcs Nagy wrote:
> * Rich Felker <dalias@libc.org> [2020-11-05 22:36:17 -0500]:
> > On Fri, Oct 30, 2020 at 11:31:17PM -0400, Rich Felker wrote:
> > > One thing I know is potentially problematic is interaction with malloc
> > > replacement -- locking of any of the subsystems locked at fork time
> > > necessarily takes place after application atfork handlers, so if the
> > > malloc replacement registers atfork handlers (as many do), it could
> > > deadlock. I'm exploring whether malloc use in these systems can be
> > > eliminated. A few are almost-surely better just using direct mmap
> > > anyway, but for some it's borderline. I'll have a better idea sometime
> > > in the next few days.
> > 
> > OK, here's a summary of the affected locks (where there's a lock order
> > conflict between them and application-replaced malloc):
> 
> if malloc replacements take internal locks in atfork
> handlers then it's not just libc internal locks that
> can cause problems but locks taken by other atfork
> handlers that were registered before the malloc one.

No other locks taken there could impede forward progress in libc. The
only reason malloc is special here is because we allowed the
application to redefine a family of functions used by libc. For
MT-fork with malloc inside libc, we do the malloc_atfork code last so
that the lock isn't held while other libc components' locks are being
taken. But we can't start taking libc-internal locks until the
application atfork handlers run, or the application could see a
deadlocked libc state (e.g. if something in the atfork handlers used
time functions, maybe to log time of fork, or gettext functions, maybe
to print a localized message, etc.).

> i don't think there is a clean solution to this. i
> think using mmap is ugly (unless there is some reason
> to prefer that other than the atfork issue).

When the size is exactly a 4k page, it's preferable in terms of memory
usage to use mmap instead of malloc, except on wacky archs with large
pages (which I don't think it makes sense to memory-usage-optimize
for; they're inherently and unfixably memory-inefficient).

But for the most part, yes, it's ugly and I don't want to make such a
change.

> > - atexit: uses calloc to allocate more handler slots if the builtin 32
> >   are exhausted. Could reasonably be changed to just mmap a whole page
> >   of slots in this case.

Not sure on this. Since it's only used in the extremely rare case
where a huge number of atexit handlers are registered, it's probably
nicer to use mmap anyway -- it avoids linking a malloc that will
almost surely never be used by atexit.
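
A sketch of what that could look like (the struct layout mirrors musl's
atexit.c; grow_handler_list is a made-up helper, called with the atexit
lock held when all slots in the current block are used):

#include <sys/mman.h>

#define COUNT 32
struct fl {
	struct fl *next;
	void (*f[COUNT])(void *);
	void *a[COUNT];
};

/* mmap rounds the request up to a whole page of slots, and the
 * memory comes back zero-filled exactly as calloc's would */
static struct fl *grow_handler_list(struct fl *head)
{
	struct fl *new_fl = mmap(0, sizeof(struct fl),
		PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
	if (new_fl == MAP_FAILED) return 0;
	new_fl->next = head;
	return new_fl;
}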

> > - dlerror: the lock is just for a queue of buffers to be freed on
> >   future calls, since they can't be freed at thread exit time because
> >   the calling context (thread that's "already exited") is not valid to
> >   call application code, and malloc might be replaced. one plausible
> >   solution here is getting rid of the free queue hack (and thus the
> >   lock) entirely and instead calling libc's malloc/free via dlsym
> >   rather than using the potentially-replaced symbol. but this would
> >   not work for static linking (same dlerror is used even though dlopen
> >   always fails; in future it may work) so it's probably not a good
> >   approach. mmap is really not a good option here because it's
> >   excessive mem usage. It's probably possible to just repeatedly
> >   unlock/relock around performing each free so that only one lock is
> >   held at once.

I think the repeated unlock/relock works fine here, but it would also
work to just null out the list in the child (i.e. never free any
buffers that were queued to be freed in the parent before fork). Fork
of MT process is inherently leaky anyway (you can never free thread
data from the other parent threads). I'm not sure which approach is
nicer. The repeated unlock/relock is less logically invasive I think.
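
Roughly, the unlock/relock variant would look like this (a sketch only;
the names approximate musl's dlerror.c, and a pthread mutex stands in
for the internal lock to keep it self-contained):

#include <pthread.h>
#include <stdlib.h>

/* stale buffers are queued linked through their first word */
static void *freebuf_queue;
static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

static void flush_freebuf_queue(void)
{
	for (;;) {
		pthread_mutex_lock(&queue_lock);
		void *p = freebuf_queue;
		if (p) freebuf_queue = *(void **)p;
		pthread_mutex_unlock(&queue_lock);
		if (!p) break;
		/* the possibly-replaced free() runs with no
		 * libc-internal lock held */
		free(p);
	}
}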

> > - gettext: bindtextdomain calls calloc while holding the lock on list
> >   of bindings. It could drop the lock, allocate, retake it, recheck
> >   for an existing binding, and free in that case, but this is
> >   undesirable because it introduces a dependency on free in
> >   static-linked programs. Otherwise all memory gettext allocates is
> >   permanent. Because of this we could just mmap an area and bump
> >   allocate it, but that's wasteful because most programs will only use
> >   one tiny binding. We could also just leak on the rare possibility of
> >   concurrent binding allocations; the number of such leaks is bounded
> >   by nthreads*ndomains, and we could make it just nthreads by keeping
> >   and reusing abandoned ones.

Thoughts here?

> > - sem_open: a one-time calloc of global semtab takes place with the
> >   lock held. On 32-bit archs this table is exactly 4k; on 64-bit it's
> >   6k. So it seems very reasonable to just mmap instead of calloc.

This should almost surely just be changed to mmap. With mallocng, an
early malloc(4k) will take over 5k, and this table is permanent so the
excess will never be released. mmap avoids that entirely.
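
A sketch of the change (the entry struct mirrors musl's sem_open.c;
init_semtab is a made-up helper, and an anonymous mapping is zero-filled
just like calloc's result, so it is a drop-in replacement here):

#include <limits.h>
#include <semaphore.h>
#include <sys/mman.h>
#include <sys/types.h>

static struct semtab_entry {
	ino_t ino;
	sem_t *sem;
	int refcnt;
} *semtab;

/* called once, with the sem_open lock held */
static int init_semtab(void)
{
	if (semtab) return 0;
	void *p = mmap(0, SEM_NSEMS_MAX * sizeof *semtab,
		PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) return -1;
	semtab = p;
	return 0;
}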

> > - timezone: The tz core allocates memory to remember the last-seen
> >   value of TZ env var to react if it changes. Normally it's small, so
> >   perhaps we could just use a small (e.g. 32 byte) static buffer and
> >   replace it with a whole mmapped page if a value too large for that
> >   is seen.
> > 
> > Also, somehow I failed to find one of the important locks MT-fork
> > needs to be taking: locale_map.c has a lock for the records of mapped
> > locales. Allocation also takes place with it held, and for the same
> > reason as gettext it really shouldn't be changed to allocate
> > differently. It could possibly do the allocation without the lock held
> > though and leak it (or save it for reuse later if needed) when another
> > thread races to load the locale.
> 
> yeah this sounds problematic.
> 
> if malloc interposers want to do something around fork
> then libc may need to expose some better api than atfork.

Most of them use existing pthread_atfork if they're intended to be
usable with MT-fork. I don't think inventing a new musl-specific API
is a solution here. Their pthread_atfork approach already fully works
with glibc because glibc *doesn't* make anything but malloc work in
the MT-forked child. All the other subsystems mentioned here will
deadlock or blow up in the child with glibc, but only with 0.01%
probability since it's rare for them to be under hammering at fork
time.

One solution you might actually like: getting rid of
application-provided-malloc use inside libc. This could be achieved by
making malloc a thin wrapper for __libc_malloc or whatever, which
could be called by everything in libc that doesn't actually have a
contract to return "as-if-by-malloc" memory. Only a few functions like
getdelim would be left still calling malloc.
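
In outline (a sketch of the idea only, not a finished design; musl might
instead use weak aliases or some similar mechanism):

#include <stddef.h>

/* libc-internal allocator entry points; never interposable */
void *__libc_malloc(size_t);
void __libc_free(void *);

/* the public, replaceable symbols become thin wrappers; libc code
 * without an "as-if-by-malloc" contract calls __libc_malloc and
 * __libc_free directly and never sees an application definition */
void *malloc(size_t n)
{
	return __libc_malloc(n);
}

void free(void *p)
{
	__libc_free(p);
}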

The other pros of such an approach are stuff like making it so
application code doesn't get called as a callback from messy contexts
inside libc, e.g. with dynamic linker in inconsistent state. The major
con I see is that it precludes omitting the libc malloc entirely when
static linking, assuming you link any part of libc that uses malloc
internally. However, almost all such places only call malloc, not
free, so you'd just get the trivial bump allocator gratuitously
linked, rather than full mallocng or oldmalloc, except for dlerror
which shouldn't come up in static linked programs anyway.

Rich

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [musl] [PATCH v2] MT fork
  2020-11-09 17:07                                 ` Rich Felker
@ 2020-11-09 18:01                                   ` Érico Nogueira
  2020-11-09 18:44                                     ` Rich Felker
  2020-11-09 21:59                                   ` Szabolcs Nagy
  1 sibling, 1 reply; 42+ messages in thread
From: Érico Nogueira @ 2020-11-09 18:01 UTC (permalink / raw)
  To: musl, musl

On Mon Nov 9, 2020 at 9:07 AM -03, Rich Felker wrote:
> On Sun, Nov 08, 2020 at 05:12:15PM +0100, Szabolcs Nagy wrote:
> > * Rich Felker <dalias@libc.org> [2020-11-05 22:36:17 -0500]:
> > > On Fri, Oct 30, 2020 at 11:31:17PM -0400, Rich Felker wrote:
> > > > One thing I know is potentially problematic is interaction with malloc
> > > > replacement -- locking of any of the subsystems locked at fork time
> > > > necessarily takes place after application atfork handlers, so if the
> > > > malloc replacement registers atfork handlers (as many do), it could
> > > > deadlock. I'm exploring whether malloc use in these systems can be
> > > > eliminated. A few are almost-surely better just using direct mmap
> > > > anyway, but for some it's borderline. I'll have a better idea sometime
> > > > in the next few days.
> > > 
> > > OK, here's a summary of the affected locks (where there's a lock order
> > > conflict between them and application-replaced malloc):
> > 
> > if malloc replacements take internal locks in atfork
> > handlers then it's not just libc internal locks that
> > can cause problems but locks taken by other atfork
> > handlers that were registered before the malloc one.
>
> No other locks taken there could impede forward progress in libc. The
> only reason malloc is special here is because we allowed the
> application to redefine a family of functions used by libc. For
> MT-fork with malloc inside libc, we do the malloc_atfork code last so
> that the lock isn't held while other libc components' locks are being
> taken. But we can't start taking libc-internal locks until the
> application atfork handlers run, or the application could see a
> deadlocked libc state (e.g. if something in the atfork handlers used
> time functions, maybe to log time of fork, or gettext functions, maybe
> to print a localized message, etc.).
>
> > i don't think there is a clean solution to this. i
> > think using mmap is ugly (unless there is some reason
> > to prefer that other than the atfork issue).
>
> When the size is exactly a 4k page, it's preferable in terms of memory
> usage to use mmap instead of malloc, except on wacky archs with large
> pages (which I don't think it makes sense to memory-usage-optimize
> for; they're inherently and unfixably memory-inefficient).
>
> But for the most part, yes, it's ugly and I don't want to make such a
> change.
>
> > > - atexit: uses calloc to allocate more handler slots if the builtin 32
> > >   are exhausted. Could reasonably be changed to just mmap a whole page
> > >   of slots in this case.
>
> Not sure on this. Since it's only used in the extremely rare case
> where a huge number of atexit handlers are registered, it's probably
> nicer to use mmap anyway -- it avoids linking a malloc that will
> almost surely never be used by atexit.
>
> > > - dlerror: the lock is just for a queue of buffers to be freed on
> > >   future calls, since they can't be freed at thread exit time because
> > >   the calling context (thread that's "already exited") is not valid to
> > >   call application code, and malloc might be replaced. one plausible
> > >   solution here is getting rid of the free queue hack (and thus the
> > >   lock) entirely and instead calling libc's malloc/free via dlsym
> > >   rather than using the potentially-replaced symbol. but this would
> > >   not work for static linking (same dlerror is used even though dlopen
> > >   always fails; in future it may work) so it's probably not a good
> > >   approach. mmap is really not a good option here because it's
> > >   excessive mem usage. It's probably possible to just repeatedly
> > >   unlock/relock around performing each free so that only one lock is
> > >   held at once.
>
> I think the repeated unlock/relock works fine here, but it would also
> work to just null out the list in the child (i.e. never free any
> buffers that were queued to be freed in the parent before fork). Fork
> of MT process is inherently leaky anyway (you can never free thread
> data from the other parent threads). I'm not sure which approach is
> nicer. The repeated unlock/relock is less logically invasive I think.
>
> > > - gettext: bindtextdomain calls calloc while holding the lock on list
> > >   of bindings. It could drop the lock, allocate, retake it, recheck
> > >   for an existing binding, and free in that case, but this is
> > >   undesirable because it introduces a dependency on free in
> > >   static-linked programs. Otherwise all memory gettext allocates is
> > >   permanent. Because of this we could just mmap an area and bump
> > >   allocate it, but that's wasteful because most programs will only use
> > >   one tiny binding. We could also just leak on the rare possibility of
> > >   concurrent binding allocations; the number of such leaks is bounded
> > >   by nthreads*ndomains, and we could make it just nthreads by keeping
> > >   and reusing abandoned ones.
>
> Thoughts here?
>
> > > - sem_open: a one-time calloc of global semtab takes place with the
> > >   lock held. On 32-bit archs this table is exactly 4k; on 64-bit it's
> > >   6k. So it seems very reasonable to just mmap instead of calloc.
>
> This should almost surely just be changed to mmap. With mallocng, an
> early malloc(4k) will take over 5k, and this table is permanent so the
> excess will never be released. mmap avoids that entirely.
>
> > > - timezone: The tz core allocates memory to remember the last-seen
> > >   value of TZ env var to react if it changes. Normally it's small, so
> > >   perhaps we could just use a small (e.g. 32 byte) static buffer and
> > >   replace it with a whole mmapped page if a value too large for that
> > >   is seen.
> > > 
> > > Also, somehow I failed to find one of the important locks MT-fork
> > > needs to be taking: locale_map.c has a lock for the records of mapped
> > > locales. Allocation also takes place with it held, and for the same
> > > reason as gettext it really shouldn't be changed to allocate
> > > differently. It could possibly do the allocation without the lock held
> > > though and leak it (or save it for reuse later if needed) when another
> > > thread races to load the locale.
> > 
> > yeah this sounds problematic.
> > 
> > if malloc interposers want to do something around fork
> > then libc may need to expose some better api than atfork.
>
> Most of them use existing pthread_atfork if they're intended to be
> usable with MT-fork. I don't think inventing a new musl-specific API
> is a solution here. Their pthread_atfork approach already fully works
> with glibc because glibc *doesn't* make anything but malloc work in
> the MT-forked child. All the other subsystems mentioned here will
> deadlock or blow up in the child with glibc, but only with 0.01%
> probability since it's rare for them to be under hammering at fork
> time.
>
> One solution you might actually like: getting rid of
> application-provided-malloc use inside libc. This could be achieved by
> making malloc a thin wrapper for __libc_malloc or whatever, which
> could be called by everything in libc that doesn't actually have a
> contract to return "as-if-by-malloc" memory. Only a few functions like
> getdelim would be left still calling malloc.

This code block in glob() uses strdup(), which I'd assume would have to
use the application-provided malloc. Wouldn't that have to be worked
around somehow?

	if (*pat) {
		char *p = strdup(pat);
		if (!p) return GLOB_NOSPACE;
		buf[0] = 0;
		size_t pos = 0;
		char *s = p;
		if ((flags & (GLOB_TILDE | GLOB_TILDE_CHECK)) && *p == '~')
			error = expand_tilde(&s, buf, &pos);
		if (!error)
			error = do_glob(buf, pos, 0, s, flags, errfunc, &tail);
		free(p);
	}

>
> The other pros of such an approach are stuff like making it so
> application code doesn't get called as a callback from messy contexts
> inside libc, e.g. with dynamic linker in inconsistent state. The major
> con I see is that it precludes omitting the libc malloc entirely when
> static linking, assuming you link any part of libc that uses malloc
> internally. However, almost all such places only call malloc, not
> free, so you'd just get the trivial bump allocator gratuitously
> linked, rather than full mallocng or oldmalloc, except for dlerror
> which shouldn't come up in static linked programs anyway.
>
> Rich


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [musl] [PATCH v2] MT fork
  2020-11-09 18:01                                   ` Érico Nogueira
@ 2020-11-09 18:44                                     ` Rich Felker
  2020-11-09 18:54                                       ` Érico Nogueira
  0 siblings, 1 reply; 42+ messages in thread
From: Rich Felker @ 2020-11-09 18:44 UTC (permalink / raw)
  To: musl

On Mon, Nov 09, 2020 at 03:01:24PM -0300, Érico Nogueira wrote:
> On Mon Nov 9, 2020 at 9:07 AM -03, Rich Felker wrote:
> > On Sun, Nov 08, 2020 at 05:12:15PM +0100, Szabolcs Nagy wrote:
> > > * Rich Felker <dalias@libc.org> [2020-11-05 22:36:17 -0500]:
> > > > On Fri, Oct 30, 2020 at 11:31:17PM -0400, Rich Felker wrote:
> > > > > One thing I know is potentially problematic is interaction with malloc
> > > > > replacement -- locking of any of the subsystems locked at fork time
> > > > > necessarily takes place after application atfork handlers, so if the
> > > > > malloc replacement registers atfork handlers (as many do), it could
> > > > > deadlock. I'm exploring whether malloc use in these systems can be
> > > > > eliminated. A few are almost-surely better just using direct mmap
> > > > > anyway, but for some it's borderline. I'll have a better idea sometime
> > > > > in the next few days.
> > > > 
> > > > OK, here's a summary of the affected locks (where there's a lock order
> > > > conflict between them and application-replaced malloc):
> > > 
> > > if malloc replacements take internal locks in atfork
> > > handlers then it's not just libc internal locks that
> > > can cause problems but locks taken by other atfork
> > > handlers that were registered before the malloc one.
> >
> > No other locks taken there could impede forward progress in libc. The
> > only reason malloc is special here is because we allowed the
> > application to redefine a family of functions used by libc. For
> > MT-fork with malloc inside libc, we do the malloc_atfork code last so
> > that the lock isn't held while other libc components' locks are being
> > taken. But we can't start taking libc-internal locks until the
> > application atfork handlers run, or the application could see a
> > deadlocked libc state (e.g. if something in the atfork handlers used
> > time functions, maybe to log time of fork, or gettext functions, maybe
> > to print a localized message, etc.).
> >
> > > i don't think there is a clean solution to this. i
> > > think using mmap is ugly (unless there is some reason
> > > to prefer that other than the atfork issue).
> >
> > When the size is exactly a 4k page, it's preferable in terms of memory
> > usage to use mmap instead of malloc, except on wacky archs with large
> > pages (which I don't think it makes sense to memory-usage-optimize
> > for; they're inherently and unfixably memory-inefficient).
> >
> > But for the most part, yes, it's ugly and I don't want to make such a
> > change.
> >
> > > > - atexit: uses calloc to allocate more handler slots if the builtin 32
> > > >   are exhausted. Could reasonably be changed to just mmap a whole page
> > > >   of slots in this case.
> >
> > Not sure on this. Since it's only used in the extremely rare case
> > where a huge number of atexit handlers are registered, it's probably
> > nicer to use mmap anyway -- it avoids linking a malloc that will
> > almost surely never be used by atexit.
> >
> > > > - dlerror: the lock is just for a queue of buffers to be freed on
> > > >   future calls, since they can't be freed at thread exit time because
> > > >   the calling context (thread that's "already exited") is not valid to
> > > >   call application code, and malloc might be replaced. one plausible
> > > >   solution here is getting rid of the free queue hack (and thus the
> > > >   lock) entirely and instead calling libc's malloc/free via dlsym
> > > >   rather than using the potentially-replaced symbol. but this would
> > > >   not work for static linking (same dlerror is used even though dlopen
> > > >   always fails; in future it may work) so it's probably not a good
> > > >   approach. mmap is really not a good option here because it's
> > > >   excessive mem usage. It's probably possible to just repeatedly
> > > >   unlock/relock around performing each free so that only one lock is
> > > >   held at once.
> >
> > I think the repeated unlock/relock works fine here, but it would also
> > work to just null out the list in the child (i.e. never free any
> > buffers that were queued to be freed in the parent before fork). Fork
> > of MT process is inherently leaky anyway (you can never free thread
> > data from the other parent threads). I'm not sure which approach is
> > nicer. The repeated unlock/relock is less logically invasive I think.
> >
> > > > - gettext: bindtextdomain calls calloc while holding the lock on list
> > > >   of bindings. It could drop the lock, allocate, retake it, recheck
> > > >   for an existing binding, and free in that case, but this is
> > > >   undesirable because it introduces a dependency on free in
> > > >   static-linked programs. Otherwise all memory gettext allocates is
> > > >   permanent. Because of this we could just mmap an area and bump
> > > >   allocate it, but that's wasteful because most programs will only use
> > > >   one tiny binding. We could also just leak on the rare possibility of
> > > >   concurrent binding allocations; the number of such leaks is bounded
> > > >   by nthreads*ndomains, and we could make it just nthreads by keeping
> > > >   and reusing abandoned ones.
> >
> > Thoughts here?
> >
> > > > - sem_open: a one-time calloc of global semtab takes place with the
> > > >   lock held. On 32-bit archs this table is exactly 4k; on 64-bit it's
> > > >   6k. So it seems very reasonable to just mmap instead of calloc.
> >
> > This should almost surely just be changed to mmap. With mallocng, an
> > early malloc(4k) will take over 5k, and this table is permanent so the
> > excess will never be released. mmap avoids that entirely.
> >
> > > > - timezone: The tz core allocates memory to remember the last-seen
> > > >   value of TZ env var to react if it changes. Normally it's small, so
> > > >   perhaps we could just use a small (e.g. 32 byte) static buffer and
> > > >   replace it with a whole mmapped page if a value too large for that
> > > >   is seen.
> > > > 
> > > > Also, somehow I failed to find one of the important locks MT-fork
> > > > needs to be taking: locale_map.c has a lock for the records of mapped
> > > > locales. Allocation also takes place with it held, and for the same
> > > > reason as gettext it really shouldn't be changed to allocate
> > > > differently. It could possibly do the allocation without the lock held
> > > > though and leak it (or save it for reuse later if needed) when another
> > > > thread races to load the locale.
> > > 
> > > yeah this sounds problematic.
> > > 
> > > if malloc interposers want to do something around fork
> > > then libc may need to expose some better api than atfork.
> >
> > Most of them use existing pthread_atfork if they're intended to be
> > usable with MT-fork. I don't think inventing a new musl-specific API
> > is a solution here. Their pthread_atfork approach already fully works
> > with glibc because glibc *doesn't* make anything but malloc work in
> > the MT-forked child. All the other subsystems mentioned here will
> > deadlock or blow up in the child with glibc, but only with 0.01%
> > probability since it's rare for them to be under hammering at fork
> > time.
> >
> > One solution you might actually like: getting rid of
> > application-provided-malloc use inside libc. This could be achieved by
> > making malloc a thin wrapper for __libc_malloc or whatever, which
> > could be called by everything in libc that doesn't actually have a
> > contract to return "as-if-by-malloc" memory. Only a few functions like
> > getdelim would be left still calling malloc.
> 
> This code block in glob() uses strdup(), which I'd assume would have to
> > use the application-provided malloc. Wouldn't that have to be worked
> around somehow?
> 
> 	if (*pat) {
> 		char *p = strdup(pat);
> 		if (!p) return GLOB_NOSPACE;
> 		buf[0] = 0;
> 		size_t pos = 0;
> 		char *s = p;
> 		if ((flags & (GLOB_TILDE | GLOB_TILDE_CHECK)) && *p == '~')
> 			error = expand_tilde(&s, buf, &pos);
> 		if (!error)
> 			error = do_glob(buf, pos, 0, s, flags, errfunc, &tail);
> 		free(p);
> 	}

It could either be left using public malloc (imo fine since this is
not an "internal component of libc" but generic library code with no
tie-in to libc) or use of strdup could be replaced with a trivial
alternate version that uses __libc_malloc instead. My leaning would be
towards the former -- only using libc malloc in places where calling
the application-provided malloc could lead to recursive locking of
libc-internal locks (because the caller already holds a libc-internal
lock) or other "inconsistent state" issues (like dlerror buffers at
pthread_exit time).
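
The alternate version would be trivial, something like this (a sketch;
internal_strdup is a made-up name, and a non-interposable __libc_malloc
as discussed above is assumed):

#include <string.h>

void *__libc_malloc(size_t);

/* like strdup, but never calls an application-provided malloc */
static char *internal_strdup(const char *s)
{
	size_t l = strlen(s) + 1;
	char *p = __libc_malloc(l);
	return p ? memcpy(p, s, l) : 0;
}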

Rich

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [musl] [PATCH v2] MT fork
  2020-11-09 18:44                                     ` Rich Felker
@ 2020-11-09 18:54                                       ` Érico Nogueira
  0 siblings, 0 replies; 42+ messages in thread
From: Érico Nogueira @ 2020-11-09 18:54 UTC (permalink / raw)
  To: musl, musl

On Mon Nov 9, 2020 at 10:44 AM -03, Rich Felker wrote:
> On Mon, Nov 09, 2020 at 03:01:24PM -0300, Érico Nogueira wrote:
> > On Mon Nov 9, 2020 at 9:07 AM -03, Rich Felker wrote:
> > > One solution you might actually like: getting rid of
> > > application-provided-malloc use inside libc. This could be achieved by
> > > making malloc a thin wrapper for __libc_malloc or whatever, which
> > > could be called by everything in libc that doesn't actually have a
> > > contract to return "as-if-by-malloc" memory. Only a few functions like
> > > getdelim would be left still calling malloc.
> > 
> > This code block in glob() uses strdup(), which I'd assume would have to
> > use the application provided malloc. Wouldn't that have to be worked
> > around somehow?
> > 
> > 	if (*pat) {
> > 		char *p = strdup(pat);
> > 		if (!p) return GLOB_NOSPACE;
> > 		buf[0] = 0;
> > 		size_t pos = 0;
> > 		char *s = p;
> > 		if ((flags & (GLOB_TILDE | GLOB_TILDE_CHECK)) && *p == '~')
> > 			error = expand_tilde(&s, buf, &pos);
> > 		if (!error)
> > 			error = do_glob(buf, pos, 0, s, flags, errfunc, &tail);
> > 		free(p);
> > 	}
>
> It could either be left using public malloc (imo fine since this is
> not an "internal component of libc" but generic library code with no
> tie-in to libc) or use of strdup could be replaced with a trivial
> alternate version that uses __libc_malloc instead. My leaning would be
> towards the former -- only using libc malloc in places where calling
> the application-provided malloc could lead to recursive locking of
> libc-internal locks (because the caller already holds a libc-internal
> lock) or other "inconsistent state" issues (like dlerror buffers at
> pthread_exit time).

Ok, I think I hadn't understood the problem space completely. Given
this explanation, I would agree that allowing this to use an
application-provided malloc is fine, and might even be desirable, since
it would perform as many allocations as possible with the provided
allocator.

That said, could this somehow hurt malloc tracking inside libc, be it
with valgrind or sanitizers?

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [musl] [PATCH v2] MT fork
  2020-11-09 17:07                                 ` Rich Felker
  2020-11-09 18:01                                   ` Érico Nogueira
@ 2020-11-09 21:59                                   ` Szabolcs Nagy
  2020-11-09 22:23                                     ` Rich Felker
  1 sibling, 1 reply; 42+ messages in thread
From: Szabolcs Nagy @ 2020-11-09 21:59 UTC (permalink / raw)
  To: Rich Felker; +Cc: musl

* Rich Felker <dalias@libc.org> [2020-11-09 12:07:36 -0500]:
> On Sun, Nov 08, 2020 at 05:12:15PM +0100, Szabolcs Nagy wrote:
> > * Rich Felker <dalias@libc.org> [2020-11-05 22:36:17 -0500]:
> > > On Fri, Oct 30, 2020 at 11:31:17PM -0400, Rich Felker wrote:
> > > > One thing I know is potentially problematic is interaction with malloc
> > > > replacement -- locking of any of the subsystems locked at fork time
> > > > necessarily takes place after application atfork handlers, so if the
> > > > malloc replacement registers atfork handlers (as many do), it could
> > > > deadlock. I'm exploring whether malloc use in these systems can be
> > > > eliminated. A few are almost-surely better just using direct mmap
> > > > anyway, but for some it's borderline. I'll have a better idea sometime
> > > > in the next few days.
> > > 
> > > OK, here's a summary of the affected locks (where there's a lock order
> > > conflict between them and application-replaced malloc):
> > 
> > if malloc replacements take internal locks in atfork
> > handlers then it's not just libc internal locks that
> > can cause problems but locks taken by other atfork
> > handlers that were registered before the malloc one.
> 
> No other locks taken there could impede forward process in libc. The

the problem is not in libc: if malloc locks are
taken while user code can still run (as it can in
an atfork handler), then there's a potential
deadlock between the malloc lock and any other
user lock.

i.e. it's simply not valid for a malloc implementation
to register an atfork handler that takes locks, since it
cannot guarantee lock ordering. (the only reason this
works in practice is that atfork handlers are rare)
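
a concrete (hypothetical) illustration: suppose liba registers its
atfork handler before the malloc replacement does. posix runs prepare
handlers in reverse order of registration, so:

#include <pthread.h>

static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER; /* liba's lock */
static pthread_mutex_t M = PTHREAD_MUTEX_INITIALIZER; /* malloc's lock */

static void liba_prepare(void)   { pthread_mutex_lock(&A); }
static void malloc_prepare(void) { pthread_mutex_lock(&M); }

/* registration order: liba first, malloc replacement second
 * (parent/child unlock handlers omitted for brevity) */
void liba_init(void)   { pthread_atfork(liba_prepare, 0, 0); }
void malloc_init(void) { pthread_atfork(malloc_prepare, 0, 0); }

/* fork() now locks M before A, while a thread inside liba's API that
 * holds A and calls malloc() locks A before M: an ABBA deadlock that
 * neither library can rule out on its own. */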

> only reason malloc is special here is because we allowed the
> application to redefine a family of functions used by libc. For
> MT-fork with malloc inside libc, we do the malloc_atfork code last so
> that the lock isn't held while other libc components' locks are being
> taken. But we can't start taking libc-internal locks until the
> application atfork handlers run, or the application could see a
> deadlocked libc state (e.g. if something in the atfork handlers used
> time functions, maybe to log time of fork, or gettext functions, maybe
> to print a localized message, etc.).

yes i understood this.

> > if malloc interposers want to do something around fork
> > then libc may need to expose some better api than atfork.
> 
> Most of them use existing pthread_atfork if they're intended to be
> usable with MT-fork. I don't think inventing a new musl-specific API
> is a solution here. Their pthread_atfork approach already fully works
> with glibc because glibc *doesn't* make anything but malloc work in
> the MT-forked child. All the other subsystems mentioned here will
> deadlock or blow up in the child with glibc, but only with 0.01%
> probability since it's rare for them to be under hammering at fork
> time.

yes, the libc-internal locks are a problem, but
other atfork handlers are a problem too, and that
part cannot be fixed.

we can make the atfork ordering an application
responsibility, but that feels a bit sketchy (locks
can be library internals an application does not
know about), so i think solving the more general
problem of locking around fork needs some new api;
pthread_atfork is not good for that.

> One solution you might actually like: getting rid of
> application-provided-malloc use inside libc. This could be achieved by
> making malloc a thin wrapper for __libc_malloc or whatever, which
> could be called by everything in libc that doesn't actually have a
> contract to return "as-if-by-malloc" memory. Only a few functions like
> getdelim would be left still calling malloc.

if none of the as-if-by-malloc allocations are
behind libc internal locks then this sounds good.

(not ideal, since then interposers can't see all
allocations, which some tools would like to see,
but at least correct and robust. and it is annoying
that we have to do all this extra work just because
of mt-fork)

> 
> The other pros of such an approach are stuff like making it so
> application code doesn't get called as a callback from messy contexts
> inside libc, e.g. with dynamic linker in inconsistent state. The major
> con I see is that it precludes omitting the libc malloc entirely when
> static linking, assuming you link any part of libc that uses malloc
> internally. However, almost all such places only call malloc, not
> free, so you'd just get the trivial bump allocator gratuitously
> linked, rather than full mallocng or oldmalloc, except for dlerror
> which shouldn't come up in static linked programs anyway.

i see.
that sounds fine to me.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [musl] [PATCH v2] MT fork
  2020-11-09 21:59                                   ` Szabolcs Nagy
@ 2020-11-09 22:23                                     ` Rich Felker
  2020-11-11  0:52                                       ` Rich Felker
  0 siblings, 1 reply; 42+ messages in thread
From: Rich Felker @ 2020-11-09 22:23 UTC (permalink / raw)
  To: musl

On Mon, Nov 09, 2020 at 10:59:01PM +0100, Szabolcs Nagy wrote:
> * Rich Felker <dalias@libc.org> [2020-11-09 12:07:36 -0500]:
> > On Sun, Nov 08, 2020 at 05:12:15PM +0100, Szabolcs Nagy wrote:
> > > * Rich Felker <dalias@libc.org> [2020-11-05 22:36:17 -0500]:
> > > > On Fri, Oct 30, 2020 at 11:31:17PM -0400, Rich Felker wrote:
> > > > > One thing I know is potentially problematic is interaction with malloc
> > > > > replacement -- locking of any of the subsystems locked at fork time
> > > > > necessarily takes place after application atfork handlers, so if the
> > > > > malloc replacement registers atfork handlers (as many do), it could
> > > > > deadlock. I'm exploring whether malloc use in these systems can be
> > > > > eliminated. A few are almost-surely better just using direct mmap
> > > > > anyway, but for some it's borderline. I'll have a better idea sometime
> > > > > in the next few days.
> > > > 
> > > > OK, here's a summary of the affected locks (where there's a lock order
> > > > conflict between them and application-replaced malloc):
> > > 
> > > if malloc replacements take internal locks in atfork
> > > handlers then it's not just libc internal locks that
> > > can cause problems but locks taken by other atfork
> > > handlers that were registered before the malloc one.
> > 
> > No other locks taken there could impede forward progress in libc. The
> 
> the problem is not in the libc: if malloc locks are
> taken and user code can run (which it can in an atfork
> handler) then that's a problem between the malloc
> lock and another user lock.
> 
> i.e. it's simply not valid for a malloc implementation
> to register an atfork handler to take locks since it
> cannot guarantee lock ordering. (the only reason it
> works in practice is that atfork handlers are rare)

The only constraint for it to work is that the malloc replacement's
atfork handler is registered first. This is reasonable for an
application that's replacing malloc to guarantee (e.g. by not using
atfork handlers otherwise, or by knowing which ones it uses and taking
care with the order they're registered).
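
For illustration, a sketch of how a replacement might arrange that
(hypothetical names; it relies on prepare handlers running in
reverse registration order, so the handler registered first takes
its lock last, after any application locks):

#include <pthread.h>

static pthread_mutex_t my_malloc_lock = PTHREAD_MUTEX_INITIALIZER;

static void prep(void)   { pthread_mutex_lock(&my_malloc_lock); }
static void parent(void) { pthread_mutex_unlock(&my_malloc_lock); }
static void child(void)  { pthread_mutex_unlock(&my_malloc_lock); }

/* runs at startup, before the application can call pthread_atfork */
__attribute__((constructor))
static void register_malloc_atfork(void)
{
	pthread_atfork(prep, parent, child);
}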

> > > if malloc interposers want to do something around fork
> > > then libc may need to expose some better api than atfork.
> > 
> > Most of them use existing pthread_atfork if they're intended to be
> > usable with MT-fork. I don't think inventing a new musl-specific API
> > is a solution here. Their pthread_atfork approach already fully works
> > with glibc because glibc *doesn't* make anything but malloc work in
> > the MT-forked child. All the other subsystems mentioned here will
> > deadlock or blow up in the child with glibc, but only with 0.01%
> > probability since it's rare for them to be under hammering at fork
> > time.
> 
> yes the libc internal locks are a problem, but
> other atfork handlers are a problem too, and one
> that cannot be fixed.
> 
> we can make the atfork ordering an application
> responsibility but it feels a bit sketchy (locks
> can be library internals an application does not
> know about), so i think solving the more general
> problem of locking around fork needs some new api;
> pthread_atfork is not good for that.

pthread_atfork is highly flawed, but I don't think this is anywhere
near the biggest flaw with it. Either the application takes
responsibility for order, or it just ensures that there is no
interaction between the components locked. (If malloc replacement is
one of the things using atfork, that implies that none of the other
atfork handlers take locks that could be held while malloc is called.)

But the issue at hand is not the interaction of multiple atfork
handlers by the application, which is already delicate and something
we have no control over. The issue at hand is avoiding a regression
whereby applications that are replacing malloc, and using an atfork
handler to make that "MT-fork-safe", now hang in the parent on
MT-fork, due to fork (not atfork handlers, fork itself) taking locks
on libc subsystems that use malloc. I don't think this is an
appropriate regression to introduce for the sake of making
badly-behaved programs work.

> > One solution you might actually like: getting rid of
> > application-provided-malloc use inside libc. This could be achieved by
> > making malloc a thin wrapper for __libc_malloc or whatever, which
> > could be called by everything in libc that doesn't actually have a
> > contract to return "as-if-by-malloc" memory. Only a few functions like
> > getdelim would be left still calling malloc.
> 
> if none of the as-if-by-malloc allocations are
> behind libc internal locks then this sounds good.

I think the only as-if-by-malloc ones are str[n]dup, getline/getdelim,
scanf family, open_[w]memstream, and none of these could reasonably
ever hold libc locks (they're not operating on global state). But
there are a lot of other things that could stick with public malloc
use too. glob was mentioned already, and the same applies to anything
else that's "library code" vs a libc singleton.

> (not ideal, since then interposers can't see all
> allocations, which some tools would like to see,
> but at least correct and robust. and it is annoying
> that we have to do all this extra work just because
> of mt-fork)

Yes. On the other hand if this were done more rigorously it would fix
the valgrind breakage of malloc during ldso startup..

> > The other pros of such an approach are stuff like making it so
> > application code doesn't get called as a callback from messy contexts
> > inside libc, e.g. with dynamic linker in inconsistent state. The major
> > con I see is that it precludes omitting the libc malloc entirely when
> > static linking, assuming you link any part of libc that uses malloc
> > internally. However, almost all such places only call malloc, not
> > free, so you'd just get the trivial bump allocator gratuitously
> > linked, rather than full mallocng or oldmalloc, except for dlerror
> > which shouldn't come up in static linked programs anyway.
> 
> i see.
> that sounds fine to me.

I'm still not sure it's fine, so I appreciate your input and anyone
else's who has spent some time thinking about this.

Rich

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [musl] [PATCH v2] MT fork
  2020-11-09 22:23                                     ` Rich Felker
@ 2020-11-11  0:52                                       ` Rich Felker
  2020-11-11  6:35                                         ` Alexey Izbyshev
  2020-11-11 16:35                                         ` Rich Felker
  0 siblings, 2 replies; 42+ messages in thread
From: Rich Felker @ 2020-11-11  0:52 UTC (permalink / raw)
  To: musl

[-- Attachment #1: Type: text/plain, Size: 2546 bytes --]

On Mon, Nov 09, 2020 at 05:23:21PM -0500, Rich Felker wrote:
> On Mon, Nov 09, 2020 at 10:59:01PM +0100, Szabolcs Nagy wrote:
> > (not ideal, since then interposers can't see all
> > allocations, which some tools would like to see,
> > but at least correct and robust. and it is annoying
> > that we have to do all this extra work just because
> > of mt-fork)
> 
> Yes. On the other hand if this were done more rigorously it would fix
> the valgrind breakage of malloc during ldso startup..
> 
> > > The other pros of such an approach are stuff like making it so
> > > application code doesn't get called as a callback from messy contexts
> > > inside libc, e.g. with dynamic linker in inconsistent state. The major
> > > con I see is that it precludes omitting the libc malloc entirely when
> > > static linking, assuming you link any part of libc that uses malloc
> > > internally. However, almost all such places only call malloc, not
> > > free, so you'd just get the trivial bump allocator gratuitously
> > > linked, rather than full mallocng or oldmalloc, except for dlerror
> > > which shouldn't come up in static linked programs anyway.
> > 
> > i see.
> > that sounds fine to me.
> 
> I'm still not sure it's fine, so I appreciate your input and anyone
> else's who has spent some time thinking about this.

Here's a proposed first patch in series, getting rid of getdelim/stdio
usage in ldso. I think that suffices to set the stage for adding
__libc_malloc, __libc_free, __libc_calloc, __libc_realloc and having
ldso use them.

To make this work, I think malloc needs to actually be a separate
function wrapping __libc_malloc -- this is because __simple_malloc
precludes the malloc symbol itself being weak. That's a very slight
runtime cost, but has the benefit of eliminating the awful hack of
relying on link order to get __simple_malloc (bump allocator) to be
chosen over full malloc. Now, the source file containing the bump
allocator can define malloc as a call to __libc_malloc, and provide
the bump allocator as the weak definition of __libc_malloc.
mallocng/malloc.c would then provide the strong definition of
__libc_malloc.

For the other functions, I think __libc_* can theoretically just be
aliases for the public symbols, but this may (almost surely does)
break valgrind, and it's messy to do at the source level, so perhaps
they should be wrapped too. This should entirely prevent valgrind from
interposing the libc-internal calls, thereby fixing the longstanding
bug where it crashes by interposing them too early.
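
Roughly, for each such function (stub logic only, not an actual
patch):

/* internal entry point: libc-internal callers reference this
 * name directly, so an interposer never sees those calls */
void __libc_free(void *p)
{
	/* real deallocation logic would live here */
	(void)p;
}

/* the public, interposable symbol is just a thin wrapper */
void free(void *p)
{
	__libc_free(p);
}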

Rich

[-- Attachment #2: 0001-drop-use-of-getdelim-stdio-in-dynamic-linker.patch --]
[-- Type: text/plain, Size: 3123 bytes --]

From b24186676cbc87c0e2c3e0fa672ef73ea55b600e Mon Sep 17 00:00:00 2001
From: Rich Felker <dalias@aerifal.cx>
Date: Tue, 10 Nov 2020 19:32:09 -0500
Subject: [PATCH] drop use of getdelim/stdio in dynamic linker

the only place stdio was used here was for reading the ldso path file,
taking advantage of getdelim to automatically allocate and resize the
buffer. the motivation for use here was that, with shared libraries,
stdio is already available anyway and free to use. this has long been
a nuisance to users because getdelim's use of realloc here triggered a
valgrind bug, but removing it doesn't really fix that; on some archs
even calling the valgrind-interposed malloc at this point will crash.

the actual motivation for this change is moving towards getting rid of
use of application-provided malloc in parts of libc where it would be
called with libc-internal locks held, leading to the possibility of
deadlock if the malloc implementation doesn't follow unwritten rules
about which libc functions are safe for it to call. since getdelim is
required to produce a pointer as if by malloc (i.e. that can be passed
to realloc or free), it necessarily must use the public malloc.

instead of performing a realloc loop as the path file is read, first
query its size with fstat and allocate only once. this produces
slightly different truncation behavior when racing with writes to a
file, but neither behavior is or could be made safe anyway; on a live
system, ldso path files should be replaced by atomic rename only. the
change should also reduce memory waste.
---
 ldso/dynlink.c | 27 ++++++++++++++++++++++-----
 1 file changed, 22 insertions(+), 5 deletions(-)

diff --git a/ldso/dynlink.c b/ldso/dynlink.c
index f9ac0100..d736bf12 100644
--- a/ldso/dynlink.c
+++ b/ldso/dynlink.c
@@ -1,6 +1,5 @@
 #define _GNU_SOURCE
 #define SYSCALL_NO_TLS 1
-#include <stdio.h>
 #include <stdlib.h>
 #include <stdarg.h>
 #include <stddef.h>
@@ -556,6 +555,20 @@ static void reclaim_gaps(struct dso *dso)
 	}
 }
 
+static ssize_t read_loop(int fd, void *p, size_t n)
+{
+	unsigned char *b = p;
+	for (size_t l, i=0; i<n; i+=l) {
+		l = read(fd, b+i, n-i);
+		if (l<0) {
+			if (errno==EINTR) continue;
+			else return -1;
+		}
+		if (l==0) return i;
+	}
+	return n;
+}
+
 static void *mmap_fixed(void *p, size_t n, int prot, int flags, int fd, off_t off)
 {
 	static int no_map_fixed;
@@ -1060,13 +1073,17 @@ static struct dso *load_library(const char *name, struct dso *needed_by)
 				snprintf(etc_ldso_path, sizeof etc_ldso_path,
 					"%.*s/etc/ld-musl-" LDSO_ARCH ".path",
 					(int)prefix_len, prefix);
-				FILE *f = fopen(etc_ldso_path, "rbe");
-				if (f) {
-					if (getdelim(&sys_path, (size_t[1]){0}, 0, f) <= 0) {
+				fd = open(etc_ldso_path, O_RDONLY|O_CLOEXEC);
+				if (fd>=0) {
+					size_t n = 0;
+					if (!fstat(fd, &st)) n = st.st_size;
+					sys_path = malloc(n+1);
+					sys_path[n] = 0;
+					if (!sys_path || read_loop(fd, sys_path, n)<0) {
 						free(sys_path);
 						sys_path = "";
 					}
-					fclose(f);
+					close(fd);
 				} else if (errno != ENOENT) {
 					sys_path = "";
 				}
-- 
2.21.0


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [musl] [PATCH v2] MT fork
  2020-11-11  0:52                                       ` Rich Felker
@ 2020-11-11  6:35                                         ` Alexey Izbyshev
  2020-11-11 11:25                                           ` Szabolcs Nagy
  2020-11-11 16:35                                         ` Rich Felker
  1 sibling, 1 reply; 42+ messages in thread
From: Alexey Izbyshev @ 2020-11-11  6:35 UTC (permalink / raw)
  To: musl

On 2020-11-11 03:52, Rich Felker wrote:
> Here's a proposed first patch in series, getting rid of getdelim/stdio
> usage in ldso. I think that suffices to set the stage for adding
> __libc_malloc, __libc_free, __libc_calloc, __libc_realloc and having
> ldso use them.
> 

> +static ssize_t read_loop(int fd, void *p, size_t n)
> +{
> +    unsigned char *b = p;
> +    for (size_t l, i=0; i<n; i+=l) {
> +        l = read(fd, b+i, n-i);
> +        if (l<0) {
> +            if (errno==EINTR) continue;
This increments `i` by a negative `l`.
> +            else return -1;
> +        }
> +        if (l==0) return i;
> +    }
> +    return n;
> +}

> +                if (fd>=0) {
> +                    size_t n = 0;
> +                    if (!fstat(fd, &st)) n = st.st_size;
> +                    sys_path = malloc(n+1);
> +                    sys_path[n] = 0;
`sys_path` can be NULL here.
> +                    if (!sys_path || read_loop(fd, sys_path, n)<0) {
>                         free(sys_path);
>                         sys_path = "";
>                     }
> -                    fclose(f);
> +                    close(fd);
>                 } else if (errno != ENOENT) {
>                     sys_path = "";
>                 }

Alexey

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [musl] [PATCH v2] MT fork
  2020-11-11  6:35                                         ` Alexey Izbyshev
@ 2020-11-11 11:25                                           ` Szabolcs Nagy
  2020-11-11 14:56                                             ` Rich Felker
  0 siblings, 1 reply; 42+ messages in thread
From: Szabolcs Nagy @ 2020-11-11 11:25 UTC (permalink / raw)
  To: Alexey Izbyshev; +Cc: musl

* Alexey Izbyshev <izbyshev@ispras.ru> [2020-11-11 09:35:22 +0300]:
> On 2020-11-11 03:52, Rich Felker wrote:
> > Here's a proposed first patch in series, getting rid of getdelim/stdio
> > usage in ldso. I think that suffices to set the stage for adding
> > __libc_malloc, __libc_free, __libc_calloc, __libc_realloc and having
> > ldso use them.
> > 

if we don't have to replicate a lot of code in ldso then this sounds good.

> > +static ssize_t read_loop(int fd, void *p, size_t n)
> > +{
> > +    unsigned char *b = p;
> > +    for (size_t l, i=0; i<n; i+=l) {
> > +        l = read(fd, b+i, n-i);
> > +        if (l<0) {
> > +            if (errno==EINTR) continue;
> This increments `i` by a negative `l`.

it's worse: l cannot be negative so the error check is ineffective.

maybe it should be ssize_t? or check == -1

> > +            else return -1;
> > +        }
> > +        if (l==0) return i;
> > +    }
> > +    return n;
> > +}
> 
> > +                if (fd>=0) {
> > +                    size_t n = 0;
> > +                    if (!fstat(fd, &st)) n = st.st_size;
> > +                    sys_path = malloc(n+1);
> > +                    sys_path[n] = 0;
> `sys_path` can be NULL here.
> > +                    if (!sys_path || read_loop(fd, sys_path, n)<0) {
> >                         free(sys_path);
> >                         sys_path = "";
> >                     }
> > -                    fclose(f);
> > +                    close(fd);
> >                 } else if (errno != ENOENT) {
> >                     sys_path = "";
> >                 }
> 
> Alexey

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [musl] [PATCH v2] MT fork
  2020-11-11 11:25                                           ` Szabolcs Nagy
@ 2020-11-11 14:56                                             ` Rich Felker
  0 siblings, 0 replies; 42+ messages in thread
From: Rich Felker @ 2020-11-11 14:56 UTC (permalink / raw)
  To: Alexey Izbyshev, musl

On Wed, Nov 11, 2020 at 12:25:00PM +0100, Szabolcs Nagy wrote:
> * Alexey Izbyshev <izbyshev@ispras.ru> [2020-11-11 09:35:22 +0300]:
> > On 2020-11-11 03:52, Rich Felker wrote:
> > > Here's a proposed first patch in series, getting rid of getdelim/stdio
> > > usage in ldso. I think that suffices to set the stage for adding
> > > __libc_malloc, __libc_free, __libc_calloc, __libc_realloc and having
> > > ldso use them.
> > > 
> 
> if we don't have to replicate a lot of code in ldso then this sounds good.

Indeed the ldso part of the patch is just:

+#define malloc __libc_malloc
+#define calloc __libc_calloc
+#define realloc __libc_realloc
+#define free __libc_free

That's the minimum needed to make it work. Assuming we adopt and keep
this I might also remove a bunch of ugly code in dynlink.c that
special-cases whether it's running with replaced malloc or not.

> > > +static ssize_t read_loop(int fd, void *p, size_t n)
> > > +{
> > > +    unsigned char *b = p;
> > > +    for (size_t l, i=0; i<n; i+=l) {
> > > +        l = read(fd, b+i, n-i);
> > > +        if (l<0) {
> > > +            if (errno==EINTR) continue;
> > This increments `i` by a negative `l`.

Thanks Alexey for catching that!

> it's worse: l cannot be negative so the error check is ineffective.
> 
> maybe it should be ssize_t? or check == -1

Yes, there are multiple problems and this was the original motivation
for using stdio -- not having to write this ugly error-prone code. But
it only has to be written and reviewed once so it shouldn't be too bad
to get it right. ssize_t should be fine. I mainly did ridiculous stuff
trying to be clever with scope of the vars.
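
Something like this, with the signedness fixed (sketch, not
necessarily the final version):

#include <errno.h>
#include <unistd.h>

static ssize_t read_loop(int fd, void *p, size_t n)
{
	unsigned char *b = p;
	for (size_t i=0; i<n; ) {
		ssize_t l = read(fd, b+i, n-i);
		if (l<0) {
			if (errno==EINTR) continue;
			return -1;
		}
		if (l==0) return i; /* EOF before n bytes */
		i += l;
	}
	return n;
}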

> > > +                if (fd>=0) {
> > > +                    size_t n = 0;
> > > +                    if (!fstat(fd, &st)) n = st.st_size;
> > > +                    sys_path = malloc(n+1);
> > > +                    sys_path[n] = 0;
> > `sys_path` can be NULL here.

Thanks, I meant to put that in if ((sys_path = malloc(n+1))) or
something. Will fix.
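
With that folded in, the hunk would read roughly (sketch, reusing
the variables from the posted patch):

	fd = open(etc_ldso_path, O_RDONLY|O_CLOEXEC);
	if (fd>=0) {
		size_t n = 0;
		if (!fstat(fd, &st)) n = st.st_size;
		if ((sys_path = malloc(n+1)))
			sys_path[n] = 0;
		if (!sys_path || read_loop(fd, sys_path, n)<0) {
			free(sys_path); /* free(NULL) is a no-op */
			sys_path = "";
		}
		close(fd);
	} else if (errno != ENOENT) {
		sys_path = "";
	}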

Rich

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [musl] [PATCH v2] MT fork
  2020-11-11  0:52                                       ` Rich Felker
  2020-11-11  6:35                                         ` Alexey Izbyshev
@ 2020-11-11 16:35                                         ` Rich Felker
  1 sibling, 0 replies; 42+ messages in thread
From: Rich Felker @ 2020-11-11 16:35 UTC (permalink / raw)
  To: musl

On Tue, Nov 10, 2020 at 07:52:17PM -0500, Rich Felker wrote:
> On Mon, Nov 09, 2020 at 05:23:21PM -0500, Rich Felker wrote:
> > On Mon, Nov 09, 2020 at 10:59:01PM +0100, Szabolcs Nagy wrote:
> > > (not ideal, since then interposers can't see all
> > > allocations, which some tools would like to see,
> > > but at least correct and robust. and it is annoying
> > > that we have to do all this extra work just because
> > > of mt-fork)
> > 
> > Yes. On the other hand if this were done more rigorously it would fix
> > the valgrind breakage of malloc during ldso startup..
> > 
> > > > The other pros of such an approach are stuff like making it so
> > > > application code doesn't get called as a callback from messy contexts
> > > > inside libc, e.g. with dynamic linker in inconsistent state. The major
> > > > con I see is that it precludes omitting the libc malloc entirely when
> > > > static linking, assuming you link any part of libc that uses malloc
> > > > internally. However, almost all such places only call malloc, not
> > > > free, so you'd just get the trivial bump allocator gratuitously
> > > > linked, rather than full mallocng or oldmalloc, except for dlerror
> > > > which shouldn't come up in static linked programs anyway.
> > > 
> > > i see.
> > > that sounds fine to me.
> > 
> > I'm still not sure it's fine, so I appreciate your input and anyone
> > else's who has spent some time thinking about this.
> 
> Here's a proposed first patch in series, getting rid of getdelim/stdio
> usage in ldso. I think that suffices to set the stage for adding
> __libc_malloc, __libc_free, __libc_calloc, __libc_realloc and having
> ldso use them.
> 
> To make this work, I think malloc needs to actually be a separate
> function wrapping __libc_malloc -- this is because __simple_malloc
> precludes the malloc symbol itself being weak. That's a very slight
> runtime cost, but has the benefit of eliminating the awful hack of
> relying on link order to get __simple_malloc (bump allocator) to be
> chosen over full malloc. Now, the source file containing the bump
> allocator can define malloc as a call to __libc_malloc, and provide
> the bump allocator as the weak definition of __libc_malloc.
> mallocng/malloc.c would then provide the strong definition of
> __libc_malloc.
> 
> For the other functions, I think __libc_* can theoretically just be
> aliases for the public symbols, but this may (almost surely does)
> break valgrind, and it's messy to do at the source level, so perhaps
> they should be wrapped too. This should entirely prevent valgrind from
> interposing the libc-internal calls, thereby fixing the longstanding
> bug where it crashes by interposing them too early.

The __simple_malloc link logic for this actually turned out to be
somewhat complicated, so I want to document it here independent of
code before actually working out and posting the code draft:

The source file defining __simple_malloc (lite_malloc.c but I'll
probably rename it to malloc.c at some point) needs to also define
(the only definition of) __libc_malloc and a weak symbol for malloc,
which should be the only definition of malloc in libc.

The first ensures that any reference to __libc_malloc will cause this
file to be linked (rather than relying on link order or else having
the possibility of pulling in full malloc). The second ensures that
any reference to malloc will cause this file to be linked, unless the
application already defined its own malloc, and weakness ensures they
won't clash if the application did.

Both __libc_malloc and (the weak) malloc function should just call
__libc_malloc_impl, which has a weak definition provided by
__simple_malloc and a strong definition if full malloc is linked (as a
result of referencing free, realloc, etc.). Here malloc could simply
be a weak alias for __libc_malloc, except that valgrind would then
munge __libc_malloc. The function is nothing but a tail call, so
duplicating the code really doesn't hurt anyway.

Aside from the new __libc_malloc function, the above is really how
this always should have worked (to avoid the link order hack). Not
doing it was a matter of avoiding the wrapper function layer.
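
Sketched out, lite_malloc.c would then look something like this
(weak-attribute syntax and the bump-allocator body are
illustrative only; locking and overflow checks omitted):

#include <stddef.h>

/* weak: replaced by the strong definition in mallocng/malloc.c
 * whenever the full allocator gets linked (i.e. when anything
 * references free, realloc, etc.) */
__attribute__((weak)) void *__libc_malloc_impl(size_t n)
{
	static unsigned char heap[65536];
	static size_t used;
	n = (n + 15) & ~(size_t)15; /* preserve alignment */
	if (n > sizeof heap - used) return 0;
	void *p = heap + used;
	used += n;
	return p;
}

/* the only definition of __libc_malloc: internal references pull
 * in this file, never the full allocator */
void *__libc_malloc(size_t n)
{
	return __libc_malloc_impl(n);
}

/* weak, so an application-provided malloc takes precedence;
 * duplicated rather than aliased so valgrind can't munge
 * __libc_malloc via the public symbol */
__attribute__((weak)) void *malloc(size_t n)
{
	return __libc_malloc_impl(n);
}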

Rich

^ permalink raw reply	[flat|nested] 42+ messages in thread

end of thread, other threads:[~2020-11-11 16:36 UTC | newest]

Thread overview: 42+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-10-26  0:50 [musl] Status report and MT fork Rich Felker
2020-10-26  0:59 ` Rich Felker
2020-10-26  3:29   ` Rich Felker
2020-10-26 18:44     ` Érico Nogueira
2020-10-26 19:52       ` Rich Felker
2020-10-26 20:11         ` Érico Nogueira
2020-10-27 21:17   ` Rich Felker
2020-10-28 18:56     ` [musl] [PATCH v2] " Rich Felker
2020-10-28 23:06       ` Milan P. Stanić
2020-10-29 16:13         ` Szabolcs Nagy
2020-10-29 16:20           ` Rich Felker
2020-10-29 20:55           ` Milan P. Stanić
2020-10-29 22:21             ` Szabolcs Nagy
2020-10-29 23:00               ` Milan P. Stanić
2020-10-29 23:27                 ` Rich Felker
2020-10-30  0:13                   ` Rich Felker
2020-10-30  7:47                   ` Milan P. Stanić
2020-10-30 18:52                     ` Milan P. Stanić
2020-10-30 18:57                       ` Rich Felker
2020-10-30 21:31                         ` Ariadne Conill
2020-10-31  3:31                           ` Rich Felker
2020-11-06  3:36                             ` Rich Felker
2020-11-08 16:12                               ` Szabolcs Nagy
2020-11-09 17:07                                 ` Rich Felker
2020-11-09 18:01                                   ` Érico Nogueira
2020-11-09 18:44                                     ` Rich Felker
2020-11-09 18:54                                       ` Érico Nogueira
2020-11-09 21:59                                   ` Szabolcs Nagy
2020-11-09 22:23                                     ` Rich Felker
2020-11-11  0:52                                       ` Rich Felker
2020-11-11  6:35                                         ` Alexey Izbyshev
2020-11-11 11:25                                           ` Szabolcs Nagy
2020-11-11 14:56                                             ` Rich Felker
2020-11-11 16:35                                         ` Rich Felker
2020-10-31  7:22                           ` Timo Teras
2020-10-31 13:29                             ` Szabolcs Nagy
2020-10-31 13:35                               ` Timo Teras
2020-10-31 14:41                                 ` Ariadne Conill
2020-10-31 14:49                                   ` Rich Felker
2020-10-31 14:48                                 ` Rich Felker
2020-10-31 14:47                             ` Rich Felker
2020-10-29 23:32                 ` Szabolcs Nagy

Code repositories for project(s) associated with this public inbox

	https://git.vuxu.org/mirror/musl/

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).