mailing list of musl libc
* [musl] Hung processes with althttpd web server
@ 2023-10-05  1:41 Carl Chave
  2023-10-05  2:15 ` Rich Felker
  2023-10-05  3:37 ` Markus Wichmann
  0 siblings, 2 replies; 11+ messages in thread
From: Carl Chave @ 2023-10-05  1:41 UTC (permalink / raw)
  To: musl

Hello, I'm running the althttpd web server on Alpine Linux using a Ramnode VPS.

I've been having issues for quite a while with "hung" processes. There
is a long lived parent process and then a short lived forked process
for each http request. What I've been seeing is that the forked
processes will sometimes get stuck:

sod01:/srv/www/log$ sudo strace -p 11329
strace: Process 11329 attached
futex(0x7f5bdcd77900, FUTEX_WAIT_PRIVATE, 4294967295, NULL

Please see this forum thread for additional information:
https://sqlite.org/althttpd/forumpost/4dc31619341ce947

That thread is kind of all over the place though as I banged around
trying to figure out how to troubleshoot the issue. The last thing I
tried was to statically build althttpd in a glibc based Void Linux
chroot. I ran that on the Alpine VPS for 37 days without issue. I
switched back to the musl dynamically linked build and 14 hours later
got a hung process.

Can you give me any tips on how to troubleshoot this further?

Thanks,
Carl


* Re: [musl] Hung processes with althttpd web server
  2023-10-05  1:41 [musl] Hung processes with althttpd web server Carl Chave
@ 2023-10-05  2:15 ` Rich Felker
  2023-10-05  2:43   ` Carl Chave
  2023-10-05  3:37 ` Markus Wichmann
  1 sibling, 1 reply; 11+ messages in thread
From: Rich Felker @ 2023-10-05  2:15 UTC (permalink / raw)
  To: Carl Chave; +Cc: musl

On Wed, Oct 04, 2023 at 09:41:41PM -0400, Carl Chave wrote:
> Hello, I'm running the althttpd web server on Alpine Linux using a Ramnode VPS.
> 
> I've been having issues for quite a while with "hung" processes. There
> is a long lived parent process and then a short lived forked process
> for each http request. What I've been seeing is that the forked
> processes will sometimes get stuck:
> 
> sod01:/srv/www/log$ sudo strace -p 11329
> strace: Process 11329 attached
> futex(0x7f5bdcd77900, FUTEX_WAIT_PRIVATE, 4294967295, NULL
> 
> Please see this forum thread for additional information:
> https://sqlite.org/althttpd/forumpost/4dc31619341ce947
> 
> That thread is kind of all over the place though as I banged around
> trying to figure out how to troubleshoot the issue. The last thing I
> tried was to statically build althttpd in a glibc based Void Linux
> chroot. I ran that on the Alpine VPS for 37 days without issue. I
> switched back to the musl dynamically linked build and 14 hours later
> got a hung process.
> 
> Can you give me any tips on how to troubleshoot this further?

1. What Alpine/musl version are you using? If it's older, it might be
   something that would be different in current versions.

2. Can you attach gdb to the hung process and identify what lock
   object it's waiting on?

3. Is this stock althttpd from sqlite?

4. Is the process multithreaded? I don't see where althttpd is doing
   anything with threads but there could be library code I'm not aware
   of.



* Re: [musl] Hung processes with althttpd web server
  2023-10-05  2:15 ` Rich Felker
@ 2023-10-05  2:43   ` Carl Chave
  2023-10-05 13:39     ` Szabolcs Nagy
  0 siblings, 1 reply; 11+ messages in thread
From: Carl Chave @ 2023-10-05  2:43 UTC (permalink / raw)
  To: Rich Felker; +Cc: musl

> 1. What Alpine/musl version are you using? If it's older, it might be
>    something that would be different in current versions.

Alpine 3.18.3 with musl package musl-1.2.4-r1 x86_64

> 2. Can you attach gdb to the hung process and identify what lock
>    object it's waiting on?

I don't really know what I'm doing with gdb. This is probably not helpful:
(gdb) bt
#0  0x00007f5fb449c0dd in ?? () from /lib/ld-musl-x86_64.so.1
#1  0x0000000000000002 in ?? ()
#2  0x0000000000000000 in ?? ()

> 3. Is this stock althttpd from sqlite?

I have some patches applied which can be seen here:
http://www.sodface.com/repo/index/aports/althttpd
I applied the same patches to the static glibc build.


> 4. Is the process multithreaded? I don't see where althttpd is doing
>    anything with threads but there could be library code I'm not aware
>    of.

From the althttpd homepage:
Because each althttpd process only needs to service a single
connection, althttpd is single threaded. Furthermore, each process
only lives for the duration of a single connection, which means that
althttpd does not need to worry too much about memory leaks. These
design factors help keep the althttpd source code simple, which
facilitates security auditing and analysis.


* Re: [musl] Hung processes with althttpd web server
  2023-10-05  1:41 [musl] Hung processes with althttpd web server Carl Chave
  2023-10-05  2:15 ` Rich Felker
@ 2023-10-05  3:37 ` Markus Wichmann
  2023-10-05 12:39   ` Rich Felker
  2023-10-05 18:52   ` Carl Chave
  1 sibling, 2 replies; 11+ messages in thread
From: Markus Wichmann @ 2023-10-05  3:37 UTC (permalink / raw)
  To: musl

On Wed, Oct 04, 2023 at 09:41:41PM -0400, Carl Chave wrote:
> Hello, I'm running the althttpd web server on Alpine Linux using a Ramnode VPS.
>
> I've been having issues for quite a while with "hung" processes. There
> is a long lived parent process and then a short lived forked process
> for each http request. What I've been seeing is that the forked
> processes will sometimes get stuck:
>
> sod01:/srv/www/log$ sudo strace -p 11329
> strace: Process 11329 attached
> futex(0x7f5bdcd77900, FUTEX_WAIT_PRIVATE, 4294967295, NULL
>

I often see this system call hang when signal handlers are doing
signal-unsafe things. Looking at the source code, that is exactly what
happens if the process catches a signal at the wrong time. Try removing
all calls to signal(); that should better achieve what the designers
intended (namely, quitting the process). If you want to log when a
process dies of unnatural causes, that's something the parent process
can do.

The signal handler will call MakeLogEntry(), and that will do
signal-unsafe things such as call free(), localtime(), or fopen(). If
the main process is currently using malloc() when that happens, you will
get precisely this hang.
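
For the parent-side logging, something along these lines would do.
This is only an untested sketch, not althttpd's actual code; the
helper name is made up, and where exactly the parent reaps its
children is up to it:

#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>

/* Hypothetical helper, called from the parent wherever it already
 * reaps its forked children (e.g. after waitpid()). The child then
 * needs no signal handler at all for this. */
static void log_child_status(pid_t pid, int status)
{
	if (WIFSIGNALED(status)) {
		fprintf(stderr, "child %d killed by signal %d (%s)\n",
		        (int)pid, WTERMSIG(status),
		        strsignal(WTERMSIG(status)));
	} else if (WIFEXITED(status) && WEXITSTATUS(status) != 0) {
		fprintf(stderr, "child %d exited with status %d\n",
		        (int)pid, WEXITSTATUS(status));
	}
}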


> Please see this forum thread for additional information:
> https://sqlite.org/althttpd/forumpost/4dc31619341ce947
>

Seems like they haven't yet found the trail of the signal handler.

Ciao,
Markus


* Re: [musl] Hung processes with althttpd web server
  2023-10-05  3:37 ` Markus Wichmann
@ 2023-10-05 12:39   ` Rich Felker
  2023-10-05 18:34     ` Markus Wichmann
  2023-10-05 18:52   ` Carl Chave
  1 sibling, 1 reply; 11+ messages in thread
From: Rich Felker @ 2023-10-05 12:39 UTC (permalink / raw)
  To: Markus Wichmann; +Cc: musl

On Thu, Oct 05, 2023 at 05:37:41AM +0200, Markus Wichmann wrote:
> On Wed, Oct 04, 2023 at 09:41:41PM -0400, Carl Chave wrote:
> > Hello, I'm running the althttpd web server on Alpine Linux using a Ramnode VPS.
> >
> > I've been having issues for quite a while with "hung" processes. There
> > is a long lived parent process and then a short lived forked process
> > for each http request. What I've been seeing is that the forked
> > processes will sometimes get stuck:
> >
> > sod01:/srv/www/log$ sudo strace -p 11329
> > strace: Process 11329 attached
> > futex(0x7f5bdcd77900, FUTEX_WAIT_PRIVATE, 4294967295, NULL
> >
> 
> I often see this system call hang when signal handlers are doing
> signal-unsafe things. Looking at the source code, that is exactly what
> happens if the process catches a signal at the wrong time. Try removing
> all calls to signal(); that should better achieve what the designers
> intended (namely, quitting the process). If you want to log when a
> process dies of unnatural causes, that's something the parent process
> can do.
> 
> The signal handler will call MakeLogEntry(), and that will do
> signal-unsafe things such as call free(), localtime(), or fopen(). If
> the main process is currently using malloc() when that happens, you will
> get precisely this hang.
> 
> 
> > Please see this forum thread for additional information:
> > https://sqlite.org/althttpd/forumpost/4dc31619341ce947
> >
> 
> Seems like they haven't yet found the trail of the signal handler.

OK, this is almost surely the source of the problem. It would still be
interesting to know which lock is being hit here, since for the most
part, locks are skipped in single-threaded processes. But even if the
lock were skipped, the invalid calls to async-signal-unsafe functions
from async-signal context would be corrupting the state those locks
were meant to protect. That's probably what's happening on glibc
(meaning this code only appears to work there, but is likely behaving
dangerously).

Rich


* Re: [musl] Hung processes with althttpd web server
  2023-10-05  2:43   ` Carl Chave
@ 2023-10-05 13:39     ` Szabolcs Nagy
  2023-10-05 18:26       ` Carl Chave
  0 siblings, 1 reply; 11+ messages in thread
From: Szabolcs Nagy @ 2023-10-05 13:39 UTC (permalink / raw)
  To: Carl Chave; +Cc: Rich Felker, musl

* Carl Chave <online@chave.us> [2023-10-04 22:43:16 -0400]:
> > 1. What Alpine/musl version are you using? If it's older, it might be
> >    something that would be different in current versions.
> 
> Alpine 3.18.3 with musl package musl-1.2.4-r1 x86_64
> 
> > 2. Can you attach gdb to the hung process and identify what lock
> >    object it's waiting on?
> 
> I don't really know what I'm doing with gdb. This is probably not helpful:
> (gdb) bt
> #0  0x00007f5fb449c0dd in ?? () from /lib/ld-musl-x86_64.so.1
> #1  0x0000000000000002 in ?? ()
> #2  0x0000000000000000 in ?? ()

you might want to

apk add musl-dbg

the bt should be more useful then.

in this case you can also look at

(gdb) disas $rip-40,+80
(gdb) info reg

since the address is the first arg to a futex syscall (rdi).
then you can try to dig around to see where rdi points to

(gdb) x/4wx $rdi-4
(gdb) info sym $rdi

etc


* Re: [musl] Hung processes with althttpd web server
  2023-10-05 13:39     ` Szabolcs Nagy
@ 2023-10-05 18:26       ` Carl Chave
  2023-10-05 18:48         ` Markus Wichmann
  0 siblings, 1 reply; 11+ messages in thread
From: Carl Chave @ 2023-10-05 18:26 UTC (permalink / raw)
  To: musl, Carl Chave, Rich Felker

> apk add musl-dbg
>
> the bt should be more useful then.

(gdb) bt
#0  __syscall_cp_c (nr=202, u=140049023633540, v=128, w=-2147483632,
x=0, y=0, z=0) at ./arch/x86_64/syscall_arch.h:61
#1  0x00007f5fb449b6c9 in __futex4_cp (to=0x0, val=-2147483632,
op=128, addr=0x7f5fb44e0884 <init_fini_lock+4>) at
src/thread/__timedwait.c:24
#2  __timedwait_cp (addr=addr@entry=0x7f5fb44e0884 <init_fini_lock+4>,
val=val@entry=-2147483632, clk=clk@entry=0, at=at@entry=0x0,
priv=priv@entry=128) at src/thread/__timedwait.c:52
#3  0x00007f5fb449b76e in __timedwait (addr=addr@entry=0x7f5fb44e0884
<init_fini_lock+4>, val=-2147483632, clk=clk@entry=0, at=at@entry=0x0,
priv=priv@entry=128)
    at src/thread/__timedwait.c:68
#4  0x00007f5fb449d9b1 in __pthread_mutex_timedlock (m=0x7f5fb44e0880
<init_fini_lock>, at=at@entry=0x0) at
src/thread/pthread_mutex_timedlock.c:85
#5  0x00007f5fb449d7c0 in __pthread_mutex_lock
(m=m@entry=0x7f5fb44e0880 <init_fini_lock>) at
src/thread/pthread_mutex_lock.c:9
#6  0x00007f5fb44a49ff in __libc_exit_fini () at ldso/dynlink.c:1442
#7  0x00007f5fb445b082 in exit (code=0) at src/exit/exit.c:30
#8  0x0000557471c3cf45 in ?? ()
#9  <signal handler called>
#10 0x00007f5fb43d3f20 in ?? () from /lib/libssl.so.3
#11 0x00007f5fb44a4a9d in __libc_exit_fini () at ldso/dynlink.c:1453
#12 0x00007f5fb445b082 in exit (code=0) at src/exit/exit.c:30
#13 0x0000557471c3cbe7 in ?? ()
#14 0x0000557471c3e934 in ?? ()
#15 0x0000557471c3c2d2 in ?? ()
#16 0x00007f5fb4462aad in libc_start_main_stage2 (main=0x557471c3b780,
argc=13, argv=0x7ffd0641b958) at src/env/__libc_start_main.c:95
#17 0x0000557471c3c31a in ?? ()
#18 0x000000000000000d in ?? ()
#19 0x00007ffd0641ce54 in ?? ()
#20 0x00007ffd0641ce66 in ?? ()
#21 0x00007ffd0641ce6d in ?? ()
#22 0x00007ffd0641ce6f in ?? ()
#23 0x00007ffd0641ce76 in ?? ()
#24 0x00007ffd0641ce79 in ?? ()
#25 0x00007ffd0641ce80 in ?? ()
#26 0x00007ffd0641ce89 in ?? ()
#27 0x00007ffd0641ce92 in ?? ()
#28 0x00007ffd0641cea2 in ?? ()
#29 0x00007ffd0641ceac in ?? ()
#30 0x00007ffd0641cec6 in ?? ()
#31 0x00007ffd0641cecd in ?? ()
#32 0x0000000000000000 in ?? ()

> in this case you can also look at
>
> (gdb) disas $rip-40,+80

(gdb) disas $rip-40,+80
Dump of assembler code from 0x7f5fb449c0b5 to 0x7f5fb449c105:
   0x00007f5fb449c0b5 <__syscall_cp_c+19>:    mov    %r9,%r8
   0x00007f5fb449c0b8 <__syscall_cp_c+22>:    mov    %fs:0x0,%rbp
   0x00007f5fb449c0c1 <__syscall_cp_c+31>:    movzbl 0x40(%rbp),%eax
   0x00007f5fb449c0c5 <__syscall_cp_c+35>:    mov    0x20(%rsp),%r9
   0x00007f5fb449c0ca <__syscall_cp_c+40>:    test   %eax,%eax
   0x00007f5fb449c0cc <__syscall_cp_c+42>:    je     0x7f5fb449c0df
<__syscall_cp_c+61>
   0x00007f5fb449c0ce <__syscall_cp_c+44>:    dec    %eax
   0x00007f5fb449c0d0 <__syscall_cp_c+46>:    je     0x7f5fb449c0d8
<__syscall_cp_c+54>
   0x00007f5fb449c0d2 <__syscall_cp_c+48>:    cmp    $0x3,%rbx
   0x00007f5fb449c0d6 <__syscall_cp_c+52>:    jne    0x7f5fb449c0df
<__syscall_cp_c+61>
   0x00007f5fb449c0d8 <__syscall_cp_c+54>:    mov    %rbx,%rax
   0x00007f5fb449c0db <__syscall_cp_c+57>:    syscall
=> 0x00007f5fb449c0dd <__syscall_cp_c+59>:    jmp    0x7f5fb449c12b
<__syscall_cp_c+137>
   0x00007f5fb449c0df <__syscall_cp_c+61>:    push   %r9
   0x00007f5fb449c0e1 <__syscall_cp_c+63>:    lea    0x3c(%rbp),%rax
   0x00007f5fb449c0e5 <__syscall_cp_c+67>:    mov    %rsi,%rcx
   0x00007f5fb449c0e8 <__syscall_cp_c+70>:    mov    %r10,%r9
   0x00007f5fb449c0eb <__syscall_cp_c+73>:    push   %r8
   0x00007f5fb449c0ed <__syscall_cp_c+75>:    mov    %rbx,%rsi
   0x00007f5fb449c0f0 <__syscall_cp_c+78>:    mov    %rdx,%r8
   0x00007f5fb449c0f3 <__syscall_cp_c+81>:    mov    %rdi,%rdx
   0x00007f5fb449c0f6 <__syscall_cp_c+84>:    mov    %rax,%rdi
   0x00007f5fb449c0f9 <__syscall_cp_c+87>:    call   0x7f5fb449ef4a
<__syscall_cp_asm>
   0x00007f5fb449c0fe <__syscall_cp_c+92>:    pop    %rsi
   0x00007f5fb449c0ff <__syscall_cp_c+93>:    pop    %rdi
   0x00007f5fb449c100 <__syscall_cp_c+94>:    cmp    $0xfffffffffffffffc,%rax
   0x00007f5fb449c104 <__syscall_cp_c+98>:    jne    0x7f5fb449c12b
<__syscall_cp_c+137>
End of assembler dump.

> (gdb) info reg
>

(gdb) info reg
rax            0xfffffffffffffe00  -512
rbx            0xca                202
rcx            0x7f5fb449c0dd      140049023353053
rdx            0xffffffff80000010  -2147483632
rsi            0x80                128
rdi            0x7f5fb44e0884      140049023633540
rbp            0x7f5fb44e0b48      0x7f5fb44e0b48 <builtin_tls+136>
rsp            0x7ffd06416680      0x7ffd06416680
r8             0x0                 0
r9             0x0                 0
r10            0x0                 0
r11            0x246               582
r12            0x7f5fb44e0884      140049023633540
r13            0x80                128
r14            0x80                128
r15            0x0                 0
rip            0x7f5fb449c0dd      0x7f5fb449c0dd <__syscall_cp_c+59>
eflags         0x246               [ PF ZF IF ]
cs             0x33                51
ss             0x2b                43
ds             0x0                 0
es             0x0                 0
fs             0x0                 0
gs             0x0                 0

> since the address is the first arg to a futex syscall (rdi).
> then you can try to dig around to see where rdi points to
>
> (gdb) x/4wx $rdi-4

(gdb) x/4wx $rdi-4
0x7f5fb44e0880 <init_fini_lock>:    0x00000000    0x80000010    0x00000001    0x00000000

> (gdb) info sym $rdi
>

(gdb) info sym $rdi
init_fini_lock + 4 in section .bss of /lib/ld-musl-x86_64.so.1

Thanks for the reply and instruction.

Carl


* Re: [musl] Hung processes with althttpd web server
  2023-10-05 12:39   ` Rich Felker
@ 2023-10-05 18:34     ` Markus Wichmann
  0 siblings, 0 replies; 11+ messages in thread
From: Markus Wichmann @ 2023-10-05 18:34 UTC (permalink / raw)
  To: musl

On Thu, Oct 05, 2023 at 08:39:03AM -0400, Rich Felker wrote:
> > On Wed, Oct 04, 2023 at 09:41:41PM -0400, Carl Chave wrote:
> > > futex(0x7f5bdcd77900, FUTEX_WAIT_PRIVATE, 4294967295, NULL
> It would still be
> interesting to know which lock is being hit here, since for the most
> part, locks are skipped in single-threaded processes.

The only hints we have about that right now are the futex address and
value. The address looks like it would be in some mmap()ed memory, but
that could be anything. The value is more interesting, because it shows
us that the object is set to 0xffffffff when taken by a thread and a
single waiter is present. The only synchronization object I could find
that does that is pthread_rwlock_t. sem_t would also have qualified in
older musl versions, but it no longer does since the wait flag
overhaul, which was last year.

I know that musl has some internal rwlocks, but the only one I could
find that would plausibly be locked in write mode is the lock in
dynlink.c, which is wrlocked in __libc_exit_fini(), among other places.
Of course, the signal handler in althttpd also calls exit(), which may
reach there. So it might be that the signal hit while the code was
already trying to exit, leading to a re-entrant call to exit(), and
therefore to a deadlock. But the window during which that lock is held
in __libc_exit_fini() is so small that this theory seems like a reach.

There is also __inhibit_ptc(), but I could find nothing that would call
that in a single-threaded process. Let alone twice.

Ciao,
Markus


* Re: [musl] Hung processes with althttpd web server
  2023-10-05 18:26       ` Carl Chave
@ 2023-10-05 18:48         ` Markus Wichmann
  2023-10-05 19:06           ` Carl Chave
  0 siblings, 1 reply; 11+ messages in thread
From: Markus Wichmann @ 2023-10-05 18:48 UTC (permalink / raw)
  To: musl; +Cc: Carl Chave, Rich Felker, musl

On Thu, Oct 05, 2023 at 02:26:42PM -0400, Carl Chave wrote:
> #5  0x00007f5fb449d7c0 in __pthread_mutex_lock
> (m=m@entry=0x7f5fb44e0880 <init_fini_lock>) at
> src/thread/pthread_mutex_lock.c:9
> #6  0x00007f5fb44a49ff in __libc_exit_fini () at ldso/dynlink.c:1442
> #7  0x00007f5fb445b082 in exit (code=0) at src/exit/exit.c:30
> #8  0x0000557471c3cf45 in ?? ()
> #9  <signal handler called>
> #10 0x00007f5fb43d3f20 in ?? () from /lib/libssl.so.3
> #11 0x00007f5fb44a4a9d in __libc_exit_fini () at ldso/dynlink.c:1453
> #12 0x00007f5fb445b082 in exit (code=0) at src/exit/exit.c:30

Oh, goddang it, I had just theorized that. The process is actually
finished and called exit(). exit() has in turn called
__libc_exit_fini(), which has taken the init_fini_lock and is now
calling the destructors. Apparently libssl has a destructor, and while
it is running, the signal hits. And then the signal handler also calls
exit() (which is invalid, for exit() is not signal-safe), and tries to
call __libc_exit_fini() again, and that deadlocks on init_fini_lock.
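
To see the mechanism in isolation, here is an untested sketch (not
althttpd code; a destructor in the main program stands in for
libssl's) that I would expect to wedge the same way on musl:

#include <signal.h>
#include <stdlib.h>

/* The handler does what althttpd's ultimately does: call exit(),
 * which is not async-signal-safe. */
static void handler(int sig)
{
	(void)sig;
	exit(0);
}

/* Destructors run from __libc_exit_fini() with init_fini_lock held;
 * raising the signal here simulates it arriving mid-destructor. */
__attribute__((destructor))
static void fake_dtor(void)
{
	raise(SIGTERM);
}

int main(void)
{
	signal(SIGTERM, handler);
	exit(0); /* first exit() takes init_fini_lock, runs destructors */
}

If it behaves as I expect, the nested exit() blocks in
pthread_mutex_lock() on init_fini_lock, which is exactly what your
backtrace shows.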

But that also means that you sometimes get the other deadlock I
theorized in my other mail just now, the deadlock on "lock" in
dynlink.c. Because this backtrace does not fit the strace from earlier.

The solution is still the same as before: either clean up the signal
handler or don't even register it in the first place. The proper place
to log the unusual exit of a process is in the parent process. All of
the intercepted signals have a default action of Term or Core, so they
will cause the process to end.
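
If the handler is kept, "cleaning it up" means restricting it to
async-signal-safe calls, roughly like this (a hypothetical sketch,
not a patch):

#include <signal.h>
#include <unistd.h>

/* Only async-signal-safe functions here: write() and _exit().
 * No MakeLogEntry(), no exit(), no stdio, no malloc/free. */
static void on_fatal_signal(int sig)
{
	static const char msg[] = "althttpd: caught fatal signal\n";
	(void)sig;
	write(STDERR_FILENO, msg, sizeof msg - 1);
	_exit(1);
}

Any real logging (which request, which signal) then belongs in the
parent, as said above.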

Ciao,
Markus



* Re: [musl] Hung processes with althttpd web server
  2023-10-05  3:37 ` Markus Wichmann
  2023-10-05 12:39   ` Rich Felker
@ 2023-10-05 18:52   ` Carl Chave
  1 sibling, 0 replies; 11+ messages in thread
From: Carl Chave @ 2023-10-05 18:52 UTC (permalink / raw)
  To: musl

Markus,

> The signal handler will call MakeLogEntry(), and that will do
> signal-unsafe things such as call free(), localtime(), or fopen(). If
> the main process is currently using malloc() when that happens, you will
> get precisely this hang.

One of the patches I applied adds the pid to the end of the logfile
entry so I could better track the type of requests that were
triggering the hang. In this case, the hung pid is 9780. The logfile
entry for 9780 is:

2023-10-04 10:32:19,174.138.61.44,"http://","",400,3,180,258,0,0,0,47,1,"","",7,200,9780

The 400 response code is handled here:
https://sqlite.org/althttpd/file?ci=tip&name=althttpd.c&ln=2686-2693

The signal handler section:
https://sqlite.org/althttpd/file?ci=tip&name=althttpd.c&ln=1229-1261

It looks like it's supposed to log a line with 131, 132, 133, or 139
(or nTimeoutLine, though I'm not exactly sure what value to look for
there) in the second-to-last log field, and I'm not seeing any of
those.

Not arguing with your analysis but I'm trying to figure out how to
verify it. I could try running with:

--debug BOOLEAN  Disables input timeouts.  This is useful for
debugging when inputs are being typed in manually.

Though I'm not sure if that would help or cause more problems.

Carl


* Re: [musl] Hung processes with althttpd web server
  2023-10-05 18:48         ` Markus Wichmann
@ 2023-10-05 19:06           ` Carl Chave
  0 siblings, 0 replies; 11+ messages in thread
From: Carl Chave @ 2023-10-05 19:06 UTC (permalink / raw)
  To: Markus Wichmann; +Cc: musl, Rich Felker

> Because this backtrace does not fit the strace from earlier.

I'm sorry, that's my fault, I used strace output from a previous hang
as an example, not realizing it would cause confusion. For the record,
this is the current output:

strace: Process 9780 attached
futex(0x7f5fb44e0884, FUTEX_WAIT_PRIVATE, 2147483664, NULL

> Solution is still the same as before: Either clean up the signal handler
> or don't even register it in the first place. The proper place to log
> unusual exit of a process is in the parent process. All of the
> intercepted signals have a default action of Term or Core, so they will
> cause the end of the process.

Disregard my other reply about the --debug option; I will point the
althttpd devs at this thread and hopefully they can review it.

Thanks a lot for the help.
Carl


end of thread

Thread overview: 11+ messages
2023-10-05  1:41 [musl] Hung processes with althttpd web server Carl Chave
2023-10-05  2:15 ` Rich Felker
2023-10-05  2:43   ` Carl Chave
2023-10-05 13:39     ` Szabolcs Nagy
2023-10-05 18:26       ` Carl Chave
2023-10-05 18:48         ` Markus Wichmann
2023-10-05 19:06           ` Carl Chave
2023-10-05  3:37 ` Markus Wichmann
2023-10-05 12:39   ` Rich Felker
2023-10-05 18:34     ` Markus Wichmann
2023-10-05 18:52   ` Carl Chave
