mailing list of musl libc
From: 'Rich Felker' <dalias@libc.org>
To: sidneym@codeaurora.org
Cc: musl@lists.openwall.com
Subject: Re: [musl] Hexagon DSP support
Date: Wed, 15 Apr 2020 15:29:40 -0400
Message-ID: <20200415192940.GJ11469@brightrain.aerifal.cx>
In-Reply-To: <039701d61359$c56efa50$504ceef0$@codeaurora.org>

On Wed, Apr 15, 2020 at 02:12:12PM -0500, sidneym@codeaurora.org wrote:
> src/api/ftw.c:44:1: error: switch condition has boolean value [-Werror,-Wswitch-bool]

Compiling libc-test with -Werror is invalid. The suite necessarily
produces some warnings, and the build needs to be able to distinguish
actual errors from them.
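
The warning itself is easy to hit with perfectly valid C; a minimal
sketch (my own reduction, not the actual libc-test source):

	void f(_Bool b)
	{
		switch (b) {	/* clang: switch condition has boolean value */
		case 1: break;
		}
	}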

> 8 errors generated.
> src/functional/dlopen.c:39: dlsym main failed: Symbol not found: main
> FAIL src/functional/dlopen.exe [status 1]
> clang-11: warning: argument unused during compilation: '-rdynamic' [-Wunused-command-line-argument]

This is a clang deficiency that's preventing the test from working.
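
If clang is simply dropping -rdynamic on this target, passing the
underlying linker flag through explicitly might be a workaround for the
dlopen test (an untested guess on my part), e.g. in libc-test's
config.mak:

	LDFLAGS += -Wl,--export-dynamic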

> src/functional/ipc_msg.c:62: qid_ds.msg_lspid == 0 failed: got 100, want 0
> src/functional/ipc_msg.c:64: (long long)qid_ds.msg_stime == 0 failed: got 6815993655711498240, want 0
> src/functional/ipc_msg.c:67: qid_ds.msg_ctime >= t failed: got 70368744177664, want >= 154502684032321054
> src/functional/ipc_msg.c:71: qid_ds.msg_qbytes > 0 failed: got 0, want > 0
> src/functional/ipc_msg.c:76: qid_ds.msg_qnum == 1 failed: got 0, want 1
> src/functional/ipc_msg.c:77: qid_ds.msg_lspid == getpid() failed: got 100, want 185819
> src/functional/ipc_msg.c:81: msg_stime is 6815993655711498240 want <= 154502684032321059
> src/functional/ipc_msg.c:128: child exit status: 256
> FAIL src/functional/ipc_msg-static.exe [status 1]

This is the sysvipc bits-headers issue I mentioned; the sem/shm
failures have the same cause.
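
Note the garbage values above are consistent with that:
6815993655711498240 is exactly 1586972190<<32, i.e. a current Unix
timestamp sitting in the high half of a 64-bit field. On a 32-bit
time64 arch the kernel returns the times as lo/hi halves that musl has
to reassemble; a sketch of the kind of layout needed, roughly what the
existing arm port does in arch/arm/bits/msg.h (from memory, so check
the source):

	struct msqid_ds {
		struct ipc_perm msg_perm;
		unsigned long __msg_stime_lo, __msg_stime_hi;
		unsigned long __msg_rtime_lo, __msg_rtime_hi;
		unsigned long __msg_ctime_lo, __msg_ctime_hi;
		msgqnum_t msg_qnum;
		msglen_t msg_qbytes;
		pid_t msg_lspid;
		pid_t msg_lrpid;
		unsigned long __pad[2];
		time_t msg_stime, msg_rtime, msg_ctime;
	};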

> src/functional/pthread_mutex.c:145: PTHREAD_MUTEX_ERRORCHECK relock did not return EDEADLK, got deadlock
> src/functional/pthread_mutex.c:148: PTHREAD_MUTEX_RECURSIVE relock did not succed, got deadlock
> FAIL src/functional/pthread_mutex-static.exe [status 1]
> src/functional/pthread_mutex.c:145: PTHREAD_MUTEX_ERRORCHECK relock did not return EDEADLK, got 
> src/functional/pthread_mutex.c:148: PTHREAD_MUTEX_RECURSIVE relock did not succed, got deadlock
> FAIL src/functional/pthread_mutex.exe [status 1]

This suggests broken atomics. It doesn't look like anything qemu-user
could cause (unless it's just failing to emulate the atomics at all).
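
A quick way to check, independent of libc-test (my own sketch, not part
of the suite): hammer a CAS loop from two threads and look for lost
updates.

	#include <pthread.h>
	#include <stdio.h>

	static volatile int cnt;

	static void *work(void *p)
	{
		for (int i = 0; i < 1000000; i++) {
			int old;
			do old = cnt;
			while (!__sync_bool_compare_and_swap(&cnt, old, old+1));
		}
		return 0;
	}

	int main(void)
	{
		pthread_t t1, t2;
		pthread_create(&t1, 0, work, 0);
		pthread_create(&t2, 0, work, 0);
		pthread_join(t1, 0);
		pthread_join(t2, 0);
		/* anything short of 2000000 means lost updates, i.e.
		   broken atomics in the arch bits or in qemu */
		printf("%d (want 2000000)\n", cnt);
		return cnt != 2000000;
	}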

> src/functional/pthread_mutex_pi.c:42: pthread_mutexattr_setprotocol(&ma, PTHREAD_PRIO_INHERIT) failed: Function not implemented
> src/functional/pthread_mutex_pi.c:42: pthread_mutexattr_setprotocol(&ma, PTHREAD_PRIO_INHERIT) failed: Function not implemented
> src/functional/pthread_mutex_pi.c:148: PTHREAD_MUTEX_ERRORCHECK relock did not return EDEADLK, got deadlock
> src/functional/pthread_mutex_pi.c:42: pthread_mutexattr_setprotocol(&ma, PTHREAD_PRIO_INHERIT) failed: Function not implemented
> FAIL src/functional/pthread_mutex_pi-static.exe [timed out]
> src/functional/pthread_mutex_pi.c:42: pthread_mutexattr_setprotocol(&ma, PTHREAD_PRIO_INHERIT) failed: Function not implemented
> src/functional/pthread_mutex_pi.c:42: pthread_mutexattr_setprotocol(&ma, PTHREAD_PRIO_INHERIT) failed: Function not implemented
> src/functional/pthread_mutex_pi.c:148: PTHREAD_MUTEX_ERRORCHECK relock did not return EDEADLK, got deadlock
> src/functional/pthread_mutex_pi.c:42: pthread_mutexattr_setprotocol(&ma, PTHREAD_PRIO_INHERIT) failed: Function not implemented
> src/functional/pthread_mutex_pi.c:151: PTHREAD_MUTEX_RECURSIVE relock did not succed, got deadlock
> src/functional/pthread_mutex_pi.c:87: pthread_mutexattr_setprotocol(&ma, PTHREAD_PRIO_INHERIT) failed: Function not implemented
> src/functional/pthread_mutex_pi.c:24: pthread_mutex_unlock(a[0]) failed: Operation not permitted
> FAIL src/functional/pthread_mutex_pi.exe [signal Segmentation fault]
> src/functional/pthread_robust.c:39: pthread_mutexattr_setrobust(&mtx_a, PTHREAD_MUTEX_ROBUST) failed: (pshared==0, pi==0) Function not implemented (setting robust attribute)
> FAIL src/functional/pthread_robust-static.exe [timed out]
> src/functional/pthread_robust.c:39: pthread_mutexattr_setrobust(&mtx_a, PTHREAD_MUTEX_ROBUST) failed: (pshared==0, pi==0) Function not implemented (setting robust attribute)
> FAIL src/functional/pthread_robust.exe [timed out]

These are all expected on qemu-user: the kernel's prio-inherit and
robust-futex features are not emulated by qemu.
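
musl probes the kernel for these features and reports ENOSYS when the
probe fails, so the failures are detectable up front. A minimal sketch:

	#include <errno.h>
	#include <pthread.h>
	#include <stdio.h>

	int main(void)
	{
		pthread_mutexattr_t a;
		pthread_mutexattr_init(&a);
		/* both return ENOSYS under qemu-user, matching the
		   "Function not implemented" failures above */
		if (pthread_mutexattr_setrobust(&a, PTHREAD_MUTEX_ROBUST) == ENOSYS)
			puts("robust mutexes unsupported");
		if (pthread_mutexattr_setprotocol(&a, PTHREAD_PRIO_INHERIT) == ENOSYS)
			puts("PI mutexes unsupported");
		return 0;
	}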

> src/functional/sem_init.c:19: sem_timedwait(s+1, &ts) failed: Operation timed out
> src/functional/sem_init.c:19: sem_timedwait(s+1, &ts) failed: Operation timed out
> src/functional/sem_init.c:19: sem_timedwait(s+1, &ts) failed: Operation timed out
> src/functional/sem_init.c:49: sem value should be 0, got 3
> FAIL src/functional/sem_init-static.exe [status 1]
> src/functional/sem_init.c:19: sem_timedwait(s+1, &ts) failed: Operation timed out
> src/functional/sem_init.c:19: sem_timedwait(s+1, &ts) failed: Operation timed out
> src/functional/sem_init.c:19: sem_timedwait(s+1, &ts) failed: Operation timed out
> src/functional/sem_init.c:49: sem value should be 0, got 3
> FAIL src/functional/sem_init.exe [status 1]

This is probably broken atomics again.

> src/functional/strptime.c:23: "%F": failed to parse "1856-07-10"
> src/functional/strptime.c:23: "%s": failed to parse "683078400"
> src/functional/strptime.c:47: "%z": failed to parse "+0200"
> src/functional/strptime.c:47: "%z": failed to parse "-0530"
> src/functional/strptime.c:47: "%z": failed to parse "-06"
> FAIL src/functional/strptime-static.exe [status 1]
> src/functional/strptime.c:23: "%F": failed to parse "1856-07-10"
> src/functional/strptime.c:23: "%s": failed to parse "683078400"
> src/functional/strptime.c:47: "%z": failed to parse "+0200"
> src/functional/strptime.c:47: "%z": failed to parse "-0530"
> src/functional/strptime.c:47: "%z": failed to parse "-06"
> FAIL src/functional/strptime.exe [status 1]

These are expected; they exercise functionality musl does not yet
support (future-standard additions, I think).
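
They're trivially reproducible with a direct call; e.g. %z (one of the
future-standard additions) just fails to parse:

	#define _XOPEN_SOURCE 700
	#include <stdio.h>
	#include <time.h>

	int main(void)
	{
		struct tm tm = {0};
		/* strptime returns NULL on a libc without %z support */
		if (!strptime("+0200", "%z", &tm))
			puts("%z unsupported");
		return 0;
	}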

> src/functional/utime.c:38: st.st_atim.tv_sec == 0 failed: 1
> src/functional/utime.c:40: st.st_mtim.tv_sec == 0 failed: 1073741822
> src/functional/utime.c:47: st.st_atim.tv_sec >= t failed: 0
> src/functional/utime.c:48: st.st_mtim.tv_sec == 0 failed: 1073741823
> src/functional/utime.c:55: st.st_mtim.tv_sec >= t failed: 1073741822
> src/functional/utime.c:59: st.st_atim.tv_sec >= t failed: 0
> src/functional/utime.c:60: st.st_mtim.tv_sec >= t failed: 1073741823
> src/functional/utime.c:65: st.st_atim.tv_sec == 1LL<<32 failed: 0
> src/functional/utime.c:66: st.st_mtim.tv_sec == 1LL<<32 failed: 0
> FAIL src/functional/utime-static.exe [status 1]
> src/functional/utime.c:38: st.st_atim.tv_sec == 0 failed: 1
> src/functional/utime.c:40: st.st_mtim.tv_sec == 0 failed: 1073741822
> src/functional/utime.c:47: st.st_atim.tv_sec >= t failed: 0
> src/functional/utime.c:48: st.st_mtim.tv_sec == 0 failed: 1073741823
> src/functional/utime.c:55: st.st_mtim.tv_sec >= t failed: 1073741822
> src/functional/utime.c:59: st.st_atim.tv_sec >= t failed: 0
> src/functional/utime.c:60: st.st_mtim.tv_sec >= t failed: 1073741823
> src/functional/utime.c:65: st.st_atim.tv_sec == 1LL<<32 failed: 0
> src/functional/utime.c:66: st.st_mtim.tv_sec == 1LL<<32 failed: 0
> FAIL src/functional/utime.exe [status 1]

Something looks wrong here, possibly a wrong struct kstat definition in
the arch bits, or it could be a qemu bug.
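
This should be easy to isolate outside libc-test; a hedged sketch that
writes a >32-bit timestamp and reads it back (shifted or truncated
values would point at misplaced time fields in the arch's struct kstat,
outright garbage more at a qemu syscall-translation bug):

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/stat.h>
	#include <unistd.h>

	int main(void)
	{
		struct timespec ts[2] = {{(time_t)1<<32, 0}, {(time_t)1<<32, 0}};
		struct stat st;
		int fd = open("testfile", O_CREAT|O_RDWR, 0600);
		if (fd < 0) return perror("open"), 1;
		if (futimens(fd, ts)) return perror("futimens"), 1;
		if (fstat(fd, &st)) return perror("fstat"), 1;
		printf("atime=%lld mtime=%lld want=%lld\n",
		       (long long)st.st_atim.tv_sec,
		       (long long)st.st_mtim.tv_sec,
		       (long long)((time_t)1<<32));
		unlink("testfile");
		return 0;
	}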

> src/math/ucb/sqrt.h:47: bad fp exception: RN sqrt(0x1.fffffffffffffp+1023)=0x1.fffffffffffffp+511, want INEXACT got 0
> [...]
> src/math/special/sqrt.h:74: bad fp exception: RN sqrtl(0x1.9b294f88p-1015)=0x1.cad197e28e85bp-508, want INEXACT got 0
> FAIL src/math/sqrtl.exe [status 1]

These are likely all general (not arch-specific) clang floating point
bugs. We should check whether they also happen when compiling other
archs that lack asm versions of sqrt.

Alternatively it might be a qemu fpu emulation bug. I've seen this
kind of thing (inconsistent across arch) on other archs under
qemu-user.
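
A minimal check of whether INEXACT survives the whole stack, i.e. the
clang-built sqrt plus qemu's fpu emulation (a sketch, assuming the
target defines FE_INEXACT; link with -lm):

	#include <fenv.h>
	#include <math.h>
	#include <stdio.h>

	int main(void)
	{
		volatile double x = 0x1.fffffffffffffp+1023;
		feclearexcept(FE_ALL_EXCEPT);
		volatile double r = sqrt(x);	/* result is inexact */
		(void)r;
		printf("inexact raised: %d\n", !!fetestexcept(FE_INEXACT));
		return 0;
	}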

> src/regression/malloc-brk-fail.c:41: malloc(10000) failed (eventhough 64k is available to mmap): Out of memory
> FAIL src/regression/malloc-brk-fail-static.exe [status 1]
> FAIL src/regression/malloc-brk-fail.exe [timed out]

This test does not seem to work on modern kernels, much less under
qemu-user.

> src/regression/pthread-robust-detach.c:33: pthread_mutexattr_setrobust(&mtx_a, 1) failed: (pshared==0) got 38 "Function not implemented" want 0 "No error information"
> src/regression/pthread-robust-detach.c:50: pthread_mutex_timedlock(&mtx, &ts) failed: (pshared==0) got 110 "Operation timed out" want 130 "Previous owner died"
> src/regression/pthread-robust-detach.c:33: pthread_mutexattr_setrobust(&mtx_a, 1) failed: (pshared==1) got 38 "Function not implemented" want 0 "No error information"
> src/regression/pthread-robust-detach.c:50: pthread_mutex_timedlock(&mtx, &ts) failed: (pshared==1) got 110 "Operation timed out" want 130 "Previous owner died"
> FAIL src/regression/pthread-robust-detach-static.exe [status 1]
> src/regression/pthread-robust-detach.c:33: pthread_mutexattr_setrobust(&mtx_a, 1) failed: (pshared==0) got 38 "Function not implemented" want 0 "No error information"
> src/regression/pthread-robust-detach.c:50: pthread_mutex_timedlock(&mtx, &ts) failed: (pshared==0) got 110 "Operation timed out" want 130 "Previous owner died"
> src/regression/pthread-robust-detach.c:33: pthread_mutexattr_setrobust(&mtx_a, 1) failed: (pshared==1) got 38 "Function not implemented" want 0 "No error information"
> src/regression/pthread-robust-detach.c:50: pthread_mutex_timedlock(&mtx, &ts) failed: (pshared==1) got 110 "Operation timed out" want 130 "Previous owner died"
> FAIL src/regression/pthread-robust-detach.exe [status 1]

Expected on qemu-user.

> src/regression/pthread_cond-smasher.c:140: main thread in phase 0 (0 threads inside), finished waiting: Operation timed out
> FAIL src/regression/pthread_cond-smasher-static.exe [status 1]
> src/regression/pthread_cond-smasher.c:104: thread 5 in phase 0 (0) finished waiting: Operation timed out
> FAIL src/regression/pthread_cond-smasher.exe [status 1]

These could be atomics bugs, or just the emulation being slow.

> src/regression/pthread_cond_wait-cancel_ignored.c:52: pthread_cond_wait did not act on cancellation
> FAIL src/regression/pthread_cond_wait-cancel_ignored-static.exe [status 1]
> src/regression/pthread_once-deadlock.c:41: sem_timedwait(s,&ts) failed: Operation timed out
> src/regression/pthread_once-deadlock.c:44: pthread_once deadlocked
> FAIL src/regression/pthread_once-deadlock-static.exe [status 1]

These are definitely atomic bugs (either in the arch bits or in qemu).

Rich
