Date: Wed, 23 Aug 2023 09:29:48 -0400
From: Rich Felker
To: Mike Granby
Cc: "musl@lists.openwall.com"
Message-ID: <20230823132948.GX4163@brightrain.aerifal.cx>
Subject: Re: [musl] sizeof(pthread_mutex_t) on aarch64

On Wed, Aug 23, 2023 at 02:09:35AM +0000, Mike Granby wrote:
> I've been running glibc-compiled programs on Alpine, and thus on
> musl, with a high level of success, but I just hit a wall when
> working on a Raspberry Pi running aarch64 Alpine. I tracked the
> issue down to a difference in the size of pthread_mutex_t. It
> appears that musl uses 6 ints for platforms with 32-bit longs and
> 10 ints for those with 64-bit longs, and this seems to match glibc
> on all of the platforms I've played with to date. But on aarch64,
> it appears that glibc uses 48 bytes rather than the 40 bytes that
> musl expects. This doesn't actually cause an issue in many cases,
> as the application just allocates too much space, but if you're
> using inlining and std::ofstream, for example, you end up with the
> inline code in the application having a different layout for the
> file buffer object than the one implemented on the target platform.
> Now, perhaps the answer is just, well, stop expecting to run glibc
> code on musl, but given that aarch64 seems to be an outlier in my
> experience to date, I thought I'd mention it.

I guess that's interesting to know, but not something actionable.
It's not like we can change and break ABI for the sake of
glibc-ABI-compat, nor would we want to make every program use even
more excess memory for mutexes than it already does.

At some point the decision was made, based on our existing practice
at the time and possibly loosely on glibc doing the same, to have the
pthread types be arch-independent and depend only on wordsize. This
determined the ABI for all archs added later, without the types being
evaluated for glibc-ABI-compat, since they were no longer per-arch
types.

FWIW, I think the program would likely "work" with a glibc version of
libstdc++. However, there are a lot of other places where the
ABI-compat breaks down, like mismatched stat structs, ipc structs,
etc.

The future roadmap is for glibc-ABI-compat handling to be shifted out
of libc to the gcompat package, which could provide some sort of
shims (and which musl's ldso would assist in shimming by letting a
delegated library interpose on modules that have a libc.so.6
dependency), and that might make something like this work.

Rich
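
P.S. For anyone who wants to confirm what a given toolchain/libc pair
is actually using, a minimal test program (plain standard C, nothing
musl-specific assumed) prints the size directly:

    #include <pthread.h>
    #include <stdio.h>

    int main(void)
    {
        /* musl reports 40 on any arch with 64-bit longs (10 ints),
         * 24 on 32-bit-long archs (6 ints); glibc reports 48 on
         * aarch64 but 40 on x86_64. */
        printf("sizeof(pthread_mutex_t) = %zu\n",
               sizeof(pthread_mutex_t));
        return 0;
    }

Built and run once against each libc (e.g. a glibc cross toolchain
vs. a native Alpine one), the mismatch described above shows up as
48 vs. 40 on aarch64.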
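
P.P.S. The wordsize-only dependence mentioned above is visible in the
type definition itself; quoting roughly from memory (see alltypes.h
in the musl source tree for the authoritative version):

    typedef struct {
        union {
            int __i[sizeof(long) == 8 ? 10 : 6];
            volatile int __vi[sizeof(long) == 8 ? 10 : 6];
            volatile void *volatile __p[sizeof(long) == 8 ? 5 : 6];
        } __u;
    } pthread_mutex_t;

Nothing in there is per-arch: on any 64-bit-long target the object is
10 ints (40 bytes), regardless of what glibc happened to choose for
that arch.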