From mboxrd@z Thu Jan 1 00:00:00 1970
From: Rich Felker <dalias@libc.org>
Newsgroups: gmane.linux.lib.musl.general
Subject: Re: pthread_getattr_np doing loads of mremaps on ARM, MIPS under QEMU user-mode
Date: Thu, 15 Jun 2017 10:19:27 -0400
Message-ID: <20170615141927.GN1627@brightrain.aerifal.cx>
Reply-To: musl@lists.openwall.com
To: musl@lists.openwall.com
User-Agent: Mutt/1.5.21 (2010-09-15)
Content-Type: text/plain; charset=us-ascii

On Thu, Jun 15, 2017 at 04:08:03PM +0300, Tobias Koch wrote:
> Hi,
>
> running under QEMU user mode, Ruby 2.4 (and it seems also Guile) ARM
> and MIPS binaries take a long time to start and eventually crash.
> The long startup seems to come from this loop
>
>     while (mremap(p-l-PAGE_SIZE, PAGE_SIZE, 2*PAGE_SIZE,
>                   0)==MAP_FAILED && errno==ENOMEM)
>         l += PAGE_SIZE;
>
> being executed hundreds of times in pthread_getattr_np. Any idea
> what this could be about, other than it maybe being a QEMU bug?

This is not particularly unusual (it's the best way we could find to
measure the initial thread's stack size), but it's possible that QEMU
user mode is botching emulation of mremap and thus reporting a wrong
stack size. Can you send a full strace log of the crash (qemu-arm
-strace, and perhaps also a real strace of the qemu process with the
host strace utility)? That should shed some light on what's happening.

> The subsequent crash then occurs after memory set aside by alloca is
> accessed. I think this may be unrelated.

It seems plausible either that it's related or that it's unrelated.

Rich