From: Rich Felker
To: musl@lists.openwall.com
Cc: u-igbb@aetey.se
Subject: Re: malloc not behaving well when brk space is limited?
Date: Sat, 29 Mar 2014 13:22:12 -0400
Message-ID: <20140329172212.GW26358@brightrain.aerifal.cx>
In-Reply-To: <20140329191502.072c07f9@vostro>

On Sat, Mar 29, 2014 at 07:15:02PM +0200, Timo Teras wrote:
> On Sat, 29 Mar 2014 17:00:32 +0000
> u-igbb@aetey.se wrote:
>
> > Background:
> > Compiling a native musl-based toolchain for ia32 on Linux 2.6+.
> > Using the standalone dynamic loader mode.
> > (The latter seems to lead to quite limited heap space, by kernel
> > behaviour/design.)
> >
> > I encounter out of memory errors. A look at the malloc source does not
> > find any fallback to mmap when the heap is exhausted. What would you
> > suggest as a suitable approach to make it work?
> >
> > Has somebody possibly already encountered and solved this with musl?
>
> Yes, been there, done that. I patched the kernel.
>
> The thread that follows on sending the patch upstream is e.g. at:
> https://groups.google.com/forum/#!msg/linux.kernel/mOf1EWrrhZc/bl96BAE4fyQJ
>
> Also, using a grsec kernel would mostly fix the issue, since grsec
> creates a "better" memory layout for PIE binaries.
>
> > I also see reports about a related out of memory problem with
> > PIE executables, which means a solution might help many musl users.
> >
> > The other standard libraries I am using (glibc, uclibc) seem to
> > happily switch to allocation from mmap() when the heap is full. I
> > understand that this costs some code and performance, but a breakdown
> > is no good either.
> >
> > Any ideas? Maintaining and using an external libmalloc or substituting
> > malloc in musl? This feels like quite a burden...
> > (Would musl internal calls to malloc notice the external library
> > and resolve to its entry points instead of the internal malloc?)
>
> musl does not support an external malloc. musl internal calls to
> malloc() are not overridable.
>
> I think you need to fix the kernel, rewrite the allocator in musl, or
> add the fallback code to mmap - but dalias said it's "hard". Perhaps
> it should still be reconsidered.

Unfortunately the approach I want to use with mmap seems to be what
glibc uses for its thread-local arenas, and it performs something like
2 to 10 times worse than brk... So unless we can solve that, I don't
think it's a good option. It could be a fallback, but I still don't
want PIE binaries running that much slower just because the kernel is
doing something wacky and wrong.
We need a good solution for this problem, but I don't have one yet.

Rich