From: Rich Felker
Newsgroups: gmane.linux.lib.musl.general
Subject: Re: malloc not behaving well when brk space is limited?
Date: Sat, 29 Mar 2014 14:56:19 -0400
Message-ID: <20140329185619.GX26358@brightrain.aerifal.cx>
References: <20140329170032.GJ8221@example.net> <20140329191502.072c07f9@vostro> <20140329172212.GW26358@brightrain.aerifal.cx> <20140329180229.GL8221@example.net>
Reply-To: musl@lists.openwall.com
To: u-igbb@aetey.se
Cc: musl@lists.openwall.com
In-Reply-To: <20140329180229.GL8221@example.net>

On Sat, Mar 29, 2014 at 06:02:29PM +0000, u-igbb@aetey.se wrote:
> > Unfortunately the approach I want to use with mmap seems to be what
> > glibc uses for its thread-local arenas, and it performs something like
> > 2 to 10 times worse than brk...
> > So unless we can solve that, I don't
> > think it's a good option. It could be a fallback, but I still don't
> > want PIE binaries running that much slower just because the kernel is
> > doing something wacky and wrong.
>
> It is a _lot_ better to run an order of magnitude slower than to fail
> totally, isn't it? If you have any code doing mmap fallback, I would be
> willing to test it and hopefully use it.
>
> > We need a good solution for this problem but I don't have one yet.
>
> This looks like a showstopper. The applications do not work properly
> as-is, and replacing malloc or adding mmap support from scratch myself
> is too high a threshold, given the time constraints. :(
>
> In other words, a bad solution would be much better than no solution.
>
> Otherwise my impression of musl so far is excellent. Thanks
> for your work on it.

Yes, I understand. I didn't mean that this can't or shouldn't be fixed,
just that the changes I had hoped to make to malloc in the 1.1.x series
are not looking like the right direction for fixing this, so we're back
to the question of what to do. If you need a fix (or at least a
workaround) right away, let me know and I'll see if I can think of
anything.

Rich
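[Editor's note: for readers unfamiliar with the trade-off being discussed, the brk-first, mmap-fallback strategy can be sketched roughly as below. This is a hypothetical illustration only, not musl's actual malloc code; the function name expand_heap() and its structure are assumptions made for the example.]

```c
/*
 * Hypothetical sketch of a brk-first, mmap-fallback heap expansion.
 * NOT musl's actual implementation; expand_heap() is an assumed name.
 */
#define _GNU_SOURCE
#include <stddef.h>
#include <unistd.h>
#include <sys/mman.h>

/*
 * Try to grow the heap by len bytes via sbrk(). If the kernel refuses
 * (e.g. another mapping sits just above the brk, as can happen with
 * PIE binaries), fall back to an anonymous private mmap.
 */
void *expand_heap(size_t len)
{
    void *p = sbrk(len);
    if (p != (void *)-1)
        return p;   /* brk succeeded: cheap, contiguous growth */

    /*
     * Fallback path: considerably slower in the scenario described in
     * the thread, but it still works when brk space is exhausted.
     */
    p = mmap(0, len, PROT_READ | PROT_WRITE,
             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    return p == MAP_FAILED ? NULL : p;
}
```

Note that a real allocator must also track the mmap-obtained regions separately, since they are not contiguous with the brk heap; that bookkeeping is part of why the fallback path costs more than plain brk growth.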