From: Rich Felker
Subject: Re: Proposed approach for malloc to deal with failing brk
Date: Tue, 1 Apr 2014 13:03:54 -0400
To: musl@lists.openwall.com
Message-ID: <20140401170354.GA8145@brightrain.aerifal.cx>
In-Reply-To: <20140401164057.GA8347@cachalot>

On Tue, Apr 01, 2014 at 08:40:57PM +0400, Vasily Kulikov wrote:
> On Sun, Mar 30, 2014 at 20:41 -0400, Rich Felker wrote:
> > We want brk. This is not because "brk is faster than mmap", but
> > because it takes a lot of work to replicate what brk does using
> > mmap, and there's no hope of making a complex dance of multiple
> > syscalls equally efficient. My best idea for emulating brk was to
> > mmap a huge PROT_NONE region and gradually mprotect it to
> > PROT_READ|PROT_WRITE,
>
> What problem are you trying to solve via PROT_NONE -> PROT_WRITE?
> Why not simply mmap it as PROT_WRITE from the start? Linux will not
> allocate physical pages until the first access, so you don't lose
> physical memory when it is not actually used.

Commit accounting. Committing 100 megs to a process that has only
asked for (and is only going to use) 100k is harmful because, in
effect, it forces people to turn on (or leave on) overcommit.

> > but it turns out this is what glibc does for per-thread arenas and
> > it's really slow, probably because it involves splitting one VMA
> > and merging into another.
>
> Yes, both VMA split/merge and PTE/etc. changes.

Well, the page table changes happen even if you just use madvise to
zero/'free' the memory and then take a page fault on write to get it
back, and that approach is very fast compared to the mprotect one, at
least as far as I can tell. So I think the VMA split/merge is the big
issue.

Rich
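
P.S. For anyone who wants to experiment with the difference, here is a
minimal sketch of the two growth strategies discussed above. This is my
own illustration, not code from musl or glibc; the reservation size and
names are made up, and it assumes Linux semantics for commit accounting,
anonymous mappings, and madvise.

    /* sketch.c - PROT_NONE+mprotect growth vs. madvise reuse.
       Illustrative only; sizes and names are arbitrary. */
    #define _DEFAULT_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define RESERVE (256*1024*1024UL)  /* big reservation, not committed */

    int main(void)
    {
        long pagesz = sysconf(_SC_PAGESIZE);

        /* brk emulation: reserve address space as PROT_NONE (no commit
           charge), then mprotect a growing prefix to PROT_READ|PROT_WRITE
           as the "heap" expands. Each mprotect call splits/merges VMAs,
           which is the cost under discussion. */
        unsigned char *heap = mmap(0, RESERVE, PROT_NONE,
                                   MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
        if (heap == MAP_FAILED) return 1;

        size_t committed = 16*pagesz;
        if (mprotect(heap, committed, PROT_READ|PROT_WRITE)) return 1;
        memset(heap, 0xa5, committed);       /* actually use the space */

        /* madvise alternative: pages stay PROT_READ|PROT_WRITE; "freeing"
           is MADV_DONTNEED (physical pages are dropped), and the next
           write just takes a page fault to get zero pages back, with no
           VMA split/merge involved. */
        if (madvise(heap, committed, MADV_DONTNEED)) return 1;
        heap[0] = 1;                         /* fault one page back in */

        printf("page size %ld, committed %zu bytes\n", pagesz, committed);
        return 0;
    }

Running it under strace makes the point visible: growth in the first
style costs an mprotect call per expansion, while reuse after
MADV_DONTNEED shows no syscall at all, only a page fault.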