From: Rich Felker
To: musl@lists.openwall.com
Subject: malloc and linux memory layout
Date: Wed, 10 Aug 2011 15:56:58 -0400

Since Luka has had a number of questions about when/why malloc could
fail, heap/brk/mmap allocation, etc., I thought I'd post a quick
description of how the virtual address space is managed on Linux.

Each process has its own 32- or 64-bit virtual address space.
Initially, from bottom to top, it looks something like:

[low unmappable pages]
[main program text (code) segment]
[main program data segment]
[brk segment (heap)]
[....lots of open virtual address space...]
[mmap zone]
[main thread stack]
[reserved for kernelspace use]

(Note that there will be small randomized amounts of empty/unused
address space between these regions if ASLR is enabled.)

The brk segment starts just above the program's static data and grows
upward into the big open space. The mmap zone (where mmaps are placed
by default) starts just below the stack limit and continues downward
as more mappings are made. Previously-mapped ranges here become
available for reuse after they're unmapped; the kernel manages that.
The brk is not allowed to run into any other mapping when it grows
upward, so if there's a mapping right above the brk segment, attempts
to grow it will always fail. Shared libraries are mapped in the mmap
zone just like any other mmap.

Typically malloc implementations use a mix of brk and mmap to satisfy
requests: individual mmaps for large allocations (letting the kernel
take care of them and making it easy to return the whole thing to the
system on free), and carving up the brk segment (and possibly
additional mmapped regions) for small allocations, which avoids the
overhead of syscalls and page alignment. musl only uses brk for small
allocations, but glibc's ptmalloc and others are happy to use mmap to
make a new "heap region" if the brk can't grow. The difference will
generally only be seen once virtual address space is nearly exhausted.

Hope this helps.

Rich
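
P.S. Here's a quick way to see the small-vs-large split for yourself.
This is just an illustrative sketch, not code from any malloc
implementation; the 64-byte and 512k request sizes are guesses chosen
to land on either side of the mmap threshold, which varies by
implementation.

#define _GNU_SOURCE  /* for sbrk() */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	void *brk_top = sbrk(0);          /* current end of the brk segment */
	void *small = malloc(64);         /* small: likely carved from the heap */
	void *large = malloc(512 * 1024); /* large: likely its own mmap */

	printf("brk end: %p\n", brk_top);
	printf("small:   %p\n", small);   /* expect: near the brk */
	printf("large:   %p\n", large);   /* expect: way up in the mmap zone */

	free(large);                      /* an mmapped chunk can be handed
	                                     straight back to the kernel */
	free(small);
	return 0;
}

You can also look at /proc/self/maps (or /proc/$PID/maps for a running
process) to see all of these regions laid out directly.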