From: Rich Felker
To: musl@lists.openwall.com
Subject: Non-candidate allocator designs, things to learn from them
Date: Tue, 5 Nov 2019 22:46:05 -0500
Message-ID: <20191106034605.GB16318@brightrain.aerifal.cx>
In-Reply-To: <20191022174051.GA24726@brightrain.aerifal.cx>

On Tue, Oct 22, 2019 at 01:40:51PM -0400, Rich Felker wrote:
> Alongside time64 and other enhancements, over the next few months I'll
> be working on a major project to replace musl's malloc, which has
> known fundamental problems that can't really be fixed without
> significant redesign.
>
> [...]
>
> I'll follow up soon with some thoughts/findings so far on designs that
> might work for us, or that don't work but have interesting properties
> that suggest they may lead to workable designs.

The following was written incrementally over the past week or so, and as I indicated before, it's not so much about a design that works as about identifying properties that will motivate a good design.

The current dlmalloc-style split/merge of free chunks is problematic in a lot of ways (complex interaction with fragmentation, hard to make efficient with threads because access to neighbors being merged has to be synchronized, etc.), so I want to first explore some simple designs that completely lack it.

1. Extending a bump allocator

The simplest of all is a bump allocator with free stacks. This is just one step beyond musl's current simple_malloc (used for static linking without free). The only changes are that you round up requests to discrete size classes, and then implement free as pushing onto a free stack, with one stack per size class. Because it's a stack, a singly linked list suffices, and you can push/pop solely with atomics.
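To make that concrete, here's a minimal sketch of what such an allocator might look like. This is purely illustrative, not proposed code: the names (xmalloc/xfree), the fixed size-class table, the preallocated arena, and the size-aware free are all simplifying assumptions (a real free() would have to recover the size class from a header or out-of-band metadata), and the naive CAS pop shown here has the classic ABA problem.

    #include <stdatomic.h>
    #include <stddef.h>
    #include <stdint.h>

    #define NCLASSES 8
    static const size_t class_size[NCLASSES] =
        { 16, 32, 64, 128, 256, 512, 1024, 2048 };

    /* One lock-free LIFO free stack per size class; each freed chunk's
     * first word links to the next chunk on the stack. */
    static _Atomic uintptr_t free_stack[NCLASSES];

    /* Bump pointer over a preallocated arena; arena setup (e.g. via
     * mmap or brk, as in simple_malloc) is omitted here. */
    static _Atomic uintptr_t bump_cur;
    static uintptr_t bump_end;

    static int size_class(size_t n)
    {
        for (int i = 0; i < NCLASSES; i++)
            if (n <= class_size[i]) return i;
        return -1; /* too large; a real design would fall back to mmap */
    }

    void *xmalloc(size_t n)
    {
        int c = size_class(n);
        if (c < 0) return 0;

        /* First try popping a previously-freed chunk of this class.
         * NB: this naive CAS pop is ABA-unsafe; a real version needs
         * e.g. a generation counter alongside the stack head. */
        uintptr_t head = atomic_load(&free_stack[c]);
        while (head) {
            uintptr_t next = *(uintptr_t *)head;
            if (atomic_compare_exchange_weak(&free_stack[c], &head, next))
                return (void *)head;
        }

        /* Free stack empty: bump-allocate a fresh chunk. */
        uintptr_t p = atomic_fetch_add(&bump_cur, class_size[c]);
        if (p + class_size[c] > bump_end) return 0; /* arena exhausted */
        return (void *)p;
    }

    /* Size-aware free, to keep the sketch header-free. */
    void xfree(void *p, size_t n)
    {
        int c = size_class(n);
        uintptr_t head = atomic_load(&free_stack[c]);
        do *(uintptr_t *)p = head;
        while (!atomic_compare_exchange_weak(&free_stack[c], &head,
                                             (uintptr_t)p));
    }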
One big pro is that there is no progressive fragmentation. If you successfully make a complex sequence of allocations, free some subset, then allocate the previously-freed sizes again in a different order, it's guaranteed to succeed. This is a really desirable property for long-lived processes in certain kinds of memory-constrained systems.

Of course, the permanence of how memory is split up can also be a con. After successfully performing very large numbers of small allocations and freeing them, future large allocations may not be possible, and vice versa.

Stack order is also undesirable for catching or mitigating double-free and use-after-free; LRU order will give the best results there. Switching to LRU/FIFO order is possible, but it no longer admits simple atomics for alloc/free.

2. Zoned size classes

The above bump allocators with free essentially partition address space into separate memory usable for each size class in a way that's adaptive (one-time-per-run) and completely unorganized. Another simple type of allocator partitions (virtual) address space *statically*, reserving for each size class a large range of addresses (mapped PROT_NONE, later mprotectable to writable storage) usable as slabs/object pools for objects of that particular size; a rough sketch of this mechanism appears at the end of this mail. This type of design appears in GrapheneOS's hardened_malloc:

https://github.com/GrapheneOS/hardened_malloc

At first this seems pretty magical -- you automatically get a complete lack of fragmentation... of *virtual* memory. It's possible to fragment the zones themselves, requiring far more physical memory than the logical amount allocated, but the physical memory usage by a given size class is bounded by the peak usage in that size class at any point in the program's lifetime. So that's pretty good.

In order to be able to do this, however, your virtual address space has to be vastly larger than the physical memory available. Ideally, each size class's zone should be larger than the total amount of physical memory (RAM+swap) you might have, so that you're not imposing an arbitrary limit on what the application can do with it. Figuring around 50 size classes and leaving at least half of virtual address space free for mmap/large allocations, your virtual address space should be at least 100x larger than physical memory availability for this to be viable (50 zones of at least 1x physical memory each, doubled so the zones occupy at most half the address space). For 32-bit, with roughly 3 GB of usable address space, that means it's not viable beyond around 30 MB of physical memory. Ouch. Indeed, GrapheneOS's hardened_malloc is explicitly only for 64-bit archs, and this is why.

Aside from that, this kind of design also suffers from extreme over-allocation in small programs, and it's not compatible with nommu at all (since it relies on the mmu to cheat fragmentation). For example, if you allocate just one object in each of 50 size classes, you'll already be using at least 50 pages (200k at 4k per page).

One advantage of this approach is that it admits nice out-of-band metadata schemes, which let you make strong guarantees about the consistency of the allocator state under heap corruption/overflows.

So: neither of these approaches is appropriate for the future of musl's malloc. They're not sufficiently general-purpose/generalizable to all the environments and archs musl supports. But they do suggest properties worth trying to achieve in an alternative. I've made some good progress on this, which I'll follow up on in a separate post.
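Here's the rough sketch of the zoned reservation mechanism promised in section 2 above. Again, this is a hypothetical illustration, not hardened_malloc's actual code: the names (init_zones, zone_alloc, ZONE_SIZE), the 4k page size, and single-threaded use are all assumptions, and the per-class tracking that would let freed slots be reused is omitted.

    #define _GNU_SOURCE
    #include <sys/mman.h>
    #include <stddef.h>

    #define NCLASSES  50
    #define ZONE_SIZE ((size_t)1 << 37) /* 128GB of addresses per class;
                                           64-bit only, as discussed above */
    #define PAGE      4096

    static unsigned char *zone_base[NCLASSES]; /* start of each reservation */
    static size_t zone_used[NCLASSES];         /* bytes handed out so far */

    /* One-time setup: reserve, but don't commit, a huge PROT_NONE range
     * for each size class. MAP_NORESERVE keeps the reservation from
     * being charged against overcommit accounting. */
    static int init_zones(void)
    {
        for (int i = 0; i < NCLASSES; i++) {
            void *p = mmap(0, ZONE_SIZE, PROT_NONE,
                           MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0);
            if (p == MAP_FAILED) return -1;
            zone_base[i] = p;
        }
        return 0;
    }

    /* Bump-allocate one object in class c, committing backing pages on
     * demand via mprotect. A real allocator would also reuse freed
     * slots before growing the zone; that reuse is what bounds physical
     * memory usage by the per-class peak rather than the cumulative
     * total of allocations. */
    static void *zone_alloc(int c, size_t objsize)
    {
        size_t off = zone_used[c];
        if (ZONE_SIZE - off < objsize) return 0;
        size_t lo = off & ~(size_t)(PAGE-1);
        size_t hi = (off + objsize + PAGE-1) & ~(size_t)(PAGE-1);
        if (mprotect(zone_base[c] + lo, hi - lo, PROT_READ|PROT_WRITE) < 0)
            return 0;
        zone_used[c] = off + objsize;
        return zone_base[c] + off;
    }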