Date: Mon, 11 May 2020 13:51:39 -0400
From: Rich Felker
To: musl@lists.openwall.com
Message-ID: <20200511175138.GK21576@brightrain.aerifal.cx>
References: <20200510180934.GV21576@brightrain.aerifal.cx>
In-Reply-To: <20200510180934.GV21576@brightrain.aerifal.cx>
Subject: Re: [musl] mallocng progress and growth chart

On Sun, May 10, 2020 at 02:09:34PM -0400, Rich Felker wrote:
> I just pushed a big queued set of changes to mallocng, including some
> directions that didn't go well and were mostly reversed, but with
> improvements to the old approach. These include:
>
> - Fixes to definitions of some size classes where counts didn't fit in
>   a power-of-two-sized mmap or containing group.
>
> - Fixing a bad interaction between realloc and coarse size classing
>   (it always caused malloc-memcpy-free due to the size class not
>   matching; this was what was making apk and some other things
>   painfully slow).
>
> - Coarse size classing down to class 4 (except 6->7). This tends to
>   save memory in low-usage programs.
> - Improvements to the logic for single-slot and low-count groups, to
>   avoid situations that waste lots of memory as well as ones that
>   unnecessarily do eager allocation.
>
> - Flipping the definition/dependency of aligned_alloc/memalign so
>   that the legacy nonstandard function is not used in the definition
>   of the standard functions.
>
> - New bounce counter with sequence numbers and decay, so that bounces
>   with lots of intervening expensive (mmap/munmap) ops don't get
>   accounted, and so that classes in the bouncing state (retaining a
>   free group unnecessarily to avoid the cost of un- and re-mapping)
>   can return to the non-bouncing state.

I left off one fairly important one:

- Hardening enhancement: clearing the pointer back to the group's meta
  struct before freeing the group, so that the next time the memory is
  handed out in an allocation, the uninitialized memory doesn't contain
  pointers to malloc internal structures. For example:

	free(malloc(42));
	p = malloc(492);

  would include a pointer to the freed meta structure for the nested
  group, at offset 0 or 16 if I'm not mistaken. This could allow an
  uninitialized-memory leak plus a second, difficult-to-exploit bug to
  be transformed into an easier-to-exploit bug.

This also suggested to me that we could clear the pointer back to meta
whenever a group consists entirely of free slots but has not itself
been freed, and restore it only when the group is subsequently used.
This might have minor hardening benefits too, but it highlights that we
could use MADV_FREE on such groups, which might be valuable at large
sizes. OTOH, MADV_FREE has some anti-hardening properties (it can lose
offset-cycling state under attacker-controlled load, allowing the
attacker to obtain more predictable addresses) that make it less
attractive to use here.