From: Rich Felker <dalias@libc.org>
To: musl@lists.openwall.com
Subject: Re: [musl] mallocng progress and growth chart
Date: Mon, 18 May 2020 14:53:52 -0400
Message-ID: <20200518185351.GF21576@brightrain.aerifal.cx>
In-Reply-To: <20200517033025.GQ21576@brightrain.aerifal.cx>

On Sat, May 16, 2020 at 11:30:25PM -0400, Rich Felker wrote:
> Another alternative for avoiding eager commit at low usage, which
> works for all but nommu: when adding groups with nontrivial slot count
> at low usage, don't activate all the slots right away. Reserve vm
> space for 7 slots of a 7x4672 group, but only unprotect the first 2 pages,
> and treat it as a group of just 1 slot until there are no slots free
> and one is needed. Then, unprotect another page (or more if needed to
> fit another slot, as would be needed at larger sizes) and adjust the
> slot count to match. (Conceptually; implementation-wise, the slot
> count would be fixed, and there would just be a limit on the number of
> slots made available when transformed from "freed" to "available" for
> activation.)
> 
> Note that this is what happens anyway with physical memory as clean
> anonymous pages are first touched, but (1) doing it without explicit
> unprotect over-counts the not-yet-used slots for commit charge
> purposes and breaks tightly-memory-constrained environments (global
> commit limit or cgroup) and (2) when all slots are initially available
> as they are now, repeated free/malloc cycles for the same size will
> round-robin all the slots, touching them all.
> 
> Here, property (2) is of course desirable for hardening at moderate to
> high usage, but at low usage UAF tends to be less of a concern
> (because you don't have complex data structures with complex lifetimes
> if you hardly have any malloc).
> 
> Note also that (2) could be solved without addressing (1) just by
> skipping the protection aspect of this idea and only using the
> available-slot-limiting part.
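
For concreteness, here is a rough sketch of the lazy-activation scheme
described above. This is illustrative only, not the actual mallocng
code: the struct and function names, the fixed 4k page size, and the
policy of unprotecting exactly as many pages as the next slot needs
are all assumptions made for the example.

#define _DEFAULT_SOURCE
#include <stddef.h>
#include <sys/mman.h>

#define PGSZ 4096  /* assumed page size, for the sketch only */

struct grp {
	unsigned char *base;  /* start of the PROT_NONE reservation */
	size_t slot_size;     /* e.g. 4672 */
	size_t total_slots;   /* e.g. 7 */
	size_t active_slots;  /* slots made available so far */
	size_t mapped;        /* bytes already PROT_READ|PROT_WRITE */
};

/* Reserve address space for the whole group, but commit nothing yet. */
static int grp_reserve(struct grp *g, size_t slot_size, size_t total_slots)
{
	size_t len = (slot_size*total_slots + PGSZ-1) & -(size_t)PGSZ;
	void *p = mmap(0, len, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) return -1;
	*g = (struct grp){ .base = p, .slot_size = slot_size,
		.total_slots = total_slots };
	return 0;
}

/* Activate one more slot, unprotecting only as many pages as needed to
 * cover it; the untouched tail stays PROT_NONE and does not contribute
 * to commit charge. Activating the first 4672-byte slot unprotects the
 * first 2 pages, matching the description above. */
static void *grp_activate_slot(struct grp *g)
{
	if (g->active_slots == g->total_slots) return 0;
	size_t end = (g->active_slots+1)*g->slot_size;
	size_t need = (end + PGSZ-1) & -(size_t)PGSZ;
	if (need > g->mapped) {
		if (mprotect(g->base+g->mapped, need-g->mapped,
			PROT_READ|PROT_WRITE)) return 0;
		g->mapped = need;
	}
	return g->base + g->active_slots++*g->slot_size;
}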

One abstract way of thinking about the above is that it's just a
per-size-class bump allocator, pre-reserving enough virtual address
space to end sufficiently close to a page boundary that there's no
significant memory waste. This is actually fairly elegant, and might
obsolete some of the other measures taken to avoid overly eager
allocation. So this might be a worthwhile direction to pursue.
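
As an illustration of the "end close to a page boundary" property
(numbers only, not mallocng's actual sizing logic): 7 slots of 4672
bytes come to 32704 bytes, only 64 bytes short of 8 whole 4k pages, so
reserving the full group wastes almost nothing. A toy helper that
picks such a slot count might look like this, where max_slots and the
assumption of 4k pages are made up for the example:

/* Illustrative only: choose the slot count (up to max_slots) whose
 * total size ends closest to a page boundary, minimizing tail waste. */
static int pick_slot_count(size_t slot_size, int max_slots)
{
	const size_t pgsz = 4096;  /* assumed page size */
	int best = 1;
	size_t best_waste = (size_t)-1;
	for (int n = 1; n <= max_slots; n++) {
		size_t total = n*slot_size;
		size_t waste = ((total + pgsz-1) & -pgsz) - total;
		if (waste < best_waste) {
			best_waste = waste;
			best = n;
		}
	}
	return best;  /* pick_slot_count(4672, 8) == 7: 32704 of 32768 bytes used */
}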


Thread overview: 12+ messages
2020-05-10 18:09 Rich Felker
2020-05-11 17:51 ` Rich Felker
2020-05-16  0:29 ` Rich Felker
2020-05-16  3:29   ` Rich Felker
2020-05-17  3:30     ` Rich Felker
2020-05-18 18:53       ` Rich Felker [this message]
2020-05-25 15:45         ` Pirmin Walthert
2020-05-25 17:54           ` Rich Felker
2020-05-25 18:13             ` Pirmin Walthert
2020-06-04  7:04               ` Pirmin Walthert
2020-06-04 15:11                 ` Rich Felker
2020-05-18 18:35   ` Rich Felker
