mailing list of musl libc
From: Rich Felker <dalias@libc.org>
To: musl@lists.openwall.com
Subject: Re: New malloc - first preview
Date: Thu, 28 Nov 2019 16:56:42 -0500	[thread overview]
Message-ID: <20191128215642.GN16318@brightrain.aerifal.cx> (raw)
In-Reply-To: <20191022174051.GA24726@brightrain.aerifal.cx>

Work on the new malloc is well underway, and I have a draft version
now public at:

https://github.com/richfelker/mallocng-draft

Some highlights:

- Smallest size class now has 12 of 16 bytes usable for storage,
  compared to 8 of 16 (32-bit archs) or 16 of 32 (64-bit archs) with
  the old malloc. On top of those 4 in-band bytes there are 1.5 bytes
  (32-bit) or 2.5 bytes (64-bit) of out-of-band metadata, and this
  total overhead (5.5 or 6.5 bytes per allocation) is uniform across
  all sizes.

- Small-size allocation granularity/alignment is now 16 bytes on both
  32-bit and 64-bit archs. No more wasting 64 bytes on a 36-byte
  structure.

- Metadata defining heap state is all out-of-band and protected below
  by guard pages, where it's not easily reachable through typical
  attack vectors.

- Freed memory is quickly returnable to the kernel.

- Common-path malloc and free (which should asymptotically be ~95% of
  calls under heavy use) do not require exclusive locks -- read-lock
  plus atomics for malloc, pure atomics for free.

- Size classes are not a straight geometric progression, but instead
  fall at 1/7, 1/6, and 1/5 of powers of two in between the whole
  power-of-two steps, optimizing fit in whole power-of-two numbers of
  pages.

- Code size is very close to the same as the old allocator, and may
  end up being smaller.
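To make the overhead figures in the first highlight concrete, here is
a quick arithmetic check (a sketch only; the 4-byte in-band figure is
simply 16 minus 12, per the numbers quoted above):

```python
# Check the per-allocation overhead figures quoted above.
# Smallest class: 16-byte slots with 12 bytes usable -> 4 bytes in-band.
in_band = 16 - 12

# Out-of-band metadata per allocation, per the text (bytes).
oob = {"32-bit": 1.5, "64-bit": 2.5}

overhead = {arch: in_band + extra for arch, extra in oob.items()}
print(overhead)  # {'32-bit': 5.5, '64-bit': 6.5}
```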
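The granularity point can likewise be illustrated with a small sketch,
comparing 16-byte rounding against the power-of-two rounding that
produced the 64-byte result mentioned above (illustrative only; the
old allocator's exact bin sizes aren't modeled here):

```python
def round_up(n, g):
    """Round n up to the next multiple of the granularity g."""
    return -(-n // g) * g

def next_pow2(n):
    """Smallest power of two >= n (power-of-two-style sizing)."""
    p = 1
    while p < n:
        p *= 2
    return p

# A 36-byte structure under 16-byte granularity vs. power-of-two sizing.
print(round_up(36, 16))  # 48
print(next_pow2(36))     # 64
```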
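And the size-class progression can be sketched from the one-line
description above (a hypothetical reconstruction, not the draft's
actual class table, which may round or adjust these values
differently):

```python
def classes_up_to(upper):
    """Sketch of the classes between upper/2 and a power-of-two step
    `upper`: 1/7, 1/6, 1/5 (and 1/4, i.e. the step itself) of four
    times the upper bound."""
    return [(4 * upper) // d for d in (7, 6, 5, 4)]

print(classes_up_to(256))  # [146, 170, 204, 256]

# The motivation: d slots of the "1/d" class pack a power-of-two
# region (here 4 pages = 16384 bytes) with almost no slack.
region = 4 * 4096
for d in (7, 6, 5, 4):
    size = region // d
    print(d, size, region - d * size)  # slack is at most d-1 bytes
```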

This is still early in development, and it won't be merged into musl
until the next release cycle, after the new year. There may be plenty
of bugs or suboptimal behavior, and I still need to write up the
design.

For anyone wanting to play around with the draft, it can be linked
directly or loaded via LD_PRELOAD (e.g. LD_PRELOAD=/path/to/mallocng.so
<program>, substituting whatever shared object you built from the
draft). A dump_heap function is provided which can show allocator
state. In the status masks, a = available and f = freed; these are
distinct states to allow some control over the interval before reuse
of freed memory, which gives increased safety against use-after-free
and double-free. The "_" status means in-use.

Rich



Thread overview: 14+ messages
2019-10-22 17:40 New malloc - intro, motivation & design goals document Rich Felker
2019-11-06  3:46 ` Non-candidate allocator designs, things to learn from them Rich Felker
2019-11-06 10:06   ` Florian Weimer
2019-11-28 21:56 ` Rich Felker [this message]
2019-11-28 22:22   ` New malloc - first preview Rich Felker
2019-11-29  6:37     ` Vasya Boytsov
2019-11-29 13:55       ` Rich Felker
2019-11-30  5:54         ` Rich Felker
2019-11-29 16:41   ` Roman Yeryomin
2019-11-30  4:49     ` Rich Felker
2019-11-29 19:45   ` Markus Wichmann
2019-11-30  3:15     ` Rich Felker
2019-11-30 22:11   ` Rich Felker
2019-12-06  5:06     ` Rich Felker

Code repositories for project(s) associated with this public inbox

	https://git.vuxu.org/mirror/musl/
