mailing list of musl libc
[musl] "Expected behavior" for mallocng @ low usage
From: Rich Felker @ 2020-04-08  2:32 UTC
  To: musl

I figured as I tune and prepare to integrate the new malloc, it would
be helpful to have a description of what users should see in programs
that make only small use of malloc, since this is the "hard case" to
get right and since any reports of unexpected behavior would be really
useful. For simplicity I'm going to assume 4k pages.

If the program makes at least one "very small" (<= 108 bytes)
allocation, or at least one allocation of certain other sizes smaller
than the page size, you should expect to see a single-page mmap that's
divided up something like a buddy allocator. At the top level it
consists of two 2032-byte slots, one of which will be broken into two
1008-byte slots, one of which will be broken into two 496-byte slots.
For "very small" sizes, one of these will in turn be broken up into N
equal-sized slots for the requested size class (N=30, 15, 10, 7, 6, 5,
or 4).
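
To make that concrete, here's a tiny standalone C program (an
illustration only, not mallocng code) that reproduces these numbers.
The 32 bytes of per-page bookkeeping, the 16-byte header per nested
group, the 4 bytes of in-band overhead per slot, and the 16-byte
slot-size step are assumptions chosen so the output matches the
figures above:

/* Illustrative arithmetic only, not mallocng source. The overhead
 * figures (32 bytes per page, 16 bytes per nested group, 4 bytes per
 * slot, 16-byte slot-size step) are back-fit assumptions. */
#include <stdio.h>

int main(void)
{
    int page = 4096;
    int top = (page - 32)/2;   /* two 2032-byte top-level slots */
    int mid = (top - 16)/2;    /* two 1008-byte slots inside one of them */
    int low = (mid - 16)/2;    /* two 496-byte slots inside one of those */
    printf("slot sizes: %d, %d, %d\n", top, mid, low);

    /* "very small" classes: N equal slots carved out of one 496-byte
     * slot, with 480 bytes usable after the assumed group header */
    int usable = low - 16;
    for (int sz = 16; sz <= 112; sz += 16)
        printf("slot size %3d: N=%2d, max request %3d\n",
               sz, usable/sz, sz - 4);
    return 0;
}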

If the page is fully broken up into pairs of 496-byte slots, there are
8 such slots, and only 7 "very small" size classes, so under "very low
usage", all such objects should fit in the single page, even if you
use a multitude of different sizes.

For the next 8 size classes (2 doublings) up to 492 bytes, and
depending on divisibility, a group of 2, 3, 5, or 7 slots will be
created in a slot of size 496, 1008, or 2032. These can use the same
page as the above smaller sizes if there's room available.
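
To illustrate where those counts could come from, here's another rough
sketch. The slot sizes in it are examples I picked, not the exact size
class list, and the "least leftover space wins" rule is just my
reading of "depending on divisibility"; the overhead figures are the
same assumptions as in the previous example:

/* Rough sketch only, not mallocng's actual group-sizing logic. */
#include <stdio.h>

int main(void)
{
    int parents[] = { 496, 1008, 2032 };         /* nesting slot sizes */
    int counts[]  = { 7, 5, 3, 2 };
    int sizes[]   = { 160, 192, 240, 288, 400 }; /* illustrative only */

    for (int i = 0; i < (int)(sizeof sizes/sizeof *sizes); i++) {
        int best_waste = -1, best_cnt = 0, best_parent = 0;
        for (int p = 0; p < 3; p++) {
            int usable = parents[p] - 16;  /* minus assumed group header */
            for (int c = 0; c < 4; c++) {
                if (counts[c]*sizes[i] > usable) continue;
                int waste = usable - counts[c]*sizes[i];
                if (best_waste < 0 || waste < best_waste) {
                    best_waste = waste;
                    best_cnt = counts[c];
                    best_parent = parents[p];
                }
            }
        }
        printf("%3d-byte slots: group of %d in a %d-byte slot"
               " (%d bytes left over)\n",
               sizes[i], best_cnt, best_parent, best_waste);
    }
    return 0;
}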

Above this size, coarse size classing is used at first (until usage
reaches a threshold) to avoid allocating a large number of many-slot
groups of slightly different sizes that might never be filled. The
next doubling consists only of ranges [493,668] and [669,1004],
allocated in slots of size 2032 in groups of 3 and 2, respectively;
these can use any existing free slot of size 2032. (Once usage has
reached a threshold such that adding a group of 5 or 7 slots doesn't
cause a dramatic relative increase in total usage, finer-grained size
classes will be used.)
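
Under the same assumed overheads (16-byte group header, 4 bytes per
slot), the two range boundaries fall out directly:

/* Reproduces the coarse-class ranges quoted above; the lower bound
 * 493 is just one past 492, where the finer classes end. Overhead
 * figures are the same back-fit assumptions as before. */
#include <stdio.h>

int main(void)
{
    int usable = 2032 - 16;
    int three = usable/3;   /* 672-byte slots */
    int two = usable/2;     /* 1008-byte slots */
    printf("[%d,%d] -> group of 3, %d-byte slots\n", 493, three-4, three);
    printf("[%d,%d] -> group of 2, %d-byte slots\n", three-4+1, two-4, two);
    return 0;
}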

At higher sizes, groups of slots are not allocated inside a larger
slot, but as mmaps consisting of a power-of-two number of pages, which
will be split N ways, initially with N=7, 3, 5, or 2 depending on
divisibility. As usage increases, so does N (doubling the number of
pages used), which reduces the potential for vm space fragmentation
and increases the number of slots that can be allocated/freed with
fast paths manipulating free masks.
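
Just to show the kind of free-mask fast path I mean, here's a
deliberately simplified sketch. The struct and function names are made
up for the example, not mallocng's; it keeps one bit per slot, alloc
takes the lowest set bit, free puts it back, and it ignores all the
locking/atomics and metadata validation a real allocator needs. It
uses the GCC/Clang __builtin_ctz extension.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

struct group {
    uint32_t avail_mask;    /* bit i set means slot i is free */
    size_t slot_size;
    unsigned char *slots;   /* base address of the slot area */
};

static void *group_alloc(struct group *g)
{
    if (!g->avail_mask) return 0;           /* group full: take slow path */
    int i = __builtin_ctz(g->avail_mask);   /* index of lowest free slot */
    g->avail_mask &= g->avail_mask - 1;     /* clear that bit */
    return g->slots + i*g->slot_size;
}

static void group_free(struct group *g, void *p)
{
    size_t i = ((unsigned char *)p - g->slots) / g->slot_size;
    g->avail_mask |= (uint32_t)1 << i;      /* mark the slot free again */
}

int main(void)
{
    static unsigned char area[7*288];
    struct group g = { 0x7f, 288, area };   /* 7 free 288-byte slots */
    void *p = group_alloc(&g);
    printf("got slot at offset %zu, mask now %#x\n",
           (size_t)((unsigned char *)p - area), (unsigned)g.avail_mask);
    group_free(&g, p);
    printf("after free, mask %#x\n", (unsigned)g.avail_mask);
    return 0;
}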

At present, coarse size classing is used for all these at first, which
can result in significant "waste" but avoids preallocating large (5 or
7) counts of slots that might not ever be used. This is what I'm
presently working to improve by allowing direct individual mmaps in
cases where they can be efficient.

Changes in this area are likely coming soon. The main thing I'm still
trying to solve is getting eager allocation down even further so that
small programs don't grow significantly when switching to mallocng.


