mailing list of musl libc
From: Rich Felker <dalias@libc.org>
To: baiyang <baiyang@gmail.com>
Cc: musl <musl@lists.openwall.com>
Subject: Re: Re: [musl] The heap memory performance (malloc/free/realloc) is significantly degraded in musl 1.2 (compared to 1.1)
Date: Mon, 19 Sep 2022 23:28:06 -0400	[thread overview]
Message-ID: <20220920032806.GI9709@brightrain.aerifal.cx> (raw)
In-Reply-To: <20220920103500598557106@gmail.com>

On Tue, Sep 20, 2022 at 10:35:02AM +0800, baiyang wrote:
> > You seem to think that if the group stride was 8100, calling realloc might memcpy up to 8100 bytes. This is not the case.
> 
> Yes, I already understood that mallocng would only memcpy 6600 bytes
> once I was told that malloc_usable_size returns the size requested by
> the user.
> 
> But AFAIK, many other malloc implementations don't keep track of the
> 6600-byte requested size, so they are actually going to memcpy the
> full 8100 bytes.

You have made up a problem that does not exist. Specifically, at least
as far as I can tell, no such implementation exists.

The ones that return some value larger than the requested size are
returning "the requested size, rounded up to a multiple of 16" or
similar. Not "the requested size plus 1500 bytes". For the
dlmalloc-style allocators, this is because they do not have uniform
slots for size classes; they have arbitrary splits, and only care
about aligning start/end on a properly aligned boundary. And for more
slab-like allocators, they all keep track of at least a coarse "actual
size" specifically so that realloc does not gratuitously copy large
amounts of unnecessary data (among other reasons).
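
To make that concrete, here is a purely illustrative sketch in C (not
actual dlmalloc code; the names are made up) of the kind of rounding
such an allocator does:

  #include <stddef.h>

  /* Hypothetical sketch, not dlmalloc source: round the request up to
     the allocator's alignment granularity, so the reported usable size
     exceeds the request by at most a few bytes. */
  #define ALIGNMENT 16

  static size_t rounded_size(size_t request)
  {
      return (request + ALIGNMENT - 1) & ~(size_t)(ALIGNMENT - 1);
  }

  /* rounded_size(6600) == 6608: nowhere near 8100. */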

> > You also seem to be under the impression that the work to determine
> > that the size was 6600 and not 8100 is where most (or at least a
> > significant portion of) the time is spent.  This is also not the case.
> > The majority of the metadata processing time is chasing pointers back
> > to the out-of-band metadata, validating it, validating that it
> > round-trips back, and validating various other things. Some of these
> > could in principle be omitted at the cost of loss-of-hardening.
> 
> Yes, according to my previous understanding (which now seems wrong):
> since other malloc_usable_size implementations that directly return
> 8100 (the actual allocated size-class length)

They don't return 8100. They return something like 6608 or 6624.

> such as tcmalloc are all very fast, I could only conclude that
> mallocng is so much slower than them because it has to return 6600,
> not 8100.

This does not follow at all. tcmalloc is fast because it does not have
global consistency, does not have any notable hardening, and (see the
name) keeps large numbers of freed slots *cached* to reuse, thereby
using lots of extra memory. Its malloc_usable_size is not fast because
of returning the wrong value, if it even does return the wrong value
(I have no idea). It's fast because they store the size in-band right
next to the allocated memory and trust that it's valid, rather than
computing it from out-of-band metadata that is not subject to
falsification unless the attacker already has nearly full control of
execution.
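
To make the contrast concrete, here is a purely illustrative sketch in
C; it is not code from tcmalloc or mallocng, and the table and helper
names below are made-up stand-ins for the real mechanisms:

  #include <assert.h>
  #include <stddef.h>

  /* In-band style: the size sits in a header directly before the user
     data. Reading it is a single load, but nothing validates it, and an
     overflow out of the neighboring object lands right on top of it. */
  struct inband_hdr { size_t size; };

  static size_t usable_inband(void *p)
  {
      return ((struct inband_hdr *)p - 1)->size;
  }

  /* Out-of-band style: sizes live in a separate table away from user
     data. The toy linear search and assert stand in for mallocng's
     real pointer chasing and round-trip validation. */
  struct oob_slot { void *p; size_t size; };
  static struct oob_slot oob_table[64];

  static size_t usable_oob(void *p)
  {
      for (int i = 0; i < 64; i++) {
          if (oob_table[i].p == p) {
              assert(oob_table[i].size != 0); /* stand-in for hardening */
              return oob_table[i].size;
          }
      }
      return 0;
  }

The extra work in the second form is the price of metadata that cannot
be reached by overflowing adjacent heap objects.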

> Apart from this difference, as I understand it there is no reason for
> it to be slower than other implementations of malloc_usable_size.

Comparing the speed of malloc_usable_size is utterly pointless. That
is not where the time is spent in any real-world load that's not
gratuitously doing stupid things. I imagine tcmalloc's is faster, for
the reasons explained above (which have nothing to do with whether the
value returned is the exact size you requested). But this has nothing
to do with why the overall performance is higher.

> If this is not the main reason, can we speed up this algorithm with
> the help of a fast lookup-table mechanism like tcmalloc's? As I said
> before, this would not only greatly increase the performance of
> malloc_usable_size, but also that of realloc and free.

No, because the fundamental difference is where you're storing the
data, and the *whole point* of mallocng is that we're storing it in a
place where it's not subject to attack via overflows from other
objects, use-after-free/double-free (UAF/DF), etc. This makes it
somewhat costlier (but still very reasonable) to obtain.

If you have a particular application where you actually want to trade
safety and low memory usage for performance, you're perfectly free to
link that application to whatever malloc implementation you like. musl
has supported doing that since 1.1.20. It makes no sense to do this
system-wide. You would just increase memory usage and attack surface
drastically with nothing to show for it, since the vast majority of
programs *don't care* how long malloc takes.
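
For example, assuming a musl-compatible build of jemalloc is installed,
that can be as simple as linking it in ahead of libc:

  cc -o myapp myapp.c -ljemalloc

(or using LD_PRELOAD with a dynamically linked binary). That confines
the tradeoff to the one program that actually wants it.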

Rich

Thread overview: 68+ messages
2022-09-19  7:53 baiyang
2022-09-19 11:08 ` Szabolcs Nagy
2022-09-19 12:36   ` Florian Weimer
2022-09-19 13:46     ` Rich Felker
2022-09-19 13:53       ` James Y Knight
2022-09-19 17:40         ` baiyang
2022-09-19 18:14           ` Szabolcs Nagy
2022-09-19 18:40             ` baiyang
2022-09-19 19:07             ` Gabriel Ravier
2022-09-19 19:21               ` Rich Felker
2022-09-19 21:02                 ` Gabriel Ravier
2022-09-19 21:47                   ` Rich Felker
2022-09-19 22:31                     ` Gabriel Ravier
2022-09-19 22:46                       ` baiyang
2022-09-19 20:46             ` Nat!
2022-09-20  8:51               ` Szabolcs Nagy
2022-09-20  0:13           ` James Y Knight
2022-09-20  0:25             ` baiyang
2022-09-20  0:38               ` Rich Felker
2022-09-20  0:47                 ` baiyang
2022-09-20  1:00                   ` Rich Felker
2022-09-20  1:18                     ` baiyang
2022-09-20  2:15                       ` Rich Felker
2022-09-20  2:35                         ` baiyang
2022-09-20  3:28                           ` Rich Felker [this message]
2022-09-20  3:53                             ` baiyang
2022-09-20  5:41                               ` Rich Felker
2022-09-20  5:56                                 ` baiyang
2022-09-20 12:16                                   ` Rich Felker
2022-09-20 17:21                                     ` baiyang
2022-09-20  8:33       ` Florian Weimer
2022-09-20 13:54         ` Siddhesh Poyarekar
2022-09-20 16:59           ` James Y Knight
2022-09-20 17:34             ` Szabolcs Nagy
2022-09-20 19:53               ` James Y Knight
2022-09-24  8:55               ` Fangrui Song
2022-09-20 17:39             ` baiyang
2022-09-20 18:12               ` Quentin Rameau
2022-09-20 18:19                 ` Rich Felker
2022-09-20 18:26                   ` Alexander Monakov
2022-09-20 18:35                     ` baiyang
2022-09-20 20:33                       ` Gabriel Ravier
2022-09-20 20:45                         ` baiyang
2022-09-21  8:42                           ` NRK
2022-09-20 18:37                     ` Quentin Rameau
2022-09-21 10:15                   ` [musl] " 王志强
2022-09-21 16:11                     ` [musl] " 王志强
2022-09-21 17:15                     ` [musl] " Rich Felker
2022-09-21 17:58                       ` Rich Felker
2022-09-22  3:34                         ` [musl] " 王志强
2022-09-22  9:10                           ` [musl] " 王志强
2022-09-22  9:39                             ` [musl] " 王志强
2022-09-20 17:28           ` baiyang
2022-09-20 17:44             ` Siddhesh Poyarekar
2022-10-10 14:13           ` Florian Weimer
2022-09-19 13:43 ` Rich Felker
2022-09-19 17:32   ` baiyang
2022-09-19 18:15     ` Rich Felker
2022-09-19 18:44       ` baiyang
2022-09-19 19:18         ` Rich Felker
2022-09-19 19:45           ` baiyang
2022-09-19 20:07             ` Rich Felker
2022-09-19 20:17               ` baiyang
2022-09-19 20:28                 ` Rich Felker
2022-09-19 20:38                   ` baiyang
2022-09-19 22:02                 ` Quentin Rameau
2022-09-19 20:17             ` Joakim Sindholt
2022-09-19 20:33               ` baiyang
