mailing list of musl libc
From: Rich Felker <dalias@libc.org>
To: musl@lists.openwall.com
Subject: Re: Changing MMAP_THRESHOLD in malloc() implementation.
Date: Wed, 4 Jul 2018 10:56:29 -0400	[thread overview]
Message-ID: <20180704145629.GQ1392@brightrain.aerifal.cx> (raw)
In-Reply-To: <CAB8f8h3MNR_=jkqyNej=Z4kL+R6Ljy5zUvGnT4gavYSihpZOdg@mail.gmail.com>

On Wed, Jul 04, 2018 at 12:35:02PM +0530, ritesh sonawane wrote:
> Sir, the above statistics are with a 64MB page size. We are using a
> system which has 2MB (large) pages and 64MB (huge) pages.

If this is a custom architecture, have you considered using variable
pagesize like x86 and others? Unconditionally using large/huge pages
for everything seems like a really, really bad idea. Aside from
wasting memory, it makes COW latency spikes really high (memcpy a
whole 2MB or 64MB on fault) and even worse for paging in mmapped files
from a block-device-backed filesystem.

Rich

> On Wed, Jul 4, 2018 at 10:54 AM, ritesh sonawane <rdssonawane2317@gmail.com>
> wrote:
> 
> > Thank you very much for the instant reply.
> >
> > Yes sir, it is wasting memory for each shared library. But the memory
> > wastage is even worse when a program requests memory of a size above
> > 224KB (the threshold value).
> >
> > ->If a program requests 1GB per request, it can use 45GB at the most.
> > ->If a program requests 512MB per request, it can use 41.5GB at the most.
> > ->If a program requests 225KB per request, it can use about 167MB at the
> > most.
> >
> > As we ported musl-1.1.14 to our architecture, we are bound to make
> > changes in the same code base. We have increased MMAP_THRESHOLD to 1GB
> > and also changed the calculation for the bin index. After that we
> > observed an improvement in memory utilization: for a request size of
> > 225KB, the usable memory is 47.6GB.
> >
> > But now we are facing a problem in a multi-threaded application, as we
> > haven't changed the function pretrim(), because it uses some hard-coded
> > values like '40' and '3' and we are unable to understand how these
> > values were decided.
> >
> > static int pretrim(struct chunk *self, size_t n, int i, int j)
> > {
> >         size_t n1;
> >         struct chunk *next, *split;
> >
> >         /* We cannot pretrim if it would require re-binning. */
> >         if (j < 40) return 0;
> >         if (j < i+3) {
> >                 if (j != 63) return 0;
> >                 n1 = CHUNK_SIZE(self);
> >                 if (n1-n <= MMAP_THRESHOLD) return 0;
> >         } else {
> >                 n1 = CHUNK_SIZE(self);
> >         }
> >  .....
> >  .....
> > ......
> > }
> >
> > Can we get any clue as to how these values were decided? It would be
> > very helpful to us.
> >
> > Best Regards,
> > Ritesh Sonawane
> >
> > On Tue, Jul 3, 2018 at 8:13 PM, Rich Felker <dalias@libc.org> wrote:
> >
> >> On Tue, Jul 03, 2018 at 12:58:04PM +0530, ritesh sonawane wrote:
> >> > Hi All,
> >> >
> >> > We are using musl-1.1.14 for our architecture. It has a page size
> >> > of 2MB. Due to the low threshold value there is more memory
> >> > wastage, so we want to change the value of MMAP_THRESHOLD.
> >> >
> >> > Can anyone please guide us on which factors need to be considered
> >> > when changing this threshold value?
> >>
> >> It's not a parameter that can be changed independently; it's tied to
> >> the scale of the bin sizes. There is no framework to track and reuse
> >> freed chunks which are larger than MMAP_THRESHOLD, so you'd be
> >> replacing recoverable waste from page granularity with unrecoverable
> >> waste from the inability to reuse these larger freed chunks, except
> >> by breaking them up into pieces to satisfy smaller requests.
> >>
> >> I may look into handling this better when replacing musl's malloc at
> >> some point, but if your system forces you to use ridiculously large
> >> pages like 2MB, you've basically committed to wasting huge amounts of
> >> memory anyway (at least 2MB for each shared library in each
> >> process)...
> >>
> >> With musl git-master and future releases, you have the option to link
> >> a malloc replacement library that might be a decent solution to your
> >> problem.
> >>
> >> Rich
> >>
> >
> >


Thread overview: 8+ messages
2018-07-03  7:28 ritesh sonawane
2018-07-03 14:43 ` Rich Felker
2018-07-04  5:24   ` ritesh sonawane
2018-07-04  7:05     ` ritesh sonawane
2018-07-04 14:56       ` Rich Felker [this message]
2018-07-05  6:52         ` ritesh sonawane
2018-07-05  7:02           ` ritesh sonawane
2018-07-04 14:54     ` Rich Felker

Code repositories for project(s) associated with this public inbox

	https://git.vuxu.org/mirror/musl/
