mailing list of musl libc
 help / color / mirror / code / Atom feed
* Changing MMAP_THRESHOLD in malloc() implementation.
@ 2018-07-03  7:28 ritesh sonawane
  2018-07-03 14:43 ` Rich Felker
  0 siblings, 1 reply; 8+ messages in thread
From: ritesh sonawane @ 2018-07-03  7:28 UTC (permalink / raw)
  To: musl

[-- Attachment #1: Type: text/plain, Size: 322 bytes --]

Hi All,

We are using musl-1.1.14 version for our architecture. It is having page
size of 2MB.
Due to low threshold value there is more memory wastage. So we want to
change the value of  MMAP_THRESHOLD.

can anyone please giude us, which factor need to consider to change this
threshold value ?


Regards,
Ritesh Sonawane

[-- Attachment #2: Type: text/html, Size: 455 bytes --]

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Changing MMAP_THRESHOLD in malloc() implementation.
  2018-07-03  7:28 Changing MMAP_THRESHOLD in malloc() implementation ritesh sonawane
@ 2018-07-03 14:43 ` Rich Felker
  2018-07-04  5:24   ` ritesh sonawane
  0 siblings, 1 reply; 8+ messages in thread
From: Rich Felker @ 2018-07-03 14:43 UTC (permalink / raw)
  To: musl

On Tue, Jul 03, 2018 at 12:58:04PM +0530, ritesh sonawane wrote:
> Hi All,
> 
> We are using musl-1.1.14 version for our architecture. It is having page
> size of 2MB.
> Due to low threshold value there is more memory wastage. So we want to
> change the value of  MMAP_THRESHOLD.
> 
> can anyone please giude us, which factor need to consider to change this
> threshold value ?

It's not a parameter that can be changed but linked to the scale of
bin sizes. There is no framework to track and reuse freed chunks which
are larger than MMAP_THRESHOLD, so you'd be replacing recoverable
waste from page granularity with unrecoverable waste from inability to
reuse these larger freed chunks except breaking them up into pieces to
satisfy smaller requests.

I may look into handling this better when replacing musl's malloc at
some point, but if your system forces you to use ridiculously large
pages like 2MB, you've basically committed to wasting huge amounts of
memory anyway (at least 2MB for each shared library in each
process)...

With musl git-master and future releases, you have the option to link
a malloc replacement library that might be a decent solution to your
problem.

Rich


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Changing MMAP_THRESHOLD in malloc() implementation.
  2018-07-03 14:43 ` Rich Felker
@ 2018-07-04  5:24   ` ritesh sonawane
  2018-07-04  7:05     ` ritesh sonawane
  2018-07-04 14:54     ` Rich Felker
  0 siblings, 2 replies; 8+ messages in thread
From: ritesh sonawane @ 2018-07-04  5:24 UTC (permalink / raw)
  To: musl

[-- Attachment #1: Type: text/plain, Size: 2816 bytes --]

Thank you very much for instant reply..

Yes sir it is wasting memory for each shared library. But memory wastage
even worse when program
requesting memory with size more than 224KB(threshold value).

->If a program requests 1GB per request, it can use 45GB at the most.
->If a program requests 512MB per request, it can use 41.5GB at the most.
->If a program requests 225KB per request, it can use about 167MB at the
most.

As we ported  musl-1.1.14 for our architecture, we are bounded to make
change in same base code.
we have increased  MMAP_THRESHOLD to 1GB and also changes the calculation
for bin index .
after that observed improvement in memory utilization. i.e for size 225KB
memory used is 47.6 GB.

But now facing problem in multi threaded application. As we haven't changed
the function pretrim()
because there are some hard coded values like '40' and '3' used and unable
to understand how
these values are decided ..?

static int pretrim(struct chunk *self, size_t n, int i, int j)
{
        size_t n1;
        struct chunk *next, *split;

        /* We cannot pretrim if it would require re-binning. */
        if (j < 40) return 0;
        if (j < i+3) {
                if (j != 63) return 0;
                n1 = CHUNK_SIZE(self);
                if (n1-n <= MMAP_THRESHOLD) return 0;
        } else {
                n1 = CHUNK_SIZE(self);
        }
 .....
 .....
......
}

can we get any clue how these value are decided, it will be very
helpful for us.

Best Regard,
Ritesh Sonawane

On Tue, Jul 3, 2018 at 8:13 PM, Rich Felker <dalias@libc.org> wrote:

> On Tue, Jul 03, 2018 at 12:58:04PM +0530, ritesh sonawane wrote:
> > Hi All,
> >
> > We are using musl-1.1.14 version for our architecture. It is having page
> > size of 2MB.
> > Due to low threshold value there is more memory wastage. So we want to
> > change the value of  MMAP_THRESHOLD.
> >
> > can anyone please giude us, which factor need to consider to change this
> > threshold value ?
>
> It's not a parameter that can be changed but linked to the scale of
> bin sizes. There is no framework to track and reuse freed chunks which
> are larger than MMAP_THRESHOLD, so you'd be replacing recoverable
> waste from page granularity with unrecoverable waste from inability to
> reuse these larger freed chunks except breaking them up into pieces to
> satisfy smaller requests.
>
> I may look into handling this better when replacing musl's malloc at
> some point, but if your system forces you to use ridiculously large
> pages like 2MB, you've basically committed to wasting huge amounts of
> memory anyway (at least 2MB for each shared library in each
> process)...
>
> With musl git-master and future releases, you have the option to link
> a malloc replacement library that might be a decent solution to your
> problem.
>
> Rich
>

[-- Attachment #2: Type: text/html, Size: 5051 bytes --]

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Changing MMAP_THRESHOLD in malloc() implementation.
  2018-07-04  5:24   ` ritesh sonawane
@ 2018-07-04  7:05     ` ritesh sonawane
  2018-07-04 14:56       ` Rich Felker
  2018-07-04 14:54     ` Rich Felker
  1 sibling, 1 reply; 8+ messages in thread
From: ritesh sonawane @ 2018-07-04  7:05 UTC (permalink / raw)
  To: musl

[-- Attachment #1: Type: text/plain, Size: 3140 bytes --]

sir above statistics are with 64MB page size. we are using system which
having 2MB(large page)
64MB(huge page).

On Wed, Jul 4, 2018 at 10:54 AM, ritesh sonawane <rdssonawane2317@gmail.com>
wrote:

> Thank you very much for instant reply..
>
> Yes sir it is wasting memory for each shared library. But memory wastage
> even worse when program
> requesting memory with size more than 224KB(threshold value).
>
> ->If a program requests 1GB per request, it can use 45GB at the most.
> ->If a program requests 512MB per request, it can use 41.5GB at the most.
> ->If a program requests 225KB per request, it can use about 167MB at the
> most.
>
> As we ported  musl-1.1.14 for our architecture, we are bounded to make
> change in same base code.
> we have increased  MMAP_THRESHOLD to 1GB and also changes the calculation
> for bin index .
> after that observed improvement in memory utilization. i.e for size 225KB
> memory used is 47.6 GB.
>
> But now facing problem in multi threaded application. As we haven't
> changed the function pretrim()
> because there are some hard coded values like '40' and '3' used and
> unable to understand how
> these values are decided ..?
>
> static int pretrim(struct chunk *self, size_t n, int i, int j)
> {
>         size_t n1;
>         struct chunk *next, *split;
>
>         /* We cannot pretrim if it would require re-binning. */
>         if (j < 40) return 0;
>         if (j < i+3) {
>                 if (j != 63) return 0;
>                 n1 = CHUNK_SIZE(self);
>                 if (n1-n <= MMAP_THRESHOLD) return 0;
>         } else {
>                 n1 = CHUNK_SIZE(self);
>         }
>  .....
>  .....
> ......
> }
>
> can we get any clue how these value are decided, it will be very
> helpful for us.
>
> Best Regard,
> Ritesh Sonawane
>
> On Tue, Jul 3, 2018 at 8:13 PM, Rich Felker <dalias@libc.org> wrote:
>
>> On Tue, Jul 03, 2018 at 12:58:04PM +0530, ritesh sonawane wrote:
>> > Hi All,
>> >
>> > We are using musl-1.1.14 version for our architecture. It is having page
>> > size of 2MB.
>> > Due to low threshold value there is more memory wastage. So we want to
>> > change the value of  MMAP_THRESHOLD.
>> >
>> > can anyone please giude us, which factor need to consider to change this
>> > threshold value ?
>>
>> It's not a parameter that can be changed but linked to the scale of
>> bin sizes. There is no framework to track and reuse freed chunks which
>> are larger than MMAP_THRESHOLD, so you'd be replacing recoverable
>> waste from page granularity with unrecoverable waste from inability to
>> reuse these larger freed chunks except breaking them up into pieces to
>> satisfy smaller requests.
>>
>> I may look into handling this better when replacing musl's malloc at
>> some point, but if your system forces you to use ridiculously large
>> pages like 2MB, you've basically committed to wasting huge amounts of
>> memory anyway (at least 2MB for each shared library in each
>> process)...
>>
>> With musl git-master and future releases, you have the option to link
>> a malloc replacement library that might be a decent solution to your
>> problem.
>>
>> Rich
>>
>
>

[-- Attachment #2: Type: text/html, Size: 5622 bytes --]

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Changing MMAP_THRESHOLD in malloc() implementation.
  2018-07-04  5:24   ` ritesh sonawane
  2018-07-04  7:05     ` ritesh sonawane
@ 2018-07-04 14:54     ` Rich Felker
  1 sibling, 0 replies; 8+ messages in thread
From: Rich Felker @ 2018-07-04 14:54 UTC (permalink / raw)
  To: musl

On Wed, Jul 04, 2018 at 10:54:50AM +0530, ritesh sonawane wrote:
> Thank you very much for instant reply..
> 
> Yes sir it is wasting memory for each shared library. But memory wastage
> even worse when program
> requesting memory with size more than 224KB(threshold value).
> 
> ->If a program requests 1GB per request, it can use 45GB at the most.
> ->If a program requests 512MB per request, it can use 41.5GB at the most.
> ->If a program requests 225KB per request, it can use about 167MB at the
> most.

How does this happen? The behavior you should see is just rounding up
of the request to a multiple of the page size, not scaling of the
request. Maybe I don't understand what you're saying here.

> As we ported  musl-1.1.14 for our architecture, we are bounded to make
> change in same base code.
> we have increased  MMAP_THRESHOLD to 1GB and also changes the calculation
> for bin index .
> after that observed improvement in memory utilization. i.e for size 225KB
> memory used is 47.6 GB.
> 
> But now facing problem in multi threaded application. As we haven't changed
> the function pretrim()
> because there are some hard coded values like '40' and '3' used and unable
> to understand how
> these values are decided ..?
> 
> static int pretrim(struct chunk *self, size_t n, int i, int j)
> {
>         size_t n1;
>         struct chunk *next, *split;
> 
>         /* We cannot pretrim if it would require re-binning. */
>         if (j < 40) return 0;
>         if (j < i+3) {
>                 if (j != 63) return 0;
>                 n1 = CHUNK_SIZE(self);
>                 if (n1-n <= MMAP_THRESHOLD) return 0;
>         } else {
>                 n1 = CHUNK_SIZE(self);
>         }
>  .....
>  .....
> .......
> }
> 
> can we get any clue how these value are decided, it will be very
> helpful for us.

You can just disable use of pretrim, at a performance cost -- it's an
optimization to avoid shuffling in and out of bins when the remainder
will go back in the same bin.

Unfortunately you're right that the magic numbers in pretrim aren't
immediately clear in their purpose so I'd have to take some time to
look at what they're doing. Some of the checks might be redundant
(just a short circuit to save time).

Rich


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Changing MMAP_THRESHOLD in malloc() implementation.
  2018-07-04  7:05     ` ritesh sonawane
@ 2018-07-04 14:56       ` Rich Felker
  2018-07-05  6:52         ` ritesh sonawane
  0 siblings, 1 reply; 8+ messages in thread
From: Rich Felker @ 2018-07-04 14:56 UTC (permalink / raw)
  To: musl

On Wed, Jul 04, 2018 at 12:35:02PM +0530, ritesh sonawane wrote:
> sir above statistics are with 64MB page size. we are using system which
> having 2MB(large page)
> 64MB(huge page).

If this is a custom architecture, have you considered using variable
pagesize like x86 and others? Unconditionally using large/huge pages
for everything seems like a really, really bad idea. Aside from
wasting memory, it makes COW latency spikes really high (memcpy a
whole 2MB ot 64MB on fault) and even worse for paging in mmapped files
from a block-device-backed filesystem.

Rich

> On Wed, Jul 4, 2018 at 10:54 AM, ritesh sonawane <rdssonawane2317@gmail.com>
> wrote:
> 
> > Thank you very much for instant reply..
> >
> > Yes sir it is wasting memory for each shared library. But memory wastage
> > even worse when program
> > requesting memory with size more than 224KB(threshold value).
> >
> > ->If a program requests 1GB per request, it can use 45GB at the most.
> > ->If a program requests 512MB per request, it can use 41.5GB at the most.
> > ->If a program requests 225KB per request, it can use about 167MB at the
> > most.
> >
> > As we ported  musl-1.1.14 for our architecture, we are bounded to make
> > change in same base code.
> > we have increased  MMAP_THRESHOLD to 1GB and also changes the calculation
> > for bin index .
> > after that observed improvement in memory utilization. i.e for size 225KB
> > memory used is 47.6 GB.
> >
> > But now facing problem in multi threaded application. As we haven't
> > changed the function pretrim()
> > because there are some hard coded values like '40' and '3' used and
> > unable to understand how
> > these values are decided ..?
> >
> > static int pretrim(struct chunk *self, size_t n, int i, int j)
> > {
> >         size_t n1;
> >         struct chunk *next, *split;
> >
> >         /* We cannot pretrim if it would require re-binning. */
> >         if (j < 40) return 0;
> >         if (j < i+3) {
> >                 if (j != 63) return 0;
> >                 n1 = CHUNK_SIZE(self);
> >                 if (n1-n <= MMAP_THRESHOLD) return 0;
> >         } else {
> >                 n1 = CHUNK_SIZE(self);
> >         }
> >  .....
> >  .....
> > ......
> > }
> >
> > can we get any clue how these value are decided, it will be very
> > helpful for us.
> >
> > Best Regard,
> > Ritesh Sonawane
> >
> > On Tue, Jul 3, 2018 at 8:13 PM, Rich Felker <dalias@libc.org> wrote:
> >
> >> On Tue, Jul 03, 2018 at 12:58:04PM +0530, ritesh sonawane wrote:
> >> > Hi All,
> >> >
> >> > We are using musl-1.1.14 version for our architecture. It is having page
> >> > size of 2MB.
> >> > Due to low threshold value there is more memory wastage. So we want to
> >> > change the value of  MMAP_THRESHOLD.
> >> >
> >> > can anyone please giude us, which factor need to consider to change this
> >> > threshold value ?
> >>
> >> It's not a parameter that can be changed but linked to the scale of
> >> bin sizes. There is no framework to track and reuse freed chunks which
> >> are larger than MMAP_THRESHOLD, so you'd be replacing recoverable
> >> waste from page granularity with unrecoverable waste from inability to
> >> reuse these larger freed chunks except breaking them up into pieces to
> >> satisfy smaller requests.
> >>
> >> I may look into handling this better when replacing musl's malloc at
> >> some point, but if your system forces you to use ridiculously large
> >> pages like 2MB, you've basically committed to wasting huge amounts of
> >> memory anyway (at least 2MB for each shared library in each
> >> process)...
> >>
> >> With musl git-master and future releases, you have the option to link
> >> a malloc replacement library that might be a decent solution to your
> >> problem.
> >>
> >> Rich
> >>
> >
> >


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Changing MMAP_THRESHOLD in malloc() implementation.
  2018-07-04 14:56       ` Rich Felker
@ 2018-07-05  6:52         ` ritesh sonawane
  2018-07-05  7:02           ` ritesh sonawane
  0 siblings, 1 reply; 8+ messages in thread
From: ritesh sonawane @ 2018-07-05  6:52 UTC (permalink / raw)
  To: musl

[-- Attachment #1: Type: text/plain, Size: 4706 bytes --]

> How does this happen? The behavior you should see is just rounding up
> of the request to a multiple of the page size, not scaling of the
> request. Maybe I don't understand what you're saying here.

In our case if threshold value is 224KB then all request more than 224KB
will allocate memory
using mmap() only. In that case size will be aligned to page
boundary(64MB).
So user if application request size 225KB multiple time then for every
request of 225KB one page of
(64MB) will be allocated.  It means if user request 225KB five time then
according to
user (225KB x 5 = 1125KB) is consumed,  But actual memory consumed is (64MB
x 5 = 320MB).


On Wed, Jul 4, 2018 at 8:26 PM, Rich Felker <dalias@libc.org> wrote:

> On Wed, Jul 04, 2018 at 12:35:02PM +0530, ritesh sonawane wrote:
> > sir above statistics are with 64MB page size. we are using system which
> > having 2MB(large page)
> > 64MB(huge page).
>
> If this is a custom architecture, have you considered using variable
> pagesize like x86 and others? Unconditionally using large/huge pages
> for everything seems like a really, really bad idea. Aside from
> wasting memory, it makes COW latency spikes really high (memcpy a
> whole 2MB ot 64MB on fault) and even worse for paging in mmapped files
> from a block-device-backed filesystem.
>
> Rich
>
> > On Wed, Jul 4, 2018 at 10:54 AM, ritesh sonawane <
> rdssonawane2317@gmail.com>
> > wrote:
> >
> > > Thank you very much for instant reply..
> > >
> > > Yes sir it is wasting memory for each shared library. But memory
> wastage
> > > even worse when program
> > > requesting memory with size more than 224KB(threshold value).
> > >
> > > ->If a program requests 1GB per request, it can use 45GB at the most.
> > > ->If a program requests 512MB per request, it can use 41.5GB at the
> most.
> > > ->If a program requests 225KB per request, it can use about 167MB at
> the
> > > most.
> > >
> > > As we ported  musl-1.1.14 for our architecture, we are bounded to make
> > > change in same base code.
> > > we have increased  MMAP_THRESHOLD to 1GB and also changes the
> calculation
> > > for bin index .
> > > after that observed improvement in memory utilization. i.e for size
> 225KB
> > > memory used is 47.6 GB.
> > >
> > > But now facing problem in multi threaded application. As we haven't
> > > changed the function pretrim()
> > > because there are some hard coded values like '40' and '3' used and
> > > unable to understand how
> > > these values are decided ..?
> > >
> > > static int pretrim(struct chunk *self, size_t n, int i, int j)
> > > {
> > >         size_t n1;
> > >         struct chunk *next, *split;
> > >
> > >         /* We cannot pretrim if it would require re-binning. */
> > >         if (j < 40) return 0;
> > >         if (j < i+3) {
> > >                 if (j != 63) return 0;
> > >                 n1 = CHUNK_SIZE(self);
> > >                 if (n1-n <= MMAP_THRESHOLD) return 0;
> > >         } else {
> > >                 n1 = CHUNK_SIZE(self);
> > >         }
> > >  .....
> > >  .....
> > > ......
> > > }
> > >
> > > can we get any clue how these value are decided, it will be very
> > > helpful for us.
> > >
> > > Best Regard,
> > > Ritesh Sonawane
> > >
> > > On Tue, Jul 3, 2018 at 8:13 PM, Rich Felker <dalias@libc.org> wrote:
> > >
> > >> On Tue, Jul 03, 2018 at 12:58:04PM +0530, ritesh sonawane wrote:
> > >> > Hi All,
> > >> >
> > >> > We are using musl-1.1.14 version for our architecture. It is having
> page
> > >> > size of 2MB.
> > >> > Due to low threshold value there is more memory wastage. So we want
> to
> > >> > change the value of  MMAP_THRESHOLD.
> > >> >
> > >> > can anyone please giude us, which factor need to consider to change
> this
> > >> > threshold value ?
> > >>
> > >> It's not a parameter that can be changed but linked to the scale of
> > >> bin sizes. There is no framework to track and reuse freed chunks which
> > >> are larger than MMAP_THRESHOLD, so you'd be replacing recoverable
> > >> waste from page granularity with unrecoverable waste from inability to
> > >> reuse these larger freed chunks except breaking them up into pieces to
> > >> satisfy smaller requests.
> > >>
> > >> I may look into handling this better when replacing musl's malloc at
> > >> some point, but if your system forces you to use ridiculously large
> > >> pages like 2MB, you've basically committed to wasting huge amounts of
> > >> memory anyway (at least 2MB for each shared library in each
> > >> process)...
> > >>
> > >> With musl git-master and future releases, you have the option to link
> > >> a malloc replacement library that might be a decent solution to your
> > >> problem.
> > >>
> > >> Rich
> > >>
> > >
> > >
>

[-- Attachment #2: Type: text/html, Size: 7525 bytes --]

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Changing MMAP_THRESHOLD in malloc() implementation.
  2018-07-05  6:52         ` ritesh sonawane
@ 2018-07-05  7:02           ` ritesh sonawane
  0 siblings, 0 replies; 8+ messages in thread
From: ritesh sonawane @ 2018-07-05  7:02 UTC (permalink / raw)
  To: musl

[-- Attachment #1: Type: text/plain, Size: 5550 bytes --]

 >If this is a custom architecture, have you considered using variable
>pagesize like x86 and others? Unconditionally using large/huge pages
>for everything seems like a really, really bad idea. Aside from
>wasting memory, it makes COW latency spikes really high (memcpy a
>whole 2MB ot 64MB on fault) and even worse for paging in mmapped files
>from a block-device-backed filesystem.

Currently our architecture is not supporting COW and demand paging.
page size is decided at compile time only or it can be passed  to mmap()
syscall like x86  to change page size during run time new memory requests.


On Thu, Jul 5, 2018 at 12:22 PM, ritesh sonawane <rdssonawane2317@gmail.com>
wrote:

> > How does this happen? The behavior you should see is just rounding up
> > of the request to a multiple of the page size, not scaling of the
> > request. Maybe I don't understand what you're saying here.
>
> In our case if threshold value is 224KB then all request more than 224KB
> will allocate memory
> using mmap() only. In that case size will be aligned to page
> boundary(64MB).
> So user if application request size 225KB multiple time then for every
> request of 225KB one page of
> (64MB) will be allocated.  It means if user request 225KB five time then
> according to
> user (225KB x 5 = 1125KB) is consumed,  But actual memory consumed is
> (64MB x 5 = 320MB).
>
>
> On Wed, Jul 4, 2018 at 8:26 PM, Rich Felker <dalias@libc.org> wrote:
>
>> On Wed, Jul 04, 2018 at 12:35:02PM +0530, ritesh sonawane wrote:
>> > sir above statistics are with 64MB page size. we are using system which
>> > having 2MB(large page)
>> > 64MB(huge page).
>>
>> If this is a custom architecture, have you considered using variable
>> pagesize like x86 and others? Unconditionally using large/huge pages
>> for everything seems like a really, really bad idea. Aside from
>> wasting memory, it makes COW latency spikes really high (memcpy a
>> whole 2MB ot 64MB on fault) and even worse for paging in mmapped files
>> from a block-device-backed filesystem.
>>
>> Rich
>>
>> > On Wed, Jul 4, 2018 at 10:54 AM, ritesh sonawane <
>> rdssonawane2317@gmail.com>
>> > wrote:
>> >
>> > > Thank you very much for instant reply..
>> > >
>> > > Yes sir it is wasting memory for each shared library. But memory
>> wastage
>> > > even worse when program
>> > > requesting memory with size more than 224KB(threshold value).
>> > >
>> > > ->If a program requests 1GB per request, it can use 45GB at the most.
>> > > ->If a program requests 512MB per request, it can use 41.5GB at the
>> most.
>> > > ->If a program requests 225KB per request, it can use about 167MB at
>> the
>> > > most.
>> > >
>> > > As we ported  musl-1.1.14 for our architecture, we are bounded to make
>> > > change in same base code.
>> > > we have increased  MMAP_THRESHOLD to 1GB and also changes the
>> calculation
>> > > for bin index .
>> > > after that observed improvement in memory utilization. i.e for size
>> 225KB
>> > > memory used is 47.6 GB.
>> > >
>> > > But now facing problem in multi threaded application. As we haven't
>> > > changed the function pretrim()
>> > > because there are some hard coded values like '40' and '3' used and
>> > > unable to understand how
>> > > these values are decided ..?
>> > >
>> > > static int pretrim(struct chunk *self, size_t n, int i, int j)
>> > > {
>> > >         size_t n1;
>> > >         struct chunk *next, *split;
>> > >
>> > >         /* We cannot pretrim if it would require re-binning. */
>> > >         if (j < 40) return 0;
>> > >         if (j < i+3) {
>> > >                 if (j != 63) return 0;
>> > >                 n1 = CHUNK_SIZE(self);
>> > >                 if (n1-n <= MMAP_THRESHOLD) return 0;
>> > >         } else {
>> > >                 n1 = CHUNK_SIZE(self);
>> > >         }
>> > >  .....
>> > >  .....
>> > > ......
>> > > }
>> > >
>> > > can we get any clue how these value are decided, it will be very
>> > > helpful for us.
>> > >
>> > > Best Regard,
>> > > Ritesh Sonawane
>> > >
>> > > On Tue, Jul 3, 2018 at 8:13 PM, Rich Felker <dalias@libc.org> wrote:
>> > >
>> > >> On Tue, Jul 03, 2018 at 12:58:04PM +0530, ritesh sonawane wrote:
>> > >> > Hi All,
>> > >> >
>> > >> > We are using musl-1.1.14 version for our architecture. It is
>> having page
>> > >> > size of 2MB.
>> > >> > Due to low threshold value there is more memory wastage. So we
>> want to
>> > >> > change the value of  MMAP_THRESHOLD.
>> > >> >
>> > >> > can anyone please giude us, which factor need to consider to
>> change this
>> > >> > threshold value ?
>> > >>
>> > >> It's not a parameter that can be changed but linked to the scale of
>> > >> bin sizes. There is no framework to track and reuse freed chunks
>> which
>> > >> are larger than MMAP_THRESHOLD, so you'd be replacing recoverable
>> > >> waste from page granularity with unrecoverable waste from inability
>> to
>> > >> reuse these larger freed chunks except breaking them up into pieces
>> to
>> > >> satisfy smaller requests.
>> > >>
>> > >> I may look into handling this better when replacing musl's malloc at
>> > >> some point, but if your system forces you to use ridiculously large
>> > >> pages like 2MB, you've basically committed to wasting huge amounts of
>> > >> memory anyway (at least 2MB for each shared library in each
>> > >> process)...
>> > >>
>> > >> With musl git-master and future releases, you have the option to link
>> > >> a malloc replacement library that might be a decent solution to your
>> > >> problem.
>> > >>
>> > >> Rich
>> > >>
>> > >
>> > >
>>
>
>

[-- Attachment #2: Type: text/html, Size: 9951 bytes --]

^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2018-07-05  7:02 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-07-03  7:28 Changing MMAP_THRESHOLD in malloc() implementation ritesh sonawane
2018-07-03 14:43 ` Rich Felker
2018-07-04  5:24   ` ritesh sonawane
2018-07-04  7:05     ` ritesh sonawane
2018-07-04 14:56       ` Rich Felker
2018-07-05  6:52         ` ritesh sonawane
2018-07-05  7:02           ` ritesh sonawane
2018-07-04 14:54     ` Rich Felker

Code repositories for project(s) associated with this public inbox

	https://git.vuxu.org/mirror/musl/

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).