From: Rich Felker
Newsgroups: gmane.linux.lib.musl.general
Subject: Re: Changing MMAP_THRESHOLD in malloc() implementation.
Date: Wed, 4 Jul 2018 10:56:29 -0400
To: musl@lists.openwall.com
Message-ID: <20180704145629.GQ1392@brightrain.aerifal.cx>
References: <20180703144331.GL1392@brightrain.aerifal.cx>

On Wed, Jul 04, 2018 at 12:35:02PM +0530, ritesh sonawane wrote:
> Sir, the statistics above are with 64MB page size. We are using a
> system which has 2MB (large) and 64MB (huge) pages.

If this is a custom architecture, have you considered supporting
variable page sizes like x86 and others do? Unconditionally using
large/huge pages for everything seems like a really, really bad idea.
Aside from wasting memory, it makes COW latency spikes really high
(memcpy of a whole 2MB or 64MB on each fault), and it is even worse for
paging in mmapped files from a block-device-backed filesystem.

Rich

> On Wed, Jul 4, 2018 at 10:54 AM, ritesh sonawane wrote:
>
> > Thank you very much for the instant reply.
> >
> > Yes sir, it is wasting memory for each shared library. But the
> > memory wastage is even worse when a program requests memory with a
> > size above 224KB (the threshold value).
> >
> > -> If a program requests 1GB per request, it can use 45GB at the most.
> > -> If a program requests 512MB per request, it can use 41.5GB at the most.
> > -> If a program requests 225KB per request, it can use about 167MB
> >    at the most.
> >
> > As we ported musl-1.1.14 to our architecture, we are bound to make
> > changes in the same code base. We have increased MMAP_THRESHOLD to
> > 1GB and also changed the calculation for the bin index. After that
> > we observed an improvement in memory utilization, i.e. for size
> > 225KB the memory used is 47.6GB.
> >
> > But now we are facing a problem in multi-threaded applications, as
> > we haven't changed the function pretrim(): there are hard-coded
> > values like '40' and '3' in it, and we are unable to understand how
> > these values were decided.
> >
> > static int pretrim(struct chunk *self, size_t n, int i, int j)
> > {
> > 	size_t n1;
> > 	struct chunk *next, *split;
> >
> > 	/* We cannot pretrim if it would require re-binning. */
> > 	if (j < 40) return 0;
> > 	if (j < i+3) {
> > 		if (j != 63) return 0;
> > 		n1 = CHUNK_SIZE(self);
> > 		if (n1-n <= MMAP_THRESHOLD) return 0;
> > 	} else {
> > 		n1 = CHUNK_SIZE(self);
> > 	}
> > 	.....
> > 	.....
> > 	.....
> > }
> >
> > Can we get any clue as to how these values were decided? It would be
> > very helpful for us.
> >
> > Best Regards,
> > Ritesh Sonawane
> >
> > On Tue, Jul 3, 2018 at 8:13 PM, Rich Felker wrote:
> >
> >> On Tue, Jul 03, 2018 at 12:58:04PM +0530, ritesh sonawane wrote:
> >> > Hi All,
> >> >
> >> > We are using musl-1.1.14 for our architecture. It has a page
> >> > size of 2MB. Due to the low threshold value there is a lot of
> >> > memory wastage, so we want to change the value of MMAP_THRESHOLD.
> >> >
> >> > Can anyone please guide us on which factors need to be considered
> >> > when changing this threshold value?
> >>
> >> It's not a parameter that can be changed freely; it's linked to the
> >> scale of the bin sizes. There is no framework to track and reuse
> >> freed chunks which are larger than MMAP_THRESHOLD, so you'd be
> >> replacing recoverable waste from page granularity with
> >> unrecoverable waste from the inability to reuse these larger freed
> >> chunks, except by breaking them up into pieces to satisfy smaller
> >> requests.
> >>
> >> I may look into handling this better when replacing musl's malloc
> >> at some point, but if your system forces you to use ridiculously
> >> large pages like 2MB, you've basically committed to wasting huge
> >> amounts of memory anyway (at least 2MB for each shared library in
> >> each process)...
> >>
> >> With musl git-master and future releases, you have the option to
> >> link a malloc replacement library that might be a decent solution
> >> to your problem.
> >>
> >> Rich