From: Rich Felker
Newsgroups: gmane.linux.lib.musl.general
Subject: Re: Multithreaded program VSZ increasing due to bad free() strategy
Date: Tue, 28 Jul 2015 16:47:14 -0400
Message-ID: <20150728204714.GX16376@brightrain.aerifal.cx>
In-Reply-To: <20150726171048.GN16376@brightrain.aerifal.cx>
To: musl@lists.openwall.com

On Sun, Jul 26, 2015 at 01:10:48PM -0400, Rich Felker wrote:
> But in the mean time I'm not sure what the best solution is. I think
> we need to find a way to prevent the "wrongly empty" bins from being
> visible.
> Perhaps alloc_fwd and alloc_rev could refrain from using
> unbin and unlock_bin and instead help the caller keep a mask of bins
> it has locked and which need to be masked off when the free is
> finished...

The attached patch should mitigate the issue in practice by performing
madvise with the bin lock held and the binmap bit already set, so that
malloc waits for the lock while madvise is running rather than trying
to map new memory. There is still a race window in which unnecessary
allocation will occur, but it should be much harder to hit. We can
work on better fixes later.

Rich

[attachment: free_madv.diff, text/plain]

diff --git a/src/malloc/malloc.c b/src/malloc/malloc.c
index eb68d55..efe1f5e 100644
--- a/src/malloc/malloc.c
+++ b/src/malloc/malloc.c
@@ -464,18 +464,6 @@ void free(void *p)
 	if (next->psize != self->csize) a_crash();
 
 	for (;;) {
-		/* Replace middle of large chunks with fresh zero pages */
-		if (reclaim && (self->psize & next->csize & C_INUSE)) {
-			uintptr_t a = (uintptr_t)self + SIZE_ALIGN+PAGE_SIZE-1 & -PAGE_SIZE;
-			uintptr_t b = (uintptr_t)next - SIZE_ALIGN & -PAGE_SIZE;
-#if 1
-			__madvise((void *)a, b-a, MADV_DONTNEED);
-#else
-			__mmap((void *)a, b-a, PROT_READ|PROT_WRITE,
-				MAP_PRIVATE|MAP_ANONYMOUS|MAP_FIXED, -1, 0);
-#endif
-		}
-
 		if (self->psize & next->csize & C_INUSE) {
 			self->csize = final_size | C_INUSE;
 			next->psize = final_size | C_INUSE;
@@ -517,5 +505,17 @@ void free(void *p)
 	if (!(mal.binmap & 1ULL<<i))
 		a_or_64(&mal.binmap, 1ULL<<i);
 
+	/* Replace middle of large chunks with fresh zero pages */
+	if (reclaim) {
+		uintptr_t a = (uintptr_t)self + SIZE_ALIGN+PAGE_SIZE-1 & -PAGE_SIZE;
+		uintptr_t b = (uintptr_t)next - SIZE_ALIGN & -PAGE_SIZE;
+#if 1
+		__madvise((void *)a, b-a, MADV_DONTNEED);
+#else
+		__mmap((void *)a, b-a, PROT_READ|PROT_WRITE,
+			MAP_PRIVATE|MAP_ANONYMOUS|MAP_FIXED, -1, 0);
+#endif
+	}
+
 	unlock_bin(i);
 }