mailing list of musl libc
* [PATCH] malloc/expand_heap.c: really try to use all available memory
@ 2017-07-18 12:52 Steven Walter
  2017-07-18 17:27 ` Rich Felker
  0 siblings, 1 reply; 2+ messages in thread
From: Steven Walter @ 2017-07-18 12:52 UTC (permalink / raw)
  To: musl; +Cc: Steven Walter

Previously expand_heap would ask for increasingly larger mmap areas
every time, in order to avoid allocating a bunch of small areas and
fragmenting memory.  However, after a point, we may ask for an mmap area
so large that it can't possibly succeed, even though there is still
system memory available.  This is particularly likely on a 32-bit system
with several gigabytes of RAM, or on a no-MMU system with almost any
amount of RAM.  Without an MMU to make physically-discontiguous pages
appear contiguous, the chances of any large mmap succeeding are very low.
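
[A sketch of the growth schedule being described, with names of my own
choosing (only the shift expression itself comes from expand_heap.c), to
show how quickly the minimum request outgrows a 32-bit address space:]

```c
#include <stddef.h>

#define PAGE_SIZE 4096

/* Minimum mmap request after `step` successful expansions,
 * mirroring `(size_t)PAGE_SIZE << mmap_step/2` in expand_heap:
 * the floor doubles on every second expansion. */
static unsigned long long min_request(int step)
{
	return (unsigned long long)PAGE_SIZE << step / 2;
}
```

[After 38 successful expansions the floor alone is 4096 << 19 = 2 GiB,
more contiguous address space than a 32-bit process can normally map in
one piece, so the request fails even with plenty of free RAM.]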

To fix this, support decreasing mmap_step once we hit an allocation
failure.  We'll try smaller and smaller amounts until we either ask for
a single page or exactly as many pages as we currently need.  Only if
that fails do we fail the overall request.
---
 src/malloc/expand_heap.c | 27 +++++++++++++++++++--------
 1 file changed, 19 insertions(+), 8 deletions(-)

diff --git a/src/malloc/expand_heap.c b/src/malloc/expand_heap.c
index d8c0be7..6a0dae8 100644
--- a/src/malloc/expand_heap.c
+++ b/src/malloc/expand_heap.c
@@ -61,12 +61,23 @@ void *__expand_heap(size_t *pn)
 		return (void *)(brk-n);
 	}
 
-	size_t min = (size_t)PAGE_SIZE << mmap_step/2;
-	if (n < min) n = min;
-	void *area = __mmap(0, n, PROT_READ|PROT_WRITE,
-		MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
-	if (area == MAP_FAILED) return 0;
-	*pn = n;
-	mmap_step++;
-	return area;
+	while (1) {
+		size_t min = (size_t)PAGE_SIZE << mmap_step/2;
+		size_t size = n;
+		if (size < min) size = min;
+		void *area = __mmap(0, size, PROT_READ|PROT_WRITE,
+			MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
+		if (area != MAP_FAILED) {
+			*pn = size;
+			mmap_step++;
+			return area;
+		}
+
+		/* If we asked for a single page (or the exact
+		 * allocation amount) and still failed, give up. */
+		if (size == n || mmap_step < 2)
+			return 0;
+
+		mmap_step--;
+	}
 }
-- 
2.7.4




* Re: [PATCH] malloc/expand_heap.c: really try to use all available memory
  2017-07-18 12:52 [PATCH] malloc/expand_heap.c: really try to use all available memory Steven Walter
@ 2017-07-18 17:27 ` Rich Felker
  0 siblings, 0 replies; 2+ messages in thread
From: Rich Felker @ 2017-07-18 17:27 UTC (permalink / raw)
  To: musl

On Tue, Jul 18, 2017 at 08:52:59AM -0400, Steven Walter wrote:
> Previously expand_heap would ask for increasingly larger mmap areas
> every time, in order to avoid allocating a bunch of small areas and
> fragmenting memory.  However, after a point, we may ask for an mmap area
> so large that it can't possibly succeed, even though there is still
> system memory available.  This is particularly likely on a 32-bit system
> with several gigabytes of RAM, or on a no-MMU system with almost any
> amount of RAM.  Without an MMU to make physically-discontiguous pages
> appear contiguous, the chances of any large mmap succeeding are very low.

This is partly intentional, since beyond this point fragmentation is
expected to be catastrophic to the point that it would prevent mmap
(and on nommu, bring down the whole system). But if it's seriously
capping how much memory you can get with malloc on a 32-bit system
(especially one with an MMU), some tweaking is probably needed.

> To fix this, support decreasing mmap_step once we hit an allocation
> failure.  We'll try smaller and smaller amounts until we either ask for
> a single page or exactly as many pages as we currently need.  Only if
> that fails do we fail the overall request.

This is probably a really bad idea, though I don't have data to back
that up; it's just an intuition about what happens. Would it work to
instead decrease the growth rate on requests? If you're at the point
where a single page (or a few pages) makes the difference between
success and failure, it's already highly unpredictable (due to ASLR
and other memory-layout factors) whether your job is going to succeed
either way. If you get in a situation where there are hundreds of megs
free but you can't use them, that's a real problem, but I think it's
one we can solve without dropping down to small requests.
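
[One illustrative reading of "decrease the growth rate": this schedule,
its knee, and all names are my own sketch, not anything proposed on the
list. The idea is to keep the minimum request growing, but switch from
geometric to linear growth past a threshold instead of ever falling
back to page-sized requests:]

```c
#define PAGE_SIZE 4096

/* Hypothetical damped schedule: double the minimum request
 * every second step up to a knee (64 MiB here), then grow
 * linearly by 64 MiB per further step, so large heaps still
 * get large mappings without requests exploding geometrically. */
static unsigned long long damped_min(int step)
{
	int knee = 28; /* (4096 << 28/2) == 64 MiB */
	if (step <= knee)
		return (unsigned long long)PAGE_SIZE << step / 2;
	return ((unsigned long long)PAGE_SIZE << knee / 2)
		* (unsigned long long)(1 + (step - knee));
}
```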

Note that there's also a bad fragmentation behavior from your proposal
if a large expand_heap is only temporarily blocked, e.g. by a
short-lived mmap. In that case, dropping down to smaller requests will
just fragment memory horribly so that subsequent requests (after the
mmap is released) will fail even worse.

Rich


