mailing list of musl libc
* [musl] mallocng switchover - opportunity to test
@ 2020-06-09  3:50 Rich Felker
  2020-06-09 11:09 ` Szabolcs Nagy
  2020-06-11  3:49 ` Rich Felker
  0 siblings, 2 replies; 11+ messages in thread
From: Rich Felker @ 2020-06-09  3:50 UTC (permalink / raw)
  To: musl

[-- Attachment #1: Type: text/plain, Size: 829 bytes --]

I just pushed a series of changes in preparation for upstreaming
mallocng. Before it's actually imported, it can be tested by
following these four simple steps:

1. mkdir src/malloc/mallocng
2. echo "MALLOC_DIR = mallocng" >> config.mak
3. Drop the attached files into src/malloc/mallocng.
4. Symlink or copy meta.h, malloc.c, realloc.c, free.c,
   malloc_usable_size.c, and aligned_alloc.c from the mallocng source
   dir to src/malloc/mallocng. (You can also include dump.c if desired.)

This produces a near-fully-integrated malloc, including support for
reclaim_gaps donation from ldso. The only functionality missing, which
I expect to flesh out before actual import, is handling of the case of
incomplete malloc replacement by interposition (__malloc_replaced!=0).

Please report any problems encountered.

Rich

[-- Attachment #2: glue.h --]
[-- Type: text/plain, Size: 1754 bytes --]

#ifndef MALLOC_GLUE_H
#define MALLOC_GLUE_H

#include <stdint.h>
#include <sys/mman.h>
#include <pthread.h>
#include <unistd.h>
#include <elf.h>
#include <string.h>
#include "atomic.h"
#include "syscall.h"
#include "libc.h"
#include "lock.h"

// use macros to appropriately namespace these.
#define size_classes __malloc_size_classes
#define ctx __malloc_context
#define alloc_meta __malloc_alloc_meta

#define dump_heap __dump_heap

#if USE_REAL_ASSERT
#include <assert.h>
#else
#undef assert
#define assert(x) do { if (!(x)) a_crash(); } while(0)
#endif

#define brk(p) ((uintptr_t)__syscall(SYS_brk, p))
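// (raw SYS_brk keeps the Linux return-value semantics -- the new
// break -- which mallocng's brk probing relies on; the public brk()
// wrapper returns 0/-1)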

#define mmap __mmap
#define madvise __madvise
#define mremap __mremap

#ifndef a_clz_32
#define a_clz_32 a_clz_32
static inline int a_clz_32(uint32_t x)
{
#ifdef __GNUC__
	return __builtin_clz(x);
#else
	// reduce x to its top set bit: halve, smear low bits, increment
	x >>= 1;
	x |= x >> 1;
	x |= x >> 2;
	x |= x >> 4;
	x |= x >> 8;
	x |= x >> 16;
	x++;
	return 31-a_ctz_32(x);
#endif
}
#endif
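
// Prefer the AT_RANDOM bytes from the aux vector; the stack-address
// product with 1103515245 (the classic LCG multiplier) is only a
// weak fallback seed.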

static inline uint64_t get_random_secret()
{
	uint64_t secret = (uintptr_t)&secret * 1103515245;
	for (size_t i=0; libc.auxv[i]; i+=2)
		if (libc.auxv[i]==AT_RANDOM)
			memcpy(&secret, (char *)libc.auxv[i+1], sizeof secret);
	return secret;
}

#ifndef PAGESIZE
#define PAGESIZE PAGE_SIZE
#endif

static inline size_t get_page_size()
{
	return PAGESIZE;
}

// elide locking while single-threaded; musl tracks this via libc.need_locks
#define MT (libc.need_locks)

#define RDLOCK_IS_EXCLUSIVE 1

__attribute__((__visibility__("hidden")))
extern int __malloc_lock[1];

#define LOCK_OBJ_DEF \
int __malloc_lock[1];
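
// mallocng distinguishes read and write locking, but this glue has
// only a single exclusive lock, so rdlock and wrlock are identical
// (hence RDLOCK_IS_EXCLUSIVE above) and upgradelock is a no-op.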

static inline void rdlock()
{
	if (MT) LOCK(__malloc_lock);
}
static inline void wrlock()
{
	if (MT) LOCK(__malloc_lock);
}
static inline void unlock()
{
	UNLOCK(__malloc_lock);
}
static inline void upgradelock()
{
}

#endif

[-- Attachment #3: donate.c --]
[-- Type: text/plain, Size: 881 bytes --]

#include <stdlib.h>
#include <stdint.h>
#include <limits.h>
#include <string.h>
#include <sys/mman.h>
#include <errno.h>

#include "meta.h"
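
// Carve a reclaimed gap into single-slot, non-freeable groups: round
// the range to UNIT alignment, then walk down every 4th size class,
// queuing one group for each class that still fits in the remainder.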

static void donate(unsigned char *base, size_t len)
{
	uintptr_t a = (uintptr_t)base;
	uintptr_t b = a + len;
	a += -a & (UNIT-1);
	b -= b & (UNIT-1);
	memset(base, 0, len);
	for (int sc=47; sc>0 && b>a; sc-=4) {
		if (b-a < (size_classes[sc]+1)*UNIT) continue;
		struct meta *m = alloc_meta();
		m->avail_mask = 0;
		m->freed_mask = 1;
		m->mem = (void *)a;
		m->mem->meta = m;
		m->last_idx = 0;
		m->freeable = 0;
		m->sizeclass = sc;
		m->maplen = 0;
		*((unsigned char *)m->mem+UNIT-4) = 0;
		*((unsigned char *)m->mem+UNIT-3) = 255;
		m->mem->storage[size_classes[sc]*UNIT-4] = 0;
		queue(&ctx.active[sc], m);
		a += (size_classes[sc]+1)*UNIT;
	}
}

void __malloc_donate(char *start, char *end)
{
	donate((void *)start, end-start);
}

[-- Attachment #4: replaced.c --]
[-- Type: text/plain, Size: 42 bytes --]

#include "meta.h"

int __malloc_replaced;

* Re: [musl] mallocng switchover - opportunity to test
  2020-06-09  3:50 [musl] mallocng switchover - opportunity to test Rich Felker
@ 2020-06-09 11:09 ` Szabolcs Nagy
  2020-06-09 20:08   ` Rich Felker
  2020-06-11  3:49 ` Rich Felker
  1 sibling, 1 reply; 11+ messages in thread
From: Szabolcs Nagy @ 2020-06-09 11:09 UTC (permalink / raw)
  To: Rich Felker; +Cc: musl

* Rich Felker <dalias@libc.org> [2020-06-08 23:50:10 -0400]:
> This produces a near-fully-integrated malloc, including support for
> reclaim_gaps donation from ldso. The only functionality missing, which
> I expect to flesh out before actual import, is handling of the case of
> incomplete malloc replacement by interposition (__malloc_replaced!=0).

i would actually prefer if we didn't check for __malloc_replaced
in aligned alloc, because i think it does not provide significant
safety, but it prevents the simple RTLD_NEXT wrappers which are
commonly used for simple malloc debugging/tracing/etc (and while
unsafe in general depending on what libc api calls they make,
they likely work in practice).

(the check does not provide safety because existing interposers
written for glibc likely work with musl too without issues:
the only problem is if musl uses aligned alloc somewhere where
glibc does not so an interposer may work on glibc without
interposing aligned alloc but not on musl. for newly written
interposers we just need to document the api contract.)

fyi, i looked at enabling AArch64 MTE in mallocng; with the
current linux syscall abi patches i saw the following issues:

- brk is not usable (this might change in the final linux patches
  but it's nice that we can turn brk usage off in mallocng)

- reclaim_gaps is not usable (in linux, file content may be in
  memory that does not support mte, even for private CoW maps,
  this can be changed in principle but to use reclaim_gaps elf
  changes will be needed anyway so a loader knows it has to
  use PROT_MTE and there is no elf abi design for that yet)

- meta data at the end of an allocation slot cannot share the
  same tagging granule (16byte) with user allocation (otherwise
  accessing it would become awkward due to possible concurrent
  tag changes, even if we don't care about protecting that meta
  data with mte) hence 16 bytes has to be reserved for meta data,
  this impacts the overhead of small allocations (the required
  code change is small given it already uses 16byte units, but
  since the layout of allocations is affected this is probably
  the most intrusive change mte requires).

- madvise MADV_FREE means naive tagging of internal/freed memory
  with a reserved internal tag does not work: internal pointers
  cannot access memory after its tags are zeroed by the kernel.
  this can be fixed in various ways; i haven't decided what's
  best yet. enabling mte will cause various regressions and
  different behaviour (e.g. because all pages are written to on
  malloc, calloc, realloc, free); this will be one of them.

if support for this is interesting i can work on patches
that can be upstreamed (e.g. macros conditional on mte
support etc)

* Re: [musl] mallocng switchover - opportunity to test
  2020-06-09 11:09 ` Szabolcs Nagy
@ 2020-06-09 20:08   ` Rich Felker
  2020-06-10  0:58     ` Rich Felker
  2020-06-10  9:19     ` Szabolcs Nagy
  0 siblings, 2 replies; 11+ messages in thread
From: Rich Felker @ 2020-06-09 20:08 UTC (permalink / raw)
  To: musl

On Tue, Jun 09, 2020 at 01:09:14PM +0200, Szabolcs Nagy wrote:
> * Rich Felker <dalias@libc.org> [2020-06-08 23:50:10 -0400]:
> > This produces a near-fully-integrated malloc, including support for
> > reclaim_gaps donation from ldso. The only functionality missing, which
> > I expect to flesh out before actual import, is handling of the case of
> > incomplete malloc replacement by interposition (__malloc_replaced!=0).
> 
> i would actually prefer if we didn't check for __malloc_replaced
> in aligned alloc, because i think it does not provide significant
> safety, but it prevents the simple RTLD_NEXT wrappers which are
> commonly used for simple malloc debugging/tracing/etc (and while
> unsafe in general depending on what libc api calls they make,
> they likely work in practice).
> 
> (the check does not provide safety because existing interposers
> written for glibc likely work with musl too without issues:
> the only problem is if musl uses aligned alloc somewhere where
> glibc does not so an interposer may work on glibc without
> interposing aligned alloc but not on musl. for newly written
> interposers we just need to document the api contract.)

I'm not sure about this, and how it interacts with our definition of
posix_memalign and memalign in terms of aligned_alloc.

> fyi, i looked at enabling AArch64 MTE in mallocng; with the
> current linux syscall abi patches i saw the following issues:
> 
> - brk is not usable (this might change in the final linux patches
>   but it's nice that we can turn brk usage off in mallocng)

mallocng does not use brk for allocations, only (at most) for
metadata areas. And the metadata areas are supposed to be in a place
that's hard to hit via memory errors. So in some sense it may be
acceptable to just omit MTE for them. Disabling brk puts them in a
more predictable place (mmap zone), but being able to protect them
with MTE (with a tag not used for anything else) should give much
greater protection anyway.

> - reclaim_gaps is not usable (in linux, file content may be in
>   memory that does not support mte, even for private CoW maps,
>   this can be changed in principle but to use reclaim_gaps elf
>   changes will be needed anyway so a loader knows it has to
>   use PROT_MTE and there is no elf abi design for that yet)

Sometimes (often) the gaps will be in bss outside p_filesz, so those
should be usable even without fixing this on the kernel side. But
indeed it would be best to just have it always work right.

Probably __malloc_donate should, if built for MTE, attempt to mprotect
with PROT_MTE and decline to use the memory if it can't.
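
Something like this shape, say (a sketch only: PROT_MTE and the
USE_MTE macro are placeholders for whatever the final kernel/libc ABI
provides, and real code would need to round the gap to page
boundaries first):

void __malloc_donate(char *start, char *end)
{
#ifdef USE_MTE /* hypothetical build-time switch */
	/* decline the donation if the memory can't be tagged */
	if (mprotect(start, end-start, PROT_READ|PROT_WRITE|PROT_MTE))
		return;
#endif
	donate((void *)start, end-start);
}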

> - meta data at the end of an allocation slot cannot share the
>   same tagging granule (16byte) with user allocation (otherwise
>   accessing it would become awkward due to possible concurrent
>   tag changes, even if we don't care about protecting that meta
>   data with mte) hence 16 bytes has to be reserved for meta data,
>   this impacts the overhead of small allocations (the required
>   code change is small given it already uses 16byte units, but
>   since the layout of allocations is affected this is probably
>   the most intrusive change mte requires).

Yes, the hard-coded 4 needs to be made into a macro. Sorry I didn't get
a chance to review/merge your patch for that when you posted it; I had
too many other changes in process.

> - madvise MADV_FREE means naive tagging of internal/freed memory
>   with a reserved internal tag does not work: internal pointers
>   cannot access memory after its tags are zeroed by the kernel.
>   this can be fixed in various ways; i haven't decided what's
>   best yet. enabling mte will cause various regressions and
>   different behaviour (e.g. because all pages are written to on
>   malloc, calloc, realloc, free); this will be one of them.

Can you clarify what goes wrong here? There shouldn't be access to
memory that's been freed except at time of enframe, and enframe should
be able to set a new tag at the same time it performs the access.

> if support for this is interesting i can work on patches
> that can be upstreamed (e.g. macros conditional on mte
> support etc)

There is interest, at least from me, and I hope we can also influence
improvement of things on the ELF and kernel sides.

I think it would be helpful to see patches even if they're a total
hack, before you work on polishing them for inclusion, since I might
have ideas how to change things to make the patches simpler and make
MTE less invasive.

Rich

* Re: [musl] mallocng switchover - opportunity to test
  2020-06-09 20:08   ` Rich Felker
@ 2020-06-10  0:58     ` Rich Felker
  2020-06-10  8:33       ` Szabolcs Nagy
  2020-06-10  9:19     ` Szabolcs Nagy
  1 sibling, 1 reply; 11+ messages in thread
From: Rich Felker @ 2020-06-10  0:58 UTC (permalink / raw)
  To: musl

On Tue, Jun 09, 2020 at 04:08:00PM -0400, Rich Felker wrote:
> On Tue, Jun 09, 2020 at 01:09:14PM +0200, Szabolcs Nagy wrote:
> > * Rich Felker <dalias@libc.org> [2020-06-08 23:50:10 -0400]:
> > > This produces a near-fully-integrated malloc, including support for
> > > reclaim_gaps donation from ldso. The only functionality missing, which
> > > I expect to flesh out before actual import, is handling of the case of
> > > incomplete malloc replacement by interposition (__malloc_replaced!=0).
> > 
> > i would actually prefer if we didn't check for __malloc_replaced
> > in aligned alloc, because i think it does not provide significant
> > safety, but it prevents the simple RTLD_NEXT wrappers which are
> > commonly used for simple malloc debugging/tracing/etc (and while
> > unsafe in general depending on what libc api calls they make,
> > they likely work in practice).
> > 
> > (the check does not provide safety because existing interposers
> > written for glibc likely work with musl too without issues:
> > the only problem is if musl uses aligned alloc somewhere where
> > glibc does not so an interposer may work on glibc without
> > interposing aligned alloc but not on musl. for newly written
> > interposers we just need to document the api contract.)
> 
> I'm not sure about this, and how it interacts with our definition of
> posix_memalign and memalign in terms of aligned_alloc.

What do you think of this proposal:

Have ldso track both whether malloc was replaced and whether
aligned_alloc was replaced. If malloc was replaced but aligned_alloc
wasn't, aligned_alloc fails with ENOMEM. If both were replaced and our
internal aligned_alloc still gets called, assume some sort of wrapping
is going on and allow it to proceed.

With mallocng, this is "safe" against misuse in the sense that it will
trap rather than corrupting memory if the contract is violated.
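
In code, the gate would be roughly (a sketch: __aligned_alloc_replaced
doesn't exist yet and would be set by ldso alongside
__malloc_replaced, and inner_aligned_alloc stands in for the real
allocation path):

void *aligned_alloc(size_t align, size_t len)
{
	if (__malloc_replaced && !__aligned_alloc_replaced) {
		errno = ENOMEM;
		return 0;
	}
	return inner_aligned_alloc(align, len);
}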

Rich

* Re: [musl] mallocng switchover - opportunity to test
  2020-06-10  0:58     ` Rich Felker
@ 2020-06-10  8:33       ` Szabolcs Nagy
  2020-06-11 20:05         ` Rich Felker
  0 siblings, 1 reply; 11+ messages in thread
From: Szabolcs Nagy @ 2020-06-10  8:33 UTC (permalink / raw)
  To: Rich Felker; +Cc: musl

* Rich Felker <dalias@libc.org> [2020-06-09 20:58:26 -0400]:

> On Tue, Jun 09, 2020 at 04:08:00PM -0400, Rich Felker wrote:
> > On Tue, Jun 09, 2020 at 01:09:14PM +0200, Szabolcs Nagy wrote:
> > > * Rich Felker <dalias@libc.org> [2020-06-08 23:50:10 -0400]:
> > > > This produces a near-fully-integrated malloc, including support for
> > > > reclaim_gaps donation from ldso. The only functionality missing, which
> > > > I expect to flesh out before actual import, is handling of the case of
> > > > incomplete malloc replacement by interposition (__malloc_replaced!=0).
> > > 
> > > i would actually prefer if we didn't check for __malloc_replaced
> > > in aligned alloc, because i think it does not provide significant
> > > safety, but it prevents the simple RTLD_NEXT wrappers which are
> > > commonly used for simple malloc debugging/tracing/etc (and while
> > > unsafe in general depending on what libc api calls they make,
> > > they likely work in practice).
> > > 
> > > (the check does not provide safety because existing interposers
> > > written for glibc likely work with musl too without issues:
> > > the only problem is if musl uses aligned alloc somewhere where
> > > glibc does not so an interposer may work on glibc without
> > > interposing aligned alloc but not on musl. for newly written
> > > interposers we just need to document the api contract.)
> > 
> > I'm not sure about this, and how it interacts with our definition of
> > posix_memalign and memalign in terms of aligned_alloc.
> 
> What do you think of this proposal:
> 
> Have ldso track both whether malloc was replaced and whether
> aligned_alloc was replaced. If malloc was replaced but aligned_alloc
> wasn't, aligned_alloc fails with ENOMEM. If both were replaced and our
> internal aligned_alloc still gets called, assume some sort of wrapping
> is going on and allow it to proceed.
> 
> With mallocng, this is "safe" against misuse in the sense that it will
> trap rather than corrupting memory if the contract is violated.

sounds good to me.

* Re: [musl] mallocng switchover - opportunity to test
  2020-06-09 20:08   ` Rich Felker
  2020-06-10  0:58     ` Rich Felker
@ 2020-06-10  9:19     ` Szabolcs Nagy
  2020-06-10 16:41       ` Rich Felker
  1 sibling, 1 reply; 11+ messages in thread
From: Szabolcs Nagy @ 2020-06-10  9:19 UTC (permalink / raw)
  To: Rich Felker; +Cc: musl

* Rich Felker <dalias@libc.org> [2020-06-09 16:08:00 -0400]:
> On Tue, Jun 09, 2020 at 01:09:14PM +0200, Szabolcs Nagy wrote:
> > - reclaim_gaps is not usable (in linux, file content may be in
> >   memory that does not support mte, even for private CoW maps,
> >   this can be changed in principle but to use reclaim_gaps elf
> >   changes will be needed anyway so a loader knows it has to
> >   use PROT_MTE and there is no elf abi design for that yet)
> 
> Sometimes (often) the gaps will be in bss outside p_filesz, so those
> should be usable even without fixing this on the kernel side. But
> indeed it would be best to just have it always work right.
> 
> Probably __malloc_donate should, if built for MTE, attempt to mprotect
> with PROT_MTE and decline to use the memory if it can't.

yeah checking mprotect return value works.

> > - madvise MADV_FREE means naive tagging of internal/freed memory
> >   with a reserved internal tag does not work: internal pointers
> >   cannot access memory after its tags are zeroed by the kernel.
> >   this can be fixed in various ways; i haven't decided what's
> >   best yet. enabling mte will cause various regressions and
> >   different behaviour (e.g. because all pages are written to on
> >   malloc, calloc, realloc, free); this will be one of them.
> 
> Can you clarify what goes wrong here? There shouldn't be access to
> memory that's been freed except at time of enframe, and enframe should
> be able to set a new tag at the same time it performs the access.

the simplest model i could come up with was to tag
everything with a reserved tag right after mmap/mprotect
and have all metadata pointers use that tag.

then user allocations get a different tag and only the
user owned range (plus padding at the end) uses that tag
which is cleared on free (back to the reserved tag).

this means internal pointer handling does not need to
deal with tagging at all: all metadata pointers have
the right tag to access all mapped non-user memory.

this model breaks if tags can be zeroed after free so
metadata allocation has to ensure it retags before
accessing memory, but that requires more changes
(e.g. enframe does various accesses in freed memory
when that is reused but with a different offset, now
we don't know what is the right tag for such access:
is it 0 or the reserved tag? we can just always retag
but then enframe will always tag which is more code
change)

if the 'reserved tag' is the 0 tag then the model still
works, but meta data is less protected against invalid
access with non-heap pointers (which will have 0 tag).

> > if support for this is interesting i can work on patches
> > that can be upstreamed (e.g. macros conditional on mte
> > support etc)
> 
> There is interest, at least from me, and I hope we can also influence
> improvement of things on the ELF and kernel sides.
> 
> I think it would be helpful to see patches even if they're a total
> hack, before you work on polishing them for inclusion, since I might
> have ideas how to change things to make the patches simpler and make
> MTE less invasive.

ok.

* Re: [musl] mallocng switchover - opportunity to test
  2020-06-10  9:19     ` Szabolcs Nagy
@ 2020-06-10 16:41       ` Rich Felker
  0 siblings, 0 replies; 11+ messages in thread
From: Rich Felker @ 2020-06-10 16:41 UTC (permalink / raw)
  To: musl

On Wed, Jun 10, 2020 at 11:19:43AM +0200, Szabolcs Nagy wrote:
> * Rich Felker <dalias@libc.org> [2020-06-09 16:08:00 -0400]:
> > On Tue, Jun 09, 2020 at 01:09:14PM +0200, Szabolcs Nagy wrote:
> > > - reclaim_gaps is not usable (in linux, file content may be in
> > >   memory that does not support mte, even for private CoW maps,
> > >   this can be changed in principle but to use reclaim_gaps elf
> > >   changes will be needed anyway so a loader knows it has to
> > >   use PROT_MTE and there is no elf abi design for that yet)
> > 
> > Sometimes (often) the gaps will be in bss outside p_filesz, so those
> > should be usable even without fixing this on the kernel side. But
> > indeed it would be best to just have it always work right.
> > 
> > Probably __malloc_donate should, if built for MTE, attempt to mprotect
> > with PROT_MTE and decline to use the memory if it can't.
> 
> yeah checking mprotect return value works.
> 
> > > - madvise MADV_FREE means naive tagging of internal/freed memory
> > >   with a reserved internal tag does not work: internal pointers
> > >   cannot access memory after its tags are zeroed by the kernel.
> > >   this can be fixed in various ways; i haven't decided what's
> > >   best yet. enabling mte will cause various regressions and
> > >   different behaviour (e.g. because all pages are written to on
> > >   malloc, calloc, realloc, free); this will be one of them.
> > 
> > Can you clarify what goes wrong here? There shouldn't be access to
> > memory that's been freed except at time of enframe, and enframe should
> > be able to set a new tag at the same time it performs the access.
> 
> the simplest model i could come up with was to tag
> everything with a reserved tag right after mmap/mprotect
> and have all metadata pointers use that tag.
> 
> then user allocations get a different tag and only the
> user owned range (plus padding at the end) uses that tag
> which is cleared on free (back to the reserved tag).
> 
> this means internal pointer handling does not need to
> deal with tagging at all: all metadata pointers have
> the right tag to access all mapped non-user memory.
> 
> this model breaks if tags can be zeroed after free so
> metadata allocation has to ensure it retags before
> accessing memory, but that requires more changes
> (e.g. enframe does various accesses in freed memory
> when that is reused but with a different offset, now
> we don't know what is the right tag for such access:
> is it 0 or the reserved tag? we can just always retag
> but then enframe will always tag which is more code
> change)

Yes, you can just always retag there. At that point the caller owns
the entire slot and nothing else can observe/touch it except debug
code in dump.c which doesn't need to work. (FWIW the ownership model
here is sufficiently strong that enframe is even called without the
lock held.)

> if the 'reserved tag' is the 0 tag then the model still
> works, but meta data is less protected against invalid
> access with non-heap pointers (which will have 0 tag).

Yes exactly. However I'm not sure it's really that big an issue, and
it might be the best solution. If you never use tag 0 for valid heap
allocations, only freed heap space, then any UAF/DF attempts will trap
due to the tag 0 not matching the nonzero tag in the dangling pointer.
Of course fixed-offset-from-mmapped-library accesses are still
possible and won't trap, but I'm not sure they're valuable in attacks.
The only threat here is probably information leak from freed memory,
and if we're going to retag freed memory with tag zero maybe it makes
sense to just memset it all to zero at the same time (is memset any
slower than "memset just the tag"?).

Rich

* Re: [musl] mallocng switchover - opportunity to test
  2020-06-09  3:50 [musl] mallocng switchover - opportunity to test Rich Felker
  2020-06-09 11:09 ` Szabolcs Nagy
@ 2020-06-11  3:49 ` Rich Felker
  2020-06-11  4:33   ` Rich Felker
  1 sibling, 1 reply; 11+ messages in thread
From: Rich Felker @ 2020-06-11  3:49 UTC (permalink / raw)
  To: musl

On Mon, Jun 08, 2020 at 11:50:10PM -0400, Rich Felker wrote:
> I just pushed a series of changes in preparation for upstreaming
> mallocng. Before it's actually imported, it can be tested by
> following these four simple steps:
> 
> 1. mkdir src/malloc/mallocng
> 2. echo "MALLOC_DIR = mallocng" >> config.mak
> 3. Drop the attached files into src/malloc/mallocng.
> 4. Symlink or copy meta.h, malloc.c, realloc.c, free.c,
>    malloc_usable_size.c, and aligned_alloc.c from the mallocng source
>    dir to src/malloc/mallocng. (You can also include dump.c if desired.)
> 
> This produces a near-fully-integrated malloc, including support for
> reclaim_gaps donation from ldso. The only functionality missing, which
> I expect to flesh out before actual import, is handling of the case of
> incomplete malloc replacement by interposition (__malloc_replaced!=0).
> 
> Please report any problems encountered.

For reference -- I should have mentioned in the original post -- the
above is with musl commit 384c0131ccda2656dec23a0416ad3f14101151a7
and mallocng-draft commit c0d6d87596f565e652e126f54aa1a2afaecc0e52.

I'll have an update to these posted soon.

Rich

* Re: [musl] mallocng switchover - opportunity to test
  2020-06-11  3:49 ` Rich Felker
@ 2020-06-11  4:33   ` Rich Felker
  0 siblings, 0 replies; 11+ messages in thread
From: Rich Felker @ 2020-06-11  4:33 UTC (permalink / raw)
  To: musl

[-- Attachment #1: Type: text/plain, Size: 1543 bytes --]

On Wed, Jun 10, 2020 at 11:49:34PM -0400, Rich Felker wrote:
> On Mon, Jun 08, 2020 at 11:50:10PM -0400, Rich Felker wrote:
> > I just pushed a series of changes in preparation for upstreaming
> > mallocng. Before it's actually imported, it can be tested by
> > following these four simple steps:
> > 
> > 1. mkdir src/malloc/mallocng
> > 2. echo "MALLOC_DIR = mallocng" >> config.mak
> > 3. Drop the attached files into src/malloc/mallocng.
> > 4. Symlink or copy meta.h, malloc.c, realloc.c, free.c,
> >    malloc_usable_size.c, and aligned_alloc.c from the mallocng source
> >    dir to src/malloc/mallocng. (You can also include dump.c if desired.)
> > 
> > This produces a near-fully-integrated malloc, including support for
> > reclaim_gaps donation from ldso. The only functionality missing, which
> > I expect to flesh out before actual import, is handling of the case of
> > incomplete malloc replacement by interposition (__malloc_replaced!=0).
> > 
> > Please report any problems encountered.
> 
> For reference -- I should have mentioned in the original post -- the
> above is with musl commit 384c0131ccda2656dec23a0416ad3f14101151a7
> and mallocng-draft commit c0d6d87596f565e652e126f54aa1a2afaecc0e52.
> 
> I'll have an update to these posted soon.

Updated upstream. Reduced what's needed to integrate: only 2 files
now, smaller than before (attached).

musl: ca36573ecf
mallocng-draft: 55c1e76bb0

Files needed from mallocng: meta.h, malloc.c, realloc.c, free.c,
malloc_usable_size.c, and aligned_alloc.c

Rich

[-- Attachment #2: glue.h --]
[-- Type: text/plain, Size: 1518 bytes --]

#ifndef MALLOC_GLUE_H
#define MALLOC_GLUE_H

#include <stdint.h>
#include <sys/mman.h>
#include <pthread.h>
#include <unistd.h>
#include <elf.h>
#include <string.h>
#include "atomic.h"
#include "syscall.h"
#include "libc.h"
#include "lock.h"
#include "dynlink.h"

// use macros to appropriately namespace these.
#define size_classes __malloc_size_classes
#define ctx __malloc_context
#define alloc_meta __malloc_alloc_meta
#define is_allzero __malloc_allzerop
#define dump_heap __dump_heap

#if USE_REAL_ASSERT
#include <assert.h>
#else
#undef assert
#define assert(x) do { if (!(x)) a_crash(); } while(0)
#endif

#define brk(p) ((uintptr_t)__syscall(SYS_brk, p))

#define mmap __mmap
#define madvise __madvise
#define mremap __mremap

#define DISABLE_ALIGNED_ALLOC (__malloc_replaced && !__aligned_alloc_replaced)
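// i.e. when malloc was interposed but aligned_alloc was not, fail
// aligned allocations rather than mixing our heap with the
// interposer's (see the proposal upthread).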

static inline uint64_t get_random_secret()
{
	uint64_t secret = (uintptr_t)&secret * 1103515245;
	for (size_t i=0; libc.auxv[i]; i+=2)
		if (libc.auxv[i]==AT_RANDOM)
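			// +8: take the second half of AT_RANDOM; musl's
			// stack-protector canary consumes the leading bytes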
			memcpy(&secret, (char *)libc.auxv[i+1]+8, sizeof secret);
	return secret;
}

#ifndef PAGESIZE
#define PAGESIZE PAGE_SIZE
#endif

#define MT (libc.need_locks)

#define RDLOCK_IS_EXCLUSIVE 1

__attribute__((__visibility__("hidden")))
extern int __malloc_lock[1];

#define LOCK_OBJ_DEF \
int __malloc_lock[1];

static inline void rdlock()
{
	if (MT) LOCK(__malloc_lock);
}
static inline void wrlock()
{
	if (MT) LOCK(__malloc_lock);
}
static inline void unlock()
{
	UNLOCK(__malloc_lock);
}
static inline void upgradelock()
{
}

#endif

[-- Attachment #3: donate.c --]
[-- Type: text/plain, Size: 881 bytes --]

#include <stdlib.h>
#include <stdint.h>
#include <limits.h>
#include <string.h>
#include <sys/mman.h>
#include <errno.h>

#include "meta.h"

static void donate(unsigned char *base, size_t len)
{
	uintptr_t a = (uintptr_t)base;
	uintptr_t b = a + len;
	a += -a & (UNIT-1);
	b -= b & (UNIT-1);
	memset(base, 0, len);
	for (int sc=47; sc>0 && b>a; sc-=4) {
		if (b-a < (size_classes[sc]+1)*UNIT) continue;
		struct meta *m = alloc_meta();
		m->avail_mask = 0;
		m->freed_mask = 1;
		m->mem = (void *)a;
		m->mem->meta = m;
		m->last_idx = 0;
		m->freeable = 0;
		m->sizeclass = sc;
		m->maplen = 0;
		*((unsigned char *)m->mem+UNIT-4) = 0;
		*((unsigned char *)m->mem+UNIT-3) = 255;
		m->mem->storage[size_classes[sc]*UNIT-4] = 0;
		queue(&ctx.active[sc], m);
		a += (size_classes[sc]+1)*UNIT;
	}
}

void __malloc_donate(char *start, char *end)
{
	donate((void *)start, end-start);
}

* Re: [musl] mallocng switchover - opportunity to test
  2020-06-10  8:33       ` Szabolcs Nagy
@ 2020-06-11 20:05         ` Rich Felker
  2020-06-12 17:42           ` Szabolcs Nagy
  0 siblings, 1 reply; 11+ messages in thread
From: Rich Felker @ 2020-06-11 20:05 UTC (permalink / raw)
  To: musl

On Wed, Jun 10, 2020 at 10:33:06AM +0200, Szabolcs Nagy wrote:
> > > I'm not sure about this, and how it interacts with our definition of
> > > posix_memalign and memalign in terms of aligned_alloc.
> > 
> > What do you think of this proposal:
> > 
> > Have ldso track both whether malloc was replaced and whether
> > aligned_alloc was replaced. If malloc was replaced but aligned_alloc
> > wasn't, aligned_alloc fails with ENOMEM. If both were replaced and our
> > internal aligned_alloc still gets called, assume some sort of wrapping
> > is going on and allow it to proceed.
> > 
> > With mallocng, this is "safe" against misuse in the sense that it will
> > trap rather than corrupting memory if the contract is violated.
> 
> sounds good to me.

As of commit 1fc67fc117f9d25d240d46bbef78ebccacec7097 it should behave
as proposed here. Does this look/behave like what you expected?

Rich

* Re: [musl] mallocng switchover - opportunity to test
  2020-06-11 20:05         ` Rich Felker
@ 2020-06-12 17:42           ` Szabolcs Nagy
  0 siblings, 0 replies; 11+ messages in thread
From: Szabolcs Nagy @ 2020-06-12 17:42 UTC (permalink / raw)
  To: Rich Felker; +Cc: musl

* Rich Felker <dalias@libc.org> [2020-06-11 16:05:30 -0400]:
> On Wed, Jun 10, 2020 at 10:33:06AM +0200, Szabolcs Nagy wrote:
> > > > I'm not sure about this, and how it interacts with our definition of
> > > > posix_memalign and memalign in terms of aligned_alloc.
> > > 
> > > What do you think of this proposal:
> > > 
> > > Have ldso track both whether malloc was replaced and whether
> > > aligned_alloc was replaced. If malloc was replaced but aligned_alloc
> > > wasn't, aligned_alloc fails with ENOMEM. If both were replaced and our
> > > internal aligned_alloc still gets called, assume some sort of wrapping
> > > is going on and allow it to proceed.
> > > 
> > > With mallocng, this is "safe" against misuse in the sense that it will
> > > trap rather than corrupting memory if the contract is violated.
> > 
> > sounds good to me.
> 
> As of commit 1fc67fc117f9d25d240d46bbef78ebccacec7097 it should behave
> as proposed here. Does this look/behave like what you expected?

yes this works for me, thanks.
