mailing list of musl libc
* [musl] [PATCH] harden against unauthorized writes to the atexit function lists
@ 2020-12-02  0:55 Ariadne Conill
From: Ariadne Conill @ 2020-12-02  0:55 UTC (permalink / raw)
  To: musl; +Cc: Ariadne Conill

previously, the first atexit list block was stored in BSS, which means an
attacker could potentially ascertain its location and modify it, allowing
for its abuse as a code execution mechanism.

by moving the atexit list into a series of anonymous mmaped pages, we can
use mprotect to protect the atexit lists by keeping them readonly when they
are not being mutated by the __cxa_atexit() function.
---
 src/exit/atexit.c | 69 ++++++++++++++++++++++++++++++++---------------
 1 file changed, 47 insertions(+), 22 deletions(-)

diff --git a/src/exit/atexit.c b/src/exit/atexit.c
index 854e9fdd..8bf1cfcb 100644
--- a/src/exit/atexit.c
+++ b/src/exit/atexit.c
@@ -1,5 +1,6 @@
 #include <stdlib.h>
 #include <stdint.h>
+#include <sys/mman.h>
 #include "libc.h"
 #include "lock.h"
 #include "fork_impl.h"
@@ -9,17 +10,17 @@
 #define realloc undef
 #define free undef
 
-/* Ensure that at least 32 atexit handlers can be registered without malloc */
-#define COUNT 32
-
 static struct fl
 {
 	struct fl *next;
-	void (*f[COUNT])(void *);
-	void *a[COUNT];
-} builtin, *head;
+	struct fl_pair {
+		void (*f)(void *);
+		void *a;
+	} pairs[];
+} *head;
 
 static int slot;
+static size_t slot_max;
 static volatile int lock[1];
 volatile int *const __atexit_lockptr = lock;
 
@@ -27,12 +28,19 @@ void __funcs_on_exit()
 {
 	void (*func)(void *), *arg;
 	LOCK(lock);
-	for (; head; head=head->next, slot=COUNT) while(slot-->0) {
-		func = head->f[slot];
-		arg = head->a[slot];
-		UNLOCK(lock);
-		func(arg);
-		LOCK(lock);
+	for (; head; slot=slot_max) {
+		struct fl *next = head->next;
+
+		while(slot-->0) {
+			func = head->pairs[slot].f;
+			arg = head->pairs[slot].a;
+			UNLOCK(lock);
+			func(arg);
+			LOCK(lock);
+		}
+
+		munmap(head, PAGE_SIZE);
+		head = next;
 	}
 }
 
@@ -42,28 +50,45 @@ void __cxa_finalize(void *dso)
 
 int __cxa_atexit(void (*func)(void *), void *arg, void *dso)
 {
+	struct fl *cursor;
+
 	LOCK(lock);
 
-	/* Defer initialization of head so it can be in BSS */
-	if (!head) head = &builtin;
+	/* Slots per page: the page size, less the leading next pointer, divided by the size of one function/argument pair. */
+	slot_max = (PAGE_SIZE - sizeof(void *)) / sizeof(struct fl_pair);
+
+	/* Determine if a new allocation is necessary. */
+	if (!head || slot == slot_max)
+		cursor = 0;
+	else
+		cursor = head;
 
-	/* If the current function list is full, add a new one */
-	if (slot==COUNT) {
-		struct fl *new_fl = calloc(sizeof(struct fl), 1);
-		if (!new_fl) {
+	/* If a new allocation is necessary, allocate it read-write, otherwise make the current atexit list read-write. */
+	if (!cursor) {
+		cursor = mmap(0, PAGE_SIZE, PROT_READ | PROT_WRITE, MAP_ANON | MAP_PRIVATE, -1, 0);
+		if (cursor == MAP_FAILED) {
 			UNLOCK(lock);
 			return -1;
 		}
-		new_fl->next = head;
-		head = new_fl;
+		cursor->next = head;
+		head = cursor;
 		slot = 0;
+	} else if (mprotect(cursor, PAGE_SIZE, PROT_READ | PROT_WRITE)) {
+		UNLOCK(lock);
+		return -1;
 	}
 
 	/* Append function to the list. */
-	head->f[slot] = func;
-	head->a[slot] = arg;
+	cursor->pairs[slot].f = func;
+	cursor->pairs[slot].a = arg;
 	slot++;
 
+	/* Mark the atexit list read-only to avoid its abuse. */
+	if (mprotect(cursor, PAGE_SIZE, PROT_READ)) {
+		UNLOCK(lock);
+		return -1;
+	}
+
 	UNLOCK(lock);
 	return 0;
 }
-- 
2.29.2



* Re: [musl] [PATCH] harden against unauthorized writes to the atexit function lists
From: Rich Felker @ 2020-12-02  1:37 UTC (permalink / raw)
  To: musl

On Tue, Dec 01, 2020 at 05:55:39PM -0700, Ariadne Conill wrote:
> previously, the first atexit list block was stored in BSS, which means an
> attacker could potentially ascertain its location and modify it, allowing
> for its abuse as a code execution mechanism.
> 
> by moving the atexit list into a series of anonymous mmaped pages, we can
> use mprotect to protect the atexit lists by keeping them readonly when they
> are not being mutated by the __cxa_atexit() function.

This is a non-starter. atexit is specifically required by the standard
to succeed when called no more than 32 times, which is why we have 32
built-in slots that always exist. If you really want to pursue
something here it should probably just be protecting the pointers with
some secret...

Rich


* Re: [musl] [PATCH] harden against unauthorized writes to the atexit function lists
From: Matthew Maurer @ 2020-12-03 17:55 UTC (permalink / raw)
  To: musl

While we are not a supported system, I would also note that I work on a
project which *does* expect `atexit` to work properly, but does not have a
working anonymous mmap system due to how resource management works for us.

If ASLR is enabled, that should already mitigate the ability of an attacker
to write to this location. While placing these in anonymous pages would be
a further defense against cases where a pointer is leaked *and* an
arbitrary overwrite is present, this would be an argument for a more
complete form of randomization (see also pagerando,
https://lists.llvm.org/pipermail/llvm-dev/2017-June/113794.html ) rather
than trying to randomize the location of individual chunks of sensitive
data.

On Tue, Dec 1, 2020 at 5:37 PM Rich Felker <dalias@libc.org> wrote:

> On Tue, Dec 01, 2020 at 05:55:39PM -0700, Ariadne Conill wrote:
> > previously, the first atexit list block was stored in BSS, which means an
> > attacker could potentially ascertain its location and modify it, allowing
> > for its abuse as a code execution mechanism.
> >
> > by moving the atexit list into a series of anonymous mmaped pages, we can
> > use mprotect to protect the atexit lists by keeping them readonly when
> they
> > are not being mutated by the __cxa_atexit() function.
>
> This is a non-starter. atexit is specifically required by the standard
> to succeed when called no more than 32 times, which is why we have 32
> built-in slots that always exist. If you really want to pursue
> something here it should probably just be protecting the pointers with
> some secret...
>
> Rich
>



* Re: [musl] [PATCH] harden against unauthorized writes to the atexit function lists
From: Rich Felker @ 2020-12-03 19:16 UTC (permalink / raw)
  To: Matthew Maurer; +Cc: musl

On Thu, Dec 03, 2020 at 09:55:02AM -0800, Matthew Maurer wrote:
> While we are not a supported system, I would also note that I work on a
> project which *does* expect `atexit` to work properly, but does not have a
> working anonymous mmap system due to how resource management works for us.
> 
> If ASLR is enabled, that should already mitigate the ability of an attacker
> to write to this location. While placing these in anonymous pages would be
> a further defense against cases where a pointer is leaked *and* an
> arbitrary overwrite is present, this would be an argument for a more
> complete form of randomization (see also pagerando,
> https://lists.llvm.org/pipermail/llvm-dev/2017-June/113794.html ) rather
> than trying to randomize the location of individual chunks of sensitive
> data.

Discussion on IRC and subsequently on Twitter seems to have
established that there are no direct paths for a reasonable program
following libc interface contracts to clobber libc .data/.bss memory
via attacker-controlled offset from a pointer valid for writing. So
(assuming dynamic linking; otherwise it may be possible via
offset from application .data/.bss) attacks on atexit array seem to
require already having achieved significant control over the victim
process.

Rich

