mailing list of musl libc
* [musl] Bug report: Reproduction of seg fault caused by musl thread creation race condition
@ 2020-01-27  0:33 Simon
  2020-01-27  5:28 ` Markus Wichmann
                   ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: Simon @ 2020-01-27  0:33 UTC (permalink / raw)
  To: musl

Hello! I recently had some C code which works normally with glibc but
sometimes seg faults with musl. I managed to reproduce the seg fault via
musl-gcc and Alpine Linux and documented it here [1]. It seems to be some
kind of race condition, so hopefully you also get a seg fault when you
follow my reproduction steps. I hope this helps; I'm looking forward to any
feedback and happy to help further if possible. Simon

P.S. A musl newbie question: why does my binary built on Alpine Linux report
a .so being loaded by ldd, but on Ubuntu the same binary is reported as
being static? I also document that in [1].

[1] https://gist.github.com/simonhf/6d8097f4d6caa572cc42354f494b20ef

* Re: [musl] Bug report: Reproduction of seg fault caused by musl thread creation race condition
  2020-01-27  0:33 [musl] Bug report: Reproduction of seg fault caused by musl thread creation race condition Simon
@ 2020-01-27  5:28 ` Markus Wichmann
  2020-01-27 17:51 ` Rich Felker
  2020-02-03 21:14 ` [musl] Why does musl printf() use so much more stack than other implementations when printf()ing floating point numbers? Simon
  2 siblings, 0 replies; 10+ messages in thread
From: Markus Wichmann @ 2020-01-27  5:28 UTC (permalink / raw)
  To: musl

On Sun, Jan 26, 2020 at 04:33:57PM -0800, Simon wrote:
> Hello! I recently had some C code which works normally with glibc but
> sometimes seg faults with musl. I managed to reproduce the seg fault via
> musl-gcc and Alpine Linux and documented it here [1]. It seems to be some
> kind of race condition, so hopefully you also get a seg fault when you
> follow my reproduction steps. I hope this helps; I'm looking forward to any
> feedback and happy to help further if possible. Simon

A race condition, yes, but in your code. Until pthread_create() has
returned, you cannot be certain that it has stored the new thread's ID
into my_pthread, yet you access my_pthread from the new thread without
synchronization. What I presume is happening is that the new thread
starts executing before the main thread has a chance to perform that
store. Glibc implements pthread_create() differently and somehow gets
around the problem, at least most of the time.

Incidentally, that also means you are reading my_pthread from the other
thread while it is being written in the main thread. That is a data
race, and thus undefined behavior.

You could add a mutex or an rwlock around my_pthread, or a barrier after
the pthread_create() and before the first access in the new thread.
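
For illustration, here is a minimal sketch of the barrier approach
(my_pthread and thread_main are just stand-in names, not necessarily what
the gist uses):

    #include <pthread.h>
    #include <stdio.h>

    static pthread_t my_pthread;        /* written by pthread_create() in main */
    static pthread_barrier_t barrier;   /* orders that write before the read below */

    static void *thread_main(void *arg)
    {
        (void)arg;
        /* wait until main has returned from pthread_create() and hit the barrier */
        pthread_barrier_wait(&barrier);
        /* my_pthread is now guaranteed to be stored and safe to read */
        printf("ID matches pthread_self(): %d\n",
               pthread_equal(my_pthread, pthread_self()));
        return NULL;
    }

    int main(void)
    {
        pthread_barrier_init(&barrier, NULL, 2);
        pthread_create(&my_pthread, NULL, thread_main, NULL);
        pthread_barrier_wait(&barrier); /* my_pthread was stored before this call */
        pthread_join(my_pthread, NULL);
        pthread_barrier_destroy(&barrier);
        return 0;
    }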

>
> P.S. A musl newbie question: why does my binary built on Alpine Linux report
> a .so being loaded by ldd, but on Ubuntu the same binary is reported as
> being static? I also document that in [1].
>
> [1] https://gist.github.com/simonhf/6d8097f4d6caa572cc42354f494b20ef

Not a clue, but I guess that Ubuntu's ldd tries to get around ldd's
arbitrary code execution problem by detecting nonstandard interpreters
and refusing to run them. See, ldd has no way of determining the entire
set of needed shared libraries except to run the dynamic interpreter on
the file, but if the executable is malicious, then the interpreter is
attacker-controlled and might also be malicious. That used to be a
well-known social engineering trick: just ring up your sysadmin and tell
them "Hey man, I have a binary XYZ here that says it needs some shared
lib. Can you help me?" Then the sysadmin runs ldd on the binary, and the
dynamic interpreter runs with sysadmin privileges. What could go wrong?

Ciao,
Markus

* Re: [musl] Bug report: Reproduction of seg fault caused by musl thread creation race condition
  2020-01-27  0:33 [musl] Bug report: Reproduction of seg fault caused by musl thread creation race condition Simon
  2020-01-27  5:28 ` Markus Wichmann
@ 2020-01-27 17:51 ` Rich Felker
  2020-01-27 19:59   ` Simon
  2020-02-03 21:14 ` [musl] Why does musl printf() use so much more stack than other implementations when printf()ing floating point numbers? Simon
  2 siblings, 1 reply; 10+ messages in thread
From: Rich Felker @ 2020-01-27 17:51 UTC (permalink / raw)
  To: Simon; +Cc: musl

On Sun, Jan 26, 2020 at 04:33:57PM -0800, Simon wrote:
> Hello! I recently had some C code which works normally with glibc but
> sometimes seg faults with musl. I managed to reproduce the seg fault via
> musl-gcc and Alpine Linux and documented it here [1]. It seems to be some
> kind of race condition, so hopefully you also get a seg fault when you
> follow my reproduction steps. I hope this helps; I'm looking forward to any
> feedback and happy to help further if possible. Simon
> 
> [1] https://gist.github.com/simonhf/6d8097f4d6caa572cc42354f494b20ef

This behavior was originally intentional. In general, if a function is
specified to modify pointed-to memory as output as a side effect of
success, that does not give it license to modify it on failure. And
since pthread_create can't commit to success until after the thread is
created, it would have to hold back the start of the new thread
unnecessarily to guarantee that the result is written before the new
thread starts. (Note that it can't simply write the value from both
the caller and the new thread; the latter could end up writing to the
pthread_t object after the end of its lifetime.)

Moreover, there is no expectation from the application that it should
be able to read the result object from the new thread without
additional synchronization. The wording of the spec is:

    "Upon successful completion, pthread_create() shall store the ID
     ^^^^^^^^^^^^^^^^^^^^^^^^^^
    of the created thread in the location referenced by thread."

Until completion of (observation of & synchronization with return
from) pthread_create, nothing can be said about the value of the object;
access to it is unsynchronized.

With that said, the specification for pthread_create does *allow*
implementations that store the value speculatively before success:

    "If pthread_create() fails, no new thread is created and the
    contents of the location referenced by thread are undefined."

I was not aware of this when writing it. So we could change it, but it
doesn't seem like a very good idea to do so; any code relying on it is
non-portable/racy. If the new thread needs its own id, there's an easy
and portable way to obtain it: pthread_self().
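
For example, a minimal sketch (not the code from the gist; the thread_main
name and passing the main thread's ID as the argument are just for
illustration):

    #include <pthread.h>
    #include <stdio.h>

    static void *thread_main(void *arg)
    {
        pthread_t self = pthread_self();       /* valid immediately, no race */
        pthread_t main_id = *(pthread_t *)arg; /* written before pthread_create() */
        printf("is main thread: %d\n", pthread_equal(self, main_id));
        return NULL;
    }

    int main(void)
    {
        pthread_t tid, main_id = pthread_self();
        pthread_create(&tid, NULL, thread_main, &main_id);
        pthread_join(tid, NULL);
        return 0;
    }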

Are there reasons you still think the alternate behavior would be
preferable?

Rich

* Re: [musl] Bug report: Reproduction of seg fault caused by musl thread creation race condition
  2020-01-27 17:51 ` Rich Felker
@ 2020-01-27 19:59   ` Simon
  2020-01-27 20:37     ` Szabolcs Nagy
  0 siblings, 1 reply; 10+ messages in thread
From: Simon @ 2020-01-27 19:59 UTC (permalink / raw)
  To: Rich Felker; +Cc: musl

> Upon successful completion, pthread_create() shall store the ID

"Upon successfully completion" is somewhat vague wording IMO which could
result in misinterpretation? You seem to have interpreted it as "Upon
function completion", but surely if clone() succeeds, and the new thread
starts running, then there is already "successful completion" even before
pthread_create() has returned to caller?

It's also interesting that the latest man pages for pthread_create() [2]
have changed the wording substantially:

> Before returning, a successful call to pthread_create() stores the ID of
> the new thread in the buffer pointed to by thread; this identifier is used
> to refer to the thread in subsequent calls to other pthreads functions.

This seems more specific than "Upon successful completion" but still
doesn't exactly tie down whether the thread ID should be written before the
new thread starts execution; it only says that the thread ID should be
written at some point "before returning" from pthread_create(). In a way,
it backs up the musl thread ID store implementation, because 'subsequent
calls to other pthreads functions' implies calls made after
pthread_create() returns, whether in the same thread or in other threads,
including the newly created thread.

> any code relying on it is non-portable/racy.

I checked the glibc pthread_create() implementation and it appears to set
the thread ID before the new thread executes. So from that POV any code
using glibc relying on this is arguably non-portable, but not racy since
there is no race with the glibc implementation?

> Are there reasons you still think the alternate behavior would be
> preferable?

I'm not advocating for C/C++ code to access the thread ID like this.
However, assuming (a big if...) that long-standing code exists which relies
on this, has not been compiled with musl yet, is not racy, and works fine
with glibc but will intermittently fail with the musl implementation,
wouldn't it be 'defensive programming' [3] to move the thread ID store in
the musl pthread_create() implementation? Worse, a binary compiled with
musl and this issue might already exist and appear to be "working", but,
like the code I provided, only fail under gdb or strace or in specific
timing situations... and maybe gdb or strace have never been run on it to
date? So it's a sort of 'sleeper' issue?

Personally I think the proper fix is to (a) write the thread ID before the
new thread starts executing, because it's common sense that things should
work that way (existing code written with locks will continue to work, and
existing code written without locks will also just work), and (b) whether
or not it is fixed, update the various pthread_create() documentation to
account for this edge case, so that it's obvious one way or the other. This
would (a) fix the cause of the confusion, (b) avoid users of
pthread_create() having to fall back to common sense, and (c) optionally
make all existing callers more robust.

> If the new thread needs its own id, there's an easy and portable way to
> obtain it: pthread_self().

Interestingly, my initial implementation with pthread_self() actually
failed with glibc, or at least one of the pthread functions using it seemed
to fail... I cannot remember the exact circumstances, unfortunately. That
is why I started using the thread ID returned by pthread_create() in the
first place: it didn't fail and no lock was needed. My personal philosophy
with threads and locks is to minimize their use wherever possible, and
that's the path which prompted this email thread in the first place: I
developed the working code with glibc, and only later tried compiling and
running it with musl, where to my surprise it didn't work...

Anyway, I hope this is food for thought and no worries if you decide not to
change anything. It's been fun bringing this to your attention and
interacting with you! And if there's anything more I can do to help further
then please let me know!

--
Simon

[1] https://pubs.opengroup.org/onlinepubs/007908799/xsh/pthread_create.html
[2] http://man7.org/linux/man-pages/man3/pthread_create.3.html
[3] https://en.wikipedia.org/wiki/Defensive_programming

On Mon, Jan 27, 2020 at 9:51 AM Rich Felker <dalias@libc.org> wrote:

> On Sun, Jan 26, 2020 at 04:33:57PM -0800, Simon wrote:
> > Hello! I recently had some C code which works normally with glibc but
> > sometimes seg faults with musl. I managed to reproduce the seg fault via
> > musl-gcc and Alpine Linux and documented it here [1]. It seems to be some
> > kind of race condition, so hopefully you also get a seg fault when you
> > follow my reproduction steps. I hope this helps; I'm looking forward to any
> > feedback and happy to help further if possible. Simon
> >
> > [1] https://gist.github.com/simonhf/6d8097f4d6caa572cc42354f494b20ef
>
> This behavior was originally intentional. In general, if a function is
> specified to modify pointed-to memory as output as a side effect of
> success, that does not give it license to modify it on failure. And
> since pthread_create can't commit to success until after the thread is
> created, it would have to hold back the start of the new thread
> unnecessarily to guarantee that the result is written before the new
> thread starts. (Note that it can't simply write the value from both
> the caller and the new thread; the latter could end up writing to the
> pthread_t object after the end of its lifetime.)
>
> Moreover, there is no expectation from the application that it should
> be able to read the result object from the new thread without
> additional synchronization. The wording of the spec is:
>
>     "Upon successful completion, pthread_create() shall store the ID
>      ^^^^^^^^^^^^^^^^^^^^^^^^^^
>     of the created thread in the location referenced by thread."
>
> Until completion of (observation of & synchronization with return
> from) pthread_create, nothing can be said about the value of the object;
> access to it is unsynchronized.
>
> With that said, the specification for pthread_create does *allow*
> implementations that store the value speculatively before success:
>
>     "If pthread_create() fails, no new thread is created and the
>     contents of the location referenced by thread are undefined."
>
> I was not aware of this when writing it. So we could change it, but it
> doesn't seem like a very good idea to do so; any code relying on it is
> non-portable/racy. If the new thread needs its own id, there's an easy
> and portable way to obtain it: pthread_self().
>
> Are there reasons you still think the alternate behavior would be
> preferable?
>
> Rich
>

* Re: [musl] Bug report: Reproduction of seg fault caused by musl thread creation race condition
  2020-01-27 19:59   ` Simon
@ 2020-01-27 20:37     ` Szabolcs Nagy
  2020-01-27 20:46       ` Rich Felker
  0 siblings, 1 reply; 10+ messages in thread
From: Szabolcs Nagy @ 2020-01-27 20:37 UTC (permalink / raw)
  To: musl; +Cc: Rich Felker, Simon

* Simon <simonhf@gmail.com> [2020-01-27 11:59:24 -0800]:
> > Upon successful completion, pthread_create() shall store the ID
> >
> 
> "Upon successfully completion" is somewhat vague wording IMO which could
> result in misinterpretation? You seem to have interpreted it as "Upon
> function completion", but surely if clone() succeeds, and the new thread
> starts running, then there is already "successful completion" even before
> pthread_create() has returned to caller?

there is no ambiguity here.

completion of a function call, such as pthread_create,
is well defined: it's the point that's sequenced
after the call returns. you have to synchronize with
that point to read the value from another thread.
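
for example, one way to establish that synchronization is to hold a mutex
across the pthread_create call (a sketch; my_pthread and thread_main are
illustrative names, not code from the gist):

    #include <pthread.h>

    static pthread_t my_pthread;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *thread_main(void *arg)
    {
        (void)arg;
        /* blocks until main unlocks, i.e. until after pthread_create() returned */
        pthread_mutex_lock(&lock);
        pthread_t created_id = my_pthread; /* synchronized with the completed call */
        pthread_mutex_unlock(&lock);
        (void)created_id;
        return NULL;
    }

    int main(void)
    {
        pthread_mutex_lock(&lock);
        pthread_create(&my_pthread, NULL, thread_main, NULL);
        pthread_mutex_unlock(&lock); /* publishes the stored ID to the new thread */
        pthread_join(my_pthread, NULL);
        return 0;
    }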

> This seems more specific than "Upon successful completion" but still
> doesn't exactly tie down whether the thread ID should be written before the

the point is to not tie the behaviour down so you cannot
rely on it; this allows implementation freedom.

> I checked the glibc pthread_create() implementation and it appears to set
> the thread ID before the new thread executes. So from that POV any code
> using glibc relying on this is arguably non-portable, but not racy since
> there is no race with the glibc implementation?

it is racy because glibc gives no guarantee, i.e. an
update may change this implementation internal detail.

* Re: [musl] Bug report: Reproduction of seg fault caused by musl thread creation race condition
  2020-01-27 20:37     ` Szabolcs Nagy
@ 2020-01-27 20:46       ` Rich Felker
  0 siblings, 0 replies; 10+ messages in thread
From: Rich Felker @ 2020-01-27 20:46 UTC (permalink / raw)
  To: musl; +Cc: Simon

On Mon, Jan 27, 2020 at 09:37:59PM +0100, Szabolcs Nagy wrote:
> * Simon <simonhf@gmail.com> [2020-01-27 11:59:24 -0800]:
> > > Upon successful completion, pthread_create() shall store the ID
> > >
> > 
> > "Upon successfully completion" is somewhat vague wording IMO which could
> > result in misinterpretation? You seem to have interpreted it as "Upon
> > function completion", but surely if clone() succeeds, and the new thread
> > starts running, then there is already "successful completion" even before
> > pthread_create() has returned to caller?
> 
> there is no ambiguity here.
> 
> completion of a function call, such as pthread_create,
> is well defined: it's the point that's sequenced
> after the call returns. you have to synchronize with
> that point to read the value from another thread.
> 
> > This seems more specific than "Upon successful completion" but still
> > doesn't exactly tie down whether the thread ID should be written before the
> 
> the point is to not tie the behaviour down so you cannot
> rely on it; this allows implementation freedom.

I don't think this particular case of implementation freedom matters a
lot to musl, but applications should still respect it because it may
matter to implementations where more of thread creation is handled by
the kernel and the thread id (in the pthread_t sense) really doesn't
exist until the syscall returns.

I see some good reasons for and against changing it but would probably
lean towards leaving it alone as the "default course of action" --
mainly in case there are reasons against that aren't immediately
apparent to us right now.

Rich

* [musl] Why does musl printf() use so much more stack than other implementations when printf()ing floating point numbers?
  2020-01-27  0:33 [musl] Bug report: Reproduction of seg fault caused by musl thread creation race condition Simon
  2020-01-27  5:28 ` Markus Wichmann
  2020-01-27 17:51 ` Rich Felker
@ 2020-02-03 21:14 ` Simon
  2020-02-03 21:57   ` Rich Felker
  2 siblings, 1 reply; 10+ messages in thread
From: Simon @ 2020-02-03 21:14 UTC (permalink / raw)
  To: musl

I recently noticed that the musl printf() implementation uses surprisingly
more stack space than other implementations, but only when printing floating
point numbers, and I made some notes here [1]. Any ideas why this happens,
and any chance of fixing it?

[1] https://gist.github.com/simonhf/2a7b7eb98d2a10c549e8cc858bbefd53

* Re: [musl] Why does musl printf() use so much more stack than other implementations when printf()ing floating point numbers?
  2020-02-03 21:14 ` [musl] Why does musl printf() use so much more stack than other implementations when printf()ing floating point numbers? Simon
@ 2020-02-03 21:57   ` Rich Felker
  2020-02-03 23:05     ` Szabolcs Nagy
  0 siblings, 1 reply; 10+ messages in thread
From: Rich Felker @ 2020-02-03 21:57 UTC (permalink / raw)
  To: musl; +Cc: Simon

On Mon, Feb 03, 2020 at 01:14:21PM -0800, Simon wrote:
> I recently noticed that the musl printf() implementation uses surprisingly
> more stack space than other implementations, but only when printing floating
> point numbers, and I made some notes here [1]. Any ideas why this happens,
> and any chance of fixing it?
> 
> [1] https://gist.github.com/simonhf/2a7b7eb98d2a10c549e8cc858bbefd53

It's fundamental; ability to exactly print arbitrary floating point
numbers takes considerable working space unless you want to spend
O(n³) time or so (n=exponent value) to keep recomputing things. The
minimum needed is probably only around 2/3 of what we use, so it would
be possible to reduce slightly, but I doubt a savings of <3k is worth
the complexity of ensuring it would still be safe and correct.

Note that on archs without extended long double type, which covers
everything used in extreme low-memory embedded environments, the
memory usage is far lower. This is because it's proportional to the
max possible exponent value, which is 1k instead of 16k if nothing
larger than IEEE double is supported.

I don't know exactly what glibc does, but it's likely they're just
using malloc, which is going to be incorrect because it can fail
dynamically with OOM.

In principle we could also make the working array a VLA and compute
smaller bounds on the size needed when precision is limited (the
common case). This might really be a practical "fix" for cases people
care about, and it would also solve the problem where LLVM makes
printf *always* use ~9k stack because it hoists the lifetime of the
floating point working array all the way to the top when inlining
(this is arguably a serious optimization bug since it can transform
all sorts of code that's possible to execute into code that's
impossible to execute due to huge stack requirements). By having it be
a VLA whose size isn't determined except in the floating point path,
LLVM wouldn't be able to hoist it like that.
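
To illustrate the idea only (this is not musl's printf code, and the bound
below is a placeholder rather than musl's actual formula): a buffer declared
as a VLA inside the floating point branch only exists, and is only sized, on
that path, so the compiler cannot hoist a fixed-size array to the top of the
function when inlining:

    #include <stddef.h>
    #include <string.h>

    static void format_arg(int is_floating_point, int precision)
    {
        if (is_floating_point) {
            size_t need = (size_t)precision + 64; /* placeholder bound */
            char big[need];                  /* VLA: exists only on the FP path */
            memset(big, 0, need);            /* stands in for the decimal conversion */
        } else {
            char small[32];                  /* non-FP conversions need little space */
            memset(small, 0, sizeof small);
        }
    }

    int main(void)
    {
        format_arg(1, 17); /* FP path: the VLA is sized here */
        format_arg(0, 0);  /* common path: no large buffer on the stack */
        return 0;
    }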

Making this change would still be significant work though, mainly in
verification that the bounds are correct and that there are no cases
where the smaller array can be made to overflow.

Rich

* Re: [musl] Why does musl printf() use so much more stack than other implementations when printf()ing floating point numbers?
  2020-02-03 21:57   ` Rich Felker
@ 2020-02-03 23:05     ` Szabolcs Nagy
  2020-02-03 23:52       ` Szabolcs Nagy
  0 siblings, 1 reply; 10+ messages in thread
From: Szabolcs Nagy @ 2020-02-03 23:05 UTC (permalink / raw)
  To: musl; +Cc: Simon

* Rich Felker <dalias@libc.org> [2020-02-03 16:57:13 -0500]:

> On Mon, Feb 03, 2020 at 01:14:21PM -0800, Simon wrote:
> > I recently noticed that the musl printf() implementation uses surprisingly
> > more stack space than other implementations, but only when printing floating
> > point numbers, and I made some notes here [1]. Any ideas why this happens,
> > and any chance of fixing it?
> > 
> > [1] https://gist.github.com/simonhf/2a7b7eb98d2a10c549e8cc858bbefd53
> 
> It's fundamental; ability to exactly print arbitrary floating point
> numbers takes considerable working space unless you want to spend
> O(n³) time or so (n=exponent value) to keep recomputing things. The
> minimum needed is probably only around 2/3 of what we use, so it would
> be possible to reduce slightly, but I doubt a savings of <3k is worth
> the complexity of ensuring it would still be safe and correct.
> 
> Note that on archs without extended long double type, which covers
> everything used in extreme low-memory embedded environments, the
> memory usage is far lower. This is because it's proportional to the
> max possible exponent value, which is 1k instead of 16k if nothing
> larger than IEEE double is supported.

the musl stack usage is fixed, independent of input, when decimal
formatting is done, so it can be easily tested. (and yes, the size
is mainly determined by the long double exponent range and is close
to optimal if performance matters.)

i think stack usage is < 9K not just for printf but for any libc call;
currently the exceptions are execl, nftw and regcomp (of which
execl is not a bug; the other two could be fixed).

> I don't know exactly what glibc does, but it's likely they're just
> using malloc, which is going to be incorrect because it can fail
> dynamically with OOM.

glibc uses variable amount of stack and it can be big, so
there is a check and then an alloca falls back to malloc.
(so yes it can probably fail with oom and not as-safe).

the alloca threshold is 64k, i don't know if printf can
actually hit that (there are multiple allocas in printf,
some have smaller bounds).

i don't think the actual worst case memory usage is known,
but i can easily imagine it to be above 64k on all targets
(glibc supports _Float128).

as a consequence, validating printf-using code on glibc
cannot be done by naive tests: in production, different
inputs will be used, so different stack usage or an oom
failure may happen.

> 
> In principle we could also make the working array a VLA and compute
> smaller bounds on the size needed when precision is limited (the
> common case). This might really be a practical "fix" for cases people
> care about, and it would also solve the problem where LLVM makes
> printf *always* use ~9k stack because it hoists the lifetime of the
> floating point working array all the way to the top when inlining
> (this is arguably a serious optimization bug since it can transform
> all sorts of code that's possible to execute into code that's
> impossible to execute due to huge stack requirements). By having it be
> a VLA whose size isn't determined except in the floating point path,
> LLVM wouldn't be able to hoist it like that.
> 
> Making this change would still be significant work though, mainly in
> verification that the bounds are correct and that there are no cases
> where the smaller array can be made to overflow.
> 
> Rich

* Re: [musl] Why does musl printf() use so much more stack than other implementations when printf()ing floating point numbers?
  2020-02-03 23:05     ` Szabolcs Nagy
@ 2020-02-03 23:52       ` Szabolcs Nagy
  0 siblings, 0 replies; 10+ messages in thread
From: Szabolcs Nagy @ 2020-02-03 23:52 UTC (permalink / raw)
  To: musl, Simon

* Szabolcs Nagy <nsz@port70.net> [2020-02-04 00:05:35 +0100]:
> glibc uses variable amount of stack and it can be big, so
> there is a check and then an alloca falls back to malloc.
> (so yes it can probably fail with oom and not as-safe).
> 
> the alloca threshold is 64k, i don't know if printf can
> actually hit that (there are multiple allocas in printf,
> some have smaller bounds).

ok, i was curious: it seems glibc allocates a temp
buf of the size of the output assuming wchar_t,
i.e. unbounded based on user input, and this
allocation can fall back to malloc.

otherwise glibc should allocate around the same
stack as musl (i.e. 9K), so the glibc worst case
stack usage is about 64K+9K and it may do an
arbitrarily large malloc instead of the large alloca.

tested with

 sprintf(s, "%.99999Lf\n", 0x1p-16445L);

on x86_64 glibc 2.29 with gdb, this does 3 mallocs
of size 100031, 400012, 100004, so about 600K, and
uses about 9K stack. (i don't know why there are 2
100k mallocs)

musl mallocs 0K and uses < 9K stack.
