mailing list of musl libc
From: Jens Gustedt <jens.gustedt@inria.fr>
Cc: musl@lists.openwall.com
Subject: Re: Stdio resource usage
Date: Thu, 21 Feb 2019 17:27:55 +0100	[thread overview]
Message-ID: <20190221172755.799b0b63@inria.fr> (raw)
In-Reply-To: <20190221160937.GF19969@voyager>

Hello,

On Thu, 21 Feb 2019 17:09:37 +0100 Markus Wichmann <nullplan@gmx.net>
wrote:

> On Wed, Feb 20, 2019 at 02:24:23PM -0500, Rich Felker wrote:
> So, what are our choices?
> 
> - Heap allocation: But that can fail. Now, printf() is actually
>   allowed to fail, but no one expects it to. I would expect such
>   behavior to be problematic at best.
> - Static allocation: Without synchronization this won't be
>   thread-safe; with synchronization it won't be re-entrant. Now, as
>   far as I could see, the printf() family is actually not required to
>   be re-entrant (e.g. signal-safety(7) fails to list any of them),
>   but I have seen sprintf() in signal handlers in the wild (well,
>   exception handlers, really).
> - Thread-local static allocation: Which is always a hassle in libc,
>   and does not take care of re-entrancy. It would only solve the
>   thread-safety issue.
> - As-needed stack allocation (e.g. alloca()): This fails to prevent
>   the worst-case allocation, though it would make the average
>   allocation more bearable. But I don't know whether especially
>   clever compilers like clang wouldn't optimize this stuff away, and
>   we'd be back to square one.

Perhaps the latter, but maybe with a VLA? Unfortunately, these
techniques have no reliable error-detection mechanism.
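
For illustration, here is a sketch of the VLA variant in a hypothetical
formatting helper (the function and its interface are invented; this is
not musl's actual printf machinery). The caveat above applies: if the
VLA is too large for the remaining stack, there is no error return, the
process just crashes.

#include <stdio.h>

void format_field(FILE *f, const char *src, size_t len, size_t width)
{
    /* Scratch buffer sized to what this call actually needs,
     * not to the global worst case. */
    size_t need = len > width ? len : width;
    char buf[need + 1];     /* VLA: may overflow the stack,
                               with no way to detect it */
    size_t pad = need - len;
    for (size_t i = 0; i < pad; i++) buf[i] = ' ';
    for (size_t i = 0; i < len; i++) buf[pad + i] = src[i];
    buf[need] = 0;

    fputs(buf, f);
}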

For the static allocation strategy, one could try to implement
something like a "bounded" stack: two or three copies of the data in
an array, protected by a lock and a counter, such that at least one
level of signal handler could still use it. But this is probably a
bit tedious to implement.
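
As a rough sketch of that idea (all names invented; this only shows the
signal-nesting counter, and omits the lock that would be needed for
thread-safety):

#include <stdatomic.h>

#define SLOT_SIZE 256   /* size of the shared scratch data, illustrative */
#define N_SLOTS   3     /* "two or three versions of the data" */

static char slots[N_SLOTS][SLOT_SIZE];
static atomic_int depth;

static char *slot_acquire(void)
{
    int d = atomic_fetch_add(&depth, 1);
    if (d >= N_SLOTS) {
        /* Nested too deeply: back out and let the caller fail. */
        atomic_fetch_sub(&depth, 1);
        return 0;
    }
    return slots[d];
}

static void slot_release(void)
{
    atomic_fetch_sub(&depth, 1);
}

Within one thread, a signal handler that interrupts an acquired slot
simply takes the next one, so at least one level of handler keeps
working until all slots are in use.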

Jens

-- 
:: INRIA Nancy Grand Est ::: Camus ::::::: ICube/ICPS :::
:: ::::::::::::::: office Strasbourg : +33 368854536   ::
:: :::::::::::::::::::::: gsm France : +33 651400183   ::
:: ::::::::::::::: gsm international : +49 15737185122 ::
:: http://icube-icps.unistra.fr/index.php/Jens_Gustedt ::

Thread overview: 12+ messages
2019-02-19 23:34 Nick Bray
2019-02-20  2:43 ` Rich Felker
2019-02-20 10:49   ` Szabolcs Nagy
2019-02-20 15:47     ` Markus Wichmann
2019-02-20 16:37       ` Szabolcs Nagy
2019-02-20 17:13         ` Rich Felker
2019-02-20 18:34       ` A. Wilcox
2019-02-20 19:11         ` Markus Wichmann
2019-02-20 19:24           ` Rich Felker
2019-02-21 16:09             ` Markus Wichmann
2019-02-21 16:27               ` Jens Gustedt [this message]
2019-02-21 17:02               ` Rich Felker
