From: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
To: Szabolcs Nagy <nsz@port70.net>
Cc: musl <musl@lists.openwall.com>,
Michael Jeanson <mjeanson@efficios.com>,
Richard Purdie <richard.purdie@linuxfoundation.org>,
Jonathan Rajotte <jonathan.rajotte-julien@efficios.com>
Subject: Re: sysconf(_SC_NPROCESSORS_CONF) returns the wrong value
Date: Tue, 2 Apr 2019 18:38:18 -0400 (EDT)
Message-ID: <1935008937.138.1554244698897.JavaMail.zimbra@efficios.com>
In-Reply-To: <20190402190253.GT26605@port70.net>
----- On Apr 2, 2019, at 3:02 PM, Szabolcs Nagy nsz@port70.net wrote:
> * Mathieu Desnoyers <mathieu.desnoyers@efficios.com> [2019-03-26 14:01:08
> -0400]:
>> ----- On Mar 26, 2019, at 1:45 PM, Szabolcs Nagy nsz@port70.net wrote:
>> > i agree that the current behaviour is not ideal, but
>> > iterating over /sys/devices/system/cpu/cpu* may not
>> > be correct either.. based on current linux api docs.
>> >
>> > i don't understand why is that number different from the
>> > cpu set in /sys/devices/system/cpu/possible
>>
>> I suspect both iteration over /sys/devices/system/cpu/cpu* and
>> content of /sys/devices/system/cpu/possible should provide the
>> same result. However, looking at Linux
>> Documentation/ABI/testing/sysfs-devices-system-cpu ,
>> it appears that /sys/devices/system/cpu/possible was introduced
>> in December 2008, whereas /sys/devices/system/cpu/cpu#/ was there
>> pre-git history.
>>
>> This could explain why glibc uses the iteration method.
>>
>> Thoughts ?
>
> as far as i can tell the cpu iteration method is valid,
> and that directory list cannot change after boot (is this
> guaranteed by the linux abi in the future?), so as long
> as /sys is mounted we can get the number, but it's fairly
> ugly.. does lttng have fallback code if sysconf returns -1?
> if it does maybe musl should just do that (or somebody has
> to write cancellation safe directory traversal code)
Nope, the lttng-ust ring buffer won't be usable anyway. I
*think* we need to improve the error handling for that scenario
within our consumer daemon, which is responsible for buffer
allocation. I don't think we properly error out when the internal
variable tracking the number of CPUs in the system stays at its
uninitialized value of 0.
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
Thread overview: 10+ messages
2019-03-15 21:02 Jonathan Rajotte-Julien
2019-03-16 2:25 ` Szabolcs Nagy
2019-03-16 13:58 ` Mathieu Desnoyers
2019-03-16 14:28 ` Jonathan Rajotte-Julien
2019-03-19 19:11 ` Florian Weimer
2019-03-26 16:23 ` Jonathan Rajotte-Julien
2019-03-26 17:45 ` Szabolcs Nagy
2019-03-26 18:01 ` Mathieu Desnoyers
2019-04-02 19:02 ` Szabolcs Nagy
2019-04-02 22:38 ` Mathieu Desnoyers [this message]