From: Vincent Donnefort <email@example.com>
To: Rich Felker <firstname.lastname@example.org>
Cc: Samuel Holland <email@example.com>, firstname.lastname@example.org, jason <email@example.com>
Subject: Re: [musl] [PATCH v3] sysconf: add _SC_NPROCESSORS_CONF support
Date: Mon, 12 Jul 2021 19:12:10 +0100
Message-ID: <20210712181210.GA147578@e124901.cambridge.arm.com> (raw)
In-Reply-To: <20210712171454.GZ13220@brightrain.aerifal.cx>

[...]

> > > If they're mandated stable interfaces that "happen" to have the exact
> > > data we need, I don't see what the objection to using them is. A
> > > better question would be whether access to them is restricted in any
> > > hardening profiles. At least they don't seem to be hidden by the
> > > hidepid mount option.
> >
> > Indeed the function hasn't changed for 10 years, which I guess counts as
> > somewhat stable, and it currently walks through all the possible CPUs
> > (which is what we want to count). Also, this file is always present
> > whenever procfs is mounted.
> >
> > Nonetheless, as this is human-readable, nothing mandates the format. As
> > an example, on my desktop machine, with 448 allocated CPUs, the first
> > line of /proc/softirqs (the line that contains all the CPUs) is 4949
> > characters long. The "possible" mask describes the same information with
> > only 6 characters.
>
> It's not just a matter of age and de facto stability. Kernel policy is
> that the procfs files are stable interfaces. Despite being "human
> readable", they're also intended to be readable by, and actually read
> by, utilities such as the procps suite of tools, "top"-type programs,
> etc.

What I wanted to emphasize is the cost of reading that interface versus the CPU mask in sysfs. The only gain is for a system with a _CONF user, where some CPUs have been hot-unplugged, and where sysfs isn't mounted while procfs is.
This can surely happen, but compared with the cost of parsing a long string (see the ~x1000 difference in the example above), it doesn't seem like a good trade-off to me.