* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
@ 2024-09-29 16:56 Douglas McIlroy
2024-09-29 20:29 ` Rob Pike
2024-09-29 21:24 ` Ralph Corderoy
0 siblings, 2 replies; 61+ messages in thread
From: Douglas McIlroy @ 2024-09-29 16:56 UTC (permalink / raw)
To: TUHS main list
[-- Attachment #1: Type: text/plain, Size: 834 bytes --]
>>> malloc(0) isn't undefined behaviour but implementation defined.
>>
>> In modern C there is no difference between those two concepts.
> Can you explain more about your view
There certainly is a difference, but in this case the practical
implications are the same: avoid malloc(0). malloc(0) lies at the high end
of a range of severity of concerns about implementation-definedness. At the
low end are things like the size of ints, which only affects applications
that may confront very large numbers. In the middle is the default
signedness of chars, which generally may be mitigated by explicit type
declarations.
For the size of ints, C offers guardrails like INT_MAX. There is no test to
discern what an error return from malloc(0) means.
Is there any other C construct that implementation-definedness renders
useless?
Doug
[-- Attachment #2: Type: text/html, Size: 1044 bytes --]
^ permalink raw reply [flat|nested] 61+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-29 16:56 [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum) Douglas McIlroy
@ 2024-09-29 20:29 ` Rob Pike
2024-09-29 21:13 ` Rik Farrow
` (2 more replies)
2024-09-29 21:24 ` Ralph Corderoy
1 sibling, 3 replies; 61+ messages in thread
From: Rob Pike @ 2024-09-29 20:29 UTC (permalink / raw)
To: Douglas McIlroy; +Cc: TUHS main list
[-- Attachment #1: Type: text/plain, Size: 1317 bytes --]
Gradually the writers of optimizing compilers have leaned so hard on the
implementation-defined and undefined behaviors that, while far from
useless, C and C++ have become non-portable and dangerously insecure, as
well as often very surprising, to the point that the US government argues
against using them.
-rob
On Mon, Sep 30, 2024 at 2:56 AM Douglas McIlroy <
douglas.mcilroy@dartmouth.edu> wrote:
> >>> malloc(0) isn't undefined behaviour but implementation defined.
> >>
> >> In modern C there is no difference between those two concepts.
>
> > Can you explain more about your view
>
> There certainly is a difference, but in this case the practical
> implications are the same: avoid malloc(0). malloc(0) lies at the high end
> of a range of severity of concerns about implementation-definedness. At the
> low end are things like the size of ints, which only affects applications
> that may confront very large numbers. In the middle is the default
> signedness of chars, which generally may be mitigated by explicit type
> declarations.
>
> For the size of ints, C offers guardrails like INT_MAX. There is no test
> to discern what an error return from malloc(0) means.
>
> Is there any other C construct that implementation-definedness renders
> useless?
>
> Doug
>
>
[-- Attachment #2: Type: text/html, Size: 2024 bytes --]
^ permalink raw reply [flat|nested] 61+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-29 20:29 ` Rob Pike
@ 2024-09-29 21:13 ` Rik Farrow
2024-09-29 22:21 ` Rich Salz
2024-09-30 19:12 ` Steffen Nurpmeso
2 siblings, 0 replies; 61+ messages in thread
From: Rik Farrow @ 2024-09-29 21:13 UTC (permalink / raw)
To: Rob Pike; +Cc: Douglas McIlroy, TUHS main list
[-- Attachment #1: Type: text/plain, Size: 737 bytes --]
On Sun, Sep 29, 2024 at 1:29 PM Rob Pike <robpike@gmail.com> wrote:
> Gradually the writers of optimizing compilers have leaned so hard on the
> implementation-defined and undefined behaviors that, while far from
> useless, C and C++ have become non-portable and dangerously insecure, as
> well as often very surprising to the point that the US government arguing
> against using them.
>
Thank goodness. I loved C when I encountered it, because the alternative
was Z80 assembler. I loved having structs, because I was lousy at using
offsets (off by one so often you'd think I would just have adjusted for
being wrong).
I'll take the guardrails, please.
C: the assault rifle of programming languages...
Rik
[-- Attachment #2: Type: text/html, Size: 1374 bytes --]
^ permalink raw reply [flat|nested] 61+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-29 16:56 [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum) Douglas McIlroy
2024-09-29 20:29 ` Rob Pike
@ 2024-09-29 21:24 ` Ralph Corderoy
1 sibling, 0 replies; 61+ messages in thread
From: Ralph Corderoy @ 2024-09-29 21:24 UTC (permalink / raw)
To: Douglas McIlroy; +Cc: TUHS
Hi Doug,
> > > > malloc(0) isn't undefined behaviour but implementation defined.
> > >
> > > In modern C there is no difference between those two concepts.
>
> There certainly is a difference, but in this case the practical
> implications are the same: avoid malloc(0).
Many programs wrap malloc() in some way, even if it's just to exit on
failure, so working around malloc(0) is easy enough.
    void *saneloc(size_t size) {
        void *p = malloc(size);
        if (p || size)
            return p;
        return malloc(1);
    }
> In the middle is the default signedness of chars, which generally may
> be mitigated by explicit type declarations.
Similarly, the signedness of an ‘int i: 3’ bit-field.
(1) Whether a "plain" int bit-field is treated as a signed int
bit-field or as an unsigned int bit-field (6.7.2, 6.7.2.1).
> Is there any other C construct that implementation-definedness renders
> useless?
There's the '-' in "%[3-7]" for fscanf(3).
(35) The interpretation of a − character that is neither the first
nor the last character, nor the second where a ^ character is
the first, in the scanlist for %[ conversion in the fscanf or
fwscanf function (7.23.6.2, 7.31.2.1).
--
Cheers, Ralph.
^ permalink raw reply [flat|nested] 61+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-29 20:29 ` Rob Pike
2024-09-29 21:13 ` Rik Farrow
@ 2024-09-29 22:21 ` Rich Salz
2024-09-29 23:56 ` Rob Pike
2024-09-30 19:12 ` Steffen Nurpmeso
2 siblings, 1 reply; 61+ messages in thread
From: Rich Salz @ 2024-09-29 22:21 UTC (permalink / raw)
To: Rob Pike; +Cc: Douglas McIlroy, TUHS main list
[-- Attachment #1: Type: text/plain, Size: 405 bytes --]
>
> C and C++ have become non-portable and dangerously insecure, as well as
> often very surprising to the point that the US government arguing against
> using them.
>
I thought their main arguments were to use memory-safe languages. Are you
saying the C language can be as safe as Go, Rust, etc., by language design?
(I don't think you are, but the sentence I quoted kinda implies that, at
least to me.)
[-- Attachment #2: Type: text/html, Size: 686 bytes --]
^ permalink raw reply [flat|nested] 61+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-29 22:21 ` Rich Salz
@ 2024-09-29 23:56 ` Rob Pike
2024-09-30 0:36 ` Larry McVoy
0 siblings, 1 reply; 61+ messages in thread
From: Rob Pike @ 2024-09-29 23:56 UTC (permalink / raw)
To: Rich Salz; +Cc: Douglas McIlroy, TUHS main list
[-- Attachment #1: Type: text/plain, Size: 576 bytes --]
I'm saying the exact opposite: they are unavoidably unsafe.
-rob
On Mon, Sep 30, 2024 at 8:21 AM Rich Salz <rich.salz@gmail.com> wrote:
> C and C++ have become non-portable and dangerously insecure, as well as
>> often very surprising to the point that the US government arguing against
>> using them.
>>
>
> I thought their main arguments were to use memory-safe languages. Are you
> saying the C language can be as safe s go, rust, etc., by language design?
> (I don't think you are, but the sentence I quoted kinda implies that, at
> least to me.)
>
[-- Attachment #2: Type: text/html, Size: 1396 bytes --]
^ permalink raw reply [flat|nested] 61+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-29 23:56 ` Rob Pike
@ 2024-09-30 0:36 ` Larry McVoy
2024-09-30 0:55 ` Larry McVoy
2024-09-30 1:09 ` Luther Johnson
0 siblings, 2 replies; 61+ messages in thread
From: Larry McVoy @ 2024-09-30 0:36 UTC (permalink / raw)
To: Rob Pike; +Cc: Douglas McIlroy, TUHS main list
It doesn't have to be that way; C could be evolved. I built a very C-like
language (to the point that one of my engineers, who hated the
new language on principle, fixed a bug in some diffs that flew by;
he thought he was fixing a bug in C). No pointers, reference-counted
garbage collection, pass by value or reference, and switch values could be
anything: values, variables, regular expressions, etc.
If I had infinite energy and money, I'd fund a gcc dialect of that C.
Alas, I don't. But C is very fixable.
On Mon, Sep 30, 2024 at 09:56:47AM +1000, Rob Pike wrote:
> I'm saying the exact opposite: they are unavoidably unsafe.
>
> -rob
>
>
> On Mon, Sep 30, 2024 at 8:21???AM Rich Salz <rich.salz@gmail.com> wrote:
>
> > C and C++ have become non-portable and dangerously insecure, as well as
> >> often very surprising to the point that the US government arguing against
> >> using them.
> >>
> >
> > I thought their main arguments were to use memory-safe languages. Are you
> > saying the C language can be as safe s go, rust, etc., by language design?
> > (I don't think you are, but the sentence I quoted kinda implies that, at
> > least to me.)
> >
--
---
Larry McVoy Retired to fishing http://www.mcvoy.com/lm/boat
^ permalink raw reply [flat|nested] 61+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-30 0:36 ` Larry McVoy
@ 2024-09-30 0:55 ` Larry McVoy
2024-09-30 1:09 ` Luther Johnson
1 sibling, 0 replies; 61+ messages in thread
From: Larry McVoy @ 2024-09-30 0:55 UTC (permalink / raw)
To: Rob Pike; +Cc: Douglas McIlroy, TUHS main list
And one other comment. I've seen all the naysayers saying this is
undefined and that will trip you up, C is full of land mines, etc., etc.
I ran a company that developed a product that was orders of magnitude more
complex than the v7 kernel (low bar, but still), all in C, and we had *NONE*
of those supposed problems. We were careful to use stuff that worked.
I'm "famous" in that company as the guy who was viewed as "that was invented
after 1980 so Larry won't let us use it". Not true, we used mmap and
POSIX signals, but mostly true. If you stick to the basics, C just works.
And it is portable: we supported every Unix (even SCO), MacOS, Windows, and
all the Linux variants from ARM to IBM mainframes. It was fine. We did
ship our own libc, NetBSD's, because it had fpush() and we used that to
do compression and CRC checking, and it got around other crappy libc
implementations.
If you want to go out of your way to find places where it doesn't, umm,
go you I guess, but why go there? I have an existence proof that you
can use C sensibly and it is fine. The company made it to 18 years
before the open source guys shut us down, not a bad run.
All that said, I get it, you want guard rails. You are not wrong; the
caliber of programmers these days is nowhere near Bell Labs or Sun or
my guys. I'm not sure what I'd do if I were starting over; I'd lean
towards C but would take a hard look at Rust.
On Sun, Sep 29, 2024 at 05:36:30PM -0700, Larry McVoy wrote:
> It doesn't have to be that way, C could be evolved, I built a very C
> like language (to the point that one of my engineers, who hated the
> new language on principle, fixed a bug in some diffs that flew by,
> he thought he was fixing a bug in C). No pointers, reference counted
> garbage collection, pass by value or reference, switch values could be
> anything, values, variables, regular expressions, etc.
>
> If I had infinite energy and money, I'd fund a gcc dialect of that C.
> Alas, I don't. But C is very fixable.
>
> On Mon, Sep 30, 2024 at 09:56:47AM +1000, Rob Pike wrote:
> > I'm saying the exact opposite: they are unavoidably unsafe.
> >
> > -rob
> >
> >
> > On Mon, Sep 30, 2024 at 8:21???AM Rich Salz <rich.salz@gmail.com> wrote:
> >
> > > C and C++ have become non-portable and dangerously insecure, as well as
> > >> often very surprising to the point that the US government arguing against
> > >> using them.
> > >>
> > >
> > > I thought their main arguments were to use memory-safe languages. Are you
> > > saying the C language can be as safe s go, rust, etc., by language design?
> > > (I don't think you are, but the sentence I quoted kinda implies that, at
> > > least to me.)
> > >
>
> --
> ---
> Larry McVoy Retired to fishing http://www.mcvoy.com/lm/boat
--
---
Larry McVoy Retired to fishing http://www.mcvoy.com/lm/boat
^ permalink raw reply [flat|nested] 61+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-30 0:36 ` Larry McVoy
2024-09-30 0:55 ` Larry McVoy
@ 2024-09-30 1:09 ` Luther Johnson
2024-09-30 1:37 ` Luther Johnson
1 sibling, 1 reply; 61+ messages in thread
From: Luther Johnson @ 2024-09-30 1:09 UTC (permalink / raw)
To: tuhs
C# addresses some of the things being discussed here. I've used it, and I
don't care for it all that much; I prefer straight, not-at-all-modern C.
But I think there are probably a few dialects over the years (Objective-C?)
that have addressed some of these desires for a "better C, but not
C++". Do others here have comments on these inspired-by-C, kind-of-C-like
languages, with a few other computer-science components thrown into
the language machine?
On 09/29/2024 05:36 PM, Larry McVoy wrote:
> It doesn't have to be that way, C could be evolved, I built a very C
> like language (to the point that one of my engineers, who hated the
> new language on principle, fixed a bug in some diffs that flew by,
> he thought he was fixing a bug in C). No pointers, reference counted
> garbage collection, pass by value or reference, switch values could be
> anything, values, variables, regular expressions, etc.
>
> If I had infinite energy and money, I'd fund a gcc dialect of that C.
> Alas, I don't. But C is very fixable.
>
> On Mon, Sep 30, 2024 at 09:56:47AM +1000, Rob Pike wrote:
>> I'm saying the exact opposite: they are unavoidably unsafe.
>>
>> -rob
>>
>>
>> On Mon, Sep 30, 2024 at 8:21???AM Rich Salz <rich.salz@gmail.com> wrote:
>>
>>> C and C++ have become non-portable and dangerously insecure, as well as
>>>> often very surprising to the point that the US government arguing against
>>>> using them.
>>>>
>>> I thought their main arguments were to use memory-safe languages. Are you
>>> saying the C language can be as safe s go, rust, etc., by language design?
>>> (I don't think you are, but the sentence I quoted kinda implies that, at
>>> least to me.)
>>>
^ permalink raw reply [flat|nested] 61+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-30 1:09 ` Luther Johnson
@ 2024-09-30 1:37 ` Luther Johnson
2024-09-30 3:52 ` ron minnich
2024-10-01 12:43 ` arnold
0 siblings, 2 replies; 61+ messages in thread
From: Luther Johnson @ 2024-09-30 1:37 UTC (permalink / raw)
To: tuhs
'Go' is also a pretty C-like advanced C kind of thing. What do Go
writers think of it vs. C, for safety, reliability, clarity of
expression, etc.?
On 09/29/2024 06:09 PM, Luther Johnson wrote:
> C# addresses some of the things being discussed here. I've used it, I
> don't care for it all that much, I prefer straight, not-at-all modern
> C, but I think there are probably a few dialects over the years
> (Objective C ?) that have addressed some of these desires for a
> "better C, but not C++". Do others here have comments on these
> inspired by C, kind of C-like, but with a few other computer science
> components, thrown into the language machine ?
>
> On 09/29/2024 05:36 PM, Larry McVoy wrote:
>> It doesn't have to be that way, C could be evolved, I built a very C
>> like language (to the point that one of my engineers, who hated the
>> new language on principle, fixed a bug in some diffs that flew by,
>> he thought he was fixing a bug in C). No pointers, reference counted
>> garbage collection, pass by value or reference, switch values could be
>> anything, values, variables, regular expressions, etc.
>>
>> If I had infinite energy and money, I'd fund a gcc dialect of that C.
>> Alas, I don't. But C is very fixable.
>>
>> On Mon, Sep 30, 2024 at 09:56:47AM +1000, Rob Pike wrote:
>>> I'm saying the exact opposite: they are unavoidably unsafe.
>>>
>>> -rob
>>>
>>>
>>> On Mon, Sep 30, 2024 at 8:21???AM Rich Salz <rich.salz@gmail.com>
>>> wrote:
>>>
>>>> C and C++ have become non-portable and dangerously insecure, as
>>>> well as
>>>>> often very surprising to the point that the US government arguing
>>>>> against
>>>>> using them.
>>>>>
>>>> I thought their main arguments were to use memory-safe languages.
>>>> Are you
>>>> saying the C language can be as safe s go, rust, etc., by language
>>>> design?
>>>> (I don't think you are, but the sentence I quoted kinda implies
>>>> that, at
>>>> least to me.)
>>>>
>
^ permalink raw reply [flat|nested] 61+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-30 1:37 ` Luther Johnson
@ 2024-09-30 3:52 ` ron minnich
2024-10-01 12:43 ` arnold
1 sibling, 0 replies; 61+ messages in thread
From: ron minnich @ 2024-09-30 3:52 UTC (permalink / raw)
To: Luther Johnson; +Cc: tuhs
[-- Attachment #1: Type: text/plain, Size: 2730 bytes --]
Data point. I hope this is not too far out of TUHS scope, but ... you asked.
In 2010, we at Sandia journeyed to Usenix to take Russ's course on Go.
At that time, we had created megatux, all written in C, all based on
earlier HPC work at LANL, which allowed us to run 80,000 or so Windows VMs
on a 400-node cluster, and from there run real malware to study it (and, in
one case, fix a bug :-).
We finished Russ's course, and on the way home I said "we're moving it all
to Go." Nobody disagreed. We never once regretted that decision.
On Sun, Sep 29, 2024 at 6:47 PM Luther Johnson <luther.johnson@makerlisp.com>
wrote:
> 'Go' is also a pretty C-like advanced C kind of thing. What do Go
> writers think of it vs. C, for safety, reliability, clarity of
> expression, etc. ?
>
> On 09/29/2024 06:09 PM, Luther Johnson wrote:
> > C# addresses some of the things being discussed here. I've used it, I
> > don't care for it all that much, I prefer straight, not-at-all modern
> > C, but I think there are probably a few dialects over the years
> > (Objective C ?) that have addressed some of these desires for a
> > "better C, but not C++". Do others here have comments on these
> > inspired by C, kind of C-like, but with a few other computer science
> > components, thrown into the language machine ?
> >
> > On 09/29/2024 05:36 PM, Larry McVoy wrote:
> >> It doesn't have to be that way, C could be evolved, I built a very C
> >> like language (to the point that one of my engineers, who hated the
> >> new language on principle, fixed a bug in some diffs that flew by,
> >> he thought he was fixing a bug in C). No pointers, reference counted
> >> garbage collection, pass by value or reference, switch values could be
> >> anything, values, variables, regular expressions, etc.
> >>
> >> If I had infinite energy and money, I'd fund a gcc dialect of that C.
> >> Alas, I don't. But C is very fixable.
> >>
> >> On Mon, Sep 30, 2024 at 09:56:47AM +1000, Rob Pike wrote:
> >>> I'm saying the exact opposite: they are unavoidably unsafe.
> >>>
> >>> -rob
> >>>
> >>>
> >>> On Mon, Sep 30, 2024 at 8:21???AM Rich Salz <rich.salz@gmail.com>
> >>> wrote:
> >>>
> >>>> C and C++ have become non-portable and dangerously insecure, as
> >>>> well as
> >>>>> often very surprising to the point that the US government arguing
> >>>>> against
> >>>>> using them.
> >>>>>
> >>>> I thought their main arguments were to use memory-safe languages.
> >>>> Are you
> >>>> saying the C language can be as safe s go, rust, etc., by language
> >>>> design?
> >>>> (I don't think you are, but the sentence I quoted kinda implies
> >>>> that, at
> >>>> least to me.)
> >>>>
> >
>
>
[-- Attachment #2: Type: text/html, Size: 3686 bytes --]
^ permalink raw reply [flat|nested] 61+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-29 20:29 ` Rob Pike
2024-09-29 21:13 ` Rik Farrow
2024-09-29 22:21 ` Rich Salz
@ 2024-09-30 19:12 ` Steffen Nurpmeso
2024-09-30 20:03 ` Rich Salz
2024-09-30 20:14 ` Rik Farrow
2 siblings, 2 replies; 61+ messages in thread
From: Steffen Nurpmeso @ 2024-09-30 19:12 UTC (permalink / raw)
To: Rob Pike; +Cc: Douglas McIlroy, TUHS main list
Rob Pike wrote in
<CAKzdPgwJ7-_BWztOQKiB6h5a+OGwXtefsD47Fq+GDwGGF7N4UA@mail.gmail.com>:
|On Mon, Sep 30, 2024 at 2:56 AM Douglas McIlroy <
|douglas.mcilroy@dartmouth.edu> wrote:
|>>>> malloc(0) isn't undefined behaviour but implementation defined.
|>>>
|>>> In modern C there is no difference between those two concepts.
|>
|>> Can you explain more about your view
|>
|> There certainly is a difference, but in this case the practical
|> implications are the same: avoid malloc(0). malloc(0) lies at the \
|> high end
|> of a range of severity of concerns about implementation-definedness. \
|> At the
|> low end are things like the size of ints, which only affects applications
|> that may confront very large numbers. In the middle is the default
|> signedness of chars, which generally may be mitigated by explicit type
|> declarations.
|>
|> For the size of ints, C offers guardrails like INT_MAX. There is no test
|> to discern what an error return from malloc(0) means.
|>
|> Is there any other C construct that implementation-definedness renders
|> useless?
|Gradually the writers of optimizing compilers have leaned so hard on the
|implementation-defined and undefined behaviors that, while far from
|useless, C and C++ have become non-portable and dangerously insecure, as
|well as often very surprising to the point that the US government arguing
|against using them.
Never attribute to malice what can adequately be explained by
incompetence.
is the signature of Poul-Henning Kamp (whose email regarding that
cstr list (i did not yet have TM-74-1273-1 aka the C Reference
Manual as of 1974-01-15, thank you!) was forwarded by Warren).
Only propaganda left everywhere, including here, namely for Rust.
But your statement in particular reminds me of "Who's afraid of
a big bad optimizing compiler?" on LWN.net, from July 15, 2019.
It may be fun to read by some. (Offline, searching surely works.)
It ends with
Acknowledgments
We owe thanks to a surprisingly large number of compiler
writers and members of the C and C++ standards committees who
introduced us to some of the things a big bad optimizing
compiler can do,[.]
But i think those eh people there (that US government) surely have
been hammered with "memory-safe language" a thousand times, and
no one ever told them that even the eldest C can be used in a safe
way; on freebsd-hackers (PHK is an early FreeBSD committer) there
was one of their bikeshed threads around September 5th, after
several CVEs were fixed, and another long-time major contributor
started a thread saying things like
|>|The real takeaway here is that C is no longer sufficient for writing
|>|high quality code in the 2020s. Everyone needs to adapt their tools.
which (also) i (not FreeBSD, only by heart, maybe) spoke against.
He was hailing Option<Box<Vec<u8>>> or Vec::with_capacity(262144)
as solutions to the CVEs. (Which, as far as i looked, had nothing
really to do with C as such; one "guilty" programmer said, as far
as i understood that, the same for "his CVE".)
Vectors and string "objects" with (optionally) checked index
access and such are uncomfortable, but easy to do, also with C,
and right from the start (i said).
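Such a checked vector really is only a few lines of C. A minimal sketch
(mine, not from the thread; names like `vec_push` are invented):

```c
#include <assert.h>
#include <stdlib.h>

/* A bounds-checked, growable byte vector in plain C. */
struct vec {
    unsigned char *data;
    size_t len, cap;
};

/* Append one byte, growing the buffer as needed; returns -1 on OOM. */
static int vec_push(struct vec *v, unsigned char b) {
    if (v->len == v->cap) {
        size_t ncap = v->cap ? v->cap * 2 : 8;
        unsigned char *nd = realloc(v->data, ncap);
        if (nd == NULL)
            return -1;  /* report failure instead of corrupting memory */
        v->data = nd;
        v->cap = ncap;
    }
    v->data[v->len++] = b;
    return 0;
}

/* Checked index access: aborts on an out-of-range index rather than
 * silently reading past the end of the buffer. */
static unsigned char vec_at(const struct vec *v, size_t i) {
    assert(i < v->len);
    return v->data[i];
}
```

The cost is an explicit call instead of a bare `v[i]`, which is the
"uncomfortable" part; the check itself is trivial.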
(P.S.: i miss bit enumerations in C and C++, as compilers get
stricter and stricter (you cannot even enum1|enum2 without
warnings no more, in i think C23; without cast, of course), but
bit flags "can" only come in via preprocessor constants, and are
completely unchecked. And enum1|enum2 *i* often have, if some
subtype adds flags on top of a basic type, isn't that natural, no
support on that front. And cast-less "super class casts".
I had not downloaded cstr#108-the_c++_programming_language yet.)
--steffen
|
|Der Kragenbaer, The moon bear,
|der holt sich munter he cheerfully and one by one
|einen nach dem anderen runter wa.ks himself off
|(By Robert Gernhardt)
^ permalink raw reply [flat|nested] 61+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-30 19:12 ` Steffen Nurpmeso
@ 2024-09-30 20:03 ` Rich Salz
2024-09-30 21:15 ` Steffen Nurpmeso
2024-09-30 22:14 ` Bakul Shah via TUHS
2024-09-30 20:14 ` Rik Farrow
1 sibling, 2 replies; 61+ messages in thread
From: Rich Salz @ 2024-09-30 20:03 UTC (permalink / raw)
To: TUHS main list; +Cc: Douglas McIlroy
[-- Attachment #1: Type: text/plain, Size: 493 bytes --]
On Mon, Sep 30, 2024 at 3:12 PM Steffen Nurpmeso <steffen@sdaoden.eu> wrote
> noone ever told them that even the eldest C can be used in a safe
> way;
Perhaps we have different meanings of the word safe.
    void foo(char *p) { /* interesting stuff here */ free(p); }
    void bar(void) {
        char *p = malloc(20);
        foo(p);
        printf("foo is %s\n", p);  /* use after free */
        foo(p);                    /* double free */
    }
Why should I have to think about this code when the language already knows
what is wrong?
[-- Attachment #2: Type: text/html, Size: 958 bytes --]
^ permalink raw reply [flat|nested] 61+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-30 19:12 ` Steffen Nurpmeso
2024-09-30 20:03 ` Rich Salz
@ 2024-09-30 20:14 ` Rik Farrow
2024-09-30 22:00 ` Steffen Nurpmeso
2024-10-01 12:53 ` Dan Cross
1 sibling, 2 replies; 61+ messages in thread
From: Rik Farrow @ 2024-09-30 20:14 UTC (permalink / raw)
To: Rob Pike, Douglas McIlroy, TUHS main list
[-- Attachment #1: Type: text/plain, Size: 712 bytes --]
This is the 'problem' with C/C++: it's not the language itself so much as
the people who are allowed, or forced, to use it. Many, if not all, of the
people on this list have worked with great programmers, when most
programmers are average at best. I saw some terrible things back when doing
technical sales support for a startup selling a graphics library with C
bindings. I came away convinced that most of the 'programmers' I was
training were truly clueless.
Rik
On Mon, Sep 30, 2024 at 12:12 PM Steffen Nurpmeso
> Never attribute to malice what can adequately be explained by
> incompetence.
>
> is the signature of Poul-Henning Kamp (whose email regarding that
> cstr list <snip>
>
[-- Attachment #2: Type: text/html, Size: 1003 bytes --]
^ permalink raw reply [flat|nested] 61+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-30 20:03 ` Rich Salz
@ 2024-09-30 21:15 ` Steffen Nurpmeso
2024-09-30 22:14 ` Bakul Shah via TUHS
1 sibling, 0 replies; 61+ messages in thread
From: Steffen Nurpmeso @ 2024-09-30 21:15 UTC (permalink / raw)
To: Rich Salz; +Cc: TUHS main list, Douglas McIlroy
Rich Salz wrote in
<CAFH29tp4fZR7ct57F-BmyqoJwwRfHkSbiVPS1mj89e-_gzhsHQ@mail.gmail.com>:
|On Mon, Sep 30, 2024 at 3:12 PM Steffen Nurpmeso <steffen@sdaoden.eu> wrote
|> noone ever told them that even the eldest C can be used in a safe
|> way;
|
|Perhaps we have different meanings of the word safe.
|
| void foo(char *p) { /* interesting stuff here */ ; free(p); }
| void bar() { char *p = malloc(20);
| foo(p);
| printf("foo is %s\n", p);
| foo(p);
|}
|Why should I have to think about this code when the language already knows
|what is wrong.
It can also be used in an unsafe way?
O-ha. Sounds dangerous in my ears (given human behaviour in
particular). I was so focused on winning against Putin .. that
i did not realize this context.
(Btw i like The Cure's new song "Alone", after 14 years!, and its
accompanying thrilling mega mega boom boom boom video.)
P.S.: In the real world i would now, as the conservative and
integer (up against the floating-point!) person that i am, point
"you" to C++, and simply pass a string object.
But let us pass a reference so we gain compile-time non-NIL "meat"
(vegan!!), and impressive state-of-the-art speed characteristics.
Human behaviour can destroy everything: faster so with C++!
--End of <CAFH29tp4fZR7ct57F-BmyqoJwwRfHkSbiVPS1mj89e-_gzhsHQ@mail.gmail\
.com>
Btw i hate these random message-ids, they reveal nothing! Like in
all those rooms i never visit. I am all for the wonderful Klaus
von Dohnanyi, now also 96 years old, and his saying "Ach bitte,
sagen Sie nicht Chatroom, sagen Sie Plauderstübchen" ("Oh please,
do not say chatroom, just say [little talk chamber][ʃtyːpçən]" [1]
(needs scripting-enabled browser).
[1] https://translate.google.com/?sl=auto&tl=en&text=Plauderst%C3%BCbchen&op=translate
--steffen
|
|Der Kragenbaer, The moon bear,
|der holt sich munter he cheerfully and one by one
|einen nach dem anderen runter wa.ks himself off
|(By Robert Gernhardt)
^ permalink raw reply [flat|nested] 61+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-30 20:14 ` Rik Farrow
@ 2024-09-30 22:00 ` Steffen Nurpmeso
2024-10-01 12:53 ` Dan Cross
1 sibling, 0 replies; 61+ messages in thread
From: Steffen Nurpmeso @ 2024-09-30 22:00 UTC (permalink / raw)
To: Rik Farrow; +Cc: Douglas McIlroy, TUHS main list
Rik Farrow wrote in
<CACY3YMHzg+6U_zTuhMTORgfh_Kse6MTspaGDfuUjXb4vLvV9mw@mail.gmail.com>:
|On Mon, Sep 30, 2024 at 12:12 PM Steffen Nurpmeso
|
|| Never attribute to malice what can adequately be explained by
|| incompetence.
|
||is the signature of Poul-Henning Kamp (whose email regarding that
||cstr list <snip>
|This is the 'problem' with C/C++: it's not the language itself so much as
|the people who are allowed, or forced, to use it. Many, if not all, of the
|people on this list have worked with great programmers, when most
|programmers are average at best. I saw some terrible things back when doing
|technical sales support for a startup selling a graphics library with C
|bindings. I came away convinced that most of the 'programmers' I was
|training were truly clueless.
I cannot comment on that, humans "get a good job with more pay you
are ok", have interests here and there, have (unfulfilled)
desires, problems with family or partner, aka "that sex machine",
and so in the end one can be lucky if that was not Jeffrey Dahmer
or something, and you do not end up as canned. "Or forced", yes!
Here on this lists are (me aside) intellectual but especially
witty people who love(d) their (likely) even-more-than-a-job, at
the "top of the pyramid", Mr. McIlroy just again remembered an
impressive scenario of how these people were (and are)
self-driving up that spiral staircase with nothing but the help of
their mind and a free library .. which was available for them.
Other than that, you know, there are plenty of languages with
plenty of support (syntax checks, sanitizers, "debug stuff"), far
beyond vim(1), let that start with JAVA (documented pretty well
right from the start as far as i know), and all the other options
that arose since then, and in parts are used. I personally "go on
the gums" if i have to work with OpenSSL, and image processing is
no fun either, so it could be it was me who drove you down...
(And complexity is never easy, i think, to noone.)
--steffen
|
|Der Kragenbaer, The moon bear,
|der holt sich munter he cheerfully and one by one
|einen nach dem anderen runter wa.ks himself off
|(By Robert Gernhardt)
^ permalink raw reply [flat|nested] 61+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-30 20:03 ` Rich Salz
2024-09-30 21:15 ` Steffen Nurpmeso
@ 2024-09-30 22:14 ` Bakul Shah via TUHS
2024-10-01 1:42 ` Alexis
1 sibling, 1 reply; 61+ messages in thread
From: Bakul Shah via TUHS @ 2024-09-30 22:14 UTC (permalink / raw)
To: Richard Salz; +Cc: TUHS main list, Douglas McIlroy
On Sep 30, 2024, at 1:03 PM, Rich Salz <rich.salz@gmail.com> wrote:
>
>
> On Mon, Sep 30, 2024 at 3:12 PM Steffen Nurpmeso <steffen@sdaoden.eu> wrote
> noone ever told them that even the eldest C can be used in a safe
> way;
> Perhaps we have different meanings of the word safe.
>
> void foo(char *p) { /* interesting stuff here */ ; free(p); }
> void bar() {
>     char *p = malloc(20);
>     foo(p);
>     printf("foo is %s\n", p);
>     foo(p);
> }
> Why should I have to think about this code when the language already knows what is wrong.
The language doesn't know! The compiler can't know without the programmer
indicating this somehow, especially if foo() is an extern function.
I am still interested in making C safer (to secure as best as possible
all the zillions of lines of C code in OS kernels). The question is, can
we retrofit safety features into C without doing major violence to it
& turning it into an ugly mess? No idea if this is even feasible but seems
worth exploring with possibly a great ROI.
Take the above code as an example. It is "free()" that invalidates the
value of its argument on return and this property is then inherited by its
callers. One idea is to declare free as follows:
void free(`zap void *ptr); // `zap says *ptr will be invalid on return
Now a compiler can see and complain that foo() will break this and
insist that foo() too must express the same thing. So we change it to
void foo(`zap char* p) { ... free(p); }
Now the compiler knows p can't be dereferenced after calling foo() and
can complain on seeing p being printf'ed.
This was just an example of an idea -- remains to be seen if it amounts to
anything useful. In an earlier email I had explored bounds checking. Clearly
all such extensions would have to play well together as well as with the
existing language. My hope is that the language can be evolved in this way
and gradually kernel code can be fixed up to benefit from it.
Bakul
^ permalink raw reply [flat|nested] 61+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-30 22:14 ` Bakul Shah via TUHS
@ 2024-10-01 1:42 ` Alexis
0 siblings, 0 replies; 61+ messages in thread
From: Alexis @ 2024-10-01 1:42 UTC (permalink / raw)
To: tuhs
Bakul Shah via TUHS <tuhs@tuhs.org> writes:
> I am still interested in making C safer (to secure as best as
> possible all the zillions of lines of C code in OS kernels). The
> question is, can we retrofit safety features into C without doing
> major violence to it & turning it into an ugly mess? No idea if
> this is even feasible but seems worth exploring with possibly a
> great ROI.
Related: Ten years ago, Pascal Cuoq, Matthew Flatt, and John Regehr
proposed "Friendly C":
> We are not trying to fix the deficiencies of the C language nor
> making an argument for or against C. Rather, we are trying to
> rescue the predictable little language that we all know is hiding
> within the C standard. This language generates tight code and
> doesn’t make you feel like the compiler is your enemy. We want to
> decrease the rate of bit rot in existing C code and also to reduce
> the auditing overhead for safety-critical and security-critical C
> code. The intended audience for -std=friendly-c is people writing
> low-level systems such as operating systems, embedded systems, and
> programming language runtimes. These people typically have a good
> guess about what instructions the compiler will emit for each line
> of C code they write, and they simply do not want the compiler
> silently throwing out code. If they need code to be faster,
> they’ll change how it is written.
-- https://blog.regehr.org/archives/1180
Some of the concrete features proposed included:
> 1. The value of a pointer to an object whose lifetime has ended
> remains the same as it was when the object was alive.
>
> 2. Signed integer overflow results in two’s complement wrapping
> behavior at the bitwidth of the promoted type.
>
> 3. Shift by negative or shift-past-bitwidth produces an
> unspecified result.
i seem to recall there have been other proposals in the vein of
"Friendly C", but they're not coming to mind right now.
Alexis.
^ permalink raw reply [flat|nested] 61+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-30 1:37 ` Luther Johnson
2024-09-30 3:52 ` ron minnich
@ 2024-10-01 12:43 ` arnold
1 sibling, 0 replies; 61+ messages in thread
From: arnold @ 2024-10-01 12:43 UTC (permalink / raw)
To: tuhs, luther.johnson
Luther Johnson <luther.johnson@makerlisp.com> wrote:
> 'Go' is also a pretty C-like advanced C kind of thing. What do Go
> writers think of it vs. C, for safety, reliability, clarity of
> expression, etc. ?
I have what to say about the topics in this thread, but I wanted
to answer this. I've been working in Go for about two years
in my current $DAYJOB. I haven't done as much of it as I'd like.
As preface, I've been programming (heavily) in C since 1983 and quite
a fair amount in C++ since 1999 or so, with Python in the mix more
recently.
Overall, I like Go. The freedom from manual memory management
takes some getting used to, but is very liberating once you do.
I do find it too terse in some cases, mostly in the initialization
syntax, and in the use of nested function objects.
I also find Go modules to be totally opaque; I don't understand
them at all.
One thing I'm still getting used to is the scoping with := vs. =.
Here's a nasty bug that I just figured out this week. Consider:
var (
    clientSet *kubernetes.Interface
)

func main() {
    ....
    // set the global var (we think)
    clientSet, err := cluster.MakeClient() // or whatever
    ....
}

func otherfunc() {
    // use the global var (but not really)
    l := clientSet.CoreV1().NetworkPolicyList(ns).Items
}
In main(), I *think* I'm assigning to the global clientSet so that I
can use it later. But because of the 'err' and the :=, I've actually
created a local variable that shadows the global one, and in otherfunc(),
the global clientSet is still nil. Kaboom!
The correct way to write the code is:
    var err error
    clientSet, err = cluster.MakeClient() // or whatever
"When the light went on, it was blinding."
Of course, the Goland IDE actually warns me that this is the case,
by changing the color of clientSet in the assignment, but it's an
extremely subtle warning, and if you don't hover over it, and you're
not paying a lot of attention to the coloring, you miss it.
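The trap can be boiled down to a few self-contained lines (a sketch with
made-up names; makeClient stands in for something like
cluster.MakeClient, and a *string stands in for the real client type):

```go
package main

import "fmt"

var clientSet *string // package-level variable we intend to set

// makeClient is a hypothetical stand-in for cluster.MakeClient.
func makeClient() (*string, error) {
	s := "connected"
	return &s, nil
}

func shadowed() bool {
	// BUG pattern: because err also needs declaring, ':=' declares a
	// NEW local clientSet that shadows the package-level one.
	clientSet, err := makeClient()
	_, _ = clientSet, err
	return globalClientIsNil() // still true: the global was never assigned
}

func globalClientIsNil() bool { return clientSet == nil }

func main() {
	fmt.Println(shadowed()) // prints "true"
}
```

The fix, as above, is to pre-declare err with a plain `var` and use `=`
so the assignment targets the package-level variable.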
So, I like Go, and for a new project that I wouldn't write in Awk
or Python, I would use Go. The time or two I've looked at Rust,
it seemed to be just too difficult to learn, as well as still
evolving too fast. It does look like Rust will eventually replace
C and C++ for new systems level code. We can hope that will be
a good thing.
My two cents,
Arnold
^ permalink raw reply [flat|nested] 61+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-30 20:14 ` Rik Farrow
2024-09-30 22:00 ` Steffen Nurpmeso
@ 2024-10-01 12:53 ` Dan Cross
1 sibling, 0 replies; 61+ messages in thread
From: Dan Cross @ 2024-10-01 12:53 UTC (permalink / raw)
To: Rik Farrow; +Cc: Douglas McIlroy, TUHS main list
On Mon, Sep 30, 2024 at 4:22 PM Rik Farrow <rik@rikfarrow.com> wrote:
> This is the 'problem' with C/C++: it's not the language itself so much as the people who are allowed, or forced, to use it.
Programmer ability is certainly an issue, but I would suggest that
another goes back to what Rob was alluding to: compiler writers have
taken too much advantage of UB, making it difficult to write
well-formed programs that last.
The `realloc` function I mentioned earlier is a good case in point;
the first ANSI C standard says this: "If ptr is a null pointer, the
realloc function behaves like the malloc function for the specified
size. ... If size is zero and ptr is not a null pointer, the object it
points to is freed." While the description of `malloc` doesn't say
anything about what happens when `size` is 0, perhaps making
`realloc(NULL, 0)` nominally UB (??), the behavior of `realloc(ptr,
0)` is clearly well defined when `ptr` is not nil, and it's entirely
possible that programs were written with that well-defined behavior
as an assumption. (Worth mentioning is that this language was
changed in C99, and implementations started differing from there.)
But now, C23 has made `realloc(ptr, 0)` UB, regardless of the value of
`ptr`, and since compiler writers have given themselves license to
take an extremely broad view of what they can do if a program exhibits
UB, programs that were previously well-defined with respect to C90 may
well stop working properly when compiled with modern compilers. I
don't think this is a hypothetical; C programs that appear to be
working as expected for years have, and will continue, to suddenly
break when compiled with a newer compiler, because the programmer
tripped a UB trigger somewhere along the way, likely without even
recognizing it. Moreover, I don't believe that there are any
non-trivial C programs out there that don't have such timebombs
lurking throughout. How could they not, if things that were previously
well-defined can become UB in subsequent revisions of the standard?
Perhaps I've mentioned it before, but a great example of the
surprising nature of UB is the following program:
unsigned short mul(unsigned short a, unsigned short b) { return a * b; }
Is this tiny function always well-defined? Sadly, no, at least not on
most common platforms where `int` is 32 bits and `short` is 16. On
such platforms, the "usual arithmetic conversions" will kick in before
the multiplication, and the values will be converted to _signed_ ints
and _then_ multiplied; the product will then be converted back to
`unsigned short`. And while the type conversion process both ways is
well-defined, there exist values a,b of type unsigned short so that
a*b will overflow a signed 32-bit int (consider 0xffff*0xffff), and
signed integer overflow is UB; a compiler would be well within its
rights to assume that such overflow can never occur and generate, say,
a saturating multiplication instruction if it so chose. This would
work, be perfectly legal, and almost certainly be surprising to the
programmer.
The fix is simple, of course:
unsigned short
mul(unsigned short a, unsigned short b)
{
    unsigned int aa = a, bb = b;
    return aa * bb;
}
But one would have to know to write such a thing in the first place.
> Many, if not all, of the people on this list have worked with great programmers, when most programmers are average at best. I saw some terrible things back when doing technical sales support for a startup selling a graphics library with C bindings. I came away convinced that most of the 'programmers' I was training were truly clueless.
My sense is that tossing in bad programmers is just throwing gasoline
onto a dumpster fire. Particularly when they look to charlatans like
Robert Martin or Allen Holub as sources of education and inspiration
instead of seeking out proper sources of education.
- Dan C.
^ permalink raw reply [flat|nested] 61+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-10-02 20:42 ` Dan Cross
2024-10-02 21:54 ` Marc Donner
@ 2024-10-05 17:45 ` arnold
1 sibling, 0 replies; 61+ messages in thread
From: arnold @ 2024-10-05 17:45 UTC (permalink / raw)
To: crossd, arnold; +Cc: tuhs, rik, bakul
Dan Cross <crossd@gmail.com> wrote:
> On Wed, Oct 2, 2024 at 2:27 AM <arnold@skeeve.com> wrote:
> > Rik Farrow <rik@rikfarrow.com> wrote:
> > > And my comment about seeing code produced by programmers while doing sales
> > > support dates from 1990. This isn't something new,
> >
> > Also true. In the late 80s I was a sysadmin at Emory U. We had a
> > Vax connected to BITNET with funky hardware and UREP, the Unix RSCS
> > Emulation Program, from the University of Pennsylvania. Every time
> > I had to dive into that code, I felt like I needed a shower afterwards. :-)
>
> Uh oh, lest the UPenn alumni among us get angry (high, Ron!) I feel I
> must point out that UREP wasn't from the University of Pennsylvania,
> but rather, from The Pennsylvania State University (yes, "The" is part
> of the name). UPenn (upenn.edu) is an Ivy in Philly; Penn State
> (psu.edu) is a state school in University Park, which is next to State
> College (really, that's the name of the town) with satellite campuses
> scattered around the state.
>
> - Dan C.
Ooops. My bad. Thanks Dan, for the correction.
Either way, the code was awful. :-)
Arnold
^ permalink raw reply [flat|nested] 61+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-10-02 20:42 ` Dan Cross
@ 2024-10-02 21:54 ` Marc Donner
2024-10-05 17:45 ` arnold
1 sibling, 0 replies; 61+ messages in thread
From: Marc Donner @ 2024-10-02 21:54 UTC (permalink / raw)
To: Dan Cross; +Cc: rik, tuhs, bakul
[-- Attachment #1: Type: text/plain, Size: 1586 bytes --]
RSCS. Sigh. Remote Spooling Communications Subsystem.
I suppose I could praise it for its elegant layering of abstractions ...
just send a virtual card deck to the virtual card reader on the virtual
machine being used by your correspondent.
Or I could curse it for its absurdity - really, a virtual card deck? 80
character EBCDIC records.
An amazing concept.
=====
nygeek.net
mindthegapdialogs.com/home <https://www.mindthegapdialogs.com/home>
On Wed, Oct 2, 2024 at 4:43 PM Dan Cross <crossd@gmail.com> wrote:
> On Wed, Oct 2, 2024 at 2:27 AM <arnold@skeeve.com> wrote:
> > Rik Farrow <rik@rikfarrow.com> wrote:
> > > And my comment about seeing code produced by programmers while doing
> sales
> > > support dates from 1990. This isn't something new,
> >
> > Also true. In the late 80s I was a sysadmin at Emory U. We had a
> > Vax connected to BITNET with funky hardware and UREP, the Unix RSCS
> > Emulation Program, from the University of Pennsylvania. Every time
> > I had to dive into that code, I felt like I needed a shower afterwards.
> :-)
>
> Uh oh, lest the UPenn alumni among us get angry (high, Ron!) I feel I
> must point out that UREP wasn't from the University of Pennsylvania,
> but rather, from The Pennsylvania State University (yes, "The" is part
> of the name). UPenn (upenn.edu) is an Ivy in Philly; Penn State
> (psu.edu) is a state school in University Park, which is next to State
> College (really, that's the name of the town) with satellite campuses
> scattered around the state.
>
> - Dan C.
>
[-- Attachment #2: Type: text/html, Size: 3022 bytes --]
^ permalink raw reply [flat|nested] 61+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-10-02 5:49 ` arnold
@ 2024-10-02 20:42 ` Dan Cross
2024-10-02 21:54 ` Marc Donner
2024-10-05 17:45 ` arnold
0 siblings, 2 replies; 61+ messages in thread
From: Dan Cross @ 2024-10-02 20:42 UTC (permalink / raw)
To: arnold; +Cc: rik, tuhs, bakul
On Wed, Oct 2, 2024 at 2:27 AM <arnold@skeeve.com> wrote:
> Rik Farrow <rik@rikfarrow.com> wrote:
> > And my comment about seeing code produced by programmers while doing sales
> > support dates from 1990. This isn't something new,
>
> Also true. In the late 80s I was a sysadmin at Emory U. We had a
> Vax connected to BITNET with funky hardware and UREP, the Unix RSCS
> Emulation Program, from the University of Pennsylvania. Every time
> I had to dive into that code, I felt like I needed a shower afterwards. :-)
Uh oh, lest the UPenn alumni among us get angry (high, Ron!) I feel I
must point out that UREP wasn't from the University of Pennsylvania,
but rather, from The Pennsylvania State University (yes, "The" is part
of the name). UPenn (upenn.edu) is an Ivy in Philly; Penn State
(psu.edu) is a state school in University Park, which is next to State
College (really, that's the name of the town) with satellite campuses
scattered around the state.
- Dan C.
^ permalink raw reply [flat|nested] 61+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-10-01 20:34 ` Rik Farrow
2024-10-02 0:55 ` Steffen Nurpmeso
@ 2024-10-02 5:49 ` arnold
2024-10-02 20:42 ` Dan Cross
1 sibling, 1 reply; 61+ messages in thread
From: arnold @ 2024-10-02 5:49 UTC (permalink / raw)
To: rik, arnold; +Cc: tuhs, bakul
Rik Farrow <rik@rikfarrow.com> wrote:
> And my comment about seeing code produced by programmers while doing sales
> support dates from 1990. This isn't something new,
Also true. In the late 80s I was a sysadmin at Emory U. We had a
Vax connected to BITNET with funky hardware and UREP, the Unix RSCS
Emulation Program, from the University of Pennsylvania. Every time
I had to dive into that code, I felt like I needed a shower afterwards. :-)
Arnold
^ permalink raw reply [flat|nested] 61+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-10-01 20:34 ` Rik Farrow
@ 2024-10-02 0:55 ` Steffen Nurpmeso
2024-10-02 5:49 ` arnold
1 sibling, 0 replies; 61+ messages in thread
From: Steffen Nurpmeso @ 2024-10-02 0:55 UTC (permalink / raw)
To: Rik Farrow; +Cc: tuhs, bakul
Rik Farrow wrote in
<CACY3YMGcSm+ATwbz1TmKKoOQeKCPsoTnT4u93vFdKpZyyHCZ7A@mail.gmail.com>:
|On Tue, Oct 1, 2024 at 12:07 PM <arnold@skeeve.com> wrote:
|> Bakul Shah via TUHS <tuhs@tuhs.org> wrote:
...
|>> Sounds like boomer mentality... Kids these days... :-) Also sounds like
|>> the kind of arguments assembly language programmers presented when *we*
|>> were the "kids" trying out "structured programming"!
|>
|> It's not that they're intrinsically unqualified. They were never
|> taught, so they don't know what they're doing. I'm unqualified to
|> fly a plane because I never learned or practiced, not because I'm not
|> intelligent enough. Same thing for many of today's programmers
|> and C / C++.
|And my comment about seeing code produced by programmers while doing sales
|support dates from 1990. This isn't something new, from my perspective. I
|was working in a small programming shop where there were a handful of
|excellent programmers, and then sent out to help customers get started
|using their libraries. That's when I experienced seeing things that still
|make me cringe.
Btw the "official Linux firmware"
https://git.kernel.org/?p=linux/kernel/git/firmware/linux-firmware.git;a=summary
introduced a dependency on the "rdfind" utility, i think ~two
years ago (it was later made optional), for this code:
$verbose "Finding duplicate files"
rdfind -makesymlinks true -makeresultsfile false "$destdir" >/dev/null
find "$destdir" -type l | while read -r l; do
target="$(realpath "$l")"
$verbose "Correcting path for $l"
ln -fs "$(realpath --relative-to="$(dirname "$(realpath -s "$l")")" "$target")" "$l"
done
I proposed (because it really drove me mad; i have nothing to do
with Linux kernel stuff etc., in case you think that)
(
    cd "$destdir" && find . -type f | xargs cksum | sort | {
        ls= lf=
        while read s1 s2 f; do
            s="$s1 $s2"
            #$verbose $s $f
            if [ "$s" = "$ls" ] && cmp "$lf" "$f"; then
                $verbose 'duplicate '"${lf##*/}" "${f#./*}"
                rm -f "$f"
                #ln -s "${lf#./*}" "${f#./*}"
                ln -s "${lf##*/}" "${f#./*}"
            else
                ls=$s
                lf=$f
            fi
        done
    }
)
(as a draft, with only light testing, but not far from complete),
which uses only default POSIX tools; but these guys from very big
companies (Red Hat; the guy who did *that* is from AMD) did not
even respond, at least not to that.
(At times i tried to get rid of the rsync dependency of the kernel
makefile officially, as that can also be done via plain shell
tools; they at least answered "what is wrong with rsync".)
Maybe because the patch also included
- compress="zstd --compress --quiet --stdout"
+ compress="zstd -T0 --ultra -22 --compress --quiet --stdout"
but that only brought the firmware into line with the normal Linux
kernel make zstd usage. I will never know.
I think what i am trying to say is that maybe "time is money", in
addition to anything else. (I had never heard of rdfind though.
Btw its manual (it has one!) says

    SEE ALSO
        md5sum, sha1sum, find, symlinks

whereas cksum is a standard tool. So it is. Everyone builds their
own infrastructure, however large it is; you all only get the
binary updates anyway, while my Linux distribution compiles from
source; and the mesa library alone has grown new dependencies that
are mostly never needed, let alone at runtime, like YAML, for work
that in the past was done at release-tarball-creation time. At
least here.)
--steffen
|
|Der Kragenbaer, The moon bear,
|der holt sich munter he cheerfully and one by one
|einen nach dem anderen runter wa.ks himself off
|(By Robert Gernhardt)
^ permalink raw reply [flat|nested] 61+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-10-01 19:07 ` arnold
@ 2024-10-01 20:34 ` Rik Farrow
2024-10-02 0:55 ` Steffen Nurpmeso
2024-10-02 5:49 ` arnold
0 siblings, 2 replies; 61+ messages in thread
From: Rik Farrow @ 2024-10-01 20:34 UTC (permalink / raw)
To: arnold; +Cc: tuhs, bakul
[-- Attachment #1: Type: text/plain, Size: 1189 bytes --]
And my comment about seeing code produced by programmers while doing sales
support dates from 1990. This isn't something new, from my perspective. I
was working in a small programming shop where there were a handful of
excellent programmers, and then sent out to help customers get started
using their libraries. That's when I experienced seeing things that still
make me cringe.
Rik
On Tue, Oct 1, 2024 at 12:07 PM <arnold@skeeve.com> wrote:
> Bakul Shah via TUHS <tuhs@tuhs.org> wrote:
>
> >
> > > Thus Go and Rust are good things, taking the sharp tools out of the
> > > hands of the people who aren't qualified to use them. Same thing
> Python.
> >
> > Sounds like boomer mentality... Kids these days... :-) Also sounds like
> > the kind of arguments assembly language programmers presented when *we*
> > were the "kids" trying out "structured programming"!
>
> It's not that they're intrinsically unqualified. They were never
> taught, so they don't know what they're doing. I'm unqualified to
> fly a plane because I never learned or practiced, not because I'm not
> intelligent enough. Same thing for many of today's programmers
> and C / C++.
>
[-- Attachment #2: Type: text/html, Size: 1664 bytes --]
^ permalink raw reply [flat|nested] 61+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-10-01 15:44 ` Bakul Shah via TUHS
@ 2024-10-01 19:07 ` arnold
2024-10-01 20:34 ` Rik Farrow
0 siblings, 1 reply; 61+ messages in thread
From: arnold @ 2024-10-01 19:07 UTC (permalink / raw)
To: tuhs, bakul
Bakul Shah via TUHS <tuhs@tuhs.org> wrote:
>
> > Thus Go and Rust are good things, taking the sharp tools out of the
> > hands of the people who aren't qualified to use them. Same thing Python.
>
> Sounds like boomer mentality... Kids these days... :-) Also sounds like
> the kind of arguments assembly language programmers presented when *we*
> were the "kids" trying out "structured programming"!
It's not that they're intrinsically unqualified. They were never
taught, so they don't know what they're doing. I'm unqualified to
fly a plane because I never learned or practiced, not because I'm not
intelligent enough. Same thing for many of today's programmers
and C / C++.
^ permalink raw reply [flat|nested] 61+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-10-01 14:56 ` Dan Cross
2024-10-01 15:08 ` Stuff Received
2024-10-01 15:20 ` Larry McVoy
@ 2024-10-01 19:04 ` arnold
2 siblings, 0 replies; 61+ messages in thread
From: arnold @ 2024-10-01 19:04 UTC (permalink / raw)
To: luther.johnson, crossd; +Cc: tuhs
Dan Cross <crossd@gmail.com> wrote:
> I talk to a lot of academics, and I think they see the situation
> differently than is presented here. In a nutshell, the way a lot of
> them look at it, the amount of computer science in the world increases
> constantly while the amount of time they have to teach that to
> undergraduates remains fixed. As a result, they have to pick and
> choose what they teach very, very carefully, balancing a number of
> criteria as they do so. What this translates to in the real world
> isn't that the bar is lowered, but that the bar is different.
You also have a lot of self-taught programmers, and graduates
of bootcamp programs, getting into programming.
^ permalink raw reply [flat|nested] 61+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-10-01 13:47 ` arnold
2024-10-01 14:01 ` Larry McVoy
@ 2024-10-01 16:49 ` Paul Winalski
1 sibling, 0 replies; 61+ messages in thread
From: Paul Winalski @ 2024-10-01 16:49 UTC (permalink / raw)
To: arnold; +Cc: Computer Old Farts Followers
[-- Attachment #1: Type: text/plain, Size: 860 bytes --]
On Tue, Oct 1, 2024 at 10:07 AM <arnold@skeeve.com> wrote:
[regarding writing an Ada compiler as a class project]
> Did you do generics? That and the run time, which had some real-time
> bits to it (*IIRC*, it's been a long time), as well as the cross
> object code type checking, would have been real bears.
>
> Like many things, the first 90% is easy, the second 90% is hard. :-)
>
> I was in DEC's compiler group when they were implementing Ada for VAX/VMS.
It gets very tricky when routine libraries are involved. Just figuring
out the compilation order can be a real bear (part of this is the cross
object code type checking you mention).
From my viewpoint Ada suffered two problems. First, it was such a large
language and very tricky to implement--even more so than PL/I. Second, it
had US Government cooties.
-Paul W.
[-- Attachment #2: Type: text/html, Size: 1226 bytes --]
^ permalink raw reply [flat|nested] 61+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-10-01 13:13 ` arnold
2024-10-01 13:32 ` Larry McVoy
2024-10-01 15:44 ` Bakul Shah via TUHS
@ 2024-10-01 16:40 ` Paul Winalski
2 siblings, 0 replies; 61+ messages in thread
From: Paul Winalski @ 2024-10-01 16:40 UTC (permalink / raw)
To: arnold; +Cc: Computer Old Farts Followers
[-- Attachment #1: Type: text/plain, Size: 412 bytes --]
On Tue, Oct 1, 2024 at 9:13 AM <arnold@skeeve.com> wrote:
> This goes back to the evolution thing. At the time, C was a huge
> step up from FORTRAN and assembly.
>
Certainly it's a step up (and a BIG step up) from assembly. But I'd say C
is a step sidewise from Fortran. An awful lot of HPTC programming involves
throwing multidimensional arrays around and C is not suitable for that.
-Paul W.
[-- Attachment #2: Type: text/html, Size: 718 bytes --]
^ permalink raw reply [flat|nested] 61+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-10-01 15:38 ` Peter Weinberger (温博格) via TUHS
@ 2024-10-01 15:50 ` ron minnich
0 siblings, 0 replies; 61+ messages in thread
From: ron minnich @ 2024-10-01 15:50 UTC (permalink / raw)
To: Peter Weinberger (温博格); +Cc: Luther Johnson, tuhs
[-- Attachment #1: Type: text/plain, Size: 4457 bytes --]
by the way, if you are hankering to do bare metal programming again, and
you're done with C, tinygo is great, and it works on 200 or so boards, from
the bbc micro on up.
tinygo gives you all the nice bits from go, and uses llvm as the backend,
so programs can be very small.
Tamago is also bare metal go, using the standard Go compiler, and has even
flown in space.
I had thought Rust and C would always be the only path to very low level
programming in tiny environments, but it turned out I was too pessimistic.
https://docs.google.com/presentation/d/1nqZicux6SloS8AynBu0B7cJYxw5pKu4IfCzq_vqWccw/edit?usp=sharing
It's been years now since I used C on bare metal; it's Rust or Go for me.
These languages give me all I ever needed for "first instruction after
reset" programming.
On Tue, Oct 1, 2024 at 8:38 AM Peter Weinberger (温博格) via TUHS <
tuhs@tuhs.org> wrote:
> Each generation bemoans the qualities of following generations.
> (Perhaps justifiably in this case, but we're acting out a stereotyped
> pattern.)
>
> Having a mental model of a computer is a good idea, but I'd rather not
> have to teach the details of PDP-11 condition codes (which changed in
> unexpected ways over time). Now caches loom large, and should such a
> course provide a mental model of smart phones? [I think it should, and
> the course should cover secure boot, but that leaves lots less time
> for assembly language.]
>
> On Tue, Oct 1, 2024 at 11:20 AM Larry McVoy <lm@mcvoy.com> wrote:
> >
> > On Tue, Oct 01, 2024 at 10:56:13AM -0400, Dan Cross wrote:
> > > On Tue, Oct 1, 2024 at 10:32???AM Luther Johnson
> > > <luther.johnson@makerlisp.com> wrote:
> > > > I think because of the orders of magnitude increase in the demand
> > > > for programmers, we now have a very large number of programmers with
> > > > little or no math and science (and computer science doesn't count in
> the
> > > > point I'm trying to make here, if that's your only science, you're
> not
> > > > going to have the models in your head from other disciplines to give
> you
> > > > useful analogs) background, and that's a big change from 40 years
> ago.
> > > > So that has had an effect on who is programming, how they think about
> > > > it, and how languages have been marketed to that programming
> audience. IMHO.
> > >
> > > I've found a grounding in mathematics useful for programming, but
> > > beyond some knowledge of the physical constraints that the universe
> > > places on us and a very healthy appreciation for the scientific
> > > method, I'm having a hard time understanding how the hard sciences
> > > would help out too much. Electrical engineering seems like it would be
> > > more useful, than, say, chemistry or geology.
> > >
> > > I talk to a lot of academics, and I think they see the situation
> > > differently than is presented here. In a nutshell, the way a lot of
> > > them look at it, the amount of computer science in the world increases
> > > constantly while the amount of time they have to teach that to
> > > undergraduates remains fixed. As a result, they have to pick and
> > > choose what they teach very, very carefully, balancing a number of
> > > criteria as they do so. What this translates to in the real world
> > > isn't that the bar is lowered, but that the bar is different.
> >
> > I really wish that they made students take something like the PDP-11
> > assembly class - it was really systems architecture, you learned the
> > basic idea of a computer: a CPU, a bus to talk to memory, a bus to
> > talk to I/O, how a stack works, ideally how a context switch works
> > though that kinda blows minds (I personally don't think you are a
> > kernel programmer if you haven't implemented swtch() or at least
> > walked the code and understood all of it).
> >
> > I did all that and developed a mental model of all computers that
> > has helped me over the last 4 decades. Yes, my model is overly
> > simplistic but it still works, even on the x86 craziness. I don't
> > know how you could get to that mental model with x86, x86 is too
> > weird. I don't really know which architecture is close to the
> > simplicity of a PDP-11 today. Anyone?
> >
> > If I were teaching it, I'd just get a PDP-11 simulator and teach
> > on that. Maybe.
> > --
> > ---
> > Larry McVoy Retired to fishing
> http://www.mcvoy.com/lm/boat
>
[-- Attachment #2: Type: text/html, Size: 5646 bytes --]
^ permalink raw reply [flat|nested] 61+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-10-01 13:13 ` arnold
2024-10-01 13:32 ` Larry McVoy
@ 2024-10-01 15:44 ` Bakul Shah via TUHS
2024-10-01 19:07 ` arnold
2024-10-01 16:40 ` Paul Winalski
2 siblings, 1 reply; 61+ messages in thread
From: Bakul Shah via TUHS @ 2024-10-01 15:44 UTC (permalink / raw)
To: tuhs
> Thus Go and Rust are good things, taking the sharp tools out of the
> hands of the people who aren't qualified to use them. Same thing Python.
Sounds like boomer mentality... Kids these days... :-) Also sounds like
the kind of arguments assembly language programmers presented when *we*
were the "kids" trying out "structured programming"!
^ permalink raw reply [flat|nested] 61+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-10-01 15:20 ` Larry McVoy
@ 2024-10-01 15:38 ` Peter Weinberger (温博格) via TUHS
2024-10-01 15:50 ` ron minnich
0 siblings, 1 reply; 61+ messages in thread
From: Peter Weinberger (温博格) via TUHS @ 2024-10-01 15:38 UTC (permalink / raw)
To: Larry McVoy; +Cc: Luther Johnson, tuhs
Each generation bemoans the qualities of following generations.
(Perhaps justifiably in this case, but we're acting out a stereotyped
pattern.)
Having a mental model of a computer is a good idea, but I'd rather not
have to teach the details of PDP-11 condition codes (which changed in
unexpected ways over time). Now caches loom large, and should such a
course provide a mental model of smart phones? [I think it should, and
the course should cover secure boot, but that leaves lots less time
for assembly language.]
On Tue, Oct 1, 2024 at 11:20 AM Larry McVoy <lm@mcvoy.com> wrote:
>
> On Tue, Oct 01, 2024 at 10:56:13AM -0400, Dan Cross wrote:
> > On Tue, Oct 1, 2024 at 10:32 AM Luther Johnson
> > <luther.johnson@makerlisp.com> wrote:
> > > I think because of the orders of magnitude increase in the demand
> > > for programmers, we now have a very large number of programmers with
> > > little or no math and science (and computer science doesn't count in the
> > > point I'm trying to make here, if that's your only science, you're not
> > > going to have the models in your head from other disciplines to give you
> > > useful analogs) background, and that's a big change from 40 years ago.
> > > So that has had an effect on who is programming, how they think about
> > > it, and how languages have been marketed to that programming audience. IMHO.
> >
> > I've found a grounding in mathematics useful for programming, but
> > beyond some knowledge of the physical constraints that the universe
> > places on us and a very healthy appreciation for the scientific
> > method, I'm having a hard time understanding how the hard sciences
> > would help out too much. Electrical engineering seems like it would be
> > more useful, than, say, chemistry or geology.
> >
> > I talk to a lot of academics, and I think they see the situation
> > differently than is presented here. In a nutshell, the way a lot of
> > them look at it, the amount of computer science in the world increases
> > constantly while the amount of time they have to teach that to
> > undergraduates remains fixed. As a result, they have to pick and
> > choose what they teach very, very carefully, balancing a number of
> > criteria as they do so. What this translates to in the real world
> > isn't that the bar is lowered, but that the bar is different.
>
> I really wish that they made students take something like the PDP-11
> assembly class - it was really systems architecture, you learned the
> basic idea of a computer: a CPU, a bus to talk to memory, a bus to
> talk to I/O, how a stack works, ideally how a context switch works
> though that kinda blows minds (I personally don't think you are a
> kernel programmer if you haven't implemented swtch() or at least
> walked the code and understood all of it).
>
> I did all that and developed a mental model of all computers that
> has helped me over the last 4 decades. Yes, my model is overly
> simplistic but it still works, even on the x86 craziness. I don't
> know how you could get to that mental model with x86, x86 is too
> weird. I don't really know which architecture is close to the
> simplicity of a PDP-11 today. Anyone?
>
> If I were teaching it, I'd just get a PDP-11 simulator and teach
> on that. Maybe.
> --
> ---
> Larry McVoy Retired to fishing http://www.mcvoy.com/lm/boat
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-10-01 14:56 ` Dan Cross
2024-10-01 15:08 ` Stuff Received
@ 2024-10-01 15:20 ` Larry McVoy
2024-10-01 15:38 ` Peter Weinberger (温博格) via TUHS
2024-10-01 19:04 ` arnold
2 siblings, 1 reply; 61+ messages in thread
From: Larry McVoy @ 2024-10-01 15:20 UTC (permalink / raw)
To: Dan Cross; +Cc: Luther Johnson, tuhs
On Tue, Oct 01, 2024 at 10:56:13AM -0400, Dan Cross wrote:
> On Tue, Oct 1, 2024 at 10:32 AM Luther Johnson
> <luther.johnson@makerlisp.com> wrote:
> > I think because of the orders of magnitude increase in the demand
> > for programmers, we now have a very large number of programmers with
> > little or no math and science (and computer science doesn't count in the
> > point I'm trying to make here, if that's your only science, you're not
> > going to have the models in your head from other disciplines to give you
> > useful analogs) background, and that's a big change from 40 years ago.
> > So that has had an effect on who is programming, how they think about
> > it, and how languages have been marketed to that programming audience. IMHO.
>
> I've found a grounding in mathematics useful for programming, but
> beyond some knowledge of the physical constraints that the universe
> places on us and a very healthy appreciation for the scientific
> method, I'm having a hard time understanding how the hard sciences
> would help out too much. Electrical engineering seems like it would be
> more useful, than, say, chemistry or geology.
>
> I talk to a lot of academics, and I think they see the situation
> differently than is presented here. In a nutshell, the way a lot of
> them look at it, the amount of computer science in the world increases
> constantly while the amount of time they have to teach that to
> undergraduates remains fixed. As a result, they have to pick and
> choose what they teach very, very carefully, balancing a number of
> criteria as they do so. What this translates to in the real world
> isn't that the bar is lowered, but that the bar is different.
I really wish that they made students take something like the PDP-11
assembly class - it was really systems architecture, you learned the
basic idea of a computer: a CPU, a bus to talk to memory, a bus to
talk to I/O, how a stack works, ideally how a context switch works
though that kinda blows minds (I personally don't think you are a
kernel programmer if you haven't implemented swtch() or at least
walked the code and understood all of it).
I did all that and developed a mental model of all computers that
has helped me over the last 4 decades. Yes, my model is overly
simplistic but it still works, even on the x86 craziness. I don't
know how you could get to that mental model with x86, x86 is too
weird. I don't really know which architecture is close to the
simplicity of a PDP-11 today. Anyone?
If I were teaching it, I'd just get a PDP-11 simulator and teach
on that. Maybe.
--
---
Larry McVoy Retired to fishing http://www.mcvoy.com/lm/boat
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-10-01 14:56 ` Dan Cross
@ 2024-10-01 15:08 ` Stuff Received
2024-10-01 15:20 ` Larry McVoy
2024-10-01 19:04 ` arnold
2 siblings, 0 replies; 61+ messages in thread
From: Stuff Received @ 2024-10-01 15:08 UTC (permalink / raw)
To: COFF
[-->COFF]
On 2024-10-01 10:56, Dan Cross wrote (in part):
> I've found a grounding in mathematics useful for programming, but
> beyond some knowledge of the physical constraints that the universe
> places on us and a very healthy appreciation for the scientific
> method, I'm having a hard time understanding how the hard sciences
> would help out too much. Electrical engineering seems like it would be
> more useful, than, say, chemistry or geology.
I see this as related to the old question about whether it is easier to
teach domain experts to program or teach programmers about the domain.
(I worked for a company that wrote/sold scientific libraries for
embedded systems.) We had a mixture but the former was often easier.
S.
>
> I talk to a lot of academics, and I think they see the situation
> differently than is presented here. In a nutshell, the way a lot of
> them look at it, the amount of computer science in the world increases
> constantly while the amount of time they have to teach that to
> undergraduates remains fixed. As a result, they have to pick and
> choose what they teach very, very carefully, balancing a number of
> criteria as they do so. What this translates to in the real world
> isn't that the bar is lowered, but that the bar is different.
>
> - Dan C.
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-10-01 14:25 ` Luther Johnson
@ 2024-10-01 14:56 ` Dan Cross
2024-10-01 15:08 ` Stuff Received
` (2 more replies)
0 siblings, 3 replies; 61+ messages in thread
From: Dan Cross @ 2024-10-01 14:56 UTC (permalink / raw)
To: Luther Johnson; +Cc: tuhs
On Tue, Oct 1, 2024 at 10:32 AM Luther Johnson
<luther.johnson@makerlisp.com> wrote:
> I think because of the orders of magnitude increase in the demand
> for programmers, we now have a very large number of programmers with
> little or no math and science (and computer science doesn't count in the
> point I'm trying to make here, if that's your only science, you're not
> going to have the models in your head from other disciplines to give you
> useful analogs) background, and that's a big change from 40 years ago.
> So that has had an effect on who is programming, how they think about
> it, and how languages have been marketed to that programming audience. IMHO.
I've found a grounding in mathematics useful for programming, but
beyond some knowledge of the physical constraints that the universe
places on us and a very healthy appreciation for the scientific
method, I'm having a hard time understanding how the hard sciences
would help out too much. Electrical engineering seems like it would be
more useful, than, say, chemistry or geology.
I talk to a lot of academics, and I think they see the situation
differently than is presented here. In a nutshell, the way a lot of
them look at it, the amount of computer science in the world increases
constantly while the amount of time they have to teach that to
undergraduates remains fixed. As a result, they have to pick and
choose what they teach very, very carefully, balancing a number of
criteria as they do so. What this translates to in the real world
isn't that the bar is lowered, but that the bar is different.
- Dan C.
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-10-01 14:01 ` Larry McVoy
2024-10-01 14:18 ` arnold
@ 2024-10-01 14:25 ` Luther Johnson
2024-10-01 14:56 ` Dan Cross
1 sibling, 1 reply; 61+ messages in thread
From: Luther Johnson @ 2024-10-01 14:25 UTC (permalink / raw)
To: tuhs
I think because of the orders of magnitude increase in the demand
for programmers, we now have a very large number of programmers with
little or no math and science (and computer science doesn't count in the
point I'm trying to make here, if that's your only science, you're not
going to have the models in your head from other disciplines to give you
useful analogs) background, and that's a big change from 40 years ago.
So that has had an effect on who is programming, how they think about
it, and how languages have been marketed to that programming audience. IMHO.
On 10/01/2024 07:01 AM, Larry McVoy wrote:
> On Tue, Oct 01, 2024 at 07:47:10AM -0600, arnold@skeeve.com wrote:
>> Larry McVoy <lm@mcvoy.com> wrote:
>>
>>> On Tue, Oct 01, 2024 at 07:13:04AM -0600, arnold@skeeve.com wrote:
>>>> Would the world have been better off if Ada had caught on everywhere?
>>>> Probably. When I was in grad school studying language design, circa 1982,
>>>> it was expected to do so. But the language was VERY challenging for
>>>> compiler writers.
>>> Huh. Rob Netzer and I, as grad students, took cs701 and cs702 at UW Madison.
>>> It was the compilers course (701) and the really hard compilers course (702)
>>> at the time. The first course was to write a compiler for a subset of Ada
>>> and the second one increased the subset to be almost complete.
>>>
>>> We were supposed to do it on an IBM mainframe because the professor had his
>>> own version of lex/yacc there. Rob had a 3b1 and asked if we could do it
>>> there if he rewrote the parser stuff. Prof said sure.
>>>
>>> In one semester we had a compiler, no optimizer and not much in the
>>> way of graceful error handling, but it compiled stuff that ran. We did
>>> all of Ada other than late binding of variables (I think that was Ada's
>>> templates) and threads and probably some other stuff I don't remember.
>> Did you do generics? That and the run time, which had some real-time
>> bits to it (*IIRC*, it's been a long time), as well as the cross
>> object code type checking, would have been real bears.
> None of those ring a bell so
>
>> Like many things, the first 90% is easy, the second 90% is hard. :-)
> I guess we did the easy stuff :-(
>
>>> I don't consider myself to be that good of a programmer, I can point to
>>> dozens of people my age that can run circles around me and I'm sure there
>>> are many more.
>> You are undoubtedly better than you give yourself credit for, even
>> if there were people who could run circles around you. I learned
>> a long time ago, that no matter how good you are, there's always
>> someone better than you at something. I decided long ago to not
>> try to compete with Superman.
> Funny, I've come to the same conclusion, both in programming and my
> retirement hobby. There is always someone way better than me, but
> you are correct, that doesn't mean I'm awful. Just have more to
> learn.
>
> A buddy pointed out that I was probably better than 80% of the people
> leaving the dock, it's just I fish with a guy who is better than pretty
> much everyone.
>
>>> But apparently the bar is pretty low these days and I agree, that's sad.
>> And it makes it much less fun to be out in the working world. :-(
> As a guy in his 2nd retirement (1st didn't stick) I can tell you I am
> so happy not having to deal with work stuff. My buddies who are still
> working tell me stories I find difficult to believe. They all say I'm
> so politically incorrect that I wouldn't last a week in today's world.
> If their stories are true, yeah, that's not for me.
>
> Weird politics and crappy programmers, count me out.
>
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-10-01 14:01 ` Larry McVoy
@ 2024-10-01 14:18 ` arnold
2024-10-01 14:25 ` Luther Johnson
1 sibling, 0 replies; 61+ messages in thread
From: arnold @ 2024-10-01 14:18 UTC (permalink / raw)
To: lm, arnold; +Cc: tuhs
Closing off the thread...
> > Did you do generics? That and the run time, which had some real-time
> > bits to it (*IIRC*, it's been a long time), as well as the cross
> > object code type checking, would have been real bears.
>
> None of those ring a bell so
>
> > Like many things, the first 90% is easy, the second 90% is hard. :-)
>
> I guess we did the easy stuff :-(
Even the easy stuff is good learning.
> Funny, I've come to the same conclusion, both in programming and my
> retirement hobby. There is always someone way better than me, but
> you are correct, that doesn't mean I'm awful. Just have more to
> learn.
Right.
> > > But apparently the bar is pretty low these days and I agree, that's sad.
> >
> > And it makes it much less fun to be out in the working world. :-(
>
> As a guy in his 2nd retirement (1st didn't stick) I can tell you I am
> so happy not having to deal with work stuff. My buddies who are still
> working tell me stories I find difficult to believe. They all say I'm
> so politically incorrect that I wouldn't last a week in today's world.
> If their stories are true, yeah, that's not for me.
>
> Weird politics and crappy programmers, count me out.
I don't have to deal with the politics, and my team is good, but
the company has a lot of crappy code.
I am planning to retire quite soon, too. :-)
Arnold
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-10-01 13:47 ` arnold
@ 2024-10-01 14:01 ` Larry McVoy
2024-10-01 14:18 ` arnold
2024-10-01 14:25 ` Luther Johnson
2024-10-01 16:49 ` Paul Winalski
1 sibling, 2 replies; 61+ messages in thread
From: Larry McVoy @ 2024-10-01 14:01 UTC (permalink / raw)
To: arnold; +Cc: tuhs
On Tue, Oct 01, 2024 at 07:47:10AM -0600, arnold@skeeve.com wrote:
> Larry McVoy <lm@mcvoy.com> wrote:
>
> > On Tue, Oct 01, 2024 at 07:13:04AM -0600, arnold@skeeve.com wrote:
> > > Would the world have been better off if Ada had caught on everywhere?
> > > Probably. When I was in grad school studying language design, circa 1982,
> > > it was expected to do so. But the language was VERY challenging for
> > > compiler writers.
> >
> > Huh. Rob Netzer and I, as grad students, took cs701 and cs702 at UW Madison.
> > It was the compilers course (701) and the really hard compilers course (702)
> > at the time. The first course was to write a compiler for a subset of Ada
> > and the second one increased the subset to be almost complete.
> >
> > We were supposed to do it on an IBM mainframe because the professor had his
> > own version of lex/yacc there. Rob had a 3b1 and asked if we could do it
> > there if he rewrote the parser stuff. Prof said sure.
> >
> > In one semester we had a compiler, no optimizer and not much in the
> > way of graceful error handling, but it compiled stuff that ran. We did
> > all of Ada other than late binding of variables (I think that was Ada's
> > templates) and threads and probably some other stuff I don't remember.
>
> Did you do generics? That and the run time, which had some real-time
> bits to it (*IIRC*, it's been a long time), as well as the cross
> object code type checking, would have been real bears.
None of those ring a bell so
> Like many things, the first 90% is easy, the second 90% is hard. :-)
I guess we did the easy stuff :-(
> > I don't consider myself to be that good of a programmer, I can point to
> > dozens of people my age that can run circles around me and I'm sure there
> > are many more.
>
> You are undoubtedly better than you give yourself credit for, even
> if there were people who could run circles around you. I learned
> a long time ago, that no matter how good you are, there's always
> someone better than you at something. I decided long ago to not
> try to compete with Superman.
Funny, I've come to the same conclusion, both in programming and my
retirement hobby. There is always someone way better than me, but
you are correct, that doesn't mean I'm awful. Just have more to
learn.
A buddy pointed out that I was probably better than 80% of the people
leaving the dock, it's just I fish with a guy who is better than pretty
much everyone.
> > But apparently the bar is pretty low these days and I agree, that's sad.
>
> And it makes it much less fun to be out in the working world. :-(
As a guy in his 2nd retirement (1st didn't stick) I can tell you I am
so happy not having to deal with work stuff. My buddies who are still
working tell me stories I find difficult to believe. They all say I'm
so politically incorrect that I wouldn't last a week in today's world.
If their stories are true, yeah, that's not for me.
Weird politics and crappy programmers, count me out.
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-10-01 13:32 ` Larry McVoy
@ 2024-10-01 13:47 ` arnold
2024-10-01 14:01 ` Larry McVoy
2024-10-01 16:49 ` Paul Winalski
0 siblings, 2 replies; 61+ messages in thread
From: arnold @ 2024-10-01 13:47 UTC (permalink / raw)
To: lm, arnold; +Cc: tuhs
Larry McVoy <lm@mcvoy.com> wrote:
> On Tue, Oct 01, 2024 at 07:13:04AM -0600, arnold@skeeve.com wrote:
> > Would the world have been better off if Ada had caught on everywhere?
> > Probably. When I was in grad school studying language design, circa 1982,
> > it was expected to do so. But the language was VERY challenging for
> > compiler writers.
>
> Huh. Rob Netzer and I, as grad students, took cs701 and cs702 at UW Madison.
> It was the compilers course (701) and the really hard compilers course (702)
> at the time. The first course was to write a compiler for a subset of Ada
> and the second one increased the subset to be almost complete.
>
> We were supposed to do it on an IBM mainframe because the professor had his
> own version of lex/yacc there. Rob had a 3b1 and asked if we could do it
> there if he rewrote the parser stuff. Prof said sure.
>
> In one semester we had a compiler, no optimizer and not much in the
> way of graceful error handling, but it compiled stuff that ran. We did
> all of Ada other than late binding of variables (I think that was Ada's
> templates) and threads and probably some other stuff I don't remember.
Did you do generics? That and the run time, which had some real-time
bits to it (*IIRC*, it's been a long time), as well as the cross
object code type checking, would have been real bears.
Like many things, the first 90% is easy, the second 90% is hard. :-)
> I don't consider myself to be that good of a programmer, I can point to
> dozens of people my age that can run circles around me and I'm sure there
> are many more.
You are undoubtedly better than you give yourself credit for, even
if there were people who could run circles around you. I learned
a long time ago, that no matter how good you are, there's always
someone better than you at something. I decided long ago to not
try to compete with Superman.
> But apparently the bar is pretty low these days and I agree, that's sad.
And it makes it much less fun to be out in the working world. :-(
Arnold
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-10-01 13:13 ` arnold
@ 2024-10-01 13:32 ` Larry McVoy
2024-10-01 13:47 ` arnold
2024-10-01 15:44 ` Bakul Shah via TUHS
2024-10-01 16:40 ` Paul Winalski
2 siblings, 1 reply; 61+ messages in thread
From: Larry McVoy @ 2024-10-01 13:32 UTC (permalink / raw)
To: arnold; +Cc: tuhs
On Tue, Oct 01, 2024 at 07:13:04AM -0600, arnold@skeeve.com wrote:
> Would the world have been better off if Ada had caught on everywhere?
> Probably. When I was in grad school studying language design, circa 1982,
> it was expected to do so. But the language was VERY challenging for
> compiler writers.
Huh. Rob Netzer and I, as grad students, took cs701 and cs702 at UW Madison.
It was the compilers course (701) and the really hard compilers course (702)
at the time. The first course was to write a compiler for a subset of Ada
and the second one increased the subset to be almost complete.
We were supposed to do it on an IBM mainframe because the professor had his
own version of lex/yacc there. Rob had a 3b1 and asked if we could do it
there if he rewrote the parser stuff. Prof said sure.
In one semester we had a compiler, no optimizer and not much in the
way of graceful error handling, but it compiled stuff that ran. We did
all of Ada other than late binding of variables (I think that was Ada's
templates) and threads and probably some other stuff I don't remember.
Rob is pretty smart, went on to be a tenured prof at Brown before going
back to industry. Maybe he did all the heavy lifting, but I didn't find
that project to be very challenging. Did I miss something?
> This is a very important, key point. As more and more people have
> entered the field, the quality / education / knowledge / whatever
> has gone down. What was normal to learn and use back in 1983 is
> now too difficult for many, if not most, people, even good ones, in
> the field now.
>
> But for me, and I think others of my vintage, this state of affairs
> seems sad.
100% agree. A sharp young kid I know is/was working on finding bugs in
binaries. He came to me for some insight and I had to understand what he
was doing and when I did, I kept saying "just hire people that don't do
this stupid stuff" and he kept laughing at me and said it was impossible.
I don't consider myself to be that good of a programmer, I can point to
dozens of people my age that can run circles around me and I'm sure there
are many more. But apparently the bar is pretty low these days and I
agree, that's sad.
--
---
Larry McVoy Retired to fishing http://www.mcvoy.com/lm/boat
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-28 18:01 ` G. Branden Robinson
@ 2024-10-01 13:13 ` arnold
2024-10-01 13:32 ` Larry McVoy
` (2 more replies)
0 siblings, 3 replies; 61+ messages in thread
From: arnold @ 2024-10-01 13:13 UTC (permalink / raw)
To: tuhs, g.branden.robinson
This is long, my apologies.
"G. Branden Robinson" <g.branden.robinson@gmail.com> wrote:
> [ screed omitted ]
Branden, as they say, Hindsight is 20-20.
But one needs to take into account that Unix and C evolved,
and in particular on VERY small machines. IIRC the original
Unix PDP-11s didn't even have split I/D spaces. Getting a decent
compiler into that small an address space isn't easy (and by now
is a lost art).
The evolution was gradual and, shall we say "organic", without
the pretension and formalisms of a language committee, but simply
to meet the needs of the Bell Labs researchers.
The value of a high level language for OS work was clear from
Multics. But writing a PL/1 compiler from scratch for the tiny
PDP-11 address space made no sense. Thus the path from BCPL to B to C.
Today, new languages are often reactions to older ones.
Java, besides the JVM portability, tried to clean up the syntax
of C / C++ and add some safety and modernism (graphics, concurrency).
C# was a reaction to (and a way to compete with) Java.
Go was very clearly in reaction to current C++, but headed back
in the direction of simplicity. Successfully, IMHO.
Would the world have been better off if Ada had caught on everywhere?
Probably. When I was in grad school studying language design, circa 1982,
it was expected to do so. But the language was VERY challenging for
compiler writers. C was easy to port, and then Unix along with it.
C'est la vie.
Larry wrote:
> I have a somewhat different view. I have a son who is learning to program
> and he asked me about C. I said "C is like driving a sports car on a
> twisty mountain road that has cliffs and no guard rails. If you want to
> check your phone while you are driving, it's not for you. It requires
> your full, focussed attention. So that sounds bad, right? Well, if
> you are someone who enjoys driving a sports car, and are good at it,
> perhaps C is for you."
This goes back to the evolution thing. At the time, C was a huge
step up from FORTRAN and assembly. Programmers who moved to C
appreciated all it gave them over what they had at the time.
C programmers weren't wizards, they were simply using the best
available tool.
Going from a bicycle to an automobile is a big jump (to
continue Larry's analogy).
Larry again:
> I ran a company that developed a product that was orders of magnitude more
> complex than the v7 kernel (low bar but still) all in C and we had *NONE*
> of those supposed problems. We were careful to use stuff that worked,
> I'm "famous" in that company as the guy who was viewed as "that was invented
> after 1980 so Larry won't let us use it". Not true, we used mmap and used
> POSIX signals, but mostly true. If you stick to the basics, C just works.
> And is portable, we supported every Unix (even SCO), MacOS, Windows, and
> all the Linux variants from ARM to IBM mainframes.
This is also my experience with gawk, which runs on everything from
ARM (Linux) to Windows to mac to VMS to z/OS (S/390x). OS issues
give me more grief than language issues.
> All that said, I get it, you want guard rails. You are not wrong, the
> caliber of programmers these days are nowhere near Bell Labs or Sun or
> my guys.
This is a very important, key point. As more and more people have
entered the field, the quality / education / knowledge / whatever
has gone down. What was normal to learn and use back in 1983 is
now too difficult for many, if not most, people, even good ones, in
the field now.
The people I work with here (Israel) don't know who Donald Knuth is.
Two of the people in my group, over 40, didn't know what Emacs is.
Shell scripting seems to be too hard for many people to master,
and I see a huge amount of Cargo Cult Programming when it comes
to things like scripts and Unix tools.
Thus Go and Rust are good things, taking the sharp tools out of the
hands of the people who aren't qualified to use them. Same thing Python.
But for me, and I think others of my vintage, this state of affairs
seems sad.
My 4 cents,
Arnold
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-28 18:05 ` Larry McVoy
@ 2024-09-30 15:49 ` Paul Winalski
0 siblings, 0 replies; 61+ messages in thread
From: Paul Winalski @ 2024-09-30 15:49 UTC (permalink / raw)
To: Larry McVoy, Computer Old Farts Followers
[moving to COFF as this has drifted away from Unix]
On Sat, Sep 28, 2024 at 2:06 PM Larry McVoy <lm@mcvoy.com> wrote:
> I have a somewhat different view. I have a son who is learning to program
> and he asked me about C. I said "C is like driving a sports car on a
> twisty mountain road that has cliffs and no guard rails. If you want to
> check your phone while you are driving, it's not for you. It requires
> your full, focussed attention. So that sounds bad, right? Well, if
> you are someone who enjoys driving a sports car, and are good at it,
> perhaps C is for you."
>
If you really want a language with no guard rails, try programming in
BLISS.
Regarding C and C++ having dangerous language features--of course they do.
Every higher-level language I've ever seen has its set of toxic language
features that should be avoided if you want reliability and maintainability
for your programs. And a set of things to avoid if you want portability.
Regarding managed dynamic memory allocation schemes that use garbage
collection vs. malloc()/free(), there are some applications where they are
not suitable. I'm thinking about real-time programs. You can't have your
missile defense software pause to do garbage collection when you're trying
to shoot down an incoming ballistic missile.
-Paul W.
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-28 23:30 ` Warner Losh
2024-09-29 10:06 ` Ralph Corderoy
@ 2024-09-30 12:15 ` Dan Cross
1 sibling, 0 replies; 61+ messages in thread
From: Dan Cross @ 2024-09-30 12:15 UTC (permalink / raw)
To: Warner Losh; +Cc: Douglas McIlroy, TUHS main list
On Sat, Sep 28, 2024 at 7:37 PM Warner Losh <imp@bsdimp.com> wrote:
> On Sat, Sep 28, 2024, 5:05 PM Rob Pike <robpike@gmail.com> wrote:
>> I wrote a letter to the ANSI C (1989) committee.
>>
>> Please allow malloc(0).
>> Please allow zero-length arrays.
>>
>> I got two letters back, saying that malloc(0) is illegal because zero-length arrays are illegal, and the other vice versa.
>>
>> I fumed.
>
> And now we have zero-length arrays and UB malloc(0).
I'm late to this; I know. But I wonder if perhaps you meant realloc(p,
0) being made UB in C23? This caused something of a kerfuffle, perhaps
exemplified by an incredibly poorly written "article" in ACM Queue.
I asked Jean-Heyd Meneide why this was done, and he broke it down for
me (the gist of it being, "it was the least-bad of a bunch of bad
options..."). The take from the C standards committee was actually
very reasonable, given the constraints they are forced to live under
nowadays, even if some of the rank and file were infuriated. But I
think the overall situation is illustrative of the sort of dynamic Rob
alluded to. The motivations of C compiler writers and programmers have
diverged; a great frustration for me is that, despite its origins as a
language meant to support systems programming and operating-system
implementation, compiler writers these days seem almost antagonistic
to that use. Linux, for instance, is not written in "C" so much as
"Linux C", a dialect of the language arrived at by setting compiler
flags until some set of reasonable presets tames it enough to be
useful.
- Dan C.
^ permalink raw reply [flat|nested] 61+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-29 12:25 ` Warner Losh
@ 2024-09-29 15:17 ` Ralph Corderoy
0 siblings, 0 replies; 61+ messages in thread
From: Ralph Corderoy @ 2024-09-29 15:17 UTC (permalink / raw)
To: TUHS; +Cc: Douglas McIlroy
Hi Warner,
> > malloc(0) isn't undefined behaviour but implementation defined.
>
> In modern C there is no difference between those two concepts.
Can you explain more about your view or give a link if it's an accepted
opinion? I'm used to an implementation stating its choices from those
given by a C standard, e.g.
(42) Whether the calloc, malloc, realloc, and aligned_alloc functions
return a null pointer or a pointer to an allocated object when
the size requested is zero (7.24.3).
I'd call malloc(0) and know it's not undefined behaviour but one of
those choices.
--
Cheers, Ralph.
^ permalink raw reply [flat|nested] 61+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-29 10:06 ` Ralph Corderoy
@ 2024-09-29 12:25 ` Warner Losh
2024-09-29 15:17 ` Ralph Corderoy
0 siblings, 1 reply; 61+ messages in thread
From: Warner Losh @ 2024-09-29 12:25 UTC (permalink / raw)
To: Ralph Corderoy; +Cc: TUHS main list, Douglas McIlroy
[-- Attachment #1: Type: text/plain, Size: 686 bytes --]
On Sun, Sep 29, 2024, 4:06 AM Ralph Corderoy <ralph@inputplus.co.uk> wrote:
> Hi Warner,
>
> > > I got two letters back, saying that malloc(0) is illegal because
> > > zero-length arrays are illegal, and the other vice versa.
> >
> > And now we have zero-length arrays and UB malloc(0).
>
> malloc(0) isn't undefined behaviour but implementation defined.
>
In modern C there is no difference between those two concepts.
> Are there prominent modern implementations which consider it an error so
> return NULL?
>
Many. There are a dozen or more malloc implementations in use and they all
are slightly different.
Warner
> --
> Cheers, Ralph.
>
[-- Attachment #2: Type: text/html, Size: 1568 bytes --]
^ permalink raw reply [flat|nested] 61+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-28 23:30 ` Warner Losh
@ 2024-09-29 10:06 ` Ralph Corderoy
2024-09-29 12:25 ` Warner Losh
2024-09-30 12:15 ` Dan Cross
1 sibling, 1 reply; 61+ messages in thread
From: Ralph Corderoy @ 2024-09-29 10:06 UTC (permalink / raw)
To: TUHS main list; +Cc: Douglas McIlroy
Hi Warner,
> > I got two letters back, saying that malloc(0) is illegal because
> > zero-length arrays are illegal, and the other vice versa.
>
> And now we have zero-length arrays and UB malloc(0).
malloc(0) isn't undefined behaviour but implementation defined.
Are there prominent modern implementations which consider it an error so
return NULL?
--
Cheers, Ralph.
^ permalink raw reply [flat|nested] 61+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-28 23:05 ` Rob Pike
@ 2024-09-28 23:30 ` Warner Losh
2024-09-29 10:06 ` Ralph Corderoy
2024-09-30 12:15 ` Dan Cross
0 siblings, 2 replies; 61+ messages in thread
From: Warner Losh @ 2024-09-28 23:30 UTC (permalink / raw)
To: Rob Pike; +Cc: Douglas McIlroy, TUHS main list
[-- Attachment #1: Type: text/plain, Size: 13955 bytes --]
On Sat, Sep 28, 2024, 5:05 PM Rob Pike <robpike@gmail.com> wrote:
> I wrote a letter to the ANSI C (1989) committee.
>
> Please allow malloc(0).
> Please allow zero-length arrays.
>
> I got two letters back, saying that malloc(0) is illegal because
> zero-length arrays are illegal, and the other vice versa.
>
> I fumed.
>
And now we have zero-length arrays and UB malloc(0).
Warner
> -rob
>
>
> On Sun, Sep 29, 2024 at 8:08 AM Douglas McIlroy <
> douglas.mcilroy@dartmouth.edu> wrote:
>
>> I have to concede Branden's "gotcha". Struct copying is definitely not
>> O(1).
>>
>> A real-life hazard of non-O(1) operations. Vic Vyssotsky put bzero (I
>> forget what Vic called it) into Fortran II. Sometime later, he found that a
>> percolation simulation was running annoyingly slowly. It took some study to
>> discover that the real inner loop of the program was not the percolation,
>> which touched only a small fraction of a 2D field. More time went into the
>> innocuous bzero initializer. The fix was to ADD code to the inner loop to
>> remember what entries had been touched, and initialize only those for the
>> next round of the simulation.
>>
>> > for a while, every instruction has [sic] an exactly predictable and
>> constant cycle
>> > count. ...[Then] all of a sudden you had instructions with O(n) cycle
>> counts.
>>
>> O(n) cycle counts were nothing new. In the 1950s we had the IBM 1620
>> with arbitrary-length arithmetic and the 709 with "convert" instructions
>> whose cycle count went up to 256.
>>
>> >> spinelessly buckled to allow malloc(0) to return 0, as some
>> >> implementations gratuitously did.
>>
>> > What was the alternative? There was no such thing as an exception, and
>> > if a pointer was an int and an int was as wide as a machine address,
>> > there'd be no way to indicate failure in-band, either.
>>
>> What makes you think allocating zero space should fail? If the size n of
>> a set is determined at run time, why should one have to special-case its
>> space allocation when n=0? Subsequent processing of the form for(i=0; i<n;
>> i++) {...} will handle it gracefully with no special code. Malloc should do
>> as it did in v7--return a non-null pointer different from any other active
>> malloc pointer, as Bakul stated. If worse comes to worst[1] this can be
>> done by padding up to the next feasible size. Regardless of how the pointer
>> is created, any access via it would of course be out of bounds and hence
>> wrong.
>>
>> > How does malloc(0) get this job done and what benefit does it bring?
>>
>> If I understand the "job" (about initializing structure members)
>> correctly, malloc(0) has no bearing on it. The benefit lies elsewhere.
>>
>> Apropos of tail calls, Rob Pike had a nice name for an explicit tail
>> call, "become". It's certainly reasonable, though, to make compilers
>> recognize tail calls implicitly.
>>
>> [1] Worse didn't come to worst in the original malloc. It attached
>> metadata to each block, so even blocks of size zero consumed some memory.
>>
>> Doug
>>
>> On Sat, Sep 28, 2024 at 1:59 PM Bakul Shah via TUHS <tuhs@tuhs.org>
>> wrote:
>>
>>> Just responding to random things that I noticed:
>>>
>>> You don't need special syntax for tail-call. It should be done
>>> transparently when a call is the last thing that gets executed. Special
>>> syntax will merely allow confused people to use it in the wrong place and
>>> get confused more.
>>>
>>> malloc(0) should return a unique ptr. So that "T* a = malloc(0); T* b =
>>> malloc(0); a != (T*)0 && a != b". Without this, malloc(0) acts differently
>>> from malloc(n) for n > 0.
>>>
>>> Note that except for arrays, function arguments & result are copied so
>>> copying a struct makes perfect sense. Passing arrays by reference may have
>>> been due to residual Fortran influence! [Just guessing] Also note: that one
>>> exception has been the cause of many problems.
>>>
>>> In any case you have not argued convincingly about why dynamic memory
>>> allocation should be in the language (runtime) :-) And adding that wouldn't
>>> have fixed any of the existing problems with the language.
>>>
>>> Bakul
>>>
>>> > On Sep 28, 2024, at 9:58 AM, G. Branden Robinson <
>>> g.branden.robinson@gmail.com> wrote:
>>> >
>>> > At 2024-09-28T09:34:14-0400, Douglas McIlroy wrote:
>>> >>> C's refusal to specify dynamic memory allocation in the language
>>> >>> runtime (as opposed to, eventually, the standard library)
>>> >>
>>> >> This complaint overlooks one tenet of C: every operation in what you
>>> >> call "language runtime" takes O(1) time. Dynamic memory allocation
>>> >> is not such an operation.
>>> >
>>> > A fair point. Let me argue about it anyway. ;-)
>>> >
>>> > I'd make three observations. First, K&R did _not_ tout this in their
>>> > book presenting ANSI C. I went back and checked the prefaces,
>>> > introduction, and the section presenting a simple malloc()/free()
>>> > implementation. The tenet you claim for the language is not explicitly
>>> > articulated and, if I squint really hard, I can only barely perceive
>>> > (imagine?) it deeply between the lines in some of the prefatory
>>> material
>>> > to which K&R mostly confine their efforts to promote the language. In
>>> > my view, a "tenet" is something more overt: the sort of thing U.S.
>>> > politicians try to get hung on the walls of every public school
>>> > classroom, like Henry Spencer's Ten Commandments of C[1] (which itself
>>> > does not mention this "core language has only O(1) features"
>>> principle).
>>> >
>>> > Second, in reviewing K&R I was reminded that structure copying is part
>>> > of the language. ("There are no operations that manipulate an entire
>>> > array or string, although structures may be copied as a unit."[2])
>>> > Doesn't that break the tenet right there?
>>> >
>>> > Third, and following on from the foregoing, your point reminded me of
>>> my
>>> > youth programming non-pipelined machines with no caches. You could set
>>> > your watch by (just about) any instruction in the set--and often did,
>>> > because we penurious microcomputer guys often lacked hardware real-time
>>> > clocks, too. That is to say, for a while, every instruction has an
>>> > exactly predictable and constant cycle count. (The _value_ of that
>>> > constant might depend on the addressing mode, because that would have
>>> > consequences on memory fetches, but the principle stood.) When the Z80
>>> > extended the 8080's instruction set, they ate from Tree of Knowledge
>>> > with block-copy instructions like LDIR and LDDR, and all of a sudden
>>> you
>>> > had instructions with O(n) cycle counts. But as a rule, programmers
>>> > seemed to welcome this instead of recognizing it as knowing sin,
>>> because
>>> > you generally knew worst-case how many bytes you'd be copying and take
>>> > that into account. (The worst worst case was a mere 64kB!)
>>> >
>>> > Further, Z80 home computers in low-end configurations (that is, no disk
>>> > drives) often did a shocking thing: they ran with all interrupts
>>> masked.
>>> > All the time. The one non-maskable interrupt was RESET, after which
>>> you
>>> > weren't going to be resuming execution of your program anyway. Not
>>> from
>>> > the same instruction, at least. As I recall the TRS-80 Model I/III/4
>>> > didn't even have logic on the motherboard to decode the Z80's
>>> "interrupt
>>> > mode 2", which was vectored, I think. Even in the "high-end"
>>> > configurations of these tiny machines, you got a whopping ONE interrupt
>>> > to play with ("IM 1").
>>> >
>>> > Later, when the Hitachi 6309 smuggled similar block-transfer decadence
>>> > into its extensions to the Motorola 6809 (to the excitement of we
>>> > semi-consciously Unix-adjacent OS-9 users) they faced a starker
>>> problem,
>>> > because the 6809 didn't wall off interrupts in the same way the 8080
>>> and
>>> Z80 did. They therefore presented the programmer with the novelty of the
>>> > restartable instruction, and a new generation of programmers became
>>> > acquainted with the hard lessons time-sharing minicomputer people were
>>> > familiar with.
>>> >
>>> > My point in this digression is that, in my opinion, it's tough to hold
>>> > fast to the O(1) tenet you claim for C's core language and to another
>>> at
>>> > the same time: the language's identity as a "portable assembly
>>> > language". Unless every programmer has control over the compiler--and
>>> > they don't--you can't predict when the compiler will emit an O(n) block
>>> > transfer instruction. You'll just have to look at the disassembly.
>>> >
>>> > _Maybe_ you can retain purity by...never copying structs. I don't
>>> think
>>> > lint or any other tool ever checked for this. Your advocacy of this
>>> > tenet is the first time I've heard it presented.
>>> >
>>> > If you were to suggest to me that most of the time I've spent in my
>>> life
>>> > arguing with C advocates was with rotten exemplars of the species and
>>> > therefore was time wasted, I would concede the point.
>>> >
>>> > There's just so danged _many_ of them...
>>> >
>>> >> Your hobbyhorse awakened one of mine.
>>> >>
>>> >> malloc was in v7, before the C standard was written. The standard
>>> >> spinelessly buckled to allow malloc(0) to return 0, as some
>>> >> implementations gratuitously did.
>>> >
>>> > What was the alternative? There was no such thing as an exception, and
>>> > if a pointer was an int and an int was as wide as a machine address,
>>> > there'd be no way to indicate failure in-band, either.
>>> >
>>> > If the choice was that or another instance of atoi()'s wincingly awful
>>> > "does this 0 represent an error or successful conversion of a zero
>>> > input?" land mine, ANSI might have made the right choice.
>>> >
>>> >> I can't imagine that any program ever actually wanted the feature. Now
>>> >> it's one more undefined behavior that lurks in thousands of programs.
>>> >
>>> > Hoare admitted to only one billion-dollar mistake. No one dares count
>>> > how many to write in C's ledger. This was predicted, wasn't it?
>>> > Everyone loved C because it was fast: it was performant, because it
>>> > never met a runtime check it didn't eschew--recall again Kernighan
>>> > punking Pascal on this exact point--and it was quick for the programmer
>>> > to write because it never met a _compile_-time check it didn't eschew.
>>> > C was born as a language for wizards who never made mistakes.
>>> >
>>> > The problem is that, like James Madison's fictive government of angels,
>>> > such entities don't exist. The staff of the CSRC itself may have been
>>> > overwhelmingly populated with frank, modest, and self-deprecating
>>> > people--and I'll emphasize here that I'm aware of no accounts that this
>>> > is anything but true--but C unfortunately played a part in stoking a
>>> > culture of pretension among software developers. "C is a language in
>>> > which wizards program. I program in C. Therefore I'm a wizard." is
>>> how
>>> > the syllogism (spot the fallacy) went. I don't know who does more
>>> > damage--the people who believe their own BS, or the ones who know
>>> > they're scamming their colleagues.
>>> >
>>> >> There are two arguments for malloc(0). Most importantly, it caters for
>>> >> a limiting case for aggregates generated at runtime--an instance of
>>> >> Kernighan's Law, "Do nothing gracefully". It also provides a way to
>>> >> create a distinctive pointer to impart some meta-information, e.g.
>>> >> "TBD" or "end of subgroup", distinct from the null pointer, which
>>> >> merely denotes absence.
>>> >
>>> > I think I might be confused now. I've frequently seen arrays of
>>> structs
>>> > initialized from lists of literals ending in a series of "NULL"
>>> > structure members, in code that antedates or ignores C99's wonderful
>>> > feature of designated initializers for aggregate types.[3] How does
>>> > malloc(0) get this job done and what benefit does it bring?
>>> >
>>> > Last time I posted to TUHS I mentioned a proposal for explicit
>>> tail-call
>>> > elimination in C. I got the syntax wrong. The proposal was "return
>>> > goto;". The WG14 document number is N2920 and it's by Alex Gilding.
>>> > Good stuff.[4] I hope we see it in C2y.
>>> >
>>> > Predictably, I must confess that I didn't make much headway on
>>> > Schiller's 1975 "secure kernel" paper. Maybe next time.
>>> >
>>> > Regards,
>>> > Branden
>>> >
>>> > [1] https://web.cs.dal.ca/~jamie/UWO/C/the10fromHenryS.html
>>> >
>>> > I can easily imagine that the tenet held at _some_ point in
>>> > C's history. It's _got_ to be the reason that the language
>>> > relegates memset() and memcpy() to the standard library (or to the
>>> > programmer's own devise)! :-O
>>> >
>>> > [2] Kernighan & Ritchie, _The C Programming Language_, 2nd edition, p.
>>> 2
>>> >
>>> > Having thus admitted the camel's nose to the tent, K&R would have
>>> > done the world a major service by making memset(), or at least
>>> > bzero(), a language feature, the latter perhaps by having "= 0"
>>> > validly apply to an lvalue of non-primitive type. Okay,
>>> > _potentially_ a major service. You'd still need the self-regarding
>>> > wizard programmers to bother coding it, which they wouldn't in many
>>> > cases "because speed". Move fast, break stuff.
>>> >
>>> > C++ screwed this up too, and stubbornly stuck by it for a long time.
>>> >
>>> > https://cplusplus.github.io/CWG/issues/178.html
>>> >
>>> > [3] https://gcc.gnu.org/onlinedocs/gcc/Designated-Inits.html
>>> > [4] https://www.open-std.org/jtc1/sc22/wg14/www/docs/n2920.pdf
>>>
>>>
[-- Attachment #2: Type: text/html, Size: 17641 bytes --]
^ permalink raw reply [flat|nested] 61+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-28 22:07 ` Douglas McIlroy
@ 2024-09-28 23:05 ` Rob Pike
2024-09-28 23:30 ` Warner Losh
0 siblings, 1 reply; 61+ messages in thread
From: Rob Pike @ 2024-09-28 23:05 UTC (permalink / raw)
To: Douglas McIlroy; +Cc: TUHS main list
[-- Attachment #1: Type: text/plain, Size: 13456 bytes --]
I wrote a letter to the ANSI C (1989) committee.
Please allow malloc(0).
Please allow zero-length arrays.
I got two letters back, saying that malloc(0) is illegal because
zero-length arrays are illegal, and the other vice versa.
I fumed.
-rob
On Sun, Sep 29, 2024 at 8:08 AM Douglas McIlroy <
douglas.mcilroy@dartmouth.edu> wrote:
> I have to concede Branden's "gotcha". Struct copying is definitely not
> O(1).
>
> A real-life hazard of non-O(1) operations. Vic Vyssotsky put bzero (I
> forget what Vic called it) into Fortran II. Sometime later, he found that a
> percolation simulation was running annoyingly slowly. It took some study to
> discover that the real inner loop of the program was not the percolation,
> which touched only a small fraction of a 2D field. More time went into the
> innocuous bzero initializer. The fix was to ADD code to the inner loop to
> remember what entries had been touched, and initialize only those for the
> next round of the simulation.
>
> > for a while, every instruction has [sic] an exactly predictable and
> constant cycle
> > count. ...[Then] all of a sudden you had instructions with O(n) cycle
> counts.
>
> O(n) cycle counts were nothing new. In the 1950s we had the IBM 1620 with
> arbitrary-length arithmetic and the 709 with "convert" instructions whose
> cycle count went up to 256.
>
> >> spinelessly buckled to allow malloc(0) to return 0, as some
> >> implementations gratuitously did.
>
> > What was the alternative? There was no such thing as an exception, and
> > if a pointer was an int and an int was as wide as a machine address,
> > there'd be no way to indicate failure in-band, either.
>
> What makes you think allocating zero space should fail? If the size n of a
> set is determined at run time, why should one have to special-case its
> space allocation when n=0? Subsequent processing of the form for(i=0; i<n;
> i++) {...} will handle it gracefully with no special code. Malloc should do
> as it did in v7--return a non-null pointer different from any other active
> malloc pointer, as Bakul stated. If worse comes to worst[1] this can be
> done by padding up to the next feasible size. Regardless of how the pointer
> is created, any access via it would of course be out of bounds and hence
> wrong.
>
> > How does malloc(0) get this job done and what benefit does it bring?
>
> If I understand the "job" (about initializing structure members)
> correctly, malloc(0) has no bearing on it. The benefit lies elsewhere.
>
> Apropos of tail calls, Rob Pike had a nice name for an explicit tail call,
> "become". It's certainly reasonable, though, to make compilers recognize
> tail calls implicitly.
>
> [1] Worse didn't come to worst in the original malloc. It attached
> metadata to each block, so even blocks of size zero consumed some memory.
>
> Doug
>
> On Sat, Sep 28, 2024 at 1:59 PM Bakul Shah via TUHS <tuhs@tuhs.org> wrote:
>
>> Just responding to random things that I noticed:
>>
>> You don't need special syntax for tail-call. It should be done
>> transparently when a call is the last thing that gets executed. Special
>> syntax will merely allow confused people to use it in the wrong place and
>> get confused more.
>>
>> malloc(0) should return a unique ptr. So that "T* a = malloc(0); T* b =
>> malloc(0); a != (T*)0 && a != b". Without this, malloc(0) acts differently
>> from malloc(n) for n > 0.
>>
>> Note that except for arrays, function arguments & result are copied so
>> copying a struct makes perfect sense. Passing arrays by reference may have
>> been due to residual Fortran influence! [Just guessing] Also note: that one
>> exception has been the cause of many problems.
>>
>> In any case you have not argued convincingly about why dynamic memory
>> allocation should be in the language (runtime) :-) And adding that wouldn't
>> have fixed any of the existing problems with the language.
>>
>> Bakul
>>
>> > On Sep 28, 2024, at 9:58 AM, G. Branden Robinson <
>> g.branden.robinson@gmail.com> wrote:
>> >
>> > At 2024-09-28T09:34:14-0400, Douglas McIlroy wrote:
>> >>> C's refusal to specify dynamic memory allocation in the language
>> >>> runtime (as opposed to, eventually, the standard library)
>> >>
>> >> This complaint overlooks one tenet of C: every operation in what you
>> >> call "language runtime" takes O(1) time. Dynamic memory allocation
>> >> is not such an operation.
>> >
>> > A fair point. Let me argue about it anyway. ;-)
>> >
>> > I'd make three observations. First, K&R did _not_ tout this in their
>> > book presenting ANSI C. I went back and checked the prefaces,
>> > introduction, and the section presenting a simple malloc()/free()
>> > implementation. The tenet you claim for the language is not explicitly
>> > articulated and, if I squint really hard, I can only barely perceive
>> > (imagine?) it deeply between the lines in some of the prefatory material
>> > to which K&R mostly confine their efforts to promote the language. In
>> > my view, a "tenet" is something more overt: the sort of thing U.S.
>> > politicians try to get hung on the walls of every public school
>> > classroom, like Henry Spencer's Ten Commandments of C[1] (which itself
>> > does not mention this "core language has only O(1) features" principle).
>> >
>> > Second, in reviewing K&R I was reminded that structure copying is part
>> > of the language. ("There are no operations that manipulate an entire
>> > array or string, although structures may be copied as a unit."[2])
>> > Doesn't that break the tenet right there?
>> >
>> > Third, and following on from the foregoing, your point reminded me of my
>> > youth programming non-pipelined machines with no caches. You could set
>> > your watch by (just about) any instruction in the set--and often did,
>> > because we penurious microcomputer guys often lacked hardware real-time
>> > clocks, too. That is to say, for a while, every instruction has an
>> > exactly predictable and constant cycle count. (The _value_ of that
>> > constant might depend on the addressing mode, because that would have
>> > consequences on memory fetches, but the principle stood.) When the Z80
>> > extended the 8080's instruction set, they ate from Tree of Knowledge
>> > with block-copy instructions like LDIR and LDDR, and all of a sudden you
>> > had instructions with O(n) cycle counts. But as a rule, programmers
>> > seemed to welcome this instead of recognizing it as knowing sin, because
>> > you generally knew worst-case how many bytes you'd be copying and take
>> > that into account. (The worst worst case was a mere 64kB!)
>> >
>> > Further, Z80 home computers in low-end configurations (that is, no disk
>> > drives) often did a shocking thing: they ran with all interrupts masked.
>> > All the time. The one non-maskable interrupt was RESET, after which you
>> > weren't going to be resuming execution of your program anyway. Not from
>> > the same instruction, at least. As I recall the TRS-80 Model I/III/4
>> > didn't even have logic on the motherboard to decode the Z80's "interrupt
>> > mode 2", which was vectored, I think. Even in the "high-end"
>> > configurations of these tiny machines, you got a whopping ONE interrupt
>> > to play with ("IM 1").
>> >
>> > Later, when the Hitachi 6309 smuggled similar block-transfer decadence
>> > into its extensions to the Motorola 6809 (to the excitement of we
>> > semi-consciously Unix-adjacent OS-9 users) they faced a starker problem,
>> > because the 6809 didn't wall off interrupts in the same way the 8080 and
>> > Z80 did. They therefore presented the programmer with the novelty of the
>> > restartable instruction, and a new generation of programmers became
>> > acquainted with the hard lessons time-sharing minicomputer people were
>> > familiar with.
>> >
>> > My point in this digression is that, in my opinion, it's tough to hold
>> > fast to the O(1) tenet you claim for C's core language and to another at
>> > the same time: the language's identity as a "portable assembly
>> > language". Unless every programmer has control over the compiler--and
>> > they don't--you can't predict when the compiler will emit an O(n) block
>> > transfer instruction. You'll just have to look at the disassembly.
>> >
>> > _Maybe_ you can retain purity by...never copying structs. I don't think
>> > lint or any other tool ever checked for this. Your advocacy of this
>> > tenet is the first time I've heard it presented.
>> >
>> > If you were to suggest to me that most of the time I've spent in my life
>> > arguing with C advocates was with rotten exemplars of the species and
>> > therefore was time wasted, I would concede the point.
>> >
>> > There's just so danged _many_ of them...
>> >
>> >> Your hobbyhorse awakened one of mine.
>> >>
>> >> malloc was in v7, before the C standard was written. The standard
>> >> spinelessly buckled to allow malloc(0) to return 0, as some
>> >> implementations gratuitously did.
>> >
>> > What was the alternative? There was no such thing as an exception, and
>> > if a pointer was an int and an int was as wide as a machine address,
>> > there'd be no way to indicate failure in-band, either.
>> >
>> > If the choice was that or another instance of atoi()'s wincingly awful
>> > "does this 0 represent an error or successful conversion of a zero
>> > input?" land mine, ANSI might have made the right choice.
>> >
>> >> I can't imagine that any program ever actually wanted the feature. Now
>> >> it's one more undefined behavior that lurks in thousands of programs.
>> >
>> > Hoare admitted to only one billion-dollar mistake. No one dares count
>> > how many to write in C's ledger. This was predicted, wasn't it?
>> > Everyone loved C because it was fast: it was performant, because it
>> > never met a runtime check it didn't eschew--recall again Kernighan
>> > punking Pascal on this exact point--and it was quick for the programmer
>> > to write because it never met a _compile_-time check it didn't eschew.
>> > C was born as a language for wizards who never made mistakes.
>> >
>> > The problem is that, like James Madison's fictive government of angels,
>> > such entities don't exist. The staff of the CSRC itself may have been
>> > overwhelmingly populated with frank, modest, and self-deprecating
>> > people--and I'll emphasize here that I'm aware of no accounts that this
>> > is anything but true--but C unfortunately played a part in stoking a
>> > culture of pretension among software developers. "C is a language in
>> > which wizards program. I program in C. Therefore I'm a wizard." is how
>> > the syllogism (spot the fallacy) went. I don't know who does more
>> > damage--the people who believe their own BS, or the ones who know
>> > they're scamming their colleagues.
>> >
>> >> There are two arguments for malloc(0). Most importantly, it caters for
>> >> a limiting case for aggregates generated at runtime--an instance of
>> >> Kernighan's Law, "Do nothing gracefully". It also provides a way to
>> >> create a distinctive pointer to impart some meta-information, e.g.
>> >> "TBD" or "end of subgroup", distinct from the null pointer, which
>> >> merely denotes absence.
>> >
>> > I think I might be confused now. I've frequently seen arrays of structs
>> > initialized from lists of literals ending in a series of "NULL"
>> > structure members, in code that antedates or ignores C99's wonderful
>> > feature of designated initializers for aggregate types.[3] How does
>> > malloc(0) get this job done and what benefit does it bring?
>> >
>> > Last time I posted to TUHS I mentioned a proposal for explicit tail-call
>> > elimination in C. I got the syntax wrong. The proposal was "return
>> > goto;". The WG14 document number is N2920 and it's by Alex Gilding.
>> > Good stuff.[4] I hope we see it in C2y.
>> >
>> > Predictably, I must confess that I didn't make much headway on
>> > Schiller's 1975 "secure kernel" paper. Maybe next time.
>> >
>> > Regards,
>> > Branden
>> >
>> > [1] https://web.cs.dal.ca/~jamie/UWO/C/the10fromHenryS.html
>> >
>> > I can easily imagine that the tenet held at _some_ point in
>> > C's history. It's _got_ to be the reason that the language
>> > relegates memset() and memcpy() to the standard library (or to the
>> > programmer's own devise)! :-O
>> >
>> > [2] Kernighan & Ritchie, _The C Programming Language_, 2nd edition, p. 2
>> >
>> > Having thus admitted the camel's nose to the tent, K&R would have
>> > done the world a major service by making memset(), or at least
>> > bzero(), a language feature, the latter perhaps by having "= 0"
>> > validly apply to an lvalue of non-primitive type. Okay,
>> > _potentially_ a major service. You'd still need the self-regarding
>> > wizard programmers to bother coding it, which they wouldn't in many
>> > cases "because speed". Move fast, break stuff.
>> >
>> > C++ screwed this up too, and stubbornly stuck by it for a long time.
>> >
>> > https://cplusplus.github.io/CWG/issues/178.html
>> >
>> > [3] https://gcc.gnu.org/onlinedocs/gcc/Designated-Inits.html
>> > [4] https://www.open-std.org/jtc1/sc22/wg14/www/docs/n2920.pdf
>>
>>
[-- Attachment #2: Type: text/html, Size: 16865 bytes --]
^ permalink raw reply [flat|nested] 61+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-28 22:45 ` Luther Johnson
@ 2024-09-28 22:50 ` Luther Johnson
0 siblings, 0 replies; 61+ messages in thread
From: Luther Johnson @ 2024-09-28 22:50 UTC (permalink / raw)
To: tuhs
Or to your point, we have some of these problems no matter what, so
cost profile alone isn't a completely valid reason to reject some
other facility, when other things already in the language have costs
of a similar order.
On 09/28/2024 03:45 PM, Luther Johnson wrote:
> G. Branden,
>
> I get it. From your and Doug's responses, even O(n) baked-in costs can
> be a problem.
>
> On 09/28/2024 03:08 PM, Luther Johnson wrote:
>> "Classic C", K&R + void function return type, enumerations, and
>> structure passing/return, maybe a couple other things (some bug
>> fixes/stricter rules on use of members in structs and unions), has
>> this, I use a compiler from 1983 that lines up with an AT&T System V
>> document detailing updates to the language.
>>
>> I view these kinds of changes as incremental usability and
>> reliability fixes, well within the spirit and style, but just closing
>> loopholes or filling in gaps of things that ought to work together,
>> and could, without too much effort. But I agree, structures as
>> full-fledged citizens came late to K & R / Classic C.
>>
>> On 09/28/2024 11:46 AM, G. Branden Robinson wrote:
>>> Hi Luther,
>>>
>>> At 2024-09-28T10:47:44-0700, Luther Johnson wrote:
>>>> I don't know that structure copying breaks any complexity or bounds on
>>>> execution time rules. Many compilers may be different, but in the
>>>> generated code I've seen, when you pass in a structure to a function,
>>>> the receiving function copies it to the stack. In the portable C
>>>> compiler, when you return a structure as a result, it is first copied
>>>> to a static area, a pointer to that area is returned, then the caller
>>>> copies that out to wherever it's meant to go, either a variable that's
>>>> being assigned (which could be on the stack or elsewhere), or to a
>>>> place on the stack that was reserved for it because that result will
>>>> now be an argument to another function to be called. So there's some
>>>> copying, but that's proportional to the size of the structure, it's
>>>> linear, and there's no dynamic memory allocation going on.
>>> I have no problem with this presentation, but recall the principle--the
>>> tenet--that Doug was upholding:
>>>
>>>>> At 2024-09-28T09:34:14-0400, Douglas McIlroy wrote:
>>>>>> This complaint overlooks one tenet of C: every operation in what
>>>>>> you call "language runtime" takes O(1) time. Dynamic memory
>>>>>> allocation is not such an operation.
>>> Even without dynamic memory allocation, if you did something linear,
>>> something O(n), it was a lose and a violation of the tenet.
>>>
>>> I can easily see the appeal of a language whose every operation really
>>> is O(1). Once upon a time, a university course, or equivalent
>>> experience, in assembly language (on a CLEAN instruction set, not x86)
>>> is what taught you the virtues and drawbacks of thinking about and
>>> implementing things that way. But my view is that C hasn't been one of
>>> those languages for a very long time, since before its initial ANSI
>>> standardization at the latest.
>>>
>>> At 2024-09-28T10:52:16-0700, Luther Johnson wrote:
>>>> In the compilers I'm talking about, you pass a structure by passing a
>>>> pointer to it - but the receiving function knows the argument is a
>>>> structure, and not a pointer to a structure, so it knows it needs to
>>>> use the pointer to copy to its own local version.
>>> It's my understanding that the ability to work with structs as
>>> first-class citizens in function calls, as parameters _or_ return
>>> types,
>>> was something fairly late to stabilize in C compilers. Second-hand, I
>>> gather that pre-standard C as told by K&R explicitly did _not_
>>> countenance this. So a lot of early C code, including that in
>>> libraries, indirected nearly all struct access, even when read-only,
>>> through pointers.
>>>
>>> This is often a win, but not always. A few minutes ago I shot off my
>>> mouth to this list about how much better the standard library design
>>> could have been if the return of structs by value had been supported
>>> much earlier.
>>>
>>> Our industry has, it seems, been slow to appreciate the distinction
>>> between what C++ eventually came to explicitly call "copy" semantics
>>> and
>>> "move" semantics. Rust's paradigmatic dedication to the concept of
>>> data
>>> "ownership" at last seems to be popularizing the practice of thinking
>>> about these things. (For my part, I will forever hurl calumnies at
>>> computer architects who refer to copy operations as "MOV" or similar.
>>> If the operation doesn't destroy the source, it's not a move--I don't
>>> care how many thousands of pages of manuals Intel writes saying
>>> otherwise. Even the RISC-V specs screw this up, I assume in a
>>> deliberate but embarrassing attempt to win mindshare among x86
>>> programmers who cling to this myth as they do so many others.)
>>>
>>> For a moment I considered giving credit to a virtuous few '80s C
>>> programmers who recognized that there was indeed no need to copy a
>>> struct upon passing it to a function if you knew the callee wasn't
>>> going
>>> to modify that struct...but we had a way of saying this, "const", and
>>> library writers of that era were infamously indifferent to using
>>> "const"
>>> in their APIs where it would have done good. So, no, no credit.
>>>
>>> Here's a paragraph from a 1987 text I wish I'd read back then, or at
>>> any
>>> time before being exposed to C.
>>>
>>> "[Language] does not define how parameter passing is implemented. A
>>> program is erroneous if it depends on a specific implementation method.
>>> The two obvious implementations are by copy and by reference. With an
>>> implementation that copies parameters, an `out` or `in out` actual
>>> parameter will not be updated until (normal) return from the
>>> subprogram.
>>> Therefore if the subprogram propagates an exception, the actual
>>> parameter will be unchanged. This is clearly not the case when a
>>> reference implementation is used. The difficulty with this
>>> vagueness in
>>> the definition of [language] is that it is quite awkward to be sure
>>> that
>>> a program is independent of the implementation method. (You might
>>> wonder why the language does not define the implementation method. The
>>> reason is that the copy mechanism is very inefficient with large
>>> parameters, whereas the reference mechanism is prohibitively expensive
>>> on distributed systems.)"[1]
>>>
>>> I admire the frankness. It points the way forward to reasoned
>>> discussion of engineering tradeoffs, as opposed to programming language
>>> boosterism. (By contrast, the trashing of boosters and their rhetoric
>>> is an obvious service to humanity. See? I'm charitable!)
>>>
>>> I concealed the name of the programming language because people have a
>>> tendency to unfairly disregard and denigrate it in spite of (or because
>>> of?) its many excellent properties and suitability for robust and
>>> reliable systems, in contrast to slovenly prototypes that minimize
>>> launch costs and impose negative externalities on users (and on anyone
>>> unlucky enough to be stuck supporting them). But then again cowboy
>>> programmers and their managers likely don't read my drivel anyway.
>>> They're busy chasing AI money before the bubble bursts.
>>>
>>> Anyway--the language is Ada.
>>>
>>> Regards,
>>> Branden
>>>
>>> [1] Watt, Wichmann, Findlay. _Ada Language and Methodology_.
>>> Prentice-Hall, 1987, p. 395.
>>
>
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-28 22:08 ` Luther Johnson
@ 2024-09-28 22:45 ` Luther Johnson
2024-09-28 22:50 ` Luther Johnson
0 siblings, 1 reply; 61+ messages in thread
From: Luther Johnson @ 2024-09-28 22:45 UTC (permalink / raw)
To: tuhs
G. Branden,
I get it. From your and Doug's responses, even O(n) baked-in costs can
be a problem.
On 09/28/2024 03:08 PM, Luther Johnson wrote:
> "Classic C", K&R + void function return type, enumerations, and
> structure passing/return, maybe a couple other things (some bug
> fixes/stricter rules on use of members in structs and unions), has
> this; I use a compiler from 1983 that lines up with an AT&T System V
> document detailing updates to the language.
>
> I view these kinds of changes as incremental usability and reliability
> fixes, well within the spirit and style, but just closing loopholes or
> filling in gaps of things that ought to work together, and could,
> without too much effort. But I agree, structures as full-fledged
> citizens came late to K & R / Classic C.
>
> On 09/28/2024 11:46 AM, G. Branden Robinson wrote:
>> Hi Luther,
>>
>> At 2024-09-28T10:47:44-0700, Luther Johnson wrote:
>>> I don't know that structure copying breaks any complexity or bounds on
>>> execution time rules. Many compilers may be different, but in the
>>> generated code I've seen, when you pass in a structure to a function,
>>> the receiving function copies it to the stack. In the portable C
>>> compiler, when you return a structure as a result, it is first copied
>>> to a static area, a pointer to that area is returned, then the caller
>>> copies that out to wherever it's meant to go, either a variable that's
>>> being assigned (which could be on the stack or elsewhere), or to a
>>> place on the stack that was reserved for it because that result will
>>> now be an argument to another function to be called. So there's some
>>> copying, but that's proportional to the size of the structure, it's
>>> linear, and there's no dynamic memory allocation going on.
>> I have no problem with this presentation, but recall the principle--the
>> tenet--that Doug was upholding:
>>
>>>> At 2024-09-28T09:34:14-0400, Douglas McIlroy wrote:
>>>>> This complaint overlooks one tenet of C: every operation in what
>>>>> you call "language runtime" takes O(1) time. Dynamic memory
>>>>> allocation is not such an operation.
>> Even without dynamic memory allocation, if you did something linear,
>> something O(n), it was a lose and a violation of the tenet.
>>
>> I can easily see the appeal of a language whose every operation really
>> is O(1). Once upon a time, a university course, or equivalent
>> experience, in assembly language (on a CLEAN instruction set, not x86)
>> is what taught you the virtues and drawbacks of thinking about and
>> implementing things that way. But my view is that C hasn't been one of
>> those languages for a very long time, since before its initial ANSI
>> standardization at the latest.
>>
>> At 2024-09-28T10:52:16-0700, Luther Johnson wrote:
>>> In the compilers I'm talking about, you pass a structure by passing a
>>> pointer to it - but the receiving function knows the argument is a
>>> structure, and not a pointer to a structure, so it knows it needs to
>>> use the pointer to copy to its own local version.
>> It's my understanding that the ability to work with structs as
>> first-class citizens in function calls, as parameters _or_ return types,
>> was something fairly late to stabilize in C compilers. Second-hand, I
>> gather that pre-standard C as told by K&R explicitly did _not_
>> countenance this. So a lot of early C code, including that in
>> libraries, indirected nearly all struct access, even when read-only,
>> through pointers.
>>
>> This is often a win, but not always. A few minutes ago I shot off my
>> mouth to this list about how much better the standard library design
>> could have been if the return of structs by value had been supported
>> much earlier.
>>
>> Our industry has, it seems, been slow to appreciate the distinction
>> between what C++ eventually came to explicitly call "copy" semantics and
>> "move" semantics. Rust's paradigmatic dedication to the concept of data
>> "ownership" at last seems to be popularizing the practice of thinking
>> about these things. (For my part, I will forever hurl calumnies at
>> computer architects who refer to copy operations as "MOV" or similar.
>> If the operation doesn't destroy the source, it's not a move--I don't
>> care how many thousands of pages of manuals Intel writes saying
>> otherwise. Even the RISC-V specs screw this up, I assume in a
>> deliberate but embarrassing attempt to win mindshare among x86
>> programmers who cling to this myth as they do so many others.)
>>
>> For a moment I considered giving credit to a virtuous few '80s C
>> programmers who recognized that there was indeed no need to copy a
>> struct upon passing it to a function if you knew the callee wasn't going
>> to modify that struct...but we had a way of saying this, "const", and
>> library writers of that era were infamously indifferent to using "const"
>> in their APIs where it would have done good. So, no, no credit.
>>
>> Here's a paragraph from a 1987 text I wish I'd read back then, or at any
>> time before being exposed to C.
>>
>> "[Language] does not define how parameter passing is implemented. A
>> program is erroneous if it depends on a specific implementation method.
>> The two obvious implementations are by copy and by reference. With an
>> implementation that copies parameters, an `out` or `in out` actual
>> parameter will not be updated until (normal) return from the subprogram.
>> Therefore if the subprogram propagates an exception, the actual
>> parameter will be unchanged. This is clearly not the case when a
>> reference implementation is used. The difficulty with this vagueness in
>> the definition of [language] is that it is quite awkward to be sure that
>> a program is independent of the implementation method. (You might
>> wonder why the language does not define the implementation method. The
>> reason is that the copy mechanism is very inefficient with large
>> parameters, whereas the reference mechanism is prohibitively expensive
>> on distributed systems.)"[1]
>>
>> I admire the frankness. It points the way forward to reasoned
>> discussion of engineering tradeoffs, as opposed to programming language
>> boosterism. (By contrast, the trashing of boosters and their rhetoric
>> is an obvious service to humanity. See? I'm charitable!)
>>
>> I concealed the name of the programming language because people have a
>> tendency to unfairly disregard and denigrate it in spite of (or because
>> of?) its many excellent properties and suitability for robust and
>> reliable systems, in contrast to slovenly prototypes that minimize
>> launch costs and impose negative externalities on users (and on anyone
>> unlucky enough to be stuck supporting them). But then again cowboy
>> programmers and their managers likely don't read my drivel anyway.
>> They're busy chasing AI money before the bubble bursts.
>>
>> Anyway--the language is Ada.
>>
>> Regards,
>> Branden
>>
>> [1] Watt, Wichmann, Findlay. _Ada Language and Methodology_.
>> Prentice-Hall, 1987, p. 395.
>
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-28 18:46 ` G. Branden Robinson
@ 2024-09-28 22:08 ` Luther Johnson
2024-09-28 22:45 ` Luther Johnson
0 siblings, 1 reply; 61+ messages in thread
From: Luther Johnson @ 2024-09-28 22:08 UTC (permalink / raw)
To: tuhs
"Classic C", K&R + void function return type, enumerations, and
structure passing/return, maybe a couple other things (some bug
fixes/stricter rules on use of members in structs and unions), has this;
I use a compiler from 1983 that lines up with an AT&T System V document
detailing updates to the language.
I view these kinds of changes as incremental usability and reliability
fixes, well within the spirit and style, but just closing loopholes or
filling in gaps of things that ought to work together, and could,
without too much effort. But I agree, structures as full-fledged
citizens came late to K & R / Classic C.
On 09/28/2024 11:46 AM, G. Branden Robinson wrote:
> Hi Luther,
>
> At 2024-09-28T10:47:44-0700, Luther Johnson wrote:
>> I don't know that structure copying breaks any complexity or bounds on
>> execution time rules. Many compilers may be different, but in the
>> generated code I've seen, when you pass in a structure to a function,
>> the receiving function copies it to the stack. In the portable C
>> compiler, when you return a structure as a result, it is first copied
>> to a static area, a pointer to that area is returned, then the caller
>> copies that out to wherever it's meant to go, either a variable that's
>> being assigned (which could be on the stack or elsewhere), or to a
>> place on the stack that was reserved for it because that result will
>> now be an argument to another function to be called. So there's some
>> copying, but that's proportional to the size of the structure, it's
>> linear, and there's no dynamic memory allocation going on.
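[The pcc calling sequence Luther describes can be sketched in C; this is a hypothetical illustration with invented names, not pcc's actual output. The point is that both directions cost time proportional to sizeof(struct):]

```c
#include <string.h>

struct point { int x, y; };

/* What the programmer writes: return a struct by value. */
struct point make_point(int x, int y)
{
    struct point p = { x, y };
    return p;                        /* compiler supplies the copy machinery */
}

/* Roughly what pcc generated for it: the callee fills a static area
 * and returns its address; the caller then copies the result to its
 * destination.  Note the static area also makes such functions
 * non-reentrant. */
static struct point pcc_retarea;

struct point *make_point_pcc(int x, int y)
{
    pcc_retarea.x = x;
    pcc_retarea.y = y;
    return &pcc_retarea;             /* caller: memcpy(&dst, ..., sizeof dst) */
}
```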
> I have no problem with this presentation, but recall the principle--the
> tenet--that Doug was upholding:
>
>>> At 2024-09-28T09:34:14-0400, Douglas McIlroy wrote:
>>>> This complaint overlooks one tenet of C: every operation in what
>>>> you call "language runtime" takes O(1) time. Dynamic memory
>>>> allocation is not such an operation.
> Even without dynamic memory allocation, if you did something linear,
> something O(n), it was a lose and a violation of the tenet.
>
> I can easily see the appeal of a language whose every operation really
> is O(1). Once upon a time, a university course, or equivalent
> experience, in assembly language (on a CLEAN instruction set, not x86)
> is what taught you the virtues and drawbacks of thinking about and
> implementing things that way. But my view is that C hasn't been one of
> those languages for a very long time, since before its initial ANSI
> standardization at the latest.
>
> At 2024-09-28T10:52:16-0700, Luther Johnson wrote:
>> In the compilers I'm talking about, you pass a structure by passing a
>> pointer to it - but the receiving function knows the argument is a
>> structure, and not a pointer to a structure, so it knows it needs to
>> use the pointer to copy to its own local version.
> It's my understanding that the ability to work with structs as
> first-class citizens in function calls, as parameters _or_ return types,
> was something fairly late to stabilize in C compilers. Second-hand, I
> gather that pre-standard C as told by K&R explicitly did _not_
> countenance this. So a lot of early C code, including that in
> libraries, indirected nearly all struct access, even when read-only,
> through pointers.
>
> This is often a win, but not always. A few minutes ago I shot off my
> mouth to this list about how much better the standard library design
> could have been if the return of structs by value had been supported
> much earlier.
>
> Our industry has, it seems, been slow to appreciate the distinction
> between what C++ eventually came to explicitly call "copy" semantics and
> "move" semantics. Rust's paradigmatic dedication to the concept of data
> "ownership" at last seems to be popularizing the practice of thinking
> about these things. (For my part, I will forever hurl calumnies at
> computer architects who refer to copy operations as "MOV" or similar.
> If the operation doesn't destroy the source, it's not a move--I don't
> care how many thousands of pages of manuals Intel writes saying
> otherwise. Even the RISC-V specs screw this up, I assume in a
> deliberate but embarrassing attempt to win mindshare among x86
> programmers who cling to this myth as they do so many others.)
>
> For a moment I considered giving credit to a virtuous few '80s C
> programmers who recognized that there was indeed no need to copy a
> struct upon passing it to a function if you knew the callee wasn't going
> to modify that struct...but we had a way of saying this, "const", and
> library writers of that era were infamously indifferent to using "const"
> in their APIs where it would have done good. So, no, no credit.
>
> Here's a paragraph from a 1987 text I wish I'd read back then, or at any
> time before being exposed to C.
>
> "[Language] does not define how parameter passing is implemented. A
> program is erroneous if it depends on a specific implementation method.
> The two obvious implementations are by copy and by reference. With an
> implementation that copies parameters, an `out` or `in out` actual
> parameter will not be updated until (normal) return from the subprogram.
> Therefore if the subprogram propagates an exception, the actual
> parameter will be unchanged. This is clearly not the case when a
> reference implementation is used. The difficulty with this vagueness in
> the definition of [language] is that it is quite awkward to be sure that
> a program is independent of the implementation method. (You might
> wonder why the language does not define the implementation method. The
> reason is that the copy mechanism is very inefficient with large
> parameters, whereas the reference mechanism is prohibitively expensive
> on distributed systems.)"[1]
>
> I admire the frankness. It points the way forward to reasoned
> discussion of engineering tradeoffs, as opposed to programming language
> boosterism. (By contrast, the trashing of boosters and their rhetoric
> is an obvious service to humanity. See? I'm charitable!)
>
> I concealed the name of the programming language because people have a
> tendency to unfairly disregard and denigrate it in spite of (or because
> of?) its many excellent properties and suitability for robust and
> reliable systems, in contrast to slovenly prototypes that minimize
> launch costs and impose negative externalities on users (and on anyone
> unlucky enough to be stuck supporting them). But then again cowboy
> programmers and their managers likely don't read my drivel anyway.
> They're busy chasing AI money before the bubble bursts.
>
> Anyway--the language is Ada.
>
> Regards,
> Branden
>
> [1] Watt, Wichmann, Findlay. _Ada Language and Methodology_.
> Prentice-Hall, 1987, p. 395.
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-28 17:59 ` Bakul Shah via TUHS
@ 2024-09-28 22:07 ` Douglas McIlroy
2024-09-28 23:05 ` Rob Pike
0 siblings, 1 reply; 61+ messages in thread
From: Douglas McIlroy @ 2024-09-28 22:07 UTC (permalink / raw)
To: TUHS main list
[-- Attachment #1: Type: text/plain, Size: 12816 bytes --]
I have to concede Branden's "gotcha". Struct copying is definitely not O(1).
A real-life hazard of non-O(1) operations. Vic Vyssotsky put bzero (I
forget what Vic called it) into Fortran II. Sometime later, he found that a
percolation simulation was running annoyingly slowly. It took some study to
discover that the real inner loop of the program was not the percolation,
which touched only a small fraction of a 2D field. More time went into the
innocuous bzero initializer. The fix was to ADD code to the inner loop to
remember what entries had been touched, and initialize only those for the
next round of the simulation.
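[The shape of that fix, sketched in C with invented names: rather than zeroing the whole field between rounds, record which cells were touched and clear only those, so the reset cost tracks the work actually done rather than the size of the field.]

```c
#define ROWS 512
#define COLS 512

static char field[ROWS][COLS];
static int touched[ROWS * COLS];     /* touched cells, encoded r * COLS + c */
static int ntouched;

/* Touch a cell, remembering it for the cheap reset. */
void touch(int r, int c)
{
    if (!field[r][c]) {
        field[r][c] = 1;
        touched[ntouched++] = r * COLS + c;
    }
}

/* Clear only the touched cells: O(cells touched), not O(ROWS * COLS). */
void reset(void)
{
    while (ntouched > 0) {
        int k = touched[--ntouched];
        field[k / COLS][k % COLS] = 0;
    }
}
```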
> for a while, every instruction has [sic] an exactly predictable and
constant cycle
> count. ...[Then] all of a sudden you had instructions with O(n) cycle
counts.
O(n) cycle counts were nothing new. In the 1950s we had the IBM 1620 with
arbitrary-length arithmetic and the 709 with "convert" instructions whose
cycle count went up to 256.
>> spinelessly buckled to allow malloc(0) to return 0, as some
>> implementations gratuitously did.
> What was the alternative? There was no such thing as an exception, and
> if a pointer was an int and an int was as wide as a machine address,
> there'd be no way to indicate failure in-band, either.
What makes you think allocating zero space should fail? If the size n of a
set is determined at run time, why should one have to special-case its
space allocation when n=0? Subsequent processing of the form for(i=0; i<n;
i++) {...} will handle it gracefully with no special code. Malloc should do
as it did in v7--return a non-null pointer different from any other active
malloc pointer, as Bakul stated. If worse comes to worst[1] this can be
done by padding up to the next feasible size. Regardless of how the pointer
is created, any access via it would of course be out of bounds and hence
wrong.
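[The point in a few lines of C; make_squares is an invented name for illustration. With a v7-style malloc the n == 0 case needs no special code, since the pointer is never dereferenced; the early-return guard shows exactly where the standard's NULL-for-zero option becomes ambiguous.]

```c
#include <stdlib.h>

/* Build a table of n squares, where n is computed at run time and may
 * legitimately be zero.  No special case for n == 0: the loop body
 * simply never executes, and the pointer is never dereferenced. */
int *make_squares(size_t n)
{
    int *p = malloc(n * sizeof *p);  /* v7: unique non-null even for n == 0 */
    if (p == NULL && n > 0)          /* for n == 0, a NULL return would be */
        return NULL;                 /* indistinguishable from failure */
    for (size_t i = 0; i < n; i++)
        p[i] = (int)(i * i);
    return p;
}
```

Freeing the result is uniform too: free() accepts the zero-size block (or NULL) without ceremony.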
> How does malloc(0) get this job done and what benefit does it bring?
If I understand the "job" (about initializing structure members) correctly,
malloc(0) has no bearing on it. The benefit lies elsewhere.
Apropos of tail calls, Rob Pike had a nice name for an explicit tail call,
"become". It's certainly reasonable, though, to make compilers recognize
tail calls implicitly.
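[A standard illustration of an implicit tail call, not code from the thread: the self-call is the function's last action, so a compiler performing tail-call elimination can reuse the stack frame, turning the recursion into a loop.]

```c
/* Accumulator-passing factorial.  The self-call is in tail position, so
 * gcc/clang at -O2 typically compile it to a loop.  Under the N2920
 * proposal the call could be spelled "return goto fact_acc(...)" to
 * require, rather than hope for, frame reuse. */
static unsigned long fact_acc(unsigned long n, unsigned long acc)
{
    return n <= 1 ? acc : fact_acc(n - 1, acc * n);
}

unsigned long fact(unsigned long n)
{
    return fact_acc(n, 1UL);         /* itself a tail call */
}
```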
[1] Worse didn't come to worst in the original malloc. It attached metadata
to each block, so even blocks of size zero consumed some memory.
Doug
On Sat, Sep 28, 2024 at 1:59 PM Bakul Shah via TUHS <tuhs@tuhs.org> wrote:
> Just responding to random things that I noticed:
>
> You don't need special syntax for tail-call. It should be done
> transparently when a call is the last thing that gets executed. Special
> syntax will merely allow confused people to use it in the wrong place and
> get confused more.
>
> malloc(0) should return a unique ptr. So that "T* a = malloc(0); T* b =
> malloc(0); a != (T*)0 && a != b". Without this, malloc(0) acts differently
> from malloc(n) for n > 0.
>
> Note that except for arrays, function arguments & result are copied so
> copying a struct makes perfect sense. Passing arrays by reference may have
> been due to residual Fortran influence! [Just guessing] Also note: that one
> exception has been the cause of many problems.
>
> In any case you have not argued convincingly about why dynamic memory
> allocation should be in the language (runtime) :-) And adding that wouldn't
> have fixed any of the existing problems with the language.
>
> Bakul
>
> > On Sep 28, 2024, at 9:58 AM, G. Branden Robinson <
> g.branden.robinson@gmail.com> wrote:
> >
> > At 2024-09-28T09:34:14-0400, Douglas McIlroy wrote:
> >>> C's refusal to specify dynamic memory allocation in the language
> >>> runtime (as opposed to, eventually, the standard library)
> >>
> >> This complaint overlooks one tenet of C: every operation in what you
> >> call "language runtime" takes O(1) time. Dynamic memory allocation
> >> is not such an operation.
> >
> > A fair point. Let me argue about it anyway. ;-)
> >
> > I'd make three observations. First, K&R did _not_ tout this in their
> > book presenting ANSI C. I went back and checked the prefaces,
> > introduction, and the section presenting a simple malloc()/free()
> > implementation. The tenet you claim for the language is not explicitly
> > articulated and, if I squint really hard, I can only barely perceive
> > (imagine?) it deeply between the lines in some of the prefatory material
> > to which K&R mostly confine their efforts to promote the language. In
> > my view, a "tenet" is something more overt: the sort of thing U.S.
> > politicians try to get hung on the walls of every public school
> > classroom, like Henry Spencer's Ten Commandments of C[1] (which itself
> > does not mention this "core language has only O(1) features" principle).
> >
> > Second, in reviewing K&R I was reminded that structure copying is part
> > of the language. ("There are no operations that manipulate an entire
> > array or string, although structures may be copied as a unit."[2])
> > Doesn't that break the tenet right there?
> >
> > Third, and following on from the foregoing, your point reminded me of my
> > youth programming non-pipelined machines with no caches. You could set
> > your watch by (just about) any instruction in the set--and often did,
> > because we penurious microcomputer guys often lacked hardware real-time
> > clocks, too. That is to say, for a while, every instruction has an
> > exactly predictable and constant cycle count. (The _value_ of that
> > constant might depend on the addressing mode, because that would have
> > consequences on memory fetches, but the principle stood.) When the Z80
> > extended the 8080's instruction set, they ate from Tree of Knowledge
> > with block-copy instructions like LDIR and LDDR, and all of a sudden you
> > had instructions with O(n) cycle counts. But as a rule, programmers
> > seemed to welcome this instead of recognizing it as knowing sin, because
> > you generally knew worst-case how many bytes you'd be copying and take
> > that into account. (The worst worst case was a mere 64kB!)
> >
> > Further, Z80 home computers in low-end configurations (that is, no disk
> > drives) often did a shocking thing: they ran with all interrupts masked.
> > All the time. The one non-maskable interrupt was RESET, after which you
> > weren't going to be resuming execution of your program anyway. Not from
> > the same instruction, at least. As I recall the TRS-80 Model I/III/4
> > didn't even have logic on the motherboard to decode the Z80's "interrupt
> > mode 2", which was vectored, I think. Even in the "high-end"
> > configurations of these tiny machines, you got a whopping ONE interrupt
> > to play with ("IM 1").
> >
> > Later, when the Hitachi 6309 smuggled similar block-transfer decadence
> > into its extensions to the Motorola 6809 (to the excitement of we
> > semi-consciously Unix-adjacent OS-9 users) they faced a starker problem,
> > because the 6809 didn't wall off interrupts in the same way the 8080 and
> > Z80 did. They therefore presented the programmer with the novelty of the
> > restartable instruction, and a new generation of programmers became
> > acquainted with the hard lessons time-sharing minicomputer people were
> > familiar with.
> >
> > My point in this digression is that, in my opinion, it's tough to hold
> > fast to the O(1) tenet you claim for C's core language and to another at
> > the same time: the language's identity as a "portable assembly
> > language". Unless every programmer has control over the compiler--and
> > they don't--you can't predict when the compiler will emit an O(n) block
> > transfer instruction. You'll just have to look at the disassembly.
> >
> > _Maybe_ you can retain purity by...never copying structs. I don't think
> > lint or any other tool ever checked for this. Your advocacy of this
> > tenet is the first time I've heard it presented.
> >
> > If you were to suggest to me that most of the time I've spent in my life
> > arguing with C advocates was with rotten exemplars of the species and
> > therefore was time wasted, I would concede the point.
> >
> > There's just so danged _many_ of them...
> >
> >> Your hobbyhorse awakened one of mine.
> >>
> >> malloc was in v7, before the C standard was written. The standard
> >> spinelessly buckled to allow malloc(0) to return 0, as some
> >> implementations gratuitously did.
> >
> > What was the alternative? There was no such thing as an exception, and
> > if a pointer was an int and an int was as wide as a machine address,
> > there'd be no way to indicate failure in-band, either.
> >
> > If the choice was that or another instance of atoi()'s wincingly awful
> > "does this 0 represent an error or successful conversion of a zero
> > input?" land mine, ANSI might have made the right choice.
> >
> >> I can't imagine that any program ever actually wanted the feature. Now
> >> it's one more undefined behavior that lurks in thousands of programs.
> >
> > Hoare admitted to only one billion-dollar mistake. No one dares count
> > how many to write in C's ledger. This was predicted, wasn't it?
> > Everyone loved C because it was fast: it was performant, because it
> > never met a runtime check it didn't eschew--recall again Kernighan
> > punking Pascal on this exact point--and it was quick for the programmer
> > to write because it never met a _compile_-time check it didn't eschew.
> > C was born as a language for wizards who never made mistakes.
> >
> > The problem is that, like James Madison's fictive government of angels,
> > such entities don't exist. The staff of the CSRC itself may have been
> > overwhelmingly populated with frank, modest, and self-deprecating
> > people--and I'll emphasize here that I'm aware of no accounts that this
> > is anything but true--but C unfortunately played a part in stoking a
> > culture of pretension among software developers. "C is a language in
> > which wizards program. I program in C. Therefore I'm a wizard." is how
> > the syllogism (spot the fallacy) went. I don't know who does more
> > damage--the people who believe their own BS, or the ones who know
> > they're scamming their colleagues.
> >
> >> There are two arguments for malloc(0), Most importantly, it caters for
> >> a limiting case for aggregates generated at runtime--an instance of
> >> Kernighan's Law, "Do nothing gracefully". It also provides a way to
> >> create a distinctive pointer to impart some meta-information, e.g.
> >> "TBD" or "end of subgroup", distinct from the null pointer, which
> >> merely denotes absence.
> >
> > I think I might be confused now. I've frequently seen arrays of structs
> > initialized from lists of literals ending in a series of "NULL"
> > structure members, in code that antedates or ignores C99's wonderful
> > feature of designated initializers for aggregate types.[3] How does
> > malloc(0) get this job done and what benefit does it bring?
> >
> > Last time I posted to TUHS I mentioned a proposal for explicit tail-call
> > elimination in C. I got the syntax wrong. The proposal was "return
> > goto;". The WG14 document number is N2920 and it's by Alex Gilding.
> > Good stuff.[4] I hope we see it in C2y.
> >
> > Predictably, I must confess that I didn't make much headway on
> > Schiller's 1975 "secure kernel" paper. Maybe next time.
> >
> > Regards,
> > Branden
> >
> > [1] https://web.cs.dal.ca/~jamie/UWO/C/the10fromHenryS.html
> >
> > I can easily imagine that the tenet held at _some_ point in
> > C's history. It's _got_ to be the reason that the language
> > relegates memset() and memcpy() to the standard library (or to the
> > programmer's own devices)! :-O
> >
> > [2] Kernighan & Ritchie, _The C Programming Language_, 2nd edition, p. 2
> >
> > Having thus admitted the camel's nose to the tent, K&R would have
> > done the world a major service by making memset(), or at least
> > bzero(), a language feature, the latter perhaps by having "= 0"
> > validly apply to an lvalue of non-primitive type. Okay,
> > _potentially_ a major service. You'd still need the self-regarding
> > wizard programmers to bother coding it, which they wouldn't in many
> > cases "because speed". Move fast, break stuff.
> >
> > C++ screwed this up too, and stubbornly stuck by it for a long time.
> >
> > https://cplusplus.github.io/CWG/issues/178.html
> >
> > [3] https://gcc.gnu.org/onlinedocs/gcc/Designated-Inits.html
> > [4] https://www.open-std.org/jtc1/sc22/wg14/www/docs/n2920.pdf
>
>
^ permalink raw reply [flat|nested] 61+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-28 17:52 ` Luther Johnson
@ 2024-09-28 18:46 ` G. Branden Robinson
2024-09-28 22:08 ` Luther Johnson
0 siblings, 1 reply; 61+ messages in thread
From: G. Branden Robinson @ 2024-09-28 18:46 UTC (permalink / raw)
To: tuhs
Hi Luther,
At 2024-09-28T10:47:44-0700, Luther Johnson wrote:
> I don't know that structure copying breaks any complexity or bounds on
> execution time rules. Many compilers may be different, but in the
> generated code I've seen, when you pass in a structure to a function,
> the receiving function copies it to the stack. In the portable C
> compiler, when you return a structure as a result, it is first copied
> to a static area, a pointer to that area is returned, then the caller
> copies that out to wherever it's meant to go, either a variable that's
> being assigned (which could be on the stack or elsewhere), or to a
> place on the stack that was reserved for it because that result will
> now be an argument to another function to be called. So there's some
> copying, but that's proportional to the size of the structure, it's
> linear, and there's no dynamic memory allocation going on.
I have no problem with this presentation, but recall the principle--the
tenet--that Doug was upholding:
> > At 2024-09-28T09:34:14-0400, Douglas McIlroy wrote:
> > > This complaint overlooks one tenet of C: every operation in what
> > > you call "language runtime" takes O(1) time. Dynamic memory
> > > allocation is not such an operation.
Even without dynamic memory allocation, if you did something linear,
something O(n), it was a lose and a violation of the tenet.
I can easily see the appeal of a language whose every operation really
is O(1). Once upon a time, a university course, or equivalent
experience, in assembly language (on a CLEAN instruction set, not x86)
is what taught you the virtues and drawbacks of thinking about and
implementing things that way. But my view is that C hasn't been one of
those languages for a very long time, since before its initial ANSI
standardization at the latest.
At 2024-09-28T10:52:16-0700, Luther Johnson wrote:
> In the compilers I'm talking about, you pass a structure by passing a
> pointer to it - but the receiving function knows the argument is a
> structure, and not a pointer to a structure, so it knows it needs to
> use the pointer to copy to its own local version.
It's my understanding that the ability to work with structs as
first-class citizens in function calls, as parameters _or_ return types,
was something fairly late to stabilize in C compilers. Second-hand, I
gather that pre-standard C as told by K&R explicitly did _not_
countenance this. So a lot of early C code, including that in
libraries, indirected nearly all struct access, even when read-only,
through pointers.
This is often a win, but not always. A few minutes ago I shot off my
mouth to this list about how much better the standard library design
could have been if the return of structs by value had been supported
much earlier.
Our industry has, it seems, been slow to appreciate the distinction
between what C++ eventually came to explicitly call "copy" semantics and
"move" semantics. Rust's paradigmatic dedication to the concept of data
"ownership" at last seems to be popularizing the practice of thinking
about these things. (For my part, I will forever hurl calumnies at
computer architects who refer to copy operations as "MOV" or similar.
If the operation doesn't destroy the source, it's not a move--I don't
care how many thousands of pages of manuals Intel writes saying
otherwise. Even the RISC-V specs screw this up, I assume in a
deliberate but embarrassing attempt to win mindshare among x86
programmers who cling to this myth as they do so many others.)
For a moment I considered giving credit to a virtuous few '80s C
programmers who recognized that there was indeed no need to copy a
struct upon passing it to a function if you knew the callee wasn't going
to modify that struct...but we had a way of saying this, "const", and
library writers of that era were infamously indifferent to using "const"
in their APIs where it would have done good. So, no, no credit.
Here's a paragraph from a 1987 text I wish I'd read back then, or at any
time before being exposed to C.
"[Language] does not define how parameter passing is implemented. A
program is erroneous if it depends on a specific implementation method.
The two obvious implementations are by copy and by reference. With an
implementation that copies parameters, an `out` or `in out` actual
parameter will not be updated until (normal) return from the subprogram.
Therefore if the subprogram propagates an exception, the actual
parameter will be unchanged. This is clearly not the case when a
reference implementation is used. The difficulty with this vagueness in
the definition of [language] is that it is quite awkward to be sure that
a program is independent of the implementation method. (You might
wonder why the language does not define the implementation method. The
reason is that the copy mechanism is very inefficient with large
parameters, whereas the reference mechanism is prohibitively expensive
on distributed systems.)"[1]
I admire the frankness. It points the way forward to reasoned
discussion of engineering tradeoffs, as opposed to programming language
boosterism. (By contrast, the trashing of boosters and their rhetoric
is an obvious service to humanity. See? I'm charitable!)
I concealed the name of the programming language because people have a
tendency to unfairly disregard and denigrate it in spite of (or because
of?) its many excellent properties and suitability for robust and
reliable systems, in contrast to slovenly prototypes that minimize
launch costs and impose negative externalities on users (and on anyone
unlucky enough to be stuck supporting them). But then again cowboy
programmers and their managers likely don't read my drivel anyway.
They're busy chasing AI money before the bubble bursts.
Anyway--the language is Ada.
Regards,
Branden
[1] Watt, Wichmann, Findlay. _Ada Language and Methodology_.
Prentice-Hall, 1987, p. 395.
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-28 16:58 ` G. Branden Robinson
` (2 preceding siblings ...)
2024-09-28 18:01 ` G. Branden Robinson
@ 2024-09-28 18:05 ` Larry McVoy
2024-09-30 15:49 ` Paul Winalski
3 siblings, 1 reply; 61+ messages in thread
From: Larry McVoy @ 2024-09-28 18:05 UTC (permalink / raw)
To: G. Branden Robinson; +Cc: TUHS main list
On Sat, Sep 28, 2024 at 11:58:12AM -0500, G. Branden Robinson wrote:
> The problem is that, like James Madison's fictive government of angels,
> such entities don't exist. The staff of the CSRC itself may have been
> overwhelmingly populated with frank, modest, and self-deprecating
> people--and I'll emphasize here that I'm aware of no accounts that this
> is anything but true--but C unfortunately played a part in stoking a
> culture of pretension among software developers. "C is a language in
> which wizards program. I program in C. Therefore I'm a wizard." is how
> the syllogism (spot the fallacy) went. I don't know who does more
> damage--the people who believe their own BS, or the ones who know
> they're scamming their colleagues.
I have a somewhat different view. I have a son who is learning to program
and he asked me about C. I said "C is like driving a sports car on a
twisty mountain road that has cliffs and no guard rails. If you want to
check your phone while you are driving, it's not for you. It requires
your full, focussed attention. So that sounds bad, right? Well, if
you are someone who enjoys driving a sports car, and are good at it,
perhaps C is for you."
So I guess I fit the description of thinking I'm a wizard, sort of. I'm
good at C, there is plenty of my C open sourced, you can go read it and
decide for yourself. I enjoy C. But it's not for everyone, in fact,
most programmers these days would be better off in Rust or something
that has guardrails.
I get your points, Branden, but I'd prefer that C sticks around for
the people who enjoy it and are good at it. A small crowd, for sure.
--lm
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-28 16:58 ` G. Branden Robinson
2024-09-28 17:47 ` Luther Johnson
2024-09-28 17:59 ` Bakul Shah via TUHS
@ 2024-09-28 18:01 ` G. Branden Robinson
2024-10-01 13:13 ` arnold
2024-09-28 18:05 ` Larry McVoy
3 siblings, 1 reply; 61+ messages in thread
From: G. Branden Robinson @ 2024-09-28 18:01 UTC (permalink / raw)
To: TUHS main list
[self-follow-up]
At 2024-09-28T11:58:16-0500, G. Branden Robinson wrote:
> > malloc was in v7, before the C standard was written. The standard
> > spinelessly buckled to allow malloc(0) to return 0, as some
> > implementations gratuitously did.
>
> What was the alternative? There was no such thing as an exception, and
> if a pointer was an int and an int was as wide as a machine address,
> there'd be no way to indicate failure in-band, either.
While I'm making enemies of C advocates, let me just damn myself further
by suggesting an answer to my own question.
The obvious and correct thing to do was to have any standard library
function that could possibly fail return a structure type instead of a
scalar. Such a type would have two components: the value of interest
when valid, and an error indicator.[1]
As John Belushi would have said at the time such design decisions were
being made, "but nooooooooo". Returning a struct was an obviously
HORRIBLE idea. My god, you might be stacking two ints instead of one.
That doubles the badness! Oh, how we yearn for the days of the PDP-7,
when resources were so scarce that a system call didn't return
_anything_. If it failed, the carry flag was set. "One bit of return
value ought to be enough for anyone," as I hope Ken Thompson never said.
Expounders of Doug's tenet would, or should, have acknowledged that by
going to the library _at all_, they were giving up any O(1) guarantee,
and likely starting something O(n) or worse in time and/or space. So
what's so awful about sticking on a piece of O(1) overhead? In the
analysis of algorithms class lecture that the wizards slept through, it
was pointed out that only the highest-order term is retained. Well, the
extra int was easy to see in memory and throw a hissy fit about, and I
suppose a hissy fit is exactly what happened.
Much better to use a global library symbol. Call it "errno". That's
sure to never cause anyone any problems with reentrancy or concurrency
whatsoever. After all:
"...C offers only straightforward, single-thread control flow: tests,
loops, grouping, and subprograms, but not multiprogramming, parallel
operations, synchronization, or coroutines." (K&R 2e, p. 2)
It's grimly fascinating to me now to observe how many security
vulnerabilities and other disasters have arisen from the determination
of C's champions to apply it to all problems, and with especial fervor
to those that K&R explicitly acknowledged it was ill-suited for.
Real wizards, it seems, know only one spell, and it is tellingly
hammer-shaped.
Regards,
Branden
[1] Much later, we came to know this (in slightly cleaner form) as an
"option type". Rust advocates make a big, big deal of this. Only
a slightly bigger one than it deserves, but I perceive a replay of
C's cultural history in the passion of Rust's advocates. Or maybe
something less edifying than passion accounts for this:
https://fasterthanli.me/articles/the-rustconf-keynote-fiasco-explained
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-28 16:58 ` G. Branden Robinson
2024-09-28 17:47 ` Luther Johnson
@ 2024-09-28 17:59 ` Bakul Shah via TUHS
2024-09-28 22:07 ` Douglas McIlroy
2024-09-28 18:01 ` G. Branden Robinson
2024-09-28 18:05 ` Larry McVoy
3 siblings, 1 reply; 61+ messages in thread
From: Bakul Shah via TUHS @ 2024-09-28 17:59 UTC (permalink / raw)
To: G. Branden Robinson; +Cc: TUHS main list
Just responding to random things that I noticed:
You don't need special syntax for tail-call. It should be done transparently when a call is the last thing that gets executed. Special syntax will merely allow confused people to use it in the wrong place and get confused more.
malloc(0) should return a unique ptr. So that "T* a = malloc(0); T* b = malloc(0); a != (T*)0 && a != b". Without this, malloc(0) acts differently from malloc(n) for n > 0.
Note that except for arrays, function arguments & result are copied so copying a struct makes perfect sense. Passing arrays by reference may have been due to residual Fortran influence! [Just guessing] Also note: that one exception has been the cause of many problems.
In any case you have not argued convincingly about why dynamic memory allocation should be in the language (runtime) :-) And adding that wouldn't have fixed any of the existing problems with the language.
Bakul
> On Sep 28, 2024, at 9:58 AM, G. Branden Robinson <g.branden.robinson@gmail.com> wrote:
>
> At 2024-09-28T09:34:14-0400, Douglas McIlroy wrote:
>>> C's refusal to specify dynamic memory allocation in the language
>>> runtime (as opposed to, eventually, the standard library)
>>
>> This complaint overlooks one tenet of C: every operation in what you
>> call "language runtime" takes O(1) time. Dynamic memory allocation
>> is not such an operation.
>
> A fair point. Let me argue about it anyway. ;-)
>
> I'd make three observations. First, K&R did _not_ tout this in their
> book presenting ANSI C. I went back and checked the prefaces,
> introduction, and the section presenting a simple malloc()/free()
> implementation. The tenet you claim for the language is not explicitly
> articulated and, if I squint really hard, I can only barely perceive
> (imagine?) it deeply between the lines in some of the prefatory material
> to which K&R mostly confine their efforts to promote the language. In
> my view, a "tenet" is something more overt: the sort of thing U.S.
> politicians try to get hung on the walls of every public school
> classroom, like Henry Spencer's Ten Commandments of C[1] (which itself
> does not mention this "core language has only O(1) features" principle).
>
> Second, in reviewing K&R I was reminded that structure copying is part
> of the language. ("There are no operations that manipulate an entire
> array or string, although structures may be copied as a unit."[2])
> Doesn't that break the tenet right there?
>
> Third, and following on from the foregoing, your point reminded me of my
> youth programming non-pipelined machines with no caches. You could set
> your watch by (just about) any instruction in the set--and often did,
> because we penurious microcomputer guys often lacked hardware real-time
> clocks, too. That is to say, for a while, every instruction had an
> exactly predictable and constant cycle count. (The _value_ of that
> constant might depend on the addressing mode, because that would have
> consequences on memory fetches, but the principle stood.) When the Z80
> extended the 8080's instruction set, they ate from Tree of Knowledge
> with block-copy instructions like LDIR and LDDR, and all of a sudden you
> had instructions with O(n) cycle counts. But as a rule, programmers
> seemed to welcome this instead of recognizing it as knowing sin, because
> you generally knew worst-case how many bytes you'd be copying and take
> that into account. (The worst worst case was a mere 64kB!)
>
> Further, Z80 home computers in low-end configurations (that is, no disk
> drives) often did a shocking thing: they ran with all interrupts masked.
> All the time. The one non-maskable interrupt was RESET, after which you
> weren't going to be resuming execution of your program anyway. Not from
> the same instruction, at least. As I recall the TRS-80 Model I/III/4
> didn't even have logic on the motherboard to decode the Z80's "interrupt
> mode 2", which was vectored, I think. Even in the "high-end"
> configurations of these tiny machines, you got a whopping ONE interrupt
> to play with ("IM 1").
>
> Later, when the Hitachi 6309 smuggled similar block-transfer decadence
> into its extensions to the Motorola 6809 (to the excitement of we
> semi-consciously Unix-adjacent OS-9 users) they faced a starker problem,
> because the 6809 didn't wall off interrupts in the same way the 8080 and
> Z80 did. They therefore presented the programmer with the novelty of the
> restartable instruction, and a new generation of programmers became
> acquainted with the hard lessons time-sharing minicomputer people were
> familiar with.
>
> My point in this digression is that, in my opinion, it's tough to hold
> fast to the O(1) tenet you claim for C's core language and to another at
> the same time: the language's identity as a "portable assembly
> language". Unless every programmer has control over the compiler--and
> they don't--you can't predict when the compiler will emit an O(n) block
> transfer instruction. You'll just have to look at the disassembly.
>
> _Maybe_ you can retain purity by...never copying structs. I don't think
> lint or any other tool ever checked for this. Your advocacy of this
> tenet is the first time I've heard it presented.
>
> If you were to suggest to me that most of the time I've spent in my life
> arguing with C advocates was with rotten exemplars of the species and
> therefore was time wasted, I would concede the point.
>
> There's just so danged _many_ of them...
>
>> Your hobbyhorse awakened one of mine.
>>
>> malloc was in v7, before the C standard was written. The standard
>> spinelessly buckled to allow malloc(0) to return 0, as some
>> implementations gratuitously did.
>
> What was the alternative? There was no such thing as an exception, and
> if a pointer was an int and an int was as wide as a machine address,
> there'd be no way to indicate failure in-band, either.
>
> If the choice was that or another instance of atoi()'s wincingly awful
> "does this 0 represent an error or successful conversion of a zero
> input?" land mine, ANSI might have made the right choice.
>
>> I can't imagine that any program ever actually wanted the feature. Now
>> it's one more undefined behavior that lurks in thousands of programs.
>
> Hoare admitted to only one billion-dollar mistake. No one dares count
> how many to write in C's ledger. This was predicted, wasn't it?
> Everyone loved C because it was fast: it was performant, because it
> never met a runtime check it didn't eschew--recall again Kernighan
> punking Pascal on this exact point--and it was quick for the programmer
> to write because it never met a _compile_-time check it didn't eschew.
> C was born as a language for wizards who never made mistakes.
>
> The problem is that, like James Madison's fictive government of angels,
> such entities don't exist. The staff of the CSRC itself may have been
> overwhelmingly populated with frank, modest, and self-deprecating
> people--and I'll emphasize here that I'm aware of no accounts that this
> is anything but true--but C unfortunately played a part in stoking a
> culture of pretension among software developers. "C is a language in
> which wizards program. I program in C. Therefore I'm a wizard." is how
> the syllogism (spot the fallacy) went. I don't know who does more
> damage--the people who believe their own BS, or the ones who know
> they're scamming their colleagues.
>
>>> There are two arguments for malloc(0). Most importantly, it caters for
>> a limiting case for aggregates generated at runtime--an instance of
>> Kernighan's Law, "Do nothing gracefully". It also provides a way to
>> create a distinctive pointer to impart some meta-information, e.g.
>> "TBD" or "end of subgroup", distinct from the null pointer, which
>> merely denotes absence.
>
> I think I might be confused now. I've frequently seen arrays of structs
> initialized from lists of literals ending in a series of "NULL"
> structure members, in code that antedates or ignores C99's wonderful
> feature of designated initializers for aggregate types.[3] How does
> malloc(0) get this job done and what benefit does it bring?
>
> Last time I posted to TUHS I mentioned a proposal for explicit tail-call
> elimination in C. I got the syntax wrong. The proposal was "return
> goto;". The WG14 document number is N2920 and it's by Alex Gilding.
> Good stuff.[4] I hope we see it in C2y.
>
> Predictably, I must confess that I didn't make much headway on
> Schiller's 1975 "secure kernel" paper. Maybe next time.
>
> Regards,
> Branden
>
> [1] https://web.cs.dal.ca/~jamie/UWO/C/the10fromHenryS.html
>
> I can easily imagine that the tenet held at _some_ point in
> C's history. It's _got_ to be the reason that the language
> relegates memset() and memcpy() to the standard library (or to the
> programmer's own devices)! :-O
>
> [2] Kernighan & Ritchie, _The C Programming Language_, 2nd edition, p. 2
>
> Having thus admitted the camel's nose to the tent, K&R would have
> done the world a major service by making memset(), or at least
> bzero(), a language feature, the latter perhaps by having "= 0"
> validly apply to an lvalue of non-primitive type. Okay,
> _potentially_ a major service. You'd still need the self-regarding
> wizard programmers to bother coding it, which they wouldn't in many
> cases "because speed". Move fast, break stuff.
>
> C++ screwed this up too, and stubbornly stuck by it for a long time.
>
> https://cplusplus.github.io/CWG/issues/178.html
>
> [3] https://gcc.gnu.org/onlinedocs/gcc/Designated-Inits.html
> [4] https://www.open-std.org/jtc1/sc22/wg14/www/docs/n2920.pdf
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-28 17:47 ` Luther Johnson
@ 2024-09-28 17:52 ` Luther Johnson
2024-09-28 18:46 ` G. Branden Robinson
0 siblings, 1 reply; 61+ messages in thread
From: Luther Johnson @ 2024-09-28 17:52 UTC (permalink / raw)
To: tuhs
In the compilers I'm talking about, you pass a structure by passing a
pointer to it - but the receiving function knows the argument is a
structure, and not a pointer to a structure, so it knows it needs to use
the pointer to copy to its own local version.
On 09/28/2024 10:47 AM, Luther Johnson wrote:
> I don't know that structure copying breaks any complexity or bounds on
> execution time rules. Many compilers may be different, but in the
> generated code I've seen, when you pass in a structure to a function,
> the receiving function copies it to the stack. In the portable C
> compiler, when you return a structure as a result, it is first copied
> to a static area, a pointer to that area is returned, then the caller
> copies that out to wherever it's meant to go, either a variable that's
> being assigned (which could be on the stack or elsewhere), or to a place
> on the stack that was reserved for it because that result will now be an
> argument to another function to be called. So there's some copying, but
> that's proportional to the size of the structure, it's linear, and
> there's no dynamic memory allocation going on.
>
> On 09/28/2024 09:58 AM, G. Branden Robinson wrote:
>> At 2024-09-28T09:34:14-0400, Douglas McIlroy wrote:
>>>> C's refusal to specify dynamic memory allocation in the language
>>>> runtime (as opposed to, eventually, the standard library)
>>> This complaint overlooks one tenet of C: every operation in what you
>>> call "language runtime" takes O(1) time. Dynamic memory allocation
>>> is not such an operation.
>> A fair point. Let me argue about it anyway. ;-)
>>
>> I'd make three observations. First, K&R did _not_ tout this in their
>> book presenting ANSI C. I went back and checked the prefaces,
>> introduction, and the section presenting a simple malloc()/free()
>> implementation. The tenet you claim for the language is not explicitly
>> articulated and, if I squint really hard, I can only barely perceive
>> (imagine?) it deeply between the lines in some of the prefatory material
>> to which K&R mostly confine their efforts to promote the language. In
>> my view, a "tenet" is something more overt: the sort of thing U.S.
>> politicians try to get hung on the walls of every public school
>> classroom, like Henry Spencer's Ten Commandments of C[1] (which itself
>> does not mention this "core language has only O(1) features" principle).
>>
>> Second, in reviewing K&R I was reminded that structure copying is part
>> of the language. ("There are no operations that manipulate an entire
>> array or string, although structures may be copied as a unit."[2])
>> Doesn't that break the tenet right there?
>>
>> Third, and following on from the foregoing, your point reminded me of my
>> youth programming non-pipelined machines with no caches. You could set
>> your watch by (just about) any instruction in the set--and often did,
>> because we penurious microcomputer guys often lacked hardware real-time
>> clocks, too. That is to say, for a while, every instruction had an
>> exactly predictable and constant cycle count. (The _value_ of that
>> constant might depend on the addressing mode, because that would have
>> consequences on memory fetches, but the principle stood.) When the Z80
>> extended the 8080's instruction set, they ate from Tree of Knowledge
>> with block-copy instructions like LDIR and LDDR, and all of a sudden you
>> had instructions with O(n) cycle counts. But as a rule, programmers
>> seemed to welcome this instead of recognizing it as knowing sin, because
>> you generally knew worst-case how many bytes you'd be copying and take
>> that into account. (The worst worst case was a mere 64kB!)
>>
>> Further, Z80 home computers in low-end configurations (that is, no disk
>> drives) often did a shocking thing: they ran with all interrupts masked.
>> All the time. The one non-maskable interrupt was RESET, after which you
>> weren't going to be resuming execution of your program anyway. Not from
>> the same instruction, at least. As I recall the TRS-80 Model I/III/4
>> didn't even have logic on the motherboard to decode the Z80's "interrupt
>> mode 2", which was vectored, I think. Even in the "high-end"
>> configurations of these tiny machines, you got a whopping ONE interrupt
>> to play with ("IM 1").
>>
>> Later, when the Hitachi 6309 smuggled similar block-transfer decadence
>> into its extensions to the Motorola 6809 (to the excitement of we
>> semi-consciously Unix-adjacent OS-9 users) they faced a starker problem,
>> because the 6809 didn't wall off interrupts in the same way the 8080 and
>> Z80 did. They therefore presented the programmer with the novelty of the
>> restartable instruction, and a new generation of programmers became
>> acquainted with the hard lessons time-sharing minicomputer people were
>> familiar with.
>>
>> My point in this digression is that, in my opinion, it's tough to hold
>> fast to the O(1) tenet you claim for C's core language and to another at
>> the same time: the language's identity as a "portable assembly
>> language". Unless every programmer has control over the compiler--and
>> they don't--you can't predict when the compiler will emit an O(n) block
>> transfer instruction. You'll just have to look at the disassembly.
>>
>> _Maybe_ you can retain purity by...never copying structs. I don't think
>> lint or any other tool ever checked for this. Your advocacy of this
>> tenet is the first time I've heard it presented.
>>
>> If you were to suggest to me that most of the time I've spent in my life
>> arguing with C advocates was with rotten exemplars of the species and
>> therefore was time wasted, I would concede the point.
>>
>> There's just so danged _many_ of them...
>>
>>> Your hobbyhorse awakened one of mine.
>>>
>>> malloc was in v7, before the C standard was written. The standard
>>> spinelessly buckled to allow malloc(0) to return 0, as some
>>> implementations gratuitously did.
>> What was the alternative? There was no such thing as an exception, and
>> if a pointer was an int and an int was as wide as a machine address,
>> there'd be no way to indicate failure in-band, either.
>>
>> If the choice was that or another instance of atoi()'s wincingly awful
>> "does this 0 represent an error or successful conversion of a zero
>> input?" land mine, ANSI might have made the right choice.
>>
>>> I can't imagine that any program ever actually wanted the feature. Now
>>> it's one more undefined behavior that lurks in thousands of programs.
>> Hoare admitted to only one billion-dollar mistake. No one dares count
>> how many to write in C's ledger. This was predicted, wasn't it?
>> Everyone loved C because it was fast: it was performant, because it
>> never met a runtime check it didn't eschew--recall again Kernighan
>> punking Pascal on this exact point--and it was quick for the programmer
>> to write because it never met a _compile_-time check it didn't eschew.
>> C was born as a language for wizards who never made mistakes.
>>
>> The problem is that, like James Madison's fictive government of angels,
>> such entities don't exist. The staff of the CSRC itself may have been
>> overwhelmingly populated with frank, modest, and self-deprecating
>> people--and I'll emphasize here that I'm aware of no accounts that this
>> is anything but true--but C unfortunately played a part in stoking a
>> culture of pretension among software developers. "C is a language in
>> which wizards program. I program in C. Therefore I'm a wizard." is how
>> the syllogism (spot the fallacy) went. I don't know who does more
>> damage--the people who believe their own BS, or the ones who know
>> they're scamming their colleagues.
>>
>>> There are two arguments for malloc(0). Most importantly, it caters for
>>> a limiting case for aggregates generated at runtime--an instance of
>>> Kernighan's Law, "Do nothing gracefully". It also provides a way to
>>> create a distinctive pointer to impart some meta-information, e.g.
>>> "TBD" or "end of subgroup", distinct from the null pointer, which
>>> merely denotes absence.
>> I think I might be confused now. I've frequently seen arrays of structs
>> initialized from lists of literals ending in a series of "NULL"
>> structure members, in code that antedates or ignores C99's wonderful
>> feature of designated initializers for aggregate types.[3] How does
>> malloc(0) get this job done and what benefit does it bring?
>>
>> Last time I posted to TUHS I mentioned a proposal for explicit tail-call
>> elimination in C. I got the syntax wrong. The proposal was "return
>> goto;". The WG14 document number is N2920 and it's by Alex Gilding.
>> Good stuff.[4] I hope we see it in C2y.
>>
>> Predictably, I must confess that I didn't make much headway on
>> Schiller's 1975 "secure kernel" paper. Maybe next time.
>>
>> Regards,
>> Branden
>>
>> [1] https://web.cs.dal.ca/~jamie/UWO/C/the10fromHenryS.html
>>
>> I can easily imagine that the tenet held at _some_ point in
>> C's history. It's _got_ to be the reason that the language
>> relegates memset() and memcpy() to the standard library (or to the
>> programmer's own devices)! :-O
>>
>> [2] Kernighan & Ritchie, _The C Programming Language_, 2nd edition, p. 2
>>
>> Having thus admitted the camel's nose to the tent, K&R would have
>> done the world a major service by making memset(), or at least
>> bzero(), a language feature, the latter perhaps by having "= 0"
>> validly apply to an lvalue of non-primitive type. Okay,
>> _potentially_ a major service. You'd still need the self-regarding
>> wizard programmers to bother coding it, which they wouldn't in many
>> cases "because speed". Move fast, break stuff.
>>
>> C++ screwed this up too, and stubbornly stuck by it for a long
>> time.
>>
>> https://cplusplus.github.io/CWG/issues/178.html
>>
>> [3] https://gcc.gnu.org/onlinedocs/gcc/Designated-Inits.html
>> [4] https://www.open-std.org/jtc1/sc22/wg14/www/docs/n2920.pdf
>
>
^ permalink raw reply [flat|nested] 61+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-28 16:58 ` G. Branden Robinson
@ 2024-09-28 17:47 ` Luther Johnson
2024-09-28 17:52 ` Luther Johnson
2024-09-28 17:59 ` Bakul Shah via TUHS
` (2 subsequent siblings)
3 siblings, 1 reply; 61+ messages in thread
From: Luther Johnson @ 2024-09-28 17:47 UTC (permalink / raw)
To: tuhs
I don't know that structure copying breaks any complexity or bounds on
execution time rules. Many compilers may be different, but in the
generated code I've seen, when you pass in a structure to a function,
the receiving function copies it to the stack. In the portable C
compiler, when you return a structure as a result, it is first copied
to a static area and a pointer to that area is returned; the caller
then copies the result out to wherever it's meant to go, either a
variable that's being assigned (which could be on the stack or
elsewhere) or a place on the stack reserved for it because that result
will be an argument to another function call. So there's some copying,
but it's proportional to the size of the structure: it's linear, and
there's no dynamic memory allocation going on.
On 09/28/2024 09:58 AM, G. Branden Robinson wrote:
> At 2024-09-28T09:34:14-0400, Douglas McIlroy wrote:
>>> C's refusal to specify dynamic memory allocation in the language
>>> runtime (as opposed to, eventually, the standard library)
>> This complaint overlooks one tenet of C: every operation in what you
>> call "language runtime" takes O(1) time. Dynamic memory allocation
>> is not such an operation.
> A fair point. Let me argue about it anyway. ;-)
>
> I'd make three observations. First, K&R did _not_ tout this in their
> book presenting ANSI C. I went back and checked the prefaces,
> introduction, and the section presenting a simple malloc()/free()
> implementation. The tenet you claim for the language is not explicitly
> articulated and, if I squint really hard, I can only barely perceive
> (imagine?) it deeply between the lines in some of the prefatory material
> to which K&R mostly confine their efforts to promote the language. In
> my view, a "tenet" is something more overt: the sort of thing U.S.
> politicians try to get hung on the walls of every public school
> classroom, like Henry Spencer's Ten Commandments of C[1] (which itself
> does not mention this "core language has only O(1) features" principle).
>
> Second, in reviewing K&R I was reminded that structure copying is part
> of the language. ("There are no operations that manipulate an entire
> array or string, although structures may be copied as a unit."[2])
> Doesn't that break the tenet right there?
>
> Third, and following on from the foregoing, your point reminded me of my
> youth programming non-pipelined machines with no caches. You could set
> your watch by (just about) any instruction in the set--and often did,
> because we penurious microcomputer guys often lacked hardware real-time
> clocks, too. That is to say, for a while, every instruction had an
> exactly predictable and constant cycle count. (The _value_ of that
> constant might depend on the addressing mode, because that would have
> consequences on memory fetches, but the principle stood.) When the Z80
> extended the 8080's instruction set, they ate from the Tree of Knowledge
> with block-copy instructions like LDIR and LDDR, and all of a sudden you
> had instructions with O(n) cycle counts. But as a rule, programmers
> seemed to welcome this instead of recognizing it as knowing sin, because
> you generally knew worst-case how many bytes you'd be copying and take
> that into account. (The worst worst case was a mere 64kB!)
>
> Further, Z80 home computers in low-end configurations (that is, no disk
> drives) often did a shocking thing: they ran with all interrupts masked.
> All the time. The one non-maskable interrupt was RESET, after which you
> weren't going to be resuming execution of your program anyway. Not from
> the same instruction, at least. As I recall the TRS-80 Model I/III/4
> didn't even have logic on the motherboard to decode the Z80's "interrupt
> mode 2", which was vectored, I think. Even in the "high-end"
> configurations of these tiny machines, you got a whopping ONE interrupt
> to play with ("IM 1").
>
> Later, when the Hitachi 6309 smuggled similar block-transfer decadence
> into its extensions to the Motorola 6809 (to the excitement of us
> semi-consciously Unix-adjacent OS-9 users) they faced a starker problem,
> because the 6809 didn't wall off interrupts in the same way the 8080 and
> Z80 did. They therefore presented the programmer with the novelty of the
> restartable instruction, and a new generation of programmers became
> acquainted with the hard lessons time-sharing minicomputer people were
> familiar with.
>
> My point in this digression is that, in my opinion, it's tough to hold
> fast to the O(1) tenet you claim for C's core language and to another at
> the same time: the language's identity as a "portable assembly
> language". Unless every programmer has control over the compiler--and
> they don't--you can't predict when the compiler will emit an O(n) block
> transfer instruction. You'll just have to look at the disassembly.
>
> _Maybe_ you can retain purity by...never copying structs. I don't think
> lint or any other tool ever checked for this. Your advocacy of this
> tenet is the first time I've heard it presented.
>
> If you were to suggest to me that most of the time I've spent in my life
> arguing with C advocates was with rotten exemplars of the species and
> therefore was time wasted, I would concede the point.
>
> There's just so danged _many_ of them...
>
>> Your hobbyhorse awakened one of mine.
>>
>> malloc was in v7, before the C standard was written. The standard
>> spinelessly buckled to allow malloc(0) to return 0, as some
>> implementations gratuitously did.
> What was the alternative? There was no such thing as an exception, and
> if a pointer was an int and an int was as wide as a machine address,
> there'd be no way to indicate failure in-band, either.
>
> If the choice was that or another instance of atoi()'s wincingly awful
> "does this 0 represent an error or successful conversion of a zero
> input?" land mine, ANSI might have made the right choice.
>
>> I can't imagine that any program ever actually wanted the feature. Now
>> it's one more undefined behavior that lurks in thousands of programs.
> Hoare admitted to only one billion-dollar mistake. No one dares count
> how many to write in C's ledger. This was predicted, wasn't it?
> Everyone loved C because it was fast: it was performant, because it
> never met a runtime check it didn't eschew--recall again Kernighan
> punking Pascal on this exact point--and it was quick for the programmer
> to write because it never met a _compile_-time check it didn't eschew.
> C was born as a language for wizards who never made mistakes.
>
> The problem is that, like James Madison's fictive government of angels,
> such entities don't exist. The staff of the CSRC itself may have been
> overwhelmingly populated with frank, modest, and self-deprecating
> people--and I'll emphasize here that I'm aware of no accounts that this
> is anything but true--but C unfortunately played a part in stoking a
> culture of pretension among software developers. "C is a language in
> which wizards program. I program in C. Therefore I'm a wizard." is how
> the syllogism (spot the fallacy) went. I don't know who does more
> damage--the people who believe their own BS, or the ones who know
> they're scamming their colleagues.
>
>> There are two arguments for malloc(0). Most importantly, it caters for
>> a limiting case for aggregates generated at runtime--an instance of
>> Kernighan's Law, "Do nothing gracefully". It also provides a way to
>> create a distinctive pointer to impart some meta-information, e.g.
>> "TBD" or "end of subgroup", distinct from the null pointer, which
>> merely denotes absence.
> I think I might be confused now. I've frequently seen arrays of structs
> initialized from lists of literals ending in a series of "NULL"
> structure members, in code that antedates or ignores C99's wonderful
> feature of designated initializers for aggregate types.[3] How does
> malloc(0) get this job done and what benefit does it bring?
>
> Last time I posted to TUHS I mentioned a proposal for explicit tail-call
> elimination in C. I got the syntax wrong. The proposal was "return
> goto;". The WG14 document number is N2920 and it's by Alex Gilding.
> Good stuff.[4] I hope we see it in C2y.
>
> Predictably, I must confess that I didn't make much headway on
> Schiller's 1975 "secure kernel" paper. Maybe next time.
>
> Regards,
> Branden
>
> [1] https://web.cs.dal.ca/~jamie/UWO/C/the10fromHenryS.html
>
> I can easily imagine that the tenet held at _some_ point in
> C's history. It's _got_ to be the reason that the language
> relegates memset() and memcpy() to the standard library (or to the
> programmer's own devices)! :-O
>
> [2] Kernighan & Ritchie, _The C Programming Language_, 2nd edition, p. 2
>
> Having thus admitted the camel's nose to the tent, K&R would have
> done the world a major service by making memset(), or at least
> bzero(), a language feature, the latter perhaps by having "= 0"
> validly apply to an lvalue of non-primitive type. Okay,
> _potentially_ a major service. You'd still need the self-regarding
> wizard programmers to bother coding it, which they wouldn't in many
> cases "because speed". Move fast, break stuff.
>
> C++ screwed this up too, and stubbornly stuck by it for a long time.
>
> https://cplusplus.github.io/CWG/issues/178.html
>
> [3] https://gcc.gnu.org/onlinedocs/gcc/Designated-Inits.html
> [4] https://www.open-std.org/jtc1/sc22/wg14/www/docs/n2920.pdf
^ permalink raw reply [flat|nested] 61+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-28 13:34 Douglas McIlroy
@ 2024-09-28 16:58 ` G. Branden Robinson
2024-09-28 17:47 ` Luther Johnson
` (3 more replies)
0 siblings, 4 replies; 61+ messages in thread
From: G. Branden Robinson @ 2024-09-28 16:58 UTC (permalink / raw)
To: TUHS main list
[-- Attachment #1: Type: text/plain, Size: 8592 bytes --]
At 2024-09-28T09:34:14-0400, Douglas McIlroy wrote:
> > C's refusal to specify dynamic memory allocation in the language
> > runtime (as opposed to, eventually, the standard library)
>
> This complaint overlooks one tenet of C: every operation in what you
> call "language runtime" takes O(1) time. Dynamic memory allocation
> is not such an operation.
A fair point. Let me argue about it anyway. ;-)
I'd make three observations. First, K&R did _not_ tout this in their
book presenting ANSI C. I went back and checked the prefaces,
introduction, and the section presenting a simple malloc()/free()
implementation. The tenet you claim for the language is not explicitly
articulated and, if I squint really hard, I can only barely perceive
(imagine?) it deeply between the lines in some of the prefatory material
to which K&R mostly confine their efforts to promote the language. In
my view, a "tenet" is something more overt: the sort of thing U.S.
politicians try to get hung on the walls of every public school
classroom, like Henry Spencer's Ten Commandments of C[1] (which itself
does not mention this "core language has only O(1) features" principle).
Second, in reviewing K&R I was reminded that structure copying is part
of the language. ("There are no operations that manipulate an entire
array or string, although structures may be copied as a unit."[2])
Doesn't that break the tenet right there?
Third, and following on from the foregoing, your point reminded me of my
youth programming non-pipelined machines with no caches. You could set
your watch by (just about) any instruction in the set--and often did,
because we penurious microcomputer guys often lacked hardware real-time
clocks, too. That is to say, for a while, every instruction had an
exactly predictable and constant cycle count. (The _value_ of that
constant might depend on the addressing mode, because that would have
consequences on memory fetches, but the principle stood.) When the Z80
extended the 8080's instruction set, they ate from the Tree of Knowledge
with block-copy instructions like LDIR and LDDR, and all of a sudden you
had instructions with O(n) cycle counts. But as a rule, programmers
seemed to welcome this instead of recognizing it as knowing sin, because
you generally knew worst-case how many bytes you'd be copying and take
that into account. (The worst worst case was a mere 64kB!)
Further, Z80 home computers in low-end configurations (that is, no disk
drives) often did a shocking thing: they ran with all interrupts masked.
All the time. The one non-maskable interrupt was RESET, after which you
weren't going to be resuming execution of your program anyway. Not from
the same instruction, at least. As I recall the TRS-80 Model I/III/4
didn't even have logic on the motherboard to decode the Z80's "interrupt
mode 2", which was vectored, I think. Even in the "high-end"
configurations of these tiny machines, you got a whopping ONE interrupt
to play with ("IM 1").
Later, when the Hitachi 6309 smuggled similar block-transfer decadence
into its extensions to the Motorola 6809 (to the excitement of us
semi-consciously Unix-adjacent OS-9 users) they faced a starker problem,
because the 6809 didn't wall off interrupts in the same way the 8080 and
Z80 did. They therefore presented the programmer with the novelty of the
restartable instruction, and a new generation of programmers became
acquainted with the hard lessons time-sharing minicomputer people were
familiar with.
My point in this digression is that, in my opinion, it's tough to hold
fast to the O(1) tenet you claim for C's core language and to another at
the same time: the language's identity as a "portable assembly
language". Unless every programmer has control over the compiler--and
they don't--you can't predict when the compiler will emit an O(n) block
transfer instruction. You'll just have to look at the disassembly.
_Maybe_ you can retain purity by...never copying structs. I don't think
lint or any other tool ever checked for this. Your advocacy of this
tenet is the first time I've heard it presented.
If you were to suggest to me that most of the time I've spent in my life
arguing with C advocates was with rotten exemplars of the species and
therefore was time wasted, I would concede the point.
There's just so danged _many_ of them...
> Your hobbyhorse awakened one of mine.
>
> malloc was in v7, before the C standard was written. The standard
> spinelessly buckled to allow malloc(0) to return 0, as some
> implementations gratuitously did.
What was the alternative? There was no such thing as an exception, and
if a pointer was an int and an int was as wide as a machine address,
there'd be no way to indicate failure in-band, either.
If the choice was that or another instance of atoi()'s wincingly awful
"does this 0 represent an error or successful conversion of a zero
input?" land mine, ANSI might have made the right choice.
> I can't imagine that any program ever actually wanted the feature. Now
> it's one more undefined behavior that lurks in thousands of programs.
Hoare admitted to only one billion-dollar mistake. No one dares count
how many to write in C's ledger. This was predicted, wasn't it?
Everyone loved C because it was fast: it was performant, because it
never met a runtime check it didn't eschew--recall again Kernighan
punking Pascal on this exact point--and it was quick for the programmer
to write because it never met a _compile_-time check it didn't eschew.
C was born as a language for wizards who never made mistakes.
The problem is that, like James Madison's fictive government of angels,
such entities don't exist. The staff of the CSRC itself may have been
overwhelmingly populated with frank, modest, and self-deprecating
people--and I'll emphasize here that I'm aware of no accounts that this
is anything but true--but C unfortunately played a part in stoking a
culture of pretension among software developers. "C is a language in
which wizards program. I program in C. Therefore I'm a wizard." is how
the syllogism (spot the fallacy) went. I don't know who does more
damage--the people who believe their own BS, or the ones who know
they're scamming their colleagues.
> There are two arguments for malloc(0). Most importantly, it caters for
> a limiting case for aggregates generated at runtime--an instance of
> Kernighan's Law, "Do nothing gracefully". It also provides a way to
> create a distinctive pointer to impart some meta-information, e.g.
> "TBD" or "end of subgroup", distinct from the null pointer, which
> merely denotes absence.
I think I might be confused now. I've frequently seen arrays of structs
initialized from lists of literals ending in a series of "NULL"
structure members, in code that antedates or ignores C99's wonderful
feature of designated initializers for aggregate types.[3] How does
malloc(0) get this job done and what benefit does it bring?
Last time I posted to TUHS I mentioned a proposal for explicit tail-call
elimination in C. I got the syntax wrong. The proposal was "return
goto;". The WG14 document number is N2920 and it's by Alex Gilding.
Good stuff.[4] I hope we see it in C2y.
Predictably, I must confess that I didn't make much headway on
Schiller's 1975 "secure kernel" paper. Maybe next time.
Regards,
Branden
[1] https://web.cs.dal.ca/~jamie/UWO/C/the10fromHenryS.html
I can easily imagine that the tenet held at _some_ point in
C's history. It's _got_ to be the reason that the language
relegates memset() and memcpy() to the standard library (or to the
programmer's own devices)! :-O
[2] Kernighan & Ritchie, _The C Programming Language_, 2nd edition, p. 2
Having thus admitted the camel's nose to the tent, K&R would have
done the world a major service by making memset(), or at least
bzero(), a language feature, the latter perhaps by having "= 0"
validly apply to an lvalue of non-primitive type. Okay,
_potentially_ a major service. You'd still need the self-regarding
wizard programmers to bother coding it, which they wouldn't in many
cases "because speed". Move fast, break stuff.
C++ screwed this up too, and stubbornly stuck by it for a long time.
https://cplusplus.github.io/CWG/issues/178.html
[3] https://gcc.gnu.org/onlinedocs/gcc/Designated-Inits.html
[4] https://www.open-std.org/jtc1/sc22/wg14/www/docs/n2920.pdf
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
^ permalink raw reply [flat|nested] 61+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
@ 2024-09-28 13:34 Douglas McIlroy
2024-09-28 16:58 ` G. Branden Robinson
0 siblings, 1 reply; 61+ messages in thread
From: Douglas McIlroy @ 2024-09-28 13:34 UTC (permalink / raw)
To: TUHS main list
[-- Attachment #1: Type: text/plain, Size: 1002 bytes --]
> C's refusal to specify dynamic memory allocation in the language runtime
> (as opposed to, eventually, the standard library)
This complaint overlooks one tenet of C: every operation in what you
call "language runtime" takes O(1) time. Dynamic memory allocation
is not such an operation.
Your hobbyhorse awakened one of mine.
malloc was in v7, before the C standard was written. The standard
spinelessly buckled to allow malloc(0) to return 0, as some
implementations gratuitously did. I can't imagine that any program
ever actually wanted the feature. Now it's one more undefined
behavior that lurks in thousands of programs.
There are two arguments for malloc(0). Most importantly, it caters for
a limiting case for aggregates generated at runtime--an instance of
Kernighan's Law, "Do nothing gracefully". It also provides a way to
create a distinctive pointer to impart some meta-information, e.g.
"TBD" or "end of subgroup", distinct from the null pointer, which
merely denotes absence.
Doug
[-- Attachment #2: Type: text/html, Size: 1367 bytes --]
^ permalink raw reply [flat|nested] 61+ messages in thread
end of thread, other threads:[~2024-10-05 17:46 UTC | newest]
Thread overview: 61+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-09-29 16:56 [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum) Douglas McIlroy
2024-09-29 20:29 ` Rob Pike
2024-09-29 21:13 ` Rik Farrow
2024-09-29 22:21 ` Rich Salz
2024-09-29 23:56 ` Rob Pike
2024-09-30 0:36 ` Larry McVoy
2024-09-30 0:55 ` Larry McVoy
2024-09-30 1:09 ` Luther Johnson
2024-09-30 1:37 ` Luther Johnson
2024-09-30 3:52 ` ron minnich
2024-10-01 12:43 ` arnold
2024-09-30 19:12 ` Steffen Nurpmeso
2024-09-30 20:03 ` Rich Salz
2024-09-30 21:15 ` Steffen Nurpmeso
2024-09-30 22:14 ` Bakul Shah via TUHS
2024-10-01 1:42 ` Alexis
2024-09-30 20:14 ` Rik Farrow
2024-09-30 22:00 ` Steffen Nurpmeso
2024-10-01 12:53 ` Dan Cross
2024-09-29 21:24 ` Ralph Corderoy
-- strict thread matches above, loose matches on Subject: below --
2024-09-28 13:34 Douglas McIlroy
2024-09-28 16:58 ` G. Branden Robinson
2024-09-28 17:47 ` Luther Johnson
2024-09-28 17:52 ` Luther Johnson
2024-09-28 18:46 ` G. Branden Robinson
2024-09-28 22:08 ` Luther Johnson
2024-09-28 22:45 ` Luther Johnson
2024-09-28 22:50 ` Luther Johnson
2024-09-28 17:59 ` Bakul Shah via TUHS
2024-09-28 22:07 ` Douglas McIlroy
2024-09-28 23:05 ` Rob Pike
2024-09-28 23:30 ` Warner Losh
2024-09-29 10:06 ` Ralph Corderoy
2024-09-29 12:25 ` Warner Losh
2024-09-29 15:17 ` Ralph Corderoy
2024-09-30 12:15 ` Dan Cross
2024-09-28 18:01 ` G. Branden Robinson
2024-10-01 13:13 ` arnold
2024-10-01 13:32 ` Larry McVoy
2024-10-01 13:47 ` arnold
2024-10-01 14:01 ` Larry McVoy
2024-10-01 14:18 ` arnold
2024-10-01 14:25 ` Luther Johnson
2024-10-01 14:56 ` Dan Cross
2024-10-01 15:08 ` Stuff Received
2024-10-01 15:20 ` Larry McVoy
2024-10-01 15:38 ` Peter Weinberger (温博格) via TUHS
2024-10-01 15:50 ` ron minnich
2024-10-01 19:04 ` arnold
2024-10-01 16:49 ` Paul Winalski
2024-10-01 15:44 ` Bakul Shah via TUHS
2024-10-01 19:07 ` arnold
2024-10-01 20:34 ` Rik Farrow
2024-10-02 0:55 ` Steffen Nurpmeso
2024-10-02 5:49 ` arnold
2024-10-02 20:42 ` Dan Cross
2024-10-02 21:54 ` Marc Donner
2024-10-05 17:45 ` arnold
2024-10-01 16:40 ` Paul Winalski
2024-09-28 18:05 ` Larry McVoy
2024-09-30 15:49 ` Paul Winalski
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).