* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-28 16:58 ` G. Branden Robinson
@ 2024-09-28 17:47 ` Luther Johnson
2024-09-28 17:52 ` Luther Johnson
2024-09-28 17:59 ` Bakul Shah via TUHS
` (2 subsequent siblings)
3 siblings, 1 reply; 74+ messages in thread
From: Luther Johnson @ 2024-09-28 17:47 UTC (permalink / raw)
To: tuhs
I don't know that structure copying breaks any complexity or bounds on
execution time rules. Many compilers may be different, but in the
generated code I've seen, when you pass in a structure to a function,
the receiving function copies it to the stack. In the portable C
compiler, when you return a structure as a result, it is first copied
to a static area, a pointer to that area is returned, then the caller
copies that out to wherever it's meant to go, either a variable that's
being assigned (which could be on the stack or elsewhere), or to a place
on the stack that was reserved for it because that result will now be an
argument to another function to be called. So there's some copying, but
that's proportional to the size of the structure, it's linear, and
there's no dynamic memory allocation going on.
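For illustration, here is a minimal C sketch of the by-value passing and
return described above (hypothetical code, not PCC's actual output):

    #include <stdio.h>

    struct point { int x, y; };

    struct point move(struct point p)   /* callee works on a copy */
    {
        p.x += 1;               /* modifies only the local copy */
        p.y += 1;
        return p;               /* copied out again on return (via a
                                   static area in the portable C
                                   compiler, as described above) */
    }

    int main(void)
    {
        struct point a = { 1, 2 };
        struct point b = move(a);       /* a is left unchanged */
        printf("%d %d / %d %d\n", a.x, a.y, b.x, b.y);  /* 1 2 / 2 3 */
        return 0;
    }

Each copy is proportional to sizeof(struct point): linear, bounded, and
allocation-free.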
On 09/28/2024 09:58 AM, G. Branden Robinson wrote:
> At 2024-09-28T09:34:14-0400, Douglas McIlroy wrote:
>>> C's refusal to specify dynamic memory allocation in the language
>>> runtime (as opposed to, eventually, the standard library)
>> This complaint overlooks one tenet of C: every operation in what you
>> call "language runtime" takes O(1) time. Dynamic memory allocation
>> is not such an operation.
> A fair point. Let me argue about it anyway. ;-)
>
> I'd make three observations. First, K&R did _not_ tout this in their
> book presenting ANSI C. I went back and checked the prefaces,
> introduction, and the section presenting a simple malloc()/free()
> implementation. The tenet you claim for the language is not explicitly
> articulated and, if I squint really hard, I can only barely perceive
> (imagine?) it deeply between the lines in some of the prefatory material
> to which K&R mostly confine their efforts to promote the language. In
> my view, a "tenet" is something more overt: the sort of thing U.S.
> politicians try to get hung on the walls of every public school
> classroom, like Henry Spencer's Ten Commandments of C[1] (which itself
> does not mention this "core language has only O(1) features" principle).
>
> Second, in reviewing K&R I was reminded that structure copying is part
> of the language. ("There are no operations that manipulate an entire
> array or string, although structures may be copied as a unit."[2])
> Doesn't that break the tenet right there?
>
> Third, and following on from the foregoing, your point reminded me of my
> youth programming non-pipelined machines with no caches. You could set
> your watch by (just about) any instruction in the set--and often did,
> because we penurious microcomputer guys often lacked hardware real-time
> clocks, too. That is to say, for a while, every instruction has an
> exactly predictable and constant cycle count. (The _value_ of that
> constant might depend on the addressing mode, because that would have
> consequences on memory fetches, but the principle stood.) When the Z80
> extended the 8080's instruction set, they ate from the Tree of Knowledge
> with block-copy instructions like LDIR and LDDR, and all of a sudden you
> had instructions with O(n) cycle counts. But as a rule, programmers
> seemed to welcome this instead of recognizing it as knowing sin, because
> you generally knew worst-case how many bytes you'd be copying and take
> that into account. (The worst worst case was a mere 64kB!)
>
> Further, Z80 home computers in low-end configurations (that is, no disk
> drives) often did a shocking thing: they ran with all interrupts masked.
> All the time. The one non-maskable interrupt was RESET, after which you
> weren't going to be resuming execution of your program anyway. Not from
> the same instruction, at least. As I recall the TRS-80 Model I/III/4
> didn't even have logic on the motherboard to decode the Z80's "interrupt
> mode 2", which was vectored, I think. Even in the "high-end"
> configurations of these tiny machines, you got a whopping ONE interrupt
> to play with ("IM 1").
>
> Later, when the Hitachi 6309 smuggled similar block-transfer decadence
> into its extensions to the Motorola 6809 (to the excitement of us
> semi-consciously Unix-adjacent OS-9 users) they faced a starker problem,
> because the 6809 didn't wall off interrupts in the same way the 8080 and
> Z80 did. They therefore presented the programmer with the novelty of the
> restartable instruction, and a new generation of programmers became
> acquainted with the hard lessons time-sharing minicomputer people were
> familiar with.
>
> My point in this digression is that, in my opinion, it's tough to hold
> fast to the O(1) tenet you claim for C's core language and to another at
> the same time: the language's identity as a "portable assembly
> language". Unless every programmer has control over the compiler--and
> they don't--you can't predict when the compiler will emit an O(n) block
> transfer instruction. You'll just have to look at the disassembly.
>
> _Maybe_ you can retain purity by...never copying structs. I don't think
> lint or any other tool ever checked for this. Your advocacy of this
> tenet is the first time I've heard it presented.
>
> If you were to suggest to me that most of the time I've spent in my life
> arguing with C advocates was with rotten exemplars of the species and
> therefore was time wasted, I would concede the point.
>
> There's just so danged _many_ of them...
>
>> Your hobbyhorse awakened one of mine.
>>
>> malloc was in v7, before the C standard was written. The standard
>> spinelessly buckled to allow malloc(0) to return 0, as some
>> implementations gratuitously did.
> What was the alternative? There was no such thing as an exception, and
> if a pointer was an int and an int was as wide as a machine address,
> there'd be no way to indicate failure in-band, either.
>
> If the choice was that or another instance of atoi()'s wincingly awful
> "does this 0 represent an error or successful conversion of a zero
> input?" land mine, ANSI might have made the right choice.
>
>> I can't imagine that any program ever actually wanted the feature. Now
>> it's one more undefined behavior that lurks in thousands of programs.
> Hoare admitted to only one billion-dollar mistake. No one dares count
> how many to write in C's ledger. This was predicted, wasn't it?
> Everyone loved C because it was fast: it was performant, because it
> never met a runtime check it didn't eschew--recall again Kernighan
> punking Pascal on this exact point--and it was quick for the programmer
> to write because it never met a _compile_-time check it didn't eschew.
> C was born as a language for wizards who never made mistakes.
>
> The problem is that, like James Madison's fictive government of angels,
> such entities don't exist. The staff of the CSRC itself may have been
> overwhelmingly populated with frank, modest, and self-deprecating
> people--and I'll emphasize here that I'm aware of no accounts that this
> is anything but true--but C unfortunately played a part in stoking a
> culture of pretension among software developers. "C is a language in
> which wizards program. I program in C. Therefore I'm a wizard." is how
> the syllogism (spot the fallacy) went. I don't know who does more
> damage--the people who believe their own BS, or the ones who know
> they're scamming their colleagues.
>
>> There are two arguments for malloc(0). Most importantly, it caters for
>> a limiting case for aggregates generated at runtime--an instance of
>> Kernighan's Law, "Do nothing gracefully". It also provides a way to
>> create a distinctive pointer to impart some meta-information, e.g.
>> "TBD" or "end of subgroup", distinct from the null pointer, which
>> merely denotes absence.
> I think I might be confused now. I've frequently seen arrays of structs
> initialized from lists of literals ending in a series of "NULL"
> structure members, in code that antedates or ignores C99's wonderful
> feature of designated initializers for aggregate types.[3] How does
> malloc(0) get this job done and what benefit does it bring?
>
> Last time I posted to TUHS I mentioned a proposal for explicit tail-call
> elimination in C. I got the syntax wrong. The proposal was "return
> goto;". The WG14 document number is N2920 and it's by Alex Gilding.
> Good stuff.[4] I hope we see it in C2y.
>
> Predictably, I must confess that I didn't make much headway on
> Schiller's 1975 "secure kernel" paper. Maybe next time.
>
> Regards,
> Branden
>
> [1] https://web.cs.dal.ca/~jamie/UWO/C/the10fromHenryS.html
>
> I can easily imagine that the tenet held at _some_ point in C's
> history. It's _got_ to be the reason that the language
> relegates memset() and memcpy() to the standard library (or to the
> programmer's own devise)! :-O
>
> [2] Kernighan & Ritchie, _The C Programming Language_, 2nd edition, p. 2
>
> Having thus admitted the camel's nose to the tent, K&R would have
> done the world a major service by making memset(), or at least
> bzero(), a language feature, the latter perhaps by having "= 0"
> validly apply to an lvalue of non-primitive type. Okay,
> _potentially_ a major service. You'd still need the self-regarding
> wizard programmers to bother coding it, which they wouldn't in many
> cases "because speed". Move fast, break stuff.
>
> C++ screwed this up too, and stubbornly stuck by it for a long time.
>
> https://cplusplus.github.io/CWG/issues/178.html
>
> [3] https://gcc.gnu.org/onlinedocs/gcc/Designated-Inits.html
> [4] https://www.open-std.org/jtc1/sc22/wg14/www/docs/n2920.pdf
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-28 17:47 ` Luther Johnson
@ 2024-09-28 17:52 ` Luther Johnson
2024-09-28 18:46 ` G. Branden Robinson
0 siblings, 1 reply; 74+ messages in thread
From: Luther Johnson @ 2024-09-28 17:52 UTC (permalink / raw)
To: tuhs
In the compilers I'm talking about, you pass a structure by passing a
pointer to it - but the receiving function knows the argument is a
structure, and not a pointer to a structure, so it knows it needs to use
the pointer to copy to its own local version.
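Expressed as C source for illustration (a hedged sketch of that
convention; "sum_impl" and the other names are hypothetical):

    struct point { int x, y; };

    /* What the programmer writes: */
    int sum(struct point p) { return p.x + p.y; }

    /* Roughly what such a compiler arranges, shown as source: the
       caller passes &arg, and the callee's prologue copies through
       the pointer into its own local version. */
    int sum_impl(struct point *pp)
    {
        struct point p = *pp;   /* the copy to the callee's stack */
        return p.x + p.y;
    }

    int main(void)
    {
        struct point p = { 3, 4 };
        return sum(p) == sum_impl(&p) ? 0 : 1;  /* same result */
    }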
On 09/28/2024 10:47 AM, Luther Johnson wrote:
> I don't know that structure copying breaks any complexity or bounds on
> execution time rules. Many compilers may be different, but in the
> generated code I've seen, when you pass in a structure to a function,
> the receiving function copies it to the stack. In the portable C
> compiler, when you return a structure as a result, it is first copied
> to a static area, a pointer to that area is returned, then the caller
> copies that out to wherever it's meant to go, either a variable that's
> being assigned (which could be on the stack or elsewhere), or to a place
> on the stack that was reserved for it because that result will now be an
> argument to another function to be called. So there's some copying, but
> that's proportional to the size of the structure, it's linear, and
> there's no dynamic memory allocation going on.
>
> On 09/28/2024 09:58 AM, G. Branden Robinson wrote:
>> At 2024-09-28T09:34:14-0400, Douglas McIlroy wrote:
>>>> C's refusal to specify dynamic memory allocation in the language
>>>> runtime (as opposed to, eventually, the standard library)
>>> This complaint overlooks one tenet of C: every operation in what you
>>> call "language runtime" takes O(1) time. Dynamic memory allocation
>>> is not such an operation.
>> A fair point. Let me argue about it anyway. ;-)
>>
>> I'd make three observations. First, K&R did _not_ tout this in their
>> book presenting ANSI C. I went back and checked the prefaces,
>> introduction, and the section presenting a simple malloc()/free()
>> implementation. The tenet you claim for the language is not explicitly
>> articulated and, if I squint really hard, I can only barely perceive
>> (imagine?) it deeply between the lines in some of the prefatory material
>> to which K&R mostly confine their efforts to promote the language. In
>> my view, a "tenet" is something more overt: the sort of thing U.S.
>> politicians try to get hung on the walls of every public school
>> classroom, like Henry Spencer's Ten Commandments of C[1] (which itself
>> does not mention this "core language has only O(1) features" principle).
>>
>> Second, in reviewing K&R I was reminded that structure copying is part
>> of the language. ("There are no operations that manipulate an entire
>> array or string, although structures may be copied as a unit."[2])
>> Doesn't that break the tenet right there?
>>
>> Third, and following on from the foregoing, your point reminded me of my
>> youth programming non-pipelined machines with no caches. You could set
>> your watch by (just about) any instruction in the set--and often did,
>> because we penurious microcomputer guys often lacked hardware real-time
>> clocks, too. That is to say, for a while, every instruction has an
>> exactly predictable and constant cycle count. (The _value_ of that
>> constant might depend on the addressing mode, because that would have
>> consequences on memory fetches, but the principle stood.) When the Z80
>> extended the 8080's instruction set, they ate from the Tree of Knowledge
>> with block-copy instructions like LDIR and LDDR, and all of a sudden you
>> had instructions with O(n) cycle counts. But as a rule, programmers
>> seemed to welcome this instead of recognizing it as knowing sin, because
>> you generally knew worst-case how many bytes you'd be copying and take
>> that into account. (The worst worst case was a mere 64kB!)
>>
>> Further, Z80 home computers in low-end configurations (that is, no disk
>> drives) often did a shocking thing: they ran with all interrupts masked.
>> All the time. The one non-maskable interrupt was RESET, after which you
>> weren't going to be resuming execution of your program anyway. Not from
>> the same instruction, at least. As I recall the TRS-80 Model I/III/4
>> didn't even have logic on the motherboard to decode the Z80's "interrupt
>> mode 2", which was vectored, I think. Even in the "high-end"
>> configurations of these tiny machines, you got a whopping ONE interrupt
>> to play with ("IM 1").
>>
>> Later, when the Hitachi 6309 smuggled similar block-transfer decadence
>> into its extensions to the Motorola 6809 (to the excitement of us
>> semi-consciously Unix-adjacent OS-9 users) they faced a starker problem,
>> because the 6809 didn't wall off interrupts in the same way the 8080 and
>> Z80 did. They therefore presented the programmer with the novelty of the
>> restartable instruction, and a new generation of programmers became
>> acquainted with the hard lessons time-sharing minicomputer people were
>> familiar with.
>>
>> My point in this digression is that, in my opinion, it's tough to hold
>> fast to the O(1) tenet you claim for C's core language and to another at
>> the same time: the language's identity as a "portable assembly
>> language". Unless every programmer has control over the compiler--and
>> they don't--you can't predict when the compiler will emit an O(n) block
>> transfer instruction. You'll just have to look at the disassembly.
>>
>> _Maybe_ you can retain purity by...never copying structs. I don't think
>> lint or any other tool ever checked for this. Your advocacy of this
>> tenet is the first time I've heard it presented.
>>
>> If you were to suggest to me that most of the time I've spent in my life
>> arguing with C advocates was with rotten exemplars of the species and
>> therefore was time wasted, I would concede the point.
>>
>> There's just so danged _many_ of them...
>>
>>> Your hobbyhorse awakened one of mine.
>>>
>>> malloc was in v7, before the C standard was written. The standard
>>> spinelessly buckled to allow malloc(0) to return 0, as some
>>> implementations gratuitously did.
>> What was the alternative? There was no such thing as an exception, and
>> if a pointer was an int and an int was as wide as a machine address,
>> there'd be no way to indicate failure in-band, either.
>>
>> If the choice was that or another instance of atoi()'s wincingly awful
>> "does this 0 represent an error or successful conversion of a zero
>> input?" land mine, ANSI might have made the right choice.
>>
>>> I can't imagine that any program ever actually wanted the feature. Now
>>> it's one more undefined behavior that lurks in thousands of programs.
>> Hoare admitted to only one billion-dollar mistake. No one dares count
>> how many to write in C's ledger. This was predicted, wasn't it?
>> Everyone loved C because it was fast: it was performant, because it
>> never met a runtime check it didn't eschew--recall again Kernighan
>> punking Pascal on this exact point--and it was quick for the programmer
>> to write because it never met a _compile_-time check it didn't eschew.
>> C was born as a language for wizards who never made mistakes.
>>
>> The problem is that, like James Madison's fictive government of angels,
>> such entities don't exist. The staff of the CSRC itself may have been
>> overwhelmingly populated with frank, modest, and self-deprecating
>> people--and I'll emphasize here that I'm aware of no accounts that this
>> is anything but true--but C unfortunately played a part in stoking a
>> culture of pretension among software developers. "C is a language in
>> which wizards program. I program in C. Therefore I'm a wizard." is how
>> the syllogism (spot the fallacy) went. I don't know who does more
>> damage--the people who believe their own BS, or the ones who know
>> they're scamming their colleagues.
>>
>>> There are two arguments for malloc(0). Most importantly, it caters for
>>> a limiting case for aggregates generated at runtime--an instance of
>>> Kernighan's Law, "Do nothing gracefully". It also provides a way to
>>> create a distinctive pointer to impart some meta-information, e.g.
>>> "TBD" or "end of subgroup", distinct from the null pointer, which
>>> merely denotes absence.
>> I think I might be confused now. I've frequently seen arrays of structs
>> initialized from lists of literals ending in a series of "NULL"
>> structure members, in code that antedates or ignores C99's wonderful
>> feature of designated initializers for aggregate types.[3] How does
>> malloc(0) get this job done and what benefit does it bring?
>>
>> Last time I posted to TUHS I mentioned a proposal for explicit tail-call
>> elimination in C. I got the syntax wrong. The proposal was "return
>> goto;". The WG14 document number is N2920 and it's by Alex Gilding.
>> Good stuff.[4] I hope we see it in C2y.
>>
>> Predictably, I must confess that I didn't make much headway on
>> Schiller's 1975 "secure kernel" paper. Maybe next time.
>>
>> Regards,
>> Branden
>>
>> [1] https://web.cs.dal.ca/~jamie/UWO/C/the10fromHenryS.html
>>
>> I can easily imagine that the tenet held at _some_ point in C's
>> history. It's _got_ to be the reason that the language
>> relegates memset() and memcpy() to the standard library (or to the
>> programmer's own devise)! :-O
>>
>> [2] Kernighan & Ritchie, _The C Programming Language_, 2nd edition, p. 2
>>
>> Having thus admitted the camel's nose to the tent, K&R would have
>> done the world a major service by making memset(), or at least
>> bzero(), a language feature, the latter perhaps by having "= 0"
>> validly apply to an lvalue of non-primitive type. Okay,
>> _potentially_ a major service. You'd still need the self-regarding
>> wizard programmers to bother coding it, which they wouldn't in many
>> cases "because speed". Move fast, break stuff.
>>
>> C++ screwed this up too, and stubbornly stuck by it for a long
>> time.
>>
>> https://cplusplus.github.io/CWG/issues/178.html
>>
>> [3] https://gcc.gnu.org/onlinedocs/gcc/Designated-Inits.html
>> [4] https://www.open-std.org/jtc1/sc22/wg14/www/docs/n2920.pdf
>
>
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-28 17:52 ` Luther Johnson
@ 2024-09-28 18:46 ` G. Branden Robinson
2024-09-28 22:08 ` Luther Johnson
0 siblings, 1 reply; 74+ messages in thread
From: G. Branden Robinson @ 2024-09-28 18:46 UTC (permalink / raw)
To: tuhs
Hi Luther,
At 2024-09-28T10:47:44-0700, Luther Johnson wrote:
> I don't know that structure copying breaks any complexity or bounds on
> execution time rules. Many compilers may be different, but in the
> generated code I've seen, when you pass in a structure to a function,
> the receiving function copies it to the stack. In the portable C
> compiler, when you return a structure as a result, it is first copied
> to a static area, a pointer to that area is returned, then the caller
> copies that out to wherever it's meant to go, either a variable that's
> being assigned (which could be on the stack or elsewhere), or to a
> place on the stack that was reserved for it because that result will
> now be an argument to another function to be called. So there's some
> copying, but that's proportional to the size of the structure, it's
> linear, and there's no dynamic memory allocation going on.
I have no problem with this presentation, but recall the principle--the
tenet--that Doug was upholding:
> > At 2024-09-28T09:34:14-0400, Douglas McIlroy wrote:
> > > This complaint overlooks one tenet of C: every operation in what
> > > you call "language runtime" takes O(1) time. Dynamic memory
> > > allocation is not such an operation.
Even without dynamic memory allocation, if you did something linear,
something O(n), it was a lose and a violation of the tenet.
I can easily see the appeal of a language whose every operation really
is O(1). Once upon a time, a university course, or equivalent
experience, in assembly language (on a CLEAN instruction set, not x86)
was what taught you the virtues and drawbacks of thinking about and
implementing things that way. But my view is that C hasn't been one of
those languages for a very long time, since before its initial ANSI
standardization at the latest.
At 2024-09-28T10:52:16-0700, Luther Johnson wrote:
> In the compilers I'm talking about, you pass a structure by passing a
> pointer to it - but the receiving function knows the argument is a
> structure, and not a pointer to a structure, so it knows it needs to
> use the pointer to copy to its own local version.
It's my understanding that the ability to work with structs as
first-class citizens in function calls, as parameters _or_ return types,
was something fairly late to stabilize in C compilers. Second-hand, I
gather that pre-standard C as told by K&R explicitly did _not_
countenance this. So a lot of early C code, including that in
libraries, indirected nearly all struct access, even when read-only,
through pointers.
This is often a win, but not always. A few minutes ago I shot off my
mouth to this list about how much better the standard library design
could have been if the return of structs by value had been supported
much earlier.
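To make the contrast concrete, here is a hedged sketch with
hypothetical functions, not drawn from any particular library:

    #include <stdio.h>

    struct frac { int num, den; };

    /* Early style: the caller supplies storage and a pointer. */
    void make_frac_ptr(int n, int d, struct frac *out)
    {
        out->num = n;
        out->den = d;
    }

    /* Later style: the struct comes back by value. */
    struct frac make_frac(int n, int d)
    {
        struct frac f;

        f.num = n;
        f.den = d;
        return f;
    }

    int main(void)
    {
        struct frac a, b;

        make_frac_ptr(1, 2, &a);        /* caller owns the storage */
        b = make_frac(3, 4);            /* value flows from callee */
        printf("%d/%d %d/%d\n", a.num, a.den, b.num, b.den);
        return 0;
    }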
Our industry has, it seems, been slow to appreciate the distinction
between what C++ eventually came to explicitly call "copy" semantics and
"move" semantics. Rust's paradigmatic dedication to the concept of data
"ownership" at last seems to be popularizing the practice of thinking
about these things. (For my part, I will forever hurl calumnies at
computer architects who refer to copy operations as "MOV" or similar.
If the operation doesn't destroy the source, it's not a move--I don't
care how many thousands of pages of manuals Intel writes saying
otherwise. Even the RISC-V specs screw this up, I assume in a
deliberate but embarrassing attempt to win mindshare among x86
programmers who cling to this myth as they do so many others.)
For a moment I considered giving credit to a virtuous few '80s C
programmers who recognized that there was indeed no need to copy a
struct upon passing it to a function if you knew the callee wasn't going
to modify that struct...but we had a way of saying this, "const", and
library writers of that era were infamously indifferent to using "const"
in their APIs where it would have done good. So, no, no credit.
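For reference, a minimal sketch of what using it would have looked
like, assuming a struct large enough to be worth not copying:

    struct big { int data[1024]; };

    /* "const" lets the caller pass a pointer, skipping the copy,
       while still promising (checkably) that the callee only
       reads the struct. */
    long sum_big(const struct big *p)
    {
        long s = 0;
        int i;

        for (i = 0; i < 1024; i++)
            s += p->data[i];
        return s;
    }

    int main(void)
    {
        static struct big b;        /* zero-initialized */
        return (int)sum_big(&b);    /* 0 */
    }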
Here's a paragraph from a 1987 text I wish I'd read back then, or at any
time before being exposed to C.
"[Language] does not define how parameter passing is implemented. A
program is erroneous if it depends on a specific implementation method.
The two obvious implementations are by copy and by reference. With an
implementation that copies parameters, an `out` or `in out` actual
parameter will not be updated until (normal) return from the subprogram.
Therefore if the subprogram propagates an exception, the actual
parameter will be unchanged. This is clearly not the case when a
reference implementation is used. The difficulty with this vagueness in
the definition of [language] is that it is quite awkward to be sure that
a program is independent of the implementation method. (You might
wonder why the language does not define the implementation method. The
reason is that the copy mechanism is very inefficient with large
parameters, whereas the reference mechanism is prohibitively expensive
on distributed systems.)"[1]
I admire the frankness. It points the way forward to reasoned
discussion of engineering tradeoffs, as opposed to programming language
boosterism. (By contrast, the trashing of boosters and their rhetoric
is an obvious service to humanity. See? I'm charitable!)
I concealed the name of the programming language because people have a
tendency to unfairly disregard and denigrate it in spite of (or because
of?) its many excellent properties and suitability for robust and
reliable systems, in contrast to slovenly prototypes that minimize
launch costs and impose negative externalities on users (and on anyone
unlucky enough to be stuck supporting them). But then again cowboy
programmers and their managers likely don't read my drivel anyway.
They're busy chasing AI money before the bubble bursts.
Anyway--the language is Ada.
Regards,
Branden
[1] Watt, Wichmann, Findlay. _Ada Language and Methodology_.
Prentice-Hall, 1987, p. 395.
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-28 18:46 ` G. Branden Robinson
@ 2024-09-28 22:08 ` Luther Johnson
2024-09-28 22:45 ` Luther Johnson
0 siblings, 1 reply; 74+ messages in thread
From: Luther Johnson @ 2024-09-28 22:08 UTC (permalink / raw)
To: tuhs
"Classic C", K&R + void function return type, enumerations, and
structure passing/return, maybe a couple other things (some bug
fixes/stricter rules on use of members in structs and unions), has this,
I use a compiler from 1983 that lines up with an AT&T System V document
detailing updates to the language.
I view these kinds of changes as incremental usability and reliability
fixes, well within the spirit and style, but just closing loopholes or
filling in gaps of things that ought to work together, and could,
without too much effort. But I agree, structures as full-fledged
citizens came late to K & R / Classic C.
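A tiny example exercising those additions together (my sketch, not
taken from the 1983 compiler or the System V document):

    enum color { RED, GREEN, BLUE };        /* enumerations */

    struct pixel { enum color c; int x, y; };

    void reset(struct pixel *p)             /* void return type */
    {
        p->x = 0;
        p->y = 0;
    }

    struct pixel make_pixel(enum color c)   /* structure return */
    {
        struct pixel p;

        p.c = c;
        p.x = 0;
        p.y = 0;
        return p;
    }

    int main(void)
    {
        struct pixel p = make_pixel(RED);   /* structure assignment */
        reset(&p);
        return p.x;
    }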
On 09/28/2024 11:46 AM, G. Branden Robinson wrote:
> Hi Luther,
>
> At 2024-09-28T10:47:44-0700, Luther Johnson wrote:
>> I don't know that structure copying breaks any complexity or bounds on
>> execution time rules. Many compilers may be different, but in the
>> generated code I've seen, when you pass in a structure to a function,
>> the receiving function copies it to the stack. In the portable C
>> compiler, when you return a structure as a result, it is first copied
>> to a static area, a pointer to that area is returned, then the caller
>> copies that out to wherever it's meant to go, either a variable that's
>> being assigned (which could be on the stack or elsewhere), or to a
>> place on the stack that was reserved for it because that result will
>> now be an argument to another function to be called. So there's some
>> copying, but that's proportional to the size of the structure, it's
>> linear, and there's no dynamic memory allocation going on.
> I have no problem with this presentation, but recall the principle--the
> tenet--that Doug was upholding:
>
>>> At 2024-09-28T09:34:14-0400, Douglas McIlroy wrote:
>>>> This complaint overlooks one tenet of C: every operation in what
>>>> you call "language runtime" takes O(1) time. Dynamic memory
>>>> allocation is not such an operation.
> Even without dynamic memory allocation, if you did something linear,
> something O(n), it was a lose and a violation of the tenet.
>
> I can easily see the appeal of a language whose every operation really
> is O(1). Once upon a time, a university course, or equivalent
> experience, in assembly language (on a CLEAN instruction set, not x86)
> was what taught you the virtues and drawbacks of thinking about and
> implementing things that way. But my view is that C hasn't been one of
> those languages for a very long time, since before its initial ANSI
> standardization at the latest.
>
> At 2024-09-28T10:52:16-0700, Luther Johnson wrote:
>> In the compilers I'm talking about, you pass a structure by passing a
>> pointer to it - but the receiving function knows the argument is a
>> structure, and not a pointer to a structure, so it knows it needs to
>> use the pointer to copy to its own local version.
> It's my understanding that the ability to work with structs as
> first-class citizens in function calls, as parameters _or_ return types,
> was something fairly late to stabilize in C compilers. Second-hand, I
> gather that pre-standard C as told by K&R explicitly did _not_
> countenance this. So a lot of early C code, including that in
> libraries, indirected nearly all struct access, even when read-only,
> through pointers.
>
> This is often a win, but not always. A few minutes ago I shot off my
> mouth to this list about how much better the standard library design
> could have been if the return of structs by value had been supported
> much earlier.
>
> Our industry has, it seems, been slow to appreciate the distinction
> between what C++ eventually came to explicitly call "copy" semantics and
> "move" semantics. Rust's paradigmatic dedication to the concept of data
> "ownership" at last seems to be popularizing the practice of thinking
> about these things. (For my part, I will forever hurl calumnies at
> computer architects who refer to copy operations as "MOV" or similar.
> If the operation doesn't destroy the source, it's not a move--I don't
> care how many thousands of pages of manuals Intel writes saying
> otherwise. Even the RISC-V specs screw this up, I assume in a
> deliberate but embarrassing attempt to win mindshare among x86
> programmers who cling to this myth as they do so many others.)
>
> For a moment I considered giving credit to a virtuous few '80s C
> programmers who recognized that there was indeed no need to copy a
> struct upon passing it to a function if you knew the callee wasn't going
> to modify that struct...but we had a way of saying this, "const", and
> library writers of that era were infamously indifferent to using "const"
> in their APIs where it would have done good. So, no, no credit.
>
> Here's a paragraph from a 1987 text I wish I'd read back then, or at any
> time before being exposed to C.
>
> "[Language] does not define how parameter passing is implemented. A
> program is erroneous if it depends on a specific implementation method.
> The two obvious implementations are by copy and by reference. With an
> implementation that copies parameters, an `out` or `in out` actual
> parameter will not be updated until (normal) return from the subprogram.
> Therefore if the subprogram propagates an exception, the actual
> parameter will be unchanged. This is clearly not the case when a
> reference implementation is used. The difficulty with this vagueness in
> the definition of [language] is that it is quite awkward to be sure that
> a program is independent of the implementation method. (You might
> wonder why the language does not define the implementation method. The
> reason is that the copy mechanism is very inefficient with large
> parameters, whereas the reference mechanism is prohibitively expensive
> on distributed systems.)"[1]
>
> I admire the frankness. It points the way forward to reasoned
> discussion of engineering tradeoffs, as opposed to programming language
> boosterism. (By contrast, the trashing of boosters and their rhetoric
> is an obvious service to humanity. See? I'm charitable!)
>
> I concealed the name of the programming language because people have a
> tendency to unfairly disregard and denigrate it in spite of (or because
> of?) its many excellent properties and suitability for robust and
> reliable systems, in contrast to slovenly prototypes that minimize
> launch costs and impose negative externalities on users (and on anyone
> unlucky enough to be stuck supporting them). But then again cowboy
> programmers and their managers likely don't read my drivel anyway.
> They're busy chasing AI money before the bubble bursts.
>
> Anyway--the language is Ada.
>
> Regards,
> Branden
>
> [1] Watt, Wichmann, Findlay. _Ada Language and Methodology_.
> Prentice-Hall, 1987, p. 395.
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-28 22:08 ` Luther Johnson
@ 2024-09-28 22:45 ` Luther Johnson
2024-09-28 22:50 ` Luther Johnson
0 siblings, 1 reply; 74+ messages in thread
From: Luther Johnson @ 2024-09-28 22:45 UTC (permalink / raw)
To: tuhs
G. Branden,
I get it. From your and Doug's responses, even O(n) baked-in costs can
be a problem.
On 09/28/2024 03:08 PM, Luther Johnson wrote:
> "Classic C", K&R + void function return type, enumerations, and
> structure passing/return, maybe a couple other things (some bug
> fixes/stricter rules on use of members in structs and unions), has
> this, I use a compiler from 1983 that lines up with an AT&T System V
> document detailing updates to the language.
>
> I view these kinds of changes as incremental usability and reliability
> fixes, well within the spirit and style, but just closing loopholes or
> filling in gaps of things that ought to work together, and could,
> without too much effort. But I agree, structures as full-fledged
> citizens came late to K & R / Classic C.
>
> On 09/28/2024 11:46 AM, G. Branden Robinson wrote:
>> Hi Luther,
>>
>> At 2024-09-28T10:47:44-0700, Luther Johnson wrote:
>>> I don't know that structure copying breaks any complexity or bounds on
>>> execution time rules. Many compilers may be different, but in the
>>> generated code I've seen, when you pass in a structure to a function,
>>> the receiving function copies it to the stack. In the portable C
>>> compiler, when you return a structure as a result, it is first copied
>>> to a static area, a pointer to that area is returned, then the caller
>>> copies that out to wherever it's meant to go, either a variable that's
>>> being assigned (which could be on the stack or elsewhere), or to a
>>> place on the stack that was reserved for it because that result will
>>> now be an argument to another function to be called. So there's some
>>> copying, but that's proportional to the size of the structure, it's
>>> linear, and there's no dynamic memory allocation going on.
>> I have no problem with this presentation, but recall the principle--the
>> tenet--that Doug was upholding:
>>
>>>> At 2024-09-28T09:34:14-0400, Douglas McIlroy wrote:
>>>>> This complaint overlooks one tenet of C: every operation in what
>>>>> you call "language runtime" takes O(1) time. Dynamic memory
>>>>> allocation is not such an operation.
>> Even without dynamic memory allocation, if you did something linear,
>> something O(n), it was a lose and a violation of the tenet.
>>
>> I can easily see the appeal of a language whose every operation really
>> is O(1). Once upon a time, a university course, or equivalent
>> experience, in assembly language (on a CLEAN instruction set, not x86)
>> was what taught you the virtues and drawbacks of thinking about and
>> implementing things that way. But my view is that C hasn't been one of
>> those languages for a very long time, since before its initial ANSI
>> standardization at the latest.
>>
>> At 2024-09-28T10:52:16-0700, Luther Johnson wrote:
>>> In the compilers I'm talking about, you pass a structure by passing a
>>> pointer to it - but the receiving function knows the argument is a
>>> structure, and not a pointer to a structure, so it knows it needs to
>>> use the pointer to copy to its own local version.
>> It's my understanding that the ability to work with structs as
>> first-class citizens in function calls, as parameters _or_ return types,
>> was something fairly late to stabilize in C compilers. Second-hand, I
>> gather that pre-standard C as told by K&R explicitly did _not_
>> countenance this. So a lot of early C code, including that in
>> libraries, indirected nearly all struct access, even when read-only,
>> through pointers.
>>
>> This is often a win, but not always. A few minutes ago I shot off my
>> mouth to this list about how much better the standard library design
>> could have been if the return of structs by value had been supported
>> much earlier.
>>
>> Our industry has, it seems, been slow to appreciate the distinction
>> between what C++ eventually came to explicitly call "copy" semantics and
>> "move" semantics. Rust's paradigmatic dedication to the concept of data
>> "ownership" at last seems to be popularizing the practice of thinking
>> about these things. (For my part, I will forever hurl calumnies at
>> computer architects who refer to copy operations as "MOV" or similar.
>> If the operation doesn't destroy the source, it's not a move--I don't
>> care how many thousands of pages of manuals Intel writes saying
>> otherwise. Even the RISC-V specs screw this up, I assume in a
>> deliberate but embarrassing attempt to win mindshare among x86
>> programmers who cling to this myth as they do so many others.)
>>
>> For a moment I considered giving credit to a virtuous few '80s C
>> programmers who recognized that there was indeed no need to copy a
>> struct upon passing it to a function if you knew the callee wasn't going
>> to modify that struct...but we had a way of saying this, "const", and
>> library writers of that era were infamously indifferent to using "const"
>> in their APIs where it would have done good. So, no, no credit.
>>
>> Here's a paragraph from a 1987 text I wish I'd read back then, or at any
>> time before being exposed to C.
>>
>> "[Language] does not define how parameter passing is implemented. A
>> program is erroneous if it depends on a specific implementation method.
>> The two obvious implementations are by copy and by reference. With an
>> implementation that copies parameters, an `out` or `in out` actual
>> parameter will not be updated until (normal) return from the subprogram.
>> Therefore if the subprogram propagates an exception, the actual
>> parameter will be unchanged. This is clearly not the case when a
>> reference implementation is used. The difficulty with this vagueness in
>> the definition of [language] is that it is quite awkward to be sure that
>> a program is independent of the implementation method. (You might
>> wonder why the language does not define the implementation method. The
>> reason is that the copy mechanism is very inefficient with large
>> parameters, whereas the reference mechanism is prohibitively expensive
>> on distributed systems.)"[1]
>>
>> I admire the frankness. It points the way forward to reasoned
>> discussion of engineering tradeoffs, as opposed to programming language
>> boosterism. (By contrast, the trashing of boosters and their rhetoric
>> is an obvious service to humanity. See? I'm charitable!)
>>
>> I concealed the name of the programming language because people have a
>> tendency to unfairly disregard and denigrate it in spite of (or because
>> of?) its many excellent properties and suitability for robust and
>> reliable systems, in contrast to slovenly prototypes that minimize
>> launch costs and impose negative externalities on users (and on anyone
>> unlucky enough to be stuck supporting them). But then again cowboy
>> programmers and their managers likely don't read my drivel anyway.
>> They're busy chasing AI money before the bubble bursts.
>>
>> Anyway--the language is Ada.
>>
>> Regards,
>> Branden
>>
>> [1] Watt, Wichmann, Findlay. _Ada Language and Methodology_.
>> Prentice-Hall, 1987, p. 395.
>
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-28 22:45 ` Luther Johnson
@ 2024-09-28 22:50 ` Luther Johnson
0 siblings, 0 replies; 74+ messages in thread
From: Luther Johnson @ 2024-09-28 22:50 UTC (permalink / raw)
To: tuhs
Or, to your point, we have some of these problems no matter what, so
cost profile alone is not a completely valid reason to reject some
other facility when other things already in the language have costs of
a similar order.
On 09/28/2024 03:45 PM, Luther Johnson wrote:
> G. Branden,
>
> I get it. From your and Doug's responses, even O(n) baked-in costs can
> be a problem.
>
> On 09/28/2024 03:08 PM, Luther Johnson wrote:
>> "Classic C", K&R + void function return type, enumerations, and
>> structure passing/return, maybe a couple other things (some bug
>> fixes/stricter rules on use of members in structs and unions), has
>> this, I use a compiler from 1983 that lines up with an AT&T System V
>> document detailing updates to the language.
>>
>> I view these kinds of changes as incremental usability and
>> reliability fixes, well within the spirit and style, but just closing
>> loopholes or filling in gaps of things that ought to work together,
>> and could, without too much effort. But I agree, structures as
>> full-fledged citizens came late to K & R / Classic C.
>>
>> On 09/28/2024 11:46 AM, G. Branden Robinson wrote:
>>> Hi Luther,
>>>
>>> At 2024-09-28T10:47:44-0700, Luther Johnson wrote:
>>>> I don't know that structure copying breaks any complexity or bounds on
>>>> execution time rules. Many compilers may be different, but in the
>>>> generated code I've seen, when you pass in a structure to a function,
>>>> the receiving function copies it to the stack. In the portable C
>>>> compiler, when you return a structure as a result, it is first copied
>>>> to a static area, a pointer to that area is returned, then the caller
>>>> copies that out to wherever it's meant to go, either a variable that's
>>>> being assigned (which could be on the stack or elsewhere), or to a
>>>> place on the stack that was reserved for it because that result will
>>>> now be an argument to another function to be called. So there's some
>>>> copying, but that's proportional to the size of the structure, it's
>>>> linear, and there's no dynamic memory allocation going on.
>>> I have no problem with this presentation, but recall the principle--the
>>> tenet--that Doug was upholding:
>>>
>>>>> At 2024-09-28T09:34:14-0400, Douglas McIlroy wrote:
>>>>>> This complaint overlooks one tenet of C: every operation in what
>>>>>> you call "language runtime" takes O(1) time. Dynamic memory
>>>>>> allocation is not such an operation.
>>> Even without dynamic memory allocation, if you did something linear,
>>> something O(n), it was a lose and a violation of the tenet.
>>>
>>> I can easily see the appeal of a language whose every operation really
>>> is O(1). Once upon a time, a university course, or equivalent
>>> experience, in assembly language (on a CLEAN instruction set, not x86)
>>> was what taught you the virtues and drawbacks of thinking about and
>>> implementing things that way. But my view is that C hasn't been one of
>>> those languages for a very long time, since before its initial ANSI
>>> standardization at the latest.
>>>
>>> At 2024-09-28T10:52:16-0700, Luther Johnson wrote:
>>>> In the compilers I'm talking about, you pass a structure by passing a
>>>> pointer to it - but the receiving function knows the argument is a
>>>> structure, and not a pointer to a structure, so it knows it needs to
>>>> use the pointer to copy to its own local version.
>>> It's my understanding that the ability to work with structs as
>>> first-class citizens in function calls, as parameters _or_ return
>>> types,
>>> was something fairly late to stabilize in C compilers. Second-hand, I
>>> gather that pre-standard C as told by K&R explicitly did _not_
>>> countenance this. So a lot of early C code, including that in
>>> libraries, indirected nearly all struct access, even when read-only,
>>> through pointers.
>>>
>>> This is often a win, but not always. A few minutes ago I shot off my
>>> mouth to this list about how much better the standard library design
>>> could have been if the return of structs by value had been supported
>>> much earlier.
>>>
>>> Our industry has, it seems, been slow to appreciate the distinction
>>> between what C++ eventually came to explicitly call "copy" semantics
>>> and
>>> "move" semantics. Rust's paradigmatic dedication to the concept of
>>> data
>>> "ownership" at last seems to be popularizing the practice of thinking
>>> about these things. (For my part, I will forever hurl calumnies at
>>> computer architects who refer to copy operations as "MOV" or similar.
>>> If the operation doesn't destroy the source, it's not a move--I don't
>>> care how many thousands of pages of manuals Intel writes saying
>>> otherwise. Even the RISC-V specs screw this up, I assume in a
>>> deliberate but embarrassing attempt to win mindshare among x86
>>> programmers who cling to this myth as they do so many others.)
>>>
>>> For a moment I considered giving credit to a virtuous few '80s C
>>> programmers who recognized that there was indeed no need to copy a
>>> struct upon passing it to a function if you knew the callee wasn't
>>> going
>>> to modify that struct...but we had a way of saying this, "const", and
>>> library writers of that era were infamously indifferent to using
>>> "const"
>>> in their APIs where it would have done good. So, no, no credit.
>>>
>>> Here's a paragraph from a 1987 text I wish I'd read back then, or at
>>> any
>>> time before being exposed to C.
>>>
>>> "[Language] does not define how parameter passing is implemented. A
>>> program is erroneous if it depends on a specific implementation method.
>>> The two obvious implementations are by copy and by reference. With an
>>> implementation that copies parameters, an `out` or `in out` actual
>>> parameter will not be updated until (normal) return from the
>>> subprogram.
>>> Therefore if the subprogram propagates an exception, the actual
>>> parameter will be unchanged. This is clearly not the case when a
>>> reference implementation is used. The difficulty with this
>>> vagueness in
>>> the definition of [language] is that it is quite awkward to be sure
>>> that
>>> a program is independent of the implementation method. (You might
>>> wonder why the language does not define the implementation method. The
>>> reason is that the copy mechanism is very inefficient with large
>>> parameters, whereas the reference mechanism is prohibitively expensive
>>> on distributed systems.)"[1]
>>>
>>> I admire the frankness. It points the way forward to reasoned
>>> discussion of engineering tradeoffs, as opposed to programming language
>>> boosterism. (By contrast, the trashing of boosters and their rhetoric
>>> is an obvious service to humanity. See? I'm charitable!)
>>>
>>> I concealed the name of the programming language because people have a
>>> tendency to unfairly disregard and denigrate it in spite of (or because
>>> of?) its many excellent properties and suitability for robust and
>>> reliable systems, in contrast to slovenly prototypes that minimize
>>> launch costs and impose negative externalities on users (and on anyone
>>> unlucky enough to be stuck supporting them). But then again cowboy
>>> programmers and their managers likely don't read my drivel anyway.
>>> They're busy chasing AI money before the bubble bursts.
>>>
>>> Anyway--the language is Ada.
>>>
>>> Regards,
>>> Branden
>>>
>>> [1] Watt, Wichmann, Findlay. _Ada Language and Methodology_.
>>> Prentice-Hall, 1987, p. 395.
>>
>
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-28 16:58 ` G. Branden Robinson
2024-09-28 17:47 ` Luther Johnson
@ 2024-09-28 17:59 ` Bakul Shah via TUHS
2024-09-28 22:07 ` Douglas McIlroy
2024-09-28 18:01 ` G. Branden Robinson
2024-09-28 18:05 ` Larry McVoy
3 siblings, 1 reply; 74+ messages in thread
From: Bakul Shah via TUHS @ 2024-09-28 17:59 UTC (permalink / raw)
To: G. Branden Robinson; +Cc: TUHS main list
Just responding to random things that I noticed:
You don't need special syntax for tail-call. It should be done transparently when a call is the last thing that gets executed. Special syntax will merely allow confused people to use it in the wrong place and get confused more.
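For example (a sketch; the function is just an illustration):

    /* The recursive call is the last thing evaluated, so a compiler
       may reuse the current frame and jump instead of calling; no
       source-level syntax is needed. */
    long gcd(long a, long b)
    {
        if (b == 0)
            return a;
        return gcd(b, a % b);       /* call in tail position */
    }

    int main(void)
    {
        return gcd(12, 18) == 6 ? 0 : 1;
    }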
malloc(0) should return a unique ptr. So that "T* a = malloc(0); T* b = malloc(0); a != (T*)0 && a != b". Without this, malloc(0) acts differently from malloc(n) for n > 0.
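A small test of the property being argued for (the standard leaves
malloc(0) implementation-defined: it may return either a null pointer
or a unique pointer):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        char *a = malloc(0);
        char *b = malloc(0);

        /* Under the behavior argued for above, this prints 1.
           Implementations that return NULL print 0. */
        printf("%d\n", a != NULL && b != NULL && a != b);
        free(a);
        free(b);
        return 0;
    }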
Note that except for arrays, function arguments & results are copied, so copying a struct makes perfect sense. Passing arrays by reference may have been due to residual Fortran influence! [Just guessing] Also note: that one exception has been the cause of many problems.
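A minimal sketch of that one exception in action:

    struct wrap { int a[4]; };

    void f(struct wrap w, int raw[])
    {
        w.a[0] = 99;    /* changes the callee's private copy      */
        raw[0] = 99;    /* "raw" decayed to a pointer: changes
                           the caller's array                     */
    }

    int main(void)
    {
        struct wrap w = { { 1, 2, 3, 4 } };
        int a[4] = { 1, 2, 3, 4 };

        f(w, a);
        return (w.a[0] == 1 && a[0] == 99) ? 0 : 1;   /* returns 0 */
    }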
In any case you have not argued convincingly why dynamic memory allocation should be in the language (runtime) :-) And adding that wouldn't have fixed any of the existing problems with the language.
Bakul
> On Sep 28, 2024, at 9:58 AM, G. Branden Robinson <g.branden.robinson@gmail.com> wrote:
>
> At 2024-09-28T09:34:14-0400, Douglas McIlroy wrote:
>>> C's refusal to specify dynamic memory allocation in the language
>>> runtime (as opposed to, eventually, the standard library)
>>
>> This complaint overlooks one tenet of C: every operation in what you
>> call "language runtime" takes O(1) time. Dynamic memory allocation
>> is not such an operation.
>
> A fair point. Let me argue about it anyway. ;-)
>
> I'd make three observations. First, K&R did _not_ tout this in their
> book presenting ANSI C. I went back and checked the prefaces,
> introduction, and the section presenting a simple malloc()/free()
> implementation. The tenet you claim for the language is not explicitly
> articulated and, if I squint really hard, I can only barely perceive
> (imagine?) it deeply between the lines in some of the prefatory material
> to which K&R mostly confine their efforts to promote the language. In
> my view, a "tenet" is something more overt: the sort of thing U.S.
> politicians try to get hung on the walls of every public school
> classroom, like Henry Spencer's Ten Commandments of C[1] (which itself
> does not mention this "core language has only O(1) features" principle).
>
> Second, in reviewing K&R I was reminded that structure copying is part
> of the language. ("There are no operations that manipulate an entire
> array or string, although structures may be copied as a unit."[2])
> Doesn't that break the tenet right there?
>
> Third, and following on from the foregoing, your point reminded me of my
> youth programming non-pipelined machines with no caches. You could set
> your watch by (just about) any instruction in the set--and often did,
> because we penurious microcomputer guys often lacked hardware real-time
> clocks, too. That is to say, for a while, every instruction has an
> exactly predictable and constant cycle count. (The _value_ of that
> constant might depend on the addressing mode, because that would have
> consequences on memory fetches, but the principle stood.) When the Z80
> extended the 8080's instruction set, they ate from the Tree of Knowledge
> with block-copy instructions like LDIR and LDDR, and all of a sudden you
> had instructions with O(n) cycle counts. But as a rule, programmers
> seemed to welcome this instead of recognizing it as knowing sin, because
> you generally knew worst-case how many bytes you'd be copying and take
> that into account. (The worst worst case was a mere 64kB!)
>
> Further, Z80 home computers in low-end configurations (that is, no disk
> drives) often did a shocking thing: they ran with all interrupts masked.
> All the time. The one non-maskable interrupt was RESET, after which you
> weren't going to be resuming execution of your program anyway. Not from
> the same instruction, at least. As I recall the TRS-80 Model I/III/4
> didn't even have logic on the motherboard to decode the Z80's "interrupt
> mode 2", which was vectored, I think. Even in the "high-end"
> configurations of these tiny machines, you got a whopping ONE interrupt
> to play with ("IM 1").
>
> Later, when the Hitachi 6309 smuggled similar block-transfer decadence
> into its extensions to the Motorola 6809 (to the excitement of us
> semi-consciously Unix-adjacent OS-9 users) they faced a starker problem,
> because the 6809 didn't wall off interrupts in the same way the 8080 and
> Z80 did. They therefore presented the programmer with the novelty of the
> restartable instruction, and a new generation of programmers became
> acquainted with the hard lessons time-sharing minicomputer people were
> familiar with.
>
> My point in this digression is that, in my opinion, it's tough to hold
> fast to the O(1) tenet you claim for C's core language and to another at
> the same time: the language's identity as a "portable assembly
> language". Unless every programmer has control over the compiler--and
> they don't--you can't predict when the compiler will emit an O(n) block
> transfer instruction. You'll just have to look at the disassembly.
>
> _Maybe_ you can retain purity by...never copying structs. I don't think
> lint or any other tool ever checked for this. Your advocacy of this
> tenet is the first time I've heard it presented.
>
> If you were to suggest to me that most of the time I've spent in my life
> arguing with C advocates was with rotten exemplars of the species and
> therefore was time wasted, I would concede the point.
>
> There's just so danged _many_ of them...
>
>> Your hobbyhorse awakened one of mine.
>>
>> malloc was in v7, before the C standard was written. The standard
>> spinelessly buckled to allow malloc(0) to return 0, as some
>> implementations gratuitously did.
>
> What was the alternative? There was no such thing as an exception, and
> if a pointer was an int and an int was as wide as a machine address,
> there'd be no way to indicate failure in-band, either.
>
> If the choice was that or another instance of atoi()'s wincingly awful
> "does this 0 represent an error or successful conversion of a zero
> input?" land mine, ANSI might have made the right choice.
>
>> I can't imagine that any program ever actually wanted the feature. Now
>> it's one more undefined behavior that lurks in thousands of programs.
>
> Hoare admitted to only one billion-dollar mistake. No one dares count
> how many to write in C's ledger. This was predicted, wasn't it?
> Everyone loved C because it was fast: it was performant, because it
> never met a runtime check it didn't eschew--recall again Kernighan
> punking Pascal on this exact point--and it was quick for the programmer
> to write because it never met a _compile_-time check it didn't eschew.
> C was born as a language for wizards who never made mistakes.
>
> The problem is that, like James Madison's fictive government of angels,
> such entities don't exist. The staff of the CSRC itself may have been
> overwhelmingly populated with frank, modest, and self-deprecating
> people--and I'll emphasize here that I'm aware of no accounts that this
> is anything but true--but C unfortunately played a part in stoking a
> culture of pretension among software developers. "C is a language in
> which wizards program. I program in C. Therefore I'm a wizard." is how
> the syllogism (spot the fallacy) went. I don't know who does more
> damage--the people who believe their own BS, or the ones who know
> they're scamming their colleagues.
>
>> There are two arguments for malloc(0). Most importantly, it caters for
>> a limiting case for aggregates generated at runtime--an instance of
>> Kernighan's Law, "Do nothing gracefully". It also provides a way to
>> create a distinctive pointer to impart some meta-information, e.g.
>> "TBD" or "end of subgroup", distinct from the null pointer, which
>> merely denotes absence.
>
> I think I might be confused now. I've frequently seen arrays of structs
> initialized from lists of literals ending in a series of "NULL"
> structure members, in code that antedates or ignores C99's wonderful
> feature of designated initializers for aggregate types.[3] How does
> malloc(0) get this job done and what benefit does it bring?
>
> Last time I posted to TUHS I mentioned a proposal for explicit tail-call
> elimination in C. I got the syntax wrong. The proposal was "return
> goto;". The WG14 document number is N2920 and it's by Alex Gilding.
> Good stuff.[4] I hope we see it in C2y.
>
> Predictably, I must confess that I didn't make much headway on
> Schiller's 1975 "secure kernel" paper. Maybe next time.
>
> Regards,
> Branden
>
> [1] https://web.cs.dal.ca/~jamie/UWO/C/the10fromHenryS.html
>
> I can easily imagine that the tenet held at _some_ point in
> C's history. It's _got_ to be the reason that the language
> relegates memset() and memcpy() to the standard library (or to the
> programmer's own devise)! :-O
>
> [2] Kernighan & Ritchie, _The C Programming Language_, 2nd edition, p. 2
>
> Having thus admitted the camel's nose to the tent, K&R would have
> done the world a major service by making memset(), or at least
> bzero(), a language feature, the latter perhaps by having "= 0"
> validly apply to an lvalue of non-primitive type. Okay,
> _potentially_ a major service. You'd still need the self-regarding
> wizard programmers to bother coding it, which they wouldn't in many
> cases "because speed". Move fast, break stuff.
>
> C++ screwed this up too, and stubbornly stuck by it for a long time.
>
> https://cplusplus.github.io/CWG/issues/178.html
>
> [3] https://gcc.gnu.org/onlinedocs/gcc/Designated-Inits.html
> [4] https://www.open-std.org/jtc1/sc22/wg14/www/docs/n2920.pdf
^ permalink raw reply [flat|nested] 74+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-28 17:59 ` Bakul Shah via TUHS
@ 2024-09-28 22:07 ` Douglas McIlroy
2024-09-28 23:05 ` Rob Pike
0 siblings, 1 reply; 74+ messages in thread
From: Douglas McIlroy @ 2024-09-28 22:07 UTC (permalink / raw)
To: TUHS main list
I have to concede Branden's "gotcha". Struct copying is definitely not O(1).
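To make that concrete, a minimal sketch (mine; the names are
illustrative): a struct passed or returned by value costs a copy
proportional to its size, which compilers typically lower to a block
move.

    struct big {
        char payload[4096];
    };

    /* Each by-value pass and return copies all 4096 bytes, typically as
       an inlined memcpy or a block-move loop, so the innocent-looking
       assignment below is O(sizeof(struct big)), not O(1). */
    static struct big identity(struct big b)
    {
        return b;
    }

    void demo(struct big *src, struct big *dst)
    {
        *dst = identity(*src);
    }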
A real-life hazard of non-O(1) operations. Vic Vyssotsky put bzero (I
forget what Vic called it) into Fortran II. Sometime later, he found that a
percolation simulation was running annoyingly slowly. It took some study to
discover that the real inner loop of the program was not the percolation,
which touched only a small fraction of a 2D field. More time went into the
innocuous bzero initializer. The fix was to ADD code to the inner loop to
remember what entries had been touched, and initialize only those for the
next round of the simulation.
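A sketch of that fix in C (names and sizes are illustrative; the
original was Fortran II):

    #include <stddef.h>

    #define N 512

    static double field[N][N];
    static size_t touched[N * N];   /* flat indices written this round */
    static size_t ntouched;

    /* Record the first write to a cell so it can be cheaply re-zeroed. */
    static void set_cell(size_t i, size_t j, double v)
    {
        if (field[i][j] == 0.0 && v != 0.0)
            touched[ntouched++] = i * N + j;
        field[i][j] = v;
    }

    /* Re-initialize only what was touched: O(ntouched), not O(N*N). */
    static void clear_touched(void)
    {
        while (ntouched > 0) {
            size_t k = touched[--ntouched];
            field[k / N][k % N] = 0.0;
        }
    }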
> for a while, every instruction has [sic] an exactly predictable and
> constant cycle count. ...[Then] all of a sudden you had instructions
> with O(n) cycle counts.
O(n) cycle counts were nothing new. In the 1950s we had the IBM 1620 with
arbitrary-length arithmetic and the 709 with "convert" instructions whose
cycle count went up to 256.
>> spinelessly buckled to allow malloc(0) to return 0, as some
>> implementations gratuitously did.
> What was the alternative? There was no such thing as an exception, and
> if a pointer was an int and an int was as wide as a machine address,
> there'd be no way to indicate failure in-band, either.
What makes you think allocating zero space should fail? If the size n of a
set is determined at run time, why should one have to special-case its
space allocation when n=0? Subsequent processing of the form for(i=0; i<n;
i++) {...} will handle it gracefully with no special code. Malloc should do
as it did in v7--return a non-null pointer different from any other active
malloc pointer, as Bakul stated. If worse comes to worst[1] this can be
done by padding up to the next feasible size. Regardless of how the pointer
is created, any access via it would of course be out of bounds and hence
wrong.
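Concretely, a sketch of that limiting case, assuming the v7-style
malloc just described:

    #include <stdlib.h>

    /* Allocate and zero a set of n doubles, where n is computed at run
       time and may be zero. No special case is needed: the loop body
       simply never executes when n == 0, and free() accepts the result
       either way. */
    static double *new_set(size_t n)
    {
        double *p = malloc(n * sizeof *p);  /* v7-style: non-null even for n == 0 */
        if (p != NULL)
            for (size_t i = 0; i < n; i++)
                p[i] = 0.0;
        return p;
    }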
> How does malloc(0) get this job done and what benefit does it bring?
If I understand the "job" (about initializing structure members) correctly,
malloc(0) has no bearing on it. The benefit lies elsewhere.
Apropos of tail calls, Rob Pike had a nice name for an explicit tail call,
"become". It's certainly reasonable, though, to make compilers recognize
tail calls implicitly.
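A sketch of a recognizable tail call (whether it actually becomes a
jump depends on the compiler and optimization level):

    /* The recursive call is in tail position: nothing remains to be done
       in this frame afterward, so an optimizing compiler may emit a jump
       ("become") and reuse the current stack frame. */
    static unsigned long fact(unsigned long n, unsigned long acc)
    {
        return n <= 1 ? acc : fact(n - 1, n * acc);  /* e.g. fact(10, 1) */
    }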
[1] Worse didn't come to worst in the original malloc. It attached metadata
to each block, so even blocks of size zero consumed some memory.
Doug
On Sat, Sep 28, 2024 at 1:59 PM Bakul Shah via TUHS <tuhs@tuhs.org> wrote:
> Just responding to random things that I noticed:
>
> You don't need special syntax for tail-call. It should be done
> transparently when a call is the last thing that gets executed. Special
> syntax will merely allow confused people to use it in the wrong place and
> get confused more.
>
> malloc(0) should return a unique ptr. So that "T* a = malloc(0); T* b =
> malloc(0); a != (T*)0 && a != b". Without this, malloc(0) acts differently
> from malloc(n) for n > 0.
>
> Note that except for arrays, function arguments & result are copied so
> copying a struct makes perfect sense. Passing arrays by reference may have
> been due to residual Fortran influence! [Just guessing] Also note: that one
> exception has been the cause of many problems.
>
> In any case you have not argued convincingly about why dynamic memory
> allocation should be in the language (runtime) :-) And adding that wouldn't
> have fixed any of the existing problems with the language.
>
> Bakul
^ permalink raw reply [flat|nested] 74+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-28 22:07 ` Douglas McIlroy
@ 2024-09-28 23:05 ` Rob Pike
2024-09-28 23:30 ` Warner Losh
0 siblings, 1 reply; 74+ messages in thread
From: Rob Pike @ 2024-09-28 23:05 UTC (permalink / raw)
To: Douglas McIlroy; +Cc: TUHS main list
I wrote a letter to the ANSI C (1989) committee.
Please allow malloc(0).
Please allow zero-length arrays.
I got two letters back, saying that malloc(0) is illegal because
zero-length arrays are illegal, and the other vice versa.
I fumed.
-rob
^ permalink raw reply [flat|nested] 74+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-28 23:05 ` Rob Pike
@ 2024-09-28 23:30 ` Warner Losh
2024-09-29 10:06 ` Ralph Corderoy
2024-09-30 12:15 ` Dan Cross
0 siblings, 2 replies; 74+ messages in thread
From: Warner Losh @ 2024-09-28 23:30 UTC (permalink / raw)
To: Rob Pike; +Cc: Douglas McIlroy, TUHS main list
On Sat, Sep 28, 2024, 5:05 PM Rob Pike <robpike@gmail.com> wrote:
> I wrote a letter to the ANSI C (1989) committee.
>
> Please allow malloc(0).
> Please allow zero-length arrays.
>
> I got two letters back, saying that malloc(0) is illegal because
> zero-length arrays are illegal, and the other vice versa.
>
> I fumed.
>
And now we have zero-length arrays and UB malloc(0).
Warner
^ permalink raw reply [flat|nested] 74+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-28 23:30 ` Warner Losh
@ 2024-09-29 10:06 ` Ralph Corderoy
2024-09-29 12:25 ` Warner Losh
2024-09-30 12:15 ` Dan Cross
1 sibling, 1 reply; 74+ messages in thread
From: Ralph Corderoy @ 2024-09-29 10:06 UTC (permalink / raw)
To: TUHS main list; +Cc: Douglas McIlroy
Hi Warner,
> > I got two letters back, saying that malloc(0) is illegal because
> > zero-length arrays are illegal, and the other vice versa.
>
> And now we have zero-length arrays and UB malloc(0).
malloc(0) isn't undefined behaviour but implementation-defined.
Are there prominent modern implementations which consider it an error and
so return NULL?
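A quick probe one can compile against a given libc (the result is an
implementation choice, so the output varies):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        void *a = malloc(0);
        void *b = malloc(0);

        printf("malloc(0) -> %p, %p: %s\n", a, b,
               a == NULL ? "returns NULL" :
               a != b    ? "distinct non-null pointers" :
                           "one shared non-null pointer");
        free(a);
        free(b);
        return 0;
    }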
--
Cheers, Ralph.
^ permalink raw reply [flat|nested] 74+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-29 10:06 ` Ralph Corderoy
@ 2024-09-29 12:25 ` Warner Losh
2024-09-29 15:17 ` Ralph Corderoy
0 siblings, 1 reply; 74+ messages in thread
From: Warner Losh @ 2024-09-29 12:25 UTC (permalink / raw)
To: Ralph Corderoy; +Cc: TUHS main list, Douglas McIlroy
On Sun, Sep 29, 2024, 4:06 AM Ralph Corderoy <ralph@inputplus.co.uk> wrote:
> Hi Warner,
>
> > > I got two letters back, saying that malloc(0) is illegal because
> > > zero-length arrays are illegal, and the other vice versa.
> >
> > And now we have zero-length arrays and UB malloc(0).
>
> malloc(0) isn't undefined behaviour but implementation-defined.
>
In modern C there is no difference between those two concepts.
> Are there prominent modern implementations which consider it an error
> and so return NULL?
>
Many. There are a dozen or more malloc implementations in use and they all
are slightly different.
Warner
> --
> Cheers, Ralph.
>
^ permalink raw reply [flat|nested] 74+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-29 12:25 ` Warner Losh
@ 2024-09-29 15:17 ` Ralph Corderoy
0 siblings, 0 replies; 74+ messages in thread
From: Ralph Corderoy @ 2024-09-29 15:17 UTC (permalink / raw)
To: TUHS; +Cc: Douglas McIlroy
Hi Warner,
> > malloc(0) isn't undefined behaviour but implementation-defined.
>
> In modern C there is no difference between those two concepts.
Can you explain more about your view, or give a link if it's an accepted
opinion? I'm used to an implementation stating its choices from those
given by a C standard, e.g.
(42) Whether the calloc, malloc, realloc, and aligned_alloc functions
return a null pointer or a pointer to an allocated object when
the size requested is zero (7.24.3).
I'd call malloc(0) and know it's not undefined behaviour but one of
those choices.
--
Cheers, Ralph.
^ permalink raw reply [flat|nested] 74+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-28 23:30 ` Warner Losh
2024-09-29 10:06 ` Ralph Corderoy
@ 2024-09-30 12:15 ` Dan Cross
1 sibling, 0 replies; 74+ messages in thread
From: Dan Cross @ 2024-09-30 12:15 UTC (permalink / raw)
To: Warner Losh; +Cc: Douglas McIlroy, TUHS main list
On Sat, Sep 28, 2024 at 7:37 PM Warner Losh <imp@bsdimp.com> wrote:
> On Sat, Sep 28, 2024, 5:05 PM Rob Pike <robpike@gmail.com> wrote:
>> I wrote a letter to the ANSI C (1989) committee.
>>
>> Please allow malloc(0).
>> Please allow zero-length arrays.
>>
>> I got two letters back, saying that malloc(0) is illegal because zero-length arrays are illegal, and the other vice versa.
>>
>> I fumed.
>
> And now we have zero-length arrays and UB malloc(0).
I'm late to this; I know. But I wonder if perhaps you meant realloc(p,
0) being made UB in C23? This caused something of a kerfuffle, perhaps
exemplified by an incredibly poorly written "article" in ACM Queue.
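For the record, the idiom at issue in sketch form; implementations
historically disagreed about the commented line, which is part of what
the committee gave up trying to reconcile:

    #include <stdlib.h>

    /* Pre-C23, some implementations freed p and returned NULL here;
       others freed p and returned a fresh zero-size allocation. C23
       makes realloc with a size of zero undefined, so a portable
       "shrink to nothing" has to be spelled out explicitly. */
    void *shrink_to_nothing(void *p)
    {
        /* return realloc(p, 0);     <- the now-undefined idiom */
        free(p);
        return NULL;
    }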
I asked Jean-Heyd Meneide why this was done, and he broke it down for
me (the gist of it being, "it was the least-bad of a bunch of bad
options..."). The take from the C standards committee was actually
very reasonable, given the constraints they are forced to live under
nowadays, even if some of the rank and file were infuriated. But I
think the overall situation is illustrative of the sort of dynamic Rob
alluded to. The motivations of C compiler writers and programmers have
diverged; a great frustration for me is that, despite its origins as a
language meant to support systems programming and operating-system
implementation, compiler writers these days seem almost antagonistic
to that use. Linux, for instance, is not written in "C" so much as
"Linux C", a dialect of the language arrived at by selectively setting
compiler flags until some reasonable set of presets tames it enough to
be useful.
- Dan C.
^ permalink raw reply [flat|nested] 74+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-28 16:58 ` G. Branden Robinson
2024-09-28 17:47 ` Luther Johnson
2024-09-28 17:59 ` Bakul Shah via TUHS
@ 2024-09-28 18:01 ` G. Branden Robinson
2024-10-01 13:13 ` arnold
2024-09-28 18:05 ` Larry McVoy
3 siblings, 1 reply; 74+ messages in thread
From: G. Branden Robinson @ 2024-09-28 18:01 UTC (permalink / raw)
To: TUHS main list
[self-follow-up]
At 2024-09-28T11:58:16-0500, G. Branden Robinson wrote:
> > malloc was in v7, before the C standard was written. The standard
> > spinelessly buckled to allow malloc(0) to return 0, as some
> > implementations gratuitously did.
>
> What was the alternative? There was no such thing as an exception, and
> if a pointer was an int and an int was as wide as a machine address,
> there'd be no way to indicate failure in-band, either.
While I'm making enemies of C advocates, let me just damn myself further
by suggesting an answer to my own question.
The obvious and correct thing to do was to have any standard library
function that could possibly fail return a structure type instead of a
scalar. Such a type would have two components: the value of interest
when valid, and an error indicator.[1]
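In sketch form, with hypothetical names (this is nobody's actual API):

    #include <stdlib.h>

    /* A struct-returning allocator: the value of interest plus an
       explicit error indicator, with no in-band sentinel value. */
    struct alloc_result {
        void *ptr;   /* meaningful only when err == 0 */
        int   err;   /* 0 on success, nonzero on failure */
    };

    struct alloc_result try_malloc(size_t n)
    {
        struct alloc_result r = { malloc(n), 0 };
        if (r.ptr == NULL && n != 0)
            r.err = 1;
        return r;    /* the O(1) struct copy in question */
    }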
As John Belushi would have said at the time such design decisions were
being made, "but nooooooooo". Returning a struct was an obviously
HORRIBLE idea. My god, you might be stacking two ints instead of one.
That doubles the badness! Oh, how we yearn for the days of the PDP-7,
when resources were so scarce that a system call didn't return
_anything_. If it failed, the carry flag was set. "One bit of return
value ought to be enough for anyone," as I hope Ken Thompson never said.
Expounders of Doug's tenet would, or should, have acknowledged that by
going to the library _at all_, they were giving up any O(1) guarantee,
and likely starting something O(n) or worse in time and/or space. So
what's so awful about sticking on a piece of O(1) overhead? In the
analysis of algorithms class lecture that the wizards slept through, it
was pointed out that only the highest-order term is retained. Well, the
extra int was easy to see in memory and throw a hissy fit about, and I
suppose a hissy fit is exactly what happened.
Much better to use a global library symbol. Call it "errno". That's
sure to never cause anyone any problems with reentrancy or concurrency
whatsoever. After all:
"...C offers only straightforward, single-thread control flow: tests,
loops, grouping, and subprograms, but not multiprogramming, parallel
operations, synchronization, or coroutines." (K&R 2e, p. 2)
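The hazard, sketched with a hypothetical signal handler (the point
being that errno is a single global slot):

    #include <errno.h>
    #include <signal.h>
    #include <stdio.h>

    static void on_interrupt(int sig)
    {
        (void)sig;
        /* Any failing call here overwrites the one global errno... */
        remove("/no/such/file");        /* sets errno to ENOENT */
    }

    int main(void)
    {
        signal(SIGINT, on_interrupt);
        FILE *f = fopen("/also/no/such/file", "r");
        /* ...so if SIGINT lands between the fopen and the report below,
           errno no longer describes the fopen failure. */
        if (f == NULL)
            printf("fopen: errno = %d\n", errno);
        return 0;
    }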
It's grimly fascinating to me now to observe how many security
vulnerabilities and other disasters have arisen from the determination
of C's champions to apply it to all problems, and with especial fervor
to those that K&R explicitly acknowledged it was ill-suited for.
Real wizards, it seems, know only one spell, and it is tellingly
hammer-shaped.
Regards,
Branden
[1] Much later, we came to know this (in slightly cleaner form) as an
"option type". Rust advocates make a big, big deal of this. Only
a slightly bigger one than it deserves, but I perceive a replay of
C's cultural history in the passion of Rust's advocates. Or maybe
something less edifying than passion accounts for this:
https://fasterthanli.me/articles/the-rustconf-keynote-fiasco-explained
^ permalink raw reply [flat|nested] 74+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-28 18:01 ` G. Branden Robinson
@ 2024-10-01 13:13 ` arnold
2024-10-01 13:32 ` Larry McVoy
` (2 more replies)
0 siblings, 3 replies; 74+ messages in thread
From: arnold @ 2024-10-01 13:13 UTC (permalink / raw)
To: tuhs, g.branden.robinson
This is long, my apologies.
"G. Branden Robinson" <g.branden.robinson@gmail.com> wrote:
> [ screed omitted ]
Branden, as they say, hindsight is 20-20.
But one needs to take into account that Unix and C evolved,
and in particular on VERY small machines. IIRC the original
Unix PDP-11s didn't even have split I/D spaces. Getting a decent
compiler into that small an address space isn't easy (and by now
is a lost art).
The evolution was gradual and, shall we say "organic", without
the pretension and formalisms of a language committee, but simply
to meet the needs of the Bell Labs researchers.
The value of a high level language for OS work was clear from
Multics. But writing a PL/1 compiler from scratch for the tiny
PDP-11 address space made no sense. Thus the path from BCPL to B to C.
Today, new languages are often reactions to older ones.
Java, besides the JVM portability, tried to clean up the syntax
of C / C++ and add some safety and modernism (graphics, concurrency).
C# was a reaction to (and a way to compete with) Java.
Go was very clearly in reaction to current C++, but headed back
in the direction of simplicity. Successfully, IMHO.
Would the world have been better off if Ada had caught on everywhere?
Probably. When I was in grad school studying language design, circa 1982,
it was expected to do so. But the language was VERY challenging for
compiler writers. C was easy to port, and then Unix along with it.
C'est la vie.
Larry wrote:
> I have a somewhat different view. I have a son who is learning to program
> and he asked me about C. I said "C is like driving a sports car on a
> twisty mountain road that has cliffs and no guard rails. If you want to
> check your phone while you are driving, it's not for you. It requires
> your full, focussed attention. So that sounds bad, right? Well, if
> you are someone who enjoys driving a sports car, and are good at it,
> perhaps C is for you."
This goes back to the evolution thing. At the time, C was a huge
step up from FORTRAN and assembly. Programmers who moved to C
appreciated all it gave them over what they had at the time.
C programmers weren't wizards, they were simply using the best
available tool.
Going from a bicycle to an automobile is a big jump (to
continue Larry's analogy).
Larry again:
> I ran a company that developed a product that was orders of magnitude more
> complex than the v7 kernel (low bar but still) all in C and we had *NONE*
> of those supposed problems. We were careful to use stuff that worked,
> I'm "famous" in that company as the guy was viewed as "that was invented
> after 1980 so Larry won't let us use it". Not true, we used mmap and used
> POSIX signals, but mostly true. If you stick to the basics, C just works.
> And is portable, we supported every Unix (even SCO), MacOS, Windows, and
> all the Linux variants from ARM to IBM mainframes.
This is also my experience with gawk, which runs on everything from
ARM (Linux) to Windows to mac to VMS to z/OS (S/390x). OS issues
give me more grief than language issues.
> All that said, I get it, you want guard rails. You are not wrong, the
> caliber of programmers these days are nowhere near Bell Labs or Sun or
> my guys.
This is a very important, key point. As more and more people have
entered the field, the quality / education / knowledge / whatever
has gone down. What was normal to learn and use back in 1983 is
now too difficult for many, if not most, people in the field, even
good ones.
The people I work with here (Israel) don't know who Donald Knuth is.
Two of the people in my group, over 40, didn't know what Emacs is.
Shell scripting seems to be too hard for many people to master,
and I see a huge amount of Cargo Cult Programming when it comes
to things like scripts and Unix tools.
Thus Go and Rust are good things, taking the sharp tools out of the
hands of the people who aren't qualified to use them. The same goes
for Python.
But for me, and I think others of my vintage, this state of affairs
seems sad.
My 4 cents,
Arnold
^ permalink raw reply [flat|nested] 74+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-10-01 13:13 ` arnold
@ 2024-10-01 13:32 ` Larry McVoy
2024-10-01 13:47 ` arnold
2024-10-01 15:44 ` Bakul Shah via TUHS
2024-10-01 16:40 ` [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum) Paul Winalski
2 siblings, 1 reply; 74+ messages in thread
From: Larry McVoy @ 2024-10-01 13:32 UTC (permalink / raw)
To: arnold; +Cc: tuhs
On Tue, Oct 01, 2024 at 07:13:04AM -0600, arnold@skeeve.com wrote:
> Would the word have been better off if Ada had caught on everywhere?
> Probably. When I was in grad school studying language design, circa 1982,
> it was expected to do so. But the language was VERY challenging for
> compiler writers.
Huh. Rob Netzer and I, as grad students, took cs701 and cs702 at UW Madison.
It was the compilers course (701) and the really hard compilers course (702)
at the time. The first course was to write a compiler for a subset of Ada
and the second one increased the subset to be almost complete.
We were supposed to do it on an IBM mainframe because the professor had his
own version of lex/yacc there. Rob had a 3b1 and asked if we could do it
there if he rewrote the parser stuff. Prof said sure.
In one semester we had a compiler, no optimizer and not much in the
way of graceful error handling, but it compiled stuff that ran. We did
all of Ada other than late binding of variables (I think that was Ada's
templates) and threads and probably some other stuff I don't remember.
Rob is pretty smart, went on to be a tenured prof at Brown before going
back to industry. Maybe he did all the heavy lifting, but I didn't find
that project to be very challenging. Did I miss something?
> This is a very important, key point. As more and more people have
> entered the field, the quality / education / knowledge / whatever
> has gone down. What was normal to learn and use back in 1983 is
> now too difficult for many, if not most, people, even good ones, in
> the field now.
>
> But for me, and I think others of my vintage, this state of affairs
> seems sad.
100% agree. A sharp young kid I know is/was working on finding bugs in
binaries. He came to me for some insight, and I first had to understand
what he was doing; when I did, I kept saying "just hire people that don't
do this stupid stuff", and he kept laughing at me and said it was impossible.
I don't consider myself to be that good of a programmer, I can point to
dozens of people my age that can run circles around me and I'm sure there
are many more. But apparently the bar is pretty low these days and I
agree, that's sad.
--
---
Larry McVoy Retired to fishing http://www.mcvoy.com/lm/boat
^ permalink raw reply [flat|nested] 74+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-10-01 13:32 ` Larry McVoy
@ 2024-10-01 13:47 ` arnold
2024-10-01 14:01 ` Larry McVoy
2024-10-01 16:49 ` Paul Winalski
0 siblings, 2 replies; 74+ messages in thread
From: arnold @ 2024-10-01 13:47 UTC (permalink / raw)
To: lm, arnold; +Cc: tuhs
Larry McVoy <lm@mcvoy.com> wrote:
> On Tue, Oct 01, 2024 at 07:13:04AM -0600, arnold@skeeve.com wrote:
> > Would the world have been better off if Ada had caught on everywhere?
> > Probably. When I was in grad school studying language design, circa 1982,
> > it was expected to do so. But the language was VERY challenging for
> > compiler writers.
>
> Huh. Rob Netzer and I, as grad students, took cs701 and cs702 at UW Madison.
> It was the compilers course (701) and the really hard compilers course (702)
> at the time. The first course was to write a compiler for a subset of Ada
> and the second one increased the subset to be almost complete.
>
> We were supposed to do it on an IBM mainframe because the professor had his
> own version of lex/yacc there. Rob had a 3b1 and asked if we could do it
> there if he rewrote the parser stuff. Prof said sure.
>
> In one semester we had a compiler, no optimizer and not much in the
> way of graceful error handling, but it compiled stuff that ran. We did
> all of Ada other than late binding of variables (I think that was Ada's
> templates) and threads and probably some other stuff I don't remember.
Did you do generics? That and the run time, which had some real-time
bits to it (*IIRC*, it's been a long time), as well as the cross
object code type checking, would have been real bears.
Like many things, the first 90% is easy, the second 90% is hard. :-)
> I don't consider myself to be that good of a programmer, I can point to
> dozens of people my age that can run circles around me and I'm sure there
> are many more.
You are undoubtedly better than you give yourself credit for, even
if there were people who could run circles around you. I learned
a long time ago, that no matter how good you are, there's always
someone better than you at something. I decided long ago to not
try to compete with Superman.
> But apparently the bar is pretty low these days and I agree, that's sad.
And it makes it much less fun to be out in the working world. :-(
Arnold
^ permalink raw reply [flat|nested] 74+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-10-01 13:47 ` arnold
@ 2024-10-01 14:01 ` Larry McVoy
2024-10-01 14:18 ` arnold
2024-10-01 14:25 ` Luther Johnson
2024-10-01 16:49 ` Paul Winalski
1 sibling, 2 replies; 74+ messages in thread
From: Larry McVoy @ 2024-10-01 14:01 UTC (permalink / raw)
To: arnold; +Cc: tuhs
On Tue, Oct 01, 2024 at 07:47:10AM -0600, arnold@skeeve.com wrote:
> Larry McVoy <lm@mcvoy.com> wrote:
>
> > On Tue, Oct 01, 2024 at 07:13:04AM -0600, arnold@skeeve.com wrote:
> > > Would the world have been better off if Ada had caught on everywhere?
> > > Probably. When I was in grad school studying language design, circa 1982,
> > > it was expected to do so. But the language was VERY challenging for
> > > compiler writers.
> >
> > Huh. Rob Netzer and I, as grad students, took cs701 and cs702 at UW Madison.
> > It was the compilers course (701) and the really hard compilers course (702)
> > at the time. The first course was to write a compiler for a subset of Ada
> > and the second one increased the subset to be almost complete.
> >
> > We were supposed to do it on an IBM mainframe because the professor had his
> > own version of lex/yacc there. Rob had a 3b1 and asked if we could do it
> > there if he rewrote the parser stuff. Prof said sure.
> >
> > In one semester we had a compiler, no optimizer and not much in the
> > way of graceful error handling, but it compiled stuff that ran. We did
> > all of Ada other than late binding of variables (I think that was Ada's
> > templates) and threads and probably some other stuff I don't remember.
>
> Did you do generics? That and the run time, which had some real-time
> bits to it (*IIRC*, it's been a long time), as well as the cross
> object code type checking, would have been real bears.
None of those ring a bell, so ...
> Like many things, the first 90% is easy, the second 90% is hard. :-)
I guess we did the easy stuff :-(
> > I don't consider myself to be that good of a programmer, I can point to
> > dozens of people my age that can run circles around me and I'm sure there
> > are many more.
>
> You are undoubtedly better than you give yourself credit for, even
> if there were people who could run circles around you. I learned
> a long time ago, that no matter how good you are, there's always
> someone better than you at something. I decided long ago to not
> try to compete with Superman.
Funny, I've come to the same conclusion, both in programming and my
retirement hobby. There is always someone way better than me, but
you are correct, that doesn't mean I'm awful. Just have more to
learn.
A buddy pointed out that I was probably better than 80% of the people
leaving the dock, it's just I fish with a guy who is better than pretty
much everyone.
> > But apparently the bar is pretty low these days and I agree, that's sad.
>
> And it makes it much less fun to be out in the working world. :-(
As a guy in his 2nd retirement (1st didn't stick) I can tell you I am
so happy not having to deal with work stuff. My buddies who are still
working tell me stories I find difficult to believe. They all say I'm
so politically incorrect that I wouldn't last a week in today's world.
If their stories are true, yeah, that's not for me.
Weird politics and crappy programmers, count me out.
^ permalink raw reply [flat|nested] 74+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-10-01 14:01 ` Larry McVoy
@ 2024-10-01 14:18 ` arnold
2024-10-01 14:25 ` Luther Johnson
1 sibling, 0 replies; 74+ messages in thread
From: arnold @ 2024-10-01 14:18 UTC (permalink / raw)
To: lm, arnold; +Cc: tuhs
Closing off the thread...
> > Did you do generics? That and the run time, which had some real-time
> > bits to it (*IIRC*, it's been a long time), as well as the cross
> > object code type checking, would have been real bears.
>
> None of those ring a bell, so ...
>
> > Like many things, the first 90% is easy, the second 90% is hard. :-)
>
> I guess we did the easy stuff :-(
Even the easy stuff is good learning.
> Funny, I've come to the same conclusion, both in programming and my
> retirement hobby. There is always someone way better than me, but
> you are correct, that doesn't mean I'm awful. Just have more to
> learn.
Right.
> > > But apparently the bar is pretty low these days and I agree, that's sad.
> >
> > And it makes it much less fun to be out in the working world. :-(
>
> As a guy in his 2nd retirement (1st didn't stick) I can tell you I am
> so happy not having to deal with work stuff. My buddies who are still
> working tell me stories I find difficult to believe. They all say I'm
> so politically incorrect that I wouldn't last a week in today's world.
> If their stories are true, yeah, that's not for me.
>
> Weird politics and crappy programmers, count me out.
I don't have to deal with the politics, and my team is good, but
the company has a lot of crappy code.
I am planning to retire quite soon, too. :-)
Arnold
^ permalink raw reply [flat|nested] 74+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-10-01 14:01 ` Larry McVoy
2024-10-01 14:18 ` arnold
@ 2024-10-01 14:25 ` Luther Johnson
2024-10-01 14:56 ` Dan Cross
1 sibling, 1 reply; 74+ messages in thread
From: Luther Johnson @ 2024-10-01 14:25 UTC (permalink / raw)
To: tuhs
I think because of the orders of magnitude increase in the demand
for programmers, we now have a very large number of programmers with
little or no math and science (and computer science doesn't count in the
point I'm trying to make here, if that's your only science, you're not
going to have the models in your head from other disciplines to give you
useful analogs) background, and that's a big change from 40 years ago.
So that has had an effect on who is programming, how they think about
it, and how languages have been marketed to that programming audience. IMHO.
On 10/01/2024 07:01 AM, Larry McVoy wrote:
> On Tue, Oct 01, 2024 at 07:47:10AM -0600, arnold@skeeve.com wrote:
>> Larry McVoy <lm@mcvoy.com> wrote:
>>
>>> On Tue, Oct 01, 2024 at 07:13:04AM -0600, arnold@skeeve.com wrote:
>>>> Would the world have been better off if Ada had caught on everywhere?
>>>> Probably. When I was in grad school studying language design, circa 1982,
>>>> it was expected to do so. But the language was VERY challenging for
>>>> compiler writers.
>>> Huh. Rob Netzer and I, as grad students, took cs701 and cs702 at UW Madison.
>>> It was the compilers course (701) and the really hard compilers course (702)
>>> at the time. The first course was to write a compiler for a subset of Ada
>>> and the second one increased the subset to be almost complete.
>>>
>>> We were supposed to do it on an IBM mainframe because the professor had his
>>> own version of lex/yacc there. Rob had a 3b1 and asked if we could do it
>>> there if he rewrote the parser stuff. Prof said sure.
>>>
>>> In one semester we had a compiler, no optimizer and not much in the
>>> way of graceful error handling, but it compiled stuff that ran. We did
>>> all of Ada other than late binding of variables (I think that was Ada's
>>> templates) and threads and probably some other stuff I don't remember.
>> Did you do generics? That and the run time, which had some real-time
>> bits to it (*IIRC*, it's been a long time), as well as the cross
>> object code type checking, would have been real bears.
> None of those ring a bell, so ...
>
>> Like many things, the first 90% is easy, the second 90% is hard. :-)
> I guess we did the easy stuff :-(
>
>>> I don't consider myself to be that good of a programmer, I can point to
>>> dozens of people my age that can run circles around me and I'm sure there
>>> are many more.
>> You are undoubtedly better than you give yourself credit for, even
>> if there were people who could run circles around you. I learned
>> a long time ago, that no matter how good you are, there's always
>> someone better than you at something. I decided long ago to not
>> try to compete with Superman.
> Funny, I've come to the same conclusion, both in programming and my
> retirement hobby. There is always someone way better than me, but
> you are correct, that doesn't mean I'm awful. Just have more to
> learn.
>
> A buddy pointed out that I was probably better than 80% of the people
> leaving the dock, it's just I fish with a guy who is better than pretty
> much everyone.
>
>>> But apparently the bar is pretty low these days and I agree, that's sad.
>> And it makes it much less fun to be out in the working world. :-(
> As a guy in his 2nd retirement (1st didn't stick) I can tell you I am
> so happy not having to deal with work stuff. My buddies who are still
> working tell me stories I find difficult to believe. They all say I'm
> so politically incorrect that I wouldn't last a week in today's world.
> If their stories are true, yeah, that's not for me.
>
> Weird politics and crappy programmers, count me out.
>
^ permalink raw reply [flat|nested] 74+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-10-01 14:25 ` Luther Johnson
@ 2024-10-01 14:56 ` Dan Cross
2024-10-01 15:08 ` Stuff Received
` (2 more replies)
0 siblings, 3 replies; 74+ messages in thread
From: Dan Cross @ 2024-10-01 14:56 UTC (permalink / raw)
To: Luther Johnson; +Cc: tuhs
On Tue, Oct 1, 2024 at 10:32 AM Luther Johnson
<luther.johnson@makerlisp.com> wrote:
> I think because of the orders of magnitude increase in the demand
> for programmers, we now have a very large number of programmers with
> little or no math and science (and computer science doesn't count in the
> point I'm trying to make here, if that's your only science, you're not
> going to have the models in your head from other disciplines to give you
> useful analogs) background, and that's a big change from 40 years ago.
> So that has had an effect on who is programming, how they think about
> it, and how languages have been marketed to that programming audience. IMHO.
I've found a grounding in mathematics useful for programming, but
beyond some knowledge of the physical constraints that the universe
places on us and a very healthy appreciation for the scientific
method, I'm having a hard time understanding how the hard sciences
would help out too much. Electrical engineering seems like it would be
more useful than, say, chemistry or geology.
I talk to a lot of academics, and I think they see the situation
differently than is presented here. In a nutshell, the way a lot of
them look at it, the amount of computer science in the world increases
constantly while the amount of time they have to teach that to
undergraduates remains fixed. As a result, they have to pick and
choose what they teach very, very carefully, balancing a number of
criteria as they do so. What this translates to in the real world
isn't that the bar is lowered, but that the bar is different.
- Dan C.
^ permalink raw reply [flat|nested] 74+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-10-01 14:56 ` Dan Cross
@ 2024-10-01 15:08 ` Stuff Received
2024-10-01 15:20 ` Larry McVoy
2024-10-01 19:04 ` arnold
2 siblings, 0 replies; 74+ messages in thread
From: Stuff Received @ 2024-10-01 15:08 UTC (permalink / raw)
To: COFF
[-->COFF]
On 2024-10-01 10:56, Dan Cross wrote (in part):
> I've found a grounding in mathematics useful for programming, but
> beyond some knowledge of the physical constraints that the universe
> places on us and a very healthy appreciation for the scientific
> method, I'm having a hard time understanding how the hard sciences
> would help out too much. Electrical engineering seems like it would be
> more useful than, say, chemistry or geology.
I see this as related to the old question about whether it is easier to
teach domain experts to program or teach programmers about the domain.
(I worked for a company that wrote/sold scientific libraries for
embedded systems.) We had a mixture but the former was often easier.
S.
>
> I talk to a lot of academics, and I think they see the situation
> differently than is presented here. In a nutshell, the way a lot of
> them look at it, the amount of computer science in the world increases
> constantly while the amount of time they have to teach that to
> undergraduates remains fixed. As a result, they have to pick and
> choose what they teach very, very carefully, balancing a number of
> criteria as they do so. What this translates to in the real world
> isn't that the bar is lowered, but that the bar is different.
>
> - Dan C.
^ permalink raw reply [flat|nested] 74+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-10-01 14:56 ` Dan Cross
2024-10-01 15:08 ` Stuff Received
@ 2024-10-01 15:20 ` Larry McVoy
2024-10-01 15:38 ` Peter Weinberger (温博格) via TUHS
2024-10-01 19:04 ` arnold
2 siblings, 1 reply; 74+ messages in thread
From: Larry McVoy @ 2024-10-01 15:20 UTC (permalink / raw)
To: Dan Cross; +Cc: Luther Johnson, tuhs
On Tue, Oct 01, 2024 at 10:56:13AM -0400, Dan Cross wrote:
> On Tue, Oct 1, 2024 at 10:32???AM Luther Johnson
> <luther.johnson@makerlisp.com> wrote:
> > I think because of the orders of magnitude increase in the demand
> > for programmers, we now have a very large number of programmers with
> > little or no math and science (and computer science doesn't count in the
> > point I'm trying to make here, if that's your only science, you're not
> > going to have the models in your head from other disciplines to give you
> > useful analogs) background, and that's a big change from 40 years ago.
> > So that has had an effect on who is programming, how they think about
> > it, and how languages have been marketed to that programming audience. IMHO.
>
> I've found a grounding in mathematics useful for programming, but
> beyond some knowledge of the physical constraints that the universe
> places on us and a very healthy appreciation for the scientific
> method, I'm having a hard time understanding how the hard sciences
> would help out too much. Electrical engineering seems like it would be
> > more useful than, say, chemistry or geology.
>
> I talk to a lot of academics, and I think they see the situation
> differently than is presented here. In a nutshell, the way a lot of
> them look at it, the amount of computer science in the world increases
> constantly while the amount of time they have to teach that to
> undergraduates remains fixed. As a result, they have to pick and
> choose what they teach very, very carefully, balancing a number of
> criteria as they do so. What this translates to in the real world
> isn't that the bar is lowered, but that the bar is different.
I really wish that they made students take something like the PDP-11
assembly class - it was really systems architecture, you learned the
basic idea of a computer: a CPU, a bus to talk to memory, a bus to
talk to I/O, how a stack works, ideally how a context switch works
though that kinda blows minds (I personally don't think you are a
kernel programmer if you haven't implemented swtch() or at least
walked the code and understood all of it).
I did all that and developed a mental model of all computers that
has helped me over the last 4 decades. Yes, my model is overly
simplistic but it still works, even on the x86 craziness. I don't
know how you could get to that mental model with x86, x86 is too
weird. I don't really know which architecture is close to the
simplicity of a PDP-11 today. Anyone?
If I were teaching it, I'd just get a PDP-11 simulator and teach
on that. Maybe.
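For concreteness, a minimal, hypothetical sketch of what swtch() boils
down to, in portable C rather than PDP-11 assembly. The ucontext calls
(obsolete in POSIX, but still shipped by glibc) stand in for the handful
of register moves a real kernel does by hand, and the char array stands
in for a per-process kernel stack:

    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, task_ctx;

    static void task(void)
    {
        printf("task: got the CPU\n");
        /* swtch(): save our registers, load main's */
        swapcontext(&task_ctx, &main_ctx);
        printf("task: rescheduled\n");
    }

    int main(void)
    {
        static char stack[16 * 1024];   /* the task's "kernel stack" */

        getcontext(&task_ctx);
        task_ctx.uc_stack.ss_sp = stack;
        task_ctx.uc_stack.ss_size = sizeof stack;
        task_ctx.uc_link = &main_ctx;   /* where a return from task() lands */
        makecontext(&task_ctx, task, 0);

        printf("main: dispatching task\n");
        swapcontext(&main_ctx, &task_ctx);  /* context switch */
        printf("main: rescheduling task\n");
        swapcontext(&main_ctx, &task_ctx);  /* and again */
        printf("main: done\n");
        return 0;
    }

Single-step the two swapcontext() calls and you have most of the mental
model: save one register set, load another, and the other "process"
continues exactly where it left off.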
--
---
Larry McVoy Retired to fishing http://www.mcvoy.com/lm/boat
^ permalink raw reply [flat|nested] 74+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-10-01 15:20 ` Larry McVoy
@ 2024-10-01 15:38 ` Peter Weinberger (温博格) via TUHS
2024-10-01 15:50 ` ron minnich
0 siblings, 1 reply; 74+ messages in thread
From: Peter Weinberger (温博格) via TUHS @ 2024-10-01 15:38 UTC (permalink / raw)
To: Larry McVoy; +Cc: Luther Johnson, tuhs
Each generation bemoans the qualities of following generations.
(Perhaps justifiably in this case, but we're acting out a stereotyped
pattern.)
Having a mental model of a computer is a good idea, but I'd rather not
have to teach the details of PDP-11 condition codes (which changed in
unexpected ways over time). Now caches loom large, and should such a
course provide a mental model of smart phones? [I think it should, and
the course should cover secure boot, but that leaves lots less time
for assembly language.]
On Tue, Oct 1, 2024 at 11:20 AM Larry McVoy <lm@mcvoy.com> wrote:
>
> On Tue, Oct 01, 2024 at 10:56:13AM -0400, Dan Cross wrote:
> > On Tue, Oct 1, 2024 at 10:32???AM Luther Johnson
> > <luther.johnson@makerlisp.com> wrote:
> > > I think because of the orders of magnitude increase in the demand
> > > for programmers, we now have a very large number of programmers with
> > > little or no math and science (and computer science doesn't count in the
> > > point I'm trying to make here, if that's your only science, you're not
> > > going to have the models in your head from other disciplines to give you
> > > useful analogs) background, and that's a big change from 40 years ago.
> > > So that has had an effect on who is programming, how they think about
> > > it, and how languages have been marketed to that programming audience. IMHO.
> >
> > I've found a grounding in mathematics useful for programming, but
> > beyond some knowledge of the physical constraints that the universe
> > places on us and a very healthy appreciation for the scientific
> > method, I'm having a hard time understanding how the hard sciences
> > would help out too much. Electrical engineering seems like it would be
> > more useful than, say, chemistry or geology.
> >
> > I talk to a lot of academics, and I think they see the situation
> > differently than is presented here. In a nutshell, the way a lot of
> > them look at it, the amount of computer science in the world increases
> > constantly while the amount of time they have to teach that to
> > undergraduates remains fixed. As a result, they have to pick and
> > choose what they teach very, very carefully, balancing a number of
> > criteria as they do so. What this translates to in the real world
> > isn't that the bar is lowered, but that the bar is different.
>
> I really wish that they made students take something like the PDP-11
> assembly class - it was really systems architecture, you learned the
> basic idea of a computer: a CPU, a bus to talk to memory, a bus to
> talk to I/O, how a stack works, ideally how a context switch works
> though that kinda blows minds (I personally don't think you are a
> kernel programmer if you haven't implemented swtch() or at least
> walked the code and understood all of it).
>
> I did all that and developed a mental model of all computers that
> has helped me over the last 4 decades. Yes, my model is overly
> simplistic but it still works, even on the x86 craziness. I don't
> know how you could get to that mental model with x86, x86 is too
> weird. I don't really know which architecture is close to the
> simplicity of a PDP-11 today. Anyone?
>
> If I were teaching it, I'd just get a PDP-11 simulator and teach
> on that. Maybe.
> --
> ---
> Larry McVoy Retired to fishing http://www.mcvoy.com/lm/boat
^ permalink raw reply [flat|nested] 74+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-10-01 15:38 ` Peter Weinberger (温博格) via TUHS
@ 2024-10-01 15:50 ` ron minnich
0 siblings, 0 replies; 74+ messages in thread
From: ron minnich @ 2024-10-01 15:50 UTC (permalink / raw)
To: Peter Weinberger (温博格); +Cc: Luther Johnson, tuhs
by the way, if you are hankering to do bare metal programming again, and
you're done with C, tinygo is great, and it works on 200 or so boards, from
the BBC micro:bit on up.
tinygo gives you all the nice bits from go, and uses llvm as the backend,
so programs can be very small.
Tamago is also bare metal go, using the standard Go compiler, and has even
flown in space.
I had thought Rust and C would always be the only path to very low level
programming in tiny environments, but it turned out I was too pessimistic.
https://docs.google.com/presentation/d/1nqZicux6SloS8AynBu0B7cJYxw5pKu4IfCzq_vqWccw/edit?usp=sharing
It's been years now since I used C on bare metal; it's Rust or Go for me.
These languages give me all I ever needed for "first instruction after
reset" programming.
On Tue, Oct 1, 2024 at 8:38 AM Peter Weinberger (温博格) via TUHS <
tuhs@tuhs.org> wrote:
> Each generation bemoans the qualities of following generations.
> (Perhaps justifiably in this case, but we're acting out a stereotyped
> pattern.)
>
> Having a mental model of a computer is a good idea, but I'd rather not
> have to teach the details of PDP-11 condition codes (which changed in
> unexpected ways over time). Now caches loom large, and should such a
> course provide a mental model of smart phones? [I think it should, and
> the course should cover secure boot, but that leaves lots less time
> for assembly language.]
>
> On Tue, Oct 1, 2024 at 11:20 AM Larry McVoy <lm@mcvoy.com> wrote:
> >
> > On Tue, Oct 01, 2024 at 10:56:13AM -0400, Dan Cross wrote:
> > > On Tue, Oct 1, 2024 at 10:32???AM Luther Johnson
> > > <luther.johnson@makerlisp.com> wrote:
> > > > I think because of the orders of magnitude increase in the demand
> > > > for programmers, we now have a very large number of programmers with
> > > > little or no math and science (and computer science doesn't count in
> the
> > > > point I'm trying to make here, if that's your only science, you're
> not
> > > > going to have the models in your head from other disciplines to give
> you
> > > > useful analogs) background, and that's a big change from 40 years
> ago.
> > > > So that has had an effect on who is programming, how they think about
> > > > it, and how languages have been marketed to that programming
> audience. IMHO.
> > >
> > > I've found a grounding in mathematics useful for programming, but
> > > beyond some knowledge of the physical constraints that the universe
> > > places on us and a very healthy appreciation for the scientific
> > > method, I'm having a hard time understanding how the hard sciences
> > > would help out too much. Electrical engineering seems like it would be
> > > more useful than, say, chemistry or geology.
> > >
> > > I talk to a lot of academics, and I think they see the situation
> > > differently than is presented here. In a nutshell, the way a lot of
> > > them look at it, the amount of computer science in the world increases
> > > constantly while the amount of time they have to teach that to
> > > undergraduates remains fixed. As a result, they have to pick and
> > > choose what they teach very, very carefully, balancing a number of
> > > criteria as they do so. What this translates to in the real world
> > > isn't that the bar is lowered, but that the bar is different.
> >
> > I really wish that they made students take something like the PDP-11
> > assembly class - it was really systems architecture, you learned the
> > basic idea of a computer: a CPU, a bus to talk to memory, a bus to
> > talk to I/O, how a stack works, ideally how a context switch works
> > though that kinda blows minds (I personally don't think you are a
> > kernel programmer if you haven't implemented swtch() or at least
> > walked the code and understood all of it).
> >
> > I did all that and developed a mental model of all computers that
> > has helped me over the last 4 decades. Yes, my model is overly
> > simplistic but it still works, even on the x86 craziness. I don't
> > know how you could get to that mental model with x86, x86 is too
> > weird. I don't really know which architecture is close to the
> > simplicity of a PDP-11 today. Anyone?
> >
> > If I were teaching it, I'd just get a PDP-11 simulator and teach
> > on that. Maybe.
> > --
> > ---
> > Larry McVoy Retired to fishing
> http://www.mcvoy.com/lm/boat
>
^ permalink raw reply [flat|nested] 74+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-10-01 14:56 ` Dan Cross
2024-10-01 15:08 ` Stuff Received
2024-10-01 15:20 ` Larry McVoy
@ 2024-10-01 19:04 ` arnold
2 siblings, 0 replies; 74+ messages in thread
From: arnold @ 2024-10-01 19:04 UTC (permalink / raw)
To: luther.johnson, crossd; +Cc: tuhs
Dan Cross <crossd@gmail.com> wrote:
> I talk to a lot of academics, and I think they see the situation
> differently than is presented here. In a nutshell, the way a lot of
> them look at it, the amount of computer science in the world increases
> constantly while the amount of time they have to teach that to
> undergraduates remains fixed. As a result, they have to pick and
> choose what they teach very, very carefully, balancing a number of
> criteria as they do so. What this translates to in the real world
> isn't that the bar is lowered, but that the bar is different.
You also have a lot of self-taught programmers, and graduates
of bootcamp programs, getting into programming.
^ permalink raw reply [flat|nested] 74+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-10-01 13:47 ` arnold
2024-10-01 14:01 ` Larry McVoy
@ 2024-10-01 16:49 ` Paul Winalski
1 sibling, 0 replies; 74+ messages in thread
From: Paul Winalski @ 2024-10-01 16:49 UTC (permalink / raw)
To: arnold; +Cc: Computer Old Farts Followers
On Tue, Oct 1, 2024 at 10:07 AM <arnold@skeeve.com> wrote:
[regarding writing an Ada compiler as a class project]
> Did you do generics? That and the run time, which had some real-time
> bits to it (*IIRC*, it's been a long time), as well as the cross
> object code type checking, would have been real bears.
>
> Like many things, the first 90% is easy, the second 90% is hard. :-)
>
> I was in DEC's compiler group when they were implementing Ada for VAX/VMS.
It gets very tricky when routine libraries are involved. Just figuring
out the compilation order can be a real bear (part of this is the cross
object code type checking you mention).
From my viewpoint Ada suffered two problems. First, it was such a large
language and very tricky to implement--even more so than PL/I. Second, it
had US Government cooties.
-Paul W.
^ permalink raw reply [flat|nested] 74+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-10-01 13:13 ` arnold
2024-10-01 13:32 ` Larry McVoy
@ 2024-10-01 15:44 ` Bakul Shah via TUHS
2024-10-01 19:07 ` arnold
2024-10-01 16:40 ` [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum) Paul Winalski
2 siblings, 1 reply; 74+ messages in thread
From: Bakul Shah via TUHS @ 2024-10-01 15:44 UTC (permalink / raw)
To: tuhs
> Thus Go and Rust are good things, taking the sharp tools out of the
> hands of the people who aren't qualified to use them. Same thing Python.
Sounds like boomer mentality... Kids these days... :-) Also sounds like
the kind of arguments assembly language programmers presented when *we*
were the "kids" trying out "structured programming"!
^ permalink raw reply [flat|nested] 74+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-10-01 15:44 ` Bakul Shah via TUHS
@ 2024-10-01 19:07 ` arnold
2024-10-01 20:34 ` Rik Farrow
0 siblings, 1 reply; 74+ messages in thread
From: arnold @ 2024-10-01 19:07 UTC (permalink / raw)
To: tuhs, bakul
Bakul Shah via TUHS <tuhs@tuhs.org> wrote:
>
> > Thus Go and Rust are good things, taking the sharp tools out of the
> > hands of the people who aren't qualified to use them. Same thing Python.
>
> Sounds like boomer mentality... Kids these days... :-) Also sounds like
> the kind of arguments assembly language programmers presented when *we*
> were the "kids" trying out "structured programming"!
It's not that they're intrinsically unqualified. They were never
taught, so they don't know what they're doing. I'm unqualified to
fly a plane because I never learned or practiced, not because I'm not
intelligent enough. Same thing for many of today's programmers
and C / C++.
^ permalink raw reply [flat|nested] 74+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-10-01 19:07 ` arnold
@ 2024-10-01 20:34 ` Rik Farrow
2024-10-02 0:55 ` Steffen Nurpmeso
2024-10-02 5:49 ` arnold
0 siblings, 2 replies; 74+ messages in thread
From: Rik Farrow @ 2024-10-01 20:34 UTC (permalink / raw)
To: arnold; +Cc: tuhs, bakul
And my comment about seeing code produced by programmers while doing sales
support dates from 1990. This isn't something new, from my perspective. I
was working in a small programming shop where there were a handful of
excellent programmers, and was then sent out to help customers get started
using their libraries. That's when I experienced seeing things that still
make me cringe.
Rik
On Tue, Oct 1, 2024 at 12:07 PM <arnold@skeeve.com> wrote:
> Bakul Shah via TUHS <tuhs@tuhs.org> wrote:
>
> >
> > > Thus Go and Rust are good things, taking the sharp tools out of the
> > > hands of the people who aren't qualified to use them. Same thing
> Python.
> >
> > Sounds like boomer mentality... Kids these days... :-) Also sounds like
> > the kind of arguments assembly language programmers presented when *we*
> > were the "kids" trying out "structured programming"!
>
> It's not that they're intrinsically unqualified. They were never
> taught, so they don't know what they're doing. I'm unqualified to
> fly a plane because I never learned or practiced, not because I'm not
> intelligent enough. Same thing for many of today's programmers
> and C / C++.
>
^ permalink raw reply [flat|nested] 74+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-10-01 20:34 ` Rik Farrow
@ 2024-10-02 0:55 ` Steffen Nurpmeso
2024-10-02 5:49 ` arnold
1 sibling, 0 replies; 74+ messages in thread
From: Steffen Nurpmeso @ 2024-10-02 0:55 UTC (permalink / raw)
To: Rik Farrow; +Cc: tuhs, bakul
Rik Farrow wrote in
<CACY3YMGcSm+ATwbz1TmKKoOQeKCPsoTnT4u93vFdKpZyyHCZ7A@mail.gmail.com>:
|On Tue, Oct 1, 2024 at 12:07 PM <arnold@skeeve.com> wrote:
|> Bakul Shah via TUHS <tuhs@tuhs.org> wrote:
...
|>> Sounds like boomer mentality... Kids these days... :-) Also sounds like
|>> the kind of arguments assembly language programmers presented when *we*
|>> were the "kids" trying out "structured programming"!
|>
|> It's not that they're intrinsically unqualified. They were never
|> taught, so they don't know what they're doing. I'm unqualified to
|> fly a plane because I never learned or practiced, not because I'm not
|> intelligent enough. Same thing for many of today's programmers
|> and C / C++.
|And my comment about seeing code produced by programmers while doing sales
|support dates from 1990. This isn't something new, from my perspective. I
|was working in a small programming shop where there were a handful of
|excellent programmers, and then sent out to help customers get started
|using their libraries. That's when I experienced seeing things that still
|make me cringe.
Btw the "official Linux firmware"
https://git.kernel.org/?p=linux/kernel/git/firmware/linux-firmware.git;a=summary
introduced a dependency on the "rdfind" utility i think ~two
years ago (it was later made optional) for this code:
$verbose "Finding duplicate files"
rdfind -makesymlinks true -makeresultsfile false "$destdir" >/dev/null
find "$destdir" -type l | while read -r l; do
target="$(realpath "$l")"
$verbose "Correcting path for $l"
ln -fs "$(realpath --relative-to="$(dirname "$(realpath -s "$l")")" "$target")" "$l"
done
I proposed (because it really drove me mad; i have nothing to do
with Linux kernel stuff etc., should you think that)
(
cd "$destdir" && find . -type f | xargs cksum | sort | {
    # checksum and size of the previous file, and its name
    ls= lf=
    while read s1 s2 f; do
        s="$s1 $s2"
        #$verbose $s $f
        # same checksum and size as the previous file, and cmp
        # confirms the contents really are identical?
        if [ "$s" = "$ls" ] && cmp "$lf" "$f"; then
            $verbose 'duplicate '"${lf##*/}" "${f#./*}"
            rm -f "$f"
            #ln -s "${lf#./*}" "${f#./*}"
            ln -s "${lf##*/}" "${f#./*}"
        else
            ls=$s
            lf=$f
        fi
    done
}
)
(as a draft, with only light testing, but it is not far from doing
it at maximum) which only uses POSIX default tools etc, but these
guys from very big companies (RedHat; the guy who did *that* is
from AMD) did not even respond, at least to that.
(At times i tried to get rid of the rsync dependency of the kernel
makefile officially, as that can also be done via plain shell
tools; they at least answered "what is wrong with rsync".)
Maybe because the patch also included
- compress="zstd --compress --quiet --stdout"
+ compress="zstd -T0 --ultra -22 --compress --quiet --stdout"
but that only brought the firmware into line with the normal Linux
kernel make zstd usage. I will never know.
I think what i am trying to say is that maybe "time is money" in
addition to anything else. (I never heard about rdfind though.
Btw its manual (it has one!) says
SEE ALSO
md5sum, sha1sum, find, symlinks
whereas cksum is a standard tool. So it is. To each their own
infrastructure, however large it is; you all only get the binary
updates anyway, my Linux distribution compiles from source; and
look at the new dependencies the mesa library alone has grown,
mostly never needed, let alone at runtime, like YAML, for things
that used to be done at release-tarball-creation time. At least
here.)
--steffen
|
|Der Kragenbaer, The moon bear,
|der holt sich munter he cheerfully and one by one
|einen nach dem anderen runter wa.ks himself off
|(By Robert Gernhardt)
^ permalink raw reply [flat|nested] 74+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-10-01 20:34 ` Rik Farrow
2024-10-02 0:55 ` Steffen Nurpmeso
@ 2024-10-02 5:49 ` arnold
2024-10-02 20:42 ` Dan Cross
1 sibling, 1 reply; 74+ messages in thread
From: arnold @ 2024-10-02 5:49 UTC (permalink / raw)
To: rik, arnold; +Cc: tuhs, bakul
Rik Farrow <rik@rikfarrow.com> wrote:
> And my comment about seeing code produced by programmers while doing sales
> support dates from 1990. This isn't something new,
Also true. In the late 80s I was a sysadmin at Emory U. We had a
Vax connected to BITNET with funky hardware and UREP, the Unix RSCS
Emulation Program, from the University of Pennsylvania. Every time
I had to dive into that code, I felt like I needed a shower afterwards. :-)
Arnold
^ permalink raw reply [flat|nested] 74+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-10-02 5:49 ` arnold
@ 2024-10-02 20:42 ` Dan Cross
2024-10-02 21:54 ` Marc Donner
2024-10-05 17:45 ` arnold
0 siblings, 2 replies; 74+ messages in thread
From: Dan Cross @ 2024-10-02 20:42 UTC (permalink / raw)
To: arnold; +Cc: rik, tuhs, bakul
On Wed, Oct 2, 2024 at 2:27 AM <arnold@skeeve.com> wrote:
> Rik Farrow <rik@rikfarrow.com> wrote:
> > And my comment about seeing code produced by programmers while doing sales
> > support dates from 1990. This isn't something new,
>
> Also true. In the late 80s I was a sysadmin at Emory U. We had a
> Vax connected to BITNET with funky hardware and UREP, the Unix RSCS
> Emulation Program, from the University of Pennsylvania. Every time
> I had to dive into that code, I felt like I needed a shower afterwards. :-)
Uh oh, lest the UPenn alumni among us get angry (hi, Ron!) I feel I
must point out that UREP wasn't from the University of Pennsylvania,
but rather, from The Pennsylvania State University (yes, "The" is part
of the name). UPenn (upenn.edu) is an Ivy in Philly; Penn State
(psu.edu) is a state school in University Park, which is next to State
College (really, that's the name of the town) with satellite campuses
scattered around the state.
- Dan C.
^ permalink raw reply [flat|nested] 74+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-10-02 20:42 ` Dan Cross
@ 2024-10-02 21:54 ` Marc Donner
2024-10-05 17:45 ` arnold
1 sibling, 0 replies; 74+ messages in thread
From: Marc Donner @ 2024-10-02 21:54 UTC (permalink / raw)
To: Dan Cross; +Cc: rik, tuhs, bakul
RSCS. Sigh. Remote Spooling Communications Subsystem.
I suppose I could praise it for its elegant layering of abstractions ...
just send a virtual card deck to the virtual card reader on the virtual
machine being used by your correspondent.
Or I could curse it for its absurdity - really, a virtual card deck? 80
character EBCDIC records.
An amazing concept.
=====
nygeek.net
mindthegapdialogs.com/home <https://www.mindthegapdialogs.com/home>
On Wed, Oct 2, 2024 at 4:43 PM Dan Cross <crossd@gmail.com> wrote:
> On Wed, Oct 2, 2024 at 2:27 AM <arnold@skeeve.com> wrote:
> > Rik Farrow <rik@rikfarrow.com> wrote:
> > > And my comment about seeing code produced by programmers while doing
> sales
> > > support dates from 1990. This isn't something new,
> >
> > Also true. In the late 80s I was a sysadmin at Emory U. We had a
> > Vax connected to BITNET with funky hardware and UREP, the Unix RSCS
> > Emulation Program, from the University of Pennsylvania. Every time
> > I had to dive into that code, I felt like I needed a shower afterwards.
> :-)
>
> Uh oh, lest the UPenn alumni among us get angry (hi, Ron!) I feel I
> must point out that UREP wasn't from the University of Pennsylvania,
> but rather, from The Pennsylvania State University (yes, "The" is part
> of the name). UPenn (upenn.edu) is an Ivy in Philly; Penn State
> (psu.edu) is a state school in University Park, which is next to State
> College (really, that's the name of the town) with satellite campuses
> scattered around the state.
>
> - Dan C.
>
^ permalink raw reply [flat|nested] 74+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-10-02 20:42 ` Dan Cross
2024-10-02 21:54 ` Marc Donner
@ 2024-10-05 17:45 ` arnold
2024-10-06 12:20 ` Dan Cross
1 sibling, 1 reply; 74+ messages in thread
From: arnold @ 2024-10-05 17:45 UTC (permalink / raw)
To: crossd, arnold; +Cc: tuhs, rik, bakul
Dan Cross <crossd@gmail.com> wrote:
> On Wed, Oct 2, 2024 at 2:27 AM <arnold@skeeve.com> wrote:
> > Rik Farrow <rik@rikfarrow.com> wrote:
> > > And my comment about seeing code produced by programmers while doing sales
> > > support dates from 1990. This isn't something new,
> >
> > Also true. In the late 80s I was a sysadmin at Emory U. We had a
> > Vax connected to BITNET with funky hardware and UREP, the Unix RSCS
> > Emulation Program, from the University of Pennsylvania. Every time
> > I had to dive into that code, I felt like I needed a shower afterwards. :-)
>
> Uh oh, lest the UPenn alumni among us get angry (hi, Ron!) I feel I
> must point out that UREP wasn't from the University of Pennsylvania,
> but rather, from The Pennsylvania State University (yes, "The" is part
> of the name). UPenn (upenn.edu) is an Ivy in Philly; Penn State
> (psu.edu) is a state school in University Park, which is next to State
> College (really, that's the name of the town) with satellite campuses
> scattered around the state.
>
> - Dan C.
Ooops. My bad. Thanks Dan, for the correction.
Either way, the code was awful. :-)
Arnold
^ permalink raw reply [flat|nested] 74+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-10-05 17:45 ` arnold
@ 2024-10-06 12:20 ` Dan Cross
2024-10-06 12:29 ` [TUHS] UREP code (was Re: Re: Minimum Array Sizes in 16 bit C (was Maximum)) arnold
0 siblings, 1 reply; 74+ messages in thread
From: Dan Cross @ 2024-10-06 12:20 UTC (permalink / raw)
To: Aharon Robbins; +Cc: TUHS, Rik Farrow, Bakul Shah
On Sat, Oct 5, 2024, 1:45 PM <arnold@skeeve.com> wrote:
> Dan Cross <crossd@gmail.com> wrote:
>
> > On Wed, Oct 2, 2024 at 2:27 AM <arnold@skeeve.com> wrote:
> > > Rik Farrow <rik@rikfarrow.com> wrote:
> > > > And my comment about seeing code produced by programmers while doing
> sales
> > > > support dates from 1990. This isn't something new,
> > >
> > > Also true. In the late 80s I was a sysadmin at Emory U. We had a
> > > Vax connected to BITNET with funky hardware and UREP, the Unix RSCS
> > > Emulation Program, from the University of Pennsylvania. Every time
> > > I had to dive into that code, I felt like I needed a shower
> afterwards. :-)
> >
> > Uh oh, lest the UPenn alumni among us get angry (hi, Ron!) I feel I
> > must point out that UREP wasn't from the University of Pennsylvania,
> > but rather, from The Pennsylvania State University (yes, "The" is part
> > of the name). UPenn (upenn.edu) is an Ivy in Philly; Penn State
> > (psu.edu) is a state school in University Park, which is next to State
> > College (really, that's the name of the town) with satellite campuses
> > scattered around the state.
>
> Ooops. My bad. Thanks Dan, for the correction.
>
> Either way, the code was awful. :-)
>
Ha, no worries; and I can well believe that the code was awful.
I do wonder if a version of it has survived somewhere; I've personally
never seen it, but there's very little information about it out there
anymore.
- Dan C.
^ permalink raw reply [flat|nested] 74+ messages in thread
* [TUHS] UREP code (was Re: Re: Minimum Array Sizes in 16 bit C (was Maximum))
2024-10-06 12:20 ` Dan Cross
@ 2024-10-06 12:29 ` arnold
0 siblings, 0 replies; 74+ messages in thread
From: arnold @ 2024-10-06 12:29 UTC (permalink / raw)
To: crossd, arnold; +Cc: tuhs, rik, bakul
Dan Cross <crossd@gmail.com> wrote:
> I do wonder if a version of it has survived somewhere; I've personally
> never seen it, but there's very little information about it out there
> anymore.
I don't have it. I think it was licensed software from Penn State.
I don't remember a lot of details. It gave a similar experience
to that on IBM systems; as your mail travelled point to point you
got a message printed on your terminal as to what node it was going through.
(Or something like that.)
There was also a facility similar to write(1) that let you chat with
someone on a remote BITNET site.
Overall, it did work.
Arnold
^ permalink raw reply [flat|nested] 74+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-10-01 13:13 ` arnold
2024-10-01 13:32 ` Larry McVoy
2024-10-01 15:44 ` Bakul Shah via TUHS
@ 2024-10-01 16:40 ` Paul Winalski
2 siblings, 0 replies; 74+ messages in thread
From: Paul Winalski @ 2024-10-01 16:40 UTC (permalink / raw)
To: arnold; +Cc: Computer Old Farts Followers
On Tue, Oct 1, 2024 at 9:13 AM <arnold@skeeve.com> wrote:
> This goes back to the evolution thing. At the time, C was a huge
> step up from FORTRAN and assembly.
>
Certainly it's a step up (and a BIG step up) from assembly. But I'd say C
is a step sidewise from Fortran. An awful lot of HPTC programming involves
throwing multidimensional arrays around and C is not suitable for that.
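To make that concrete: through C89 there was no way to pass a matrix
whose bounds are run-time values, so numeric C code ends up doing the
index arithmetic by hand. A small illustrative sketch (C99's
variable-length arrays later eased this somewhat):

    #include <stdio.h>

    /* Element (i,j) of an n-by-n row-major matrix passed as a bare
     * pointer lives at a[i*n + j]. Fortran would simply declare
     * REAL A(N,N) and write A(I,J). */
    static double trace(const double *a, int n)
    {
        double t = 0.0;
        int i;
        for (i = 0; i < n; i++)
            t += a[i * n + i];  /* a(i,i), by hand */
        return t;
    }

    int main(void)
    {
        double m[2 * 2] = { 1.0, 2.0,
                            3.0, 4.0 };
        printf("trace = %g\n", trace(m, 2));  /* prints 5 */
        return 0;
    }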
-Paul W.
^ permalink raw reply [flat|nested] 74+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-28 16:58 ` G. Branden Robinson
` (2 preceding siblings ...)
2024-09-28 18:01 ` G. Branden Robinson
@ 2024-09-28 18:05 ` Larry McVoy
2024-09-30 15:49 ` Paul Winalski
3 siblings, 1 reply; 74+ messages in thread
From: Larry McVoy @ 2024-09-28 18:05 UTC (permalink / raw)
To: G. Branden Robinson; +Cc: TUHS main list
On Sat, Sep 28, 2024 at 11:58:12AM -0500, G. Branden Robinson wrote:
> The problem is that, like James Madison's fictive government of angels,
> such entities don't exist. The staff of the CSRC itself may have been
> overwhelmingly populated with frank, modest, and self-deprecating
> people--and I'll emphasize here that I'm aware of no accounts that this
> is anything but true--but C unfortunately played a part in stoking a
> culture of pretension among software developers. "C is a language in
> which wizards program. I program in C. Therefore I'm a wizard." is how
> the syllogism (spot the fallacy) went. I don't know who does more
> damage--the people who believe their own BS, or the ones who know
> they're scamming their colleagues.
I have a somewhat different view. I have a son who is learning to program
and he asked me about C. I said "C is like driving a sports car on a
twisty mountain road that has cliffs and no guard rails. If you want to
check your phone while you are driving, it's not for you. It requires
your full, focussed attention. So that sounds bad, right? Well, if
you are someone who enjoys driving a sports car, and are good at it,
perhaps C is for you."
So I guess I fit the description of thinking I'm a wizard, sort of. I'm
good at C, there is plenty of my C open sourced, you can go read it and
decide for yourself. I enjoy C. But it's not for everyone, in fact,
most programmers these days would be better off in Rust or something
that has guardrails.
I get your points, Branden, but I'd prefer that C sticks around for
the people who enjoy it and are good at it. A small crowd, for sure.
--lm
^ permalink raw reply [flat|nested] 74+ messages in thread
* [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
2024-09-28 18:05 ` Larry McVoy
@ 2024-09-30 15:49 ` Paul Winalski
0 siblings, 0 replies; 74+ messages in thread
From: Paul Winalski @ 2024-09-30 15:49 UTC (permalink / raw)
To: Larry McVoy, Computer Old Farts Followers
[moving to COFF as this has drifted away from Unix]
On Sat, Sep 28, 2024 at 2:06 PM Larry McVoy <lm@mcvoy.com> wrote:
> I have a somewhat different view. I have a son who is learning to program
> and he asked me about C. I said "C is like driving a sports car on a
> twisty mountain road that has cliffs and no guard rails. If you want to
> check your phone while you are driving, it's not for you. It requires
> your full, focussed attention. So that sounds bad, right? Well, if
> you are someone who enjoys driving a sports car, and are good at it,
> perhaps C is for you."
If you really want a language with no guard rails, try programming in
BLISS.
Regarding C and C++ having dangerous language features--of course they do.
Every higher-level language I've ever seen has its set of toxic language
features that should be avoided if you want reliability and maintainability
for your programs. And a set of things to avoid if you want portability.
Regarding managed dynamic memory allocation schemes that use garbage
collection vs. malloc()/free(), there are some applications where they are
not suitable. I'm thinking about real-time programs. You can't have your
missile defense software pause to do garbage collection when you're trying
to shoot down an incoming ballistic missile.
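The usual real-time answer is to take allocation out of the hot path
entirely. A minimal sketch, assuming fixed-size messages (sizes here are
purely illustrative): grab all the memory at startup, and from then on
both allocate and free are a couple of pointer moves, O(1), with no
collector and no malloc() to stall you.

    #include <stdio.h>

    #define NBLOCKS 64
    #define BLKSIZE 128

    /* A fixed-size block pool: every block is preallocated, and the
     * free blocks are chained through their own first bytes. */
    static union block {
        union block *next;
        char payload[BLKSIZE];
    } pool[NBLOCKS], *freelist;

    static void pool_init(void)
    {
        int i;
        for (i = 0; i < NBLOCKS - 1; i++)
            pool[i].next = &pool[i + 1];
        pool[NBLOCKS - 1].next = NULL;
        freelist = &pool[0];
    }

    static void *pool_alloc(void)   /* O(1): pop the free list */
    {
        union block *b = freelist;
        if (b != NULL)
            freelist = b->next;
        return b;
    }

    static void pool_free(void *p)  /* O(1): push it back */
    {
        union block *b = p;
        b->next = freelist;
        freelist = b;
    }

    int main(void)
    {
        void *a;
        pool_init();
        a = pool_alloc();
        printf("got block %p\n", a);
        pool_free(a);
        return 0;
    }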
-Paul W.
^ permalink raw reply [flat|nested] 74+ messages in thread