From mboxrd@z Thu Jan  1 00:00:00 1970
MIME-Version: 1.0
Date: Sat, 28 Nov 2015 17:42:25 +1100
From: da Tyga
To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
Subject: Re: [9fans] Compiling ken-cc on Linux

I have been following this discussion about the C compiler and can no longer stop myself from making a (snarky?) comment.

The K&R definition of C was very much written when C was a slightly-higher-than-assembler language for the PDP-11 (at least that's how I became acquainted with it back in 1976). Most of us, in those days, welcomed something that was more high level than macro-assembler and yet amenable to writing operating systems and utilities (unlike the FORTRAN, ALGOL and COBOL of that era). Many of us would use the -S switch to check the generated assembler code, and in some cases even modify the assembler for selected functions to get exactly the desired result.

The PDP-11 had a rather simple instruction set, so the compiler produced relatively predictable code. In many cases the undefined behaviours meant that, at least on the PDP-11, we knew what to expect. It was only once code was ported to other systems that those assumptions started getting sorely tested.

Fast forward to the present: we have a bloated C standard and even more bloated C++ standards. The target instruction sets are rich, with lots of special-case instructions; out-of-order execution and multi-level caches add further constraints. So today's compilers need to analyse complex source code to make it run well on extremely complex targets. We shouldn't forget that, in the case of the x86 family, the compilers also need to optimise for an ever-evolving instruction set while retaining backward compatibility across earlier variants.
On 28 November 2015 at 12:01, erik quanstrom wrote:
> > Funny, but actually I was wondering if there is any subtle issue in the
> > standards of the C language that makes it somehow hard to implement.
> > For example I've met a few times weird implementations of libraries and
> > frameworks dictated by broken standards: once they are in, they can never
> > be removed due to backward compatibility. I thought that Charles (that also
> > implemented the Limbo compiler) might have referenced these kind of issues
> > in his pun.
>
> i think the simple answer is: no.  but many folks just love complexity, and are
> determined to find it.  if you give such a person one problem, they'll come back
> with two problems.  i call these folks complicators.  don't be a complicator.
>
> (i have to remind myself this from time to time.)
>
> - erik