Date: Sun, 17 Jul 2011 09:38:47 +0200
From: tlaronde@polynum.com
To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
Message-ID: <20110717073847.GB539@polynum.com>
References: <0a7dc5268ce4dceb21ea20cdcc191693@terzarima.net> <27544caa847ff61fed1ae5f4d87218d0@ladd.quanstro.net>
In-Reply-To: <27544caa847ff61fed1ae5f4d87218d0@ladd.quanstro.net>
Subject: Re: [9fans] NUMA

On Sat, Jul 16, 2011 at 09:44:02PM -0400, erik quanstrom wrote:
> On Sat Jul 16 18:07:28 EDT 2011, forsyth@terzarima.net wrote:
> > > to the actual hardware, then you need to write new compilers and
> > > recompile everything every few years.
> >
> > you do anyway. i don't think i've used a distribution yet
> > where the upgrade doesn't include completely-recompiled versions of
> > everything.
>
> i've been able to upgrade my systems here through a number of µarches
> (intel xeon 5[0456]00, 3[04]00, atom; amd phenom) that weren't around
> when i first installed my systems, and i'm still using most of the original
> binaries. the hardware isn't forcing an upgrade.

But that's possible because the "real" hardware is RISC, and there is a
software layer managing compatibility (microcode). That is also why the
hardware can be "patched".

My point is rather that when the optimization path is complex, the
probability that a given piece of code benefits from every pass is the
combined one (the product of the individual probabilities); hence it is
useless to expect a non-negligible gain for the total, especially when
the "best case" of each single optimization is almost orthogonal to all
the others.

Furthermore, I don't know about others, but I prefer correctness over
speed. I mean, if a program is proved to be correct (and very few are),
complex acrobatics from the compiler, namely in the "optimization"
area, able to wreak havoc on all the assumptions made in the code, are
something I don't buy.

I have an example with gcc 4.4, compiling not my own source but
D.E.K.'s TeX. With gcc 3.x, "-O2" did not produce a failing program.
With gcc 4.4, "-O2" suddenly produces a program that does not crash but
fails (the offending optimization is `-foptimize-sibling-calls'; a
sketch of how that single pass can be singled out is at the end of this
message). The code is not at fault, since it fails not where glue code
is added for the WEB to C translation, but in TeX's innards; I mean, it
is not my small mundane added code, it is pure original D.E.K.'s.

But how can one rely on a binary that is so mangled that not seeing it
fail under testing proves nothing about whether it will yield a correct
result? And, furthermore, on code so chewed up that proofs of
correctness at the source level no longer guarantee anything about the
correctness of the compiled result?

My gut feeling is that the whole process is going too far, that it is
too complex to be maintainable (to be held in one hand), and that some
marginal gains in specific cases are bought at the price of a general
ruin, if not of certainty, at least of some confidence in correctness.
And I don't buy that.

I would much prefer hints given by the programmer at the source code
level (see the second sketch below). And if the programmer doesn't
understand what he is doing, I don't think the compiler will understand
it better. But it is the same "philosophy" as that of the autotools:
utilities trying to apply heuristic rules to code written with no rules
at all.
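For what it's worth, the offending pass can be switched off on its own
while keeping the rest of -O2; a minimal sketch (the file name is only
illustrative, not the actual web2c output):

	gcc -O2 -fno-optimize-sibling-calls -c tex0.c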
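And gcc itself accepts such hints at the source level (the `optimize'
function attribute, added around gcc 4.4, if I am not mistaken); a
minimal sketch, with a hypothetical function that is not taken from
tex.web:

	#include <stdio.h>

	/* Keep -O2 for the rest of the program, but ask the compiler to
	   leave sibling (tail) calls alone in this one routine. */
	__attribute__((optimize("no-optimize-sibling-calls")))
	static void report(const char *msg)
	{
		/* A call in tail position: the kind of call that the
		   -foptimize-sibling-calls pass rewrites into a jump. */
		fprintf(stderr, "%s\n", msg);
	}

	int main(void)
	{
		report("diagnostic printed from an ordinary stack frame");
		return 0;
	}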
But I guess an e-compiler (indeed an i-compiler) will be the software
version of the obsolete lollypop...

-- 
Thierry Laronde
http://www.kergis.com/
Key fingerprint = 0FF7 E906 FBAF FE95 FD89 250D 52B1 AE95 6006 F40C