From: Mike Haertel
Message-Id: <200202261947.g1QJlJ043207@ducky.net>
To: 9fans@cse.psu.edu
Subject: Re: [9fans] GUI toolkit for Plan 9
In-Reply-To:
Date: Tue, 26 Feb 2002 11:47:19 -0800

From 9fans-admin@cse.psu.edu Tue Feb 26 11:06:34 2002
From: presotto@plan9.bell-labs.com
To: 9fans@cse.psu.edu
Subject: Re: [9fans] GUI toolkit for Plan 9
Date: Tue, 26 Feb 2002 14:05:49 -0500

>However, it seems to be an accepted consequence amongst compiler
>writers to trade off possible incorrect code generation against
>probable speed gains.  I've been burned numerous times by upping
>the optimization level in compilers including gcc.  This is not a
>new development.  It was just as true 30 years ago with the fortran
>and PL1 compilers I used.

I'm not convinced that it's necessarily the compiler writers who are
at fault in many of these cases.  Certainly there are compiler and
optimizer bugs, and probably the more optimization you apply the more
likely you are to tickle those bugs.  You can blame the compiler for
those.

But, at least in the case of C, there are numerous cases where people
write code that is semantically undefined according to the detailed
rules of the C standard, but that does what they want under particular
compilers or levels of optimization.  (E.g. how many of you have
assumed that local variables not declared volatile will hold their
values across a longjmp?)

So who is to blame?  The compiler writer, who assumes the code being
compiled is standards-conforming?  The standards committee, who get to
decide which programming idioms will have defined behavior?  The
programmer, who often doesn't really understand the rules of the
language?  The authors of programming textbooks, who often downplay or
omit these issues entirely?
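
To make the longjmp example concrete, here is a minimal sketch (the
names are illustrative).  The standard says a non-volatile automatic
variable modified between setjmp and longjmp has an indeterminate
value afterwards, so what you actually observe depends on the compiler
and the optimization level:

	#include <setjmp.h>
	#include <stdio.h>

	static jmp_buf env;

	int
	main(void)
	{
		int plain = 1;		/* not volatile: indeterminate after longjmp */
		volatile int safe = 1;	/* volatile: must keep its last stored value */

		if (setjmp(env) == 0) {
			plain = 2;
			safe = 2;
			longjmp(env, 1);	/* jump back; setjmp now returns 1 */
		}

		printf("plain=%d safe=%d\n", plain, safe);
		return 0;
	}

Built without optimization this typically prints plain=2 safe=2; with
optimization "plain" is often kept in a register across the setjmp and
prints 1, while "safe" is still required to print 2.  Neither result
is a compiler bug.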