You're right, there's plenty of blame to go around. More than once I was hoist by my own petard with C aliasing and saved by lots of 'volatile's (a sketch of the sort of thing I mean is at the end of this note). Not surprisingly, I was screwed more than once by lcc's tail-recursion optimization on SPARCs (fun with register files). Ditto with instruction ordering on MIPS. In every case, turning off optimization fixed the problem, regardless of where the fault lay. Hence, I can understand Rob's worry about optimization being an option rather than a requirement.

Programmers tend to know that they are treading where lesser mortals fear to go when they up the optimization level. Especially with C, we are trained largely by what works and what doesn't. Moving from the VAX to the 68020 uncovered lots of null-pointer dereferences that had gone unnoticed before. Ditto for other coding outside the spec. If nothing slaps you down, you get complacent. Hence, we tend to accept the resulting errors and just back off optimization when it fails.

We often don't report the bug because:

1) Not knowing the compiler writer, it's hard to say, ``When I use the -O256 flag, my 150,000 line program acts funny,'' and expect to get any satisfaction.
2) We're afraid that after two weeks of debugging it'll turn out to be our fault.
3) Once your code has been rewritten by a machine, it's often incredibly hard to figure out what's broken.
4) It's not my job, man.

If things don't make it through even with optimization turned off, then we do report bugs, because then we know it's not our fault and we have no other recourse (except the first day spent trying to program around it). Hence, optimizing compilers tend to self-select for retaining bugs.
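
For the curious, here is a minimal sketch of the kind of aliasing trap where volatile is the cure. It is not taken from any of the programs above, and the flag and handler names are made up. The point: the compiler sees nothing in the loop body that writes `done', so under optimization it may keep the flag in a register and spin forever; declaring it volatile forces a fresh load from memory on every iteration.

    #include <signal.h>
    #include <unistd.h>

    /* Without `volatile', an optimizing compiler may cache `done' in a
     * register across the wait loop, since no code visible in the loop
     * modifies it.  `volatile' forces a memory read on each pass.  */
    static volatile sig_atomic_t done = 0;

    static void on_alarm(int sig)
    {
        (void)sig;
        done = 1;          /* the flag changes behind the compiler's back */
    }

    int main(void)
    {
        signal(SIGALRM, on_alarm);
        alarm(1);          /* deliver SIGALRM in one second */
        while (!done)      /* may spin forever at -O if `done' isn't volatile */
            ;
        return 0;
    }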