> On the other hand, though I hardly use C much (though I compile a lot of it), I
> have had weird behaviour under -O2, which went away when I removed it. The gcc
> people are always messing with the optimizer. That personal experience,
> combined with reading docs that say things like "the loop unroller is buggy, so
> be careful when you use -O3" (and what's the deal with all this -O6 stuff? I
> thought -O3 was the "possible overkill, may slow down code more than speed it
> up" setting), has made me paranoid. Now, when the compiler segfaults or the
> program segfaults (often C generated by a compiler for another language, which
> is probably particularly strangely written), the first thing I do is remove -O.
> And it sometimes fixes the problem. For example, the sather compiler will
> segfault if compiled with -O.

http://www.netlib.org/benchweb has several integer-intensive CPU benchmarks.
I downloaded the Tower of Hanoi program to run a few benchmarks, continuing
yesterday's tests. I gave up after finding that the optimized FreeBSD build ran
slower than the unoptimized one (gcc 2.95); the same behaviour was observed with
gcc 3.0 on Linux. I can only explain this by the recursive structure of the
program. Results this weird rather defeat the purpose of benchmarking, so I
never got as far as compiling it on Plan 9.

If anyone else is interested, the CPU section of the above-mentioned page has
some more programs. I tried a few but didn't find anything interesting.