From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 12 Jul 2011 14:35:12 +0100
From: Ethan Grammatikidis
To: 9fans@9fans.net
Message-ID: <20110712143512.50bd0bdb@lahti.ethans.dre.am>
In-Reply-To:
References: <1310400339.71319.YahooMailClassic@web30908.mail.mud.yahoo.com> <088ac8dca18ec9ef9e83f70f167982eb@coraid.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Subject: Re: [9fans] GNU/Linux/Plan 9 disto
Topicbox-Message-UUID: ffa32292-ead6-11e9-9d60-3106f5b1d025

On Tue, 12 Jul 2011 02:10:50 +0200
hiro <23hiro@googlemail.com> wrote:

> 2. compilers still running fastest on fast CPUs

I have a quad core I bought specifically for compiling; that was one of the major tasks the box was going to do, as I wanted to get more involved in Source Mage GNU/Linux. It performed admirably, but the vast majority of the compiles were IO-bound. The IO in question was all to RAM -- the sources were unpacked to tmpfs and the compilation performed there. I used make -j6 -- 6 jobs, 2 more than the number of cores -- and unless it was compiling Firefox or one or two other very large packages, average core utilization would be around 50%.

I don't know whether the opinion stated above comes from C++ compiles or from the insane optimization techniques Gcc is sometimes capable of. I choose my words carefully; I say "insane" because you can increase compilation time dramatically by demanding more optimization from Gcc and never notice the difference in desktop use. Even for relatively compute-intensive tasks such as compilation, IO limitation is still very significant, as my example above shows. To my mind, the really scary thing about Gcc is that the less heavyweight optimization paths are less well tested. You don't really have a choice about burning up your CPUs on optimization which is worthless for most code.
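The setup described above can be sketched roughly as follows. This is a hedged illustration, not the original recipe: the mount point, package name, and core count are assumptions; nproc is from GNU coreutils.

```shell
# Minimal sketch: unpack sources into RAM-backed storage, then build with
# more make jobs than cores so cores stay busy while some jobs wait on IO.
builddir=$(mktemp -d)          # on many Linux distros /tmp is already tmpfs;
                               # otherwise: mount -t tmpfs tmpfs "$builddir"
cores=$(nproc)                 # number of cores (4 in the post)
jobs=$((cores + 2))            # 2 more jobs than cores, as in make -j6 above
echo "unpack: tar xf package.tar.gz -C $builddir"
echo "build:  make -C $builddir/package -j$jobs"
rmdir "$builddir"
```

The extra jobs over the core count only help when jobs block on IO; with sources already in tmpfs the post's ~50% core utilization suggests the bottleneck was elsewhere in the pipeline.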
I got lucky: when I bought my quad core, Gcc did not yet have specific optimizations for it at all, so I was able to use the generic x86_64 optimization path. I stopped using Source Mage before that generic path had a chance to bit-rot.

There is yet another thing, and now I'm starting to feel ill relating the troubles: Gcc's optimization can be crap for exactly those programs which most need it. Video transformation requires an algorithm which is actually *slowed* by the optimization attempts of Gcc, as well as (iirc) Intel's CC. It can be done right -- DEC had a compiler which could optimize it well -- but video playback software still relies on assembler code because Gcc doesn't do it right. I'm sure video playback is not the only compute-intensive task in which such an algorithm is used; it doesn't stretch my imagination much to see rendering or perhaps even fluid dynamics having the same problem, and we can fall back to assembler with 8c just as easily as with Gcc.

With all these factors, I don't think it's religious at all to say Gcc is one big pile of shit and -- I'm sorry, hiro -- to wonder at anyone who wants to use it.