From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Sat, 16 Jul 2011 20:06:27 +0200
From: tlaronde@polynum.com
To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
Message-ID: <20110716180627.GA29488@polynum.com>
References: <20110715151535.GA2405@polynum.com> <20110715202157.GA5157@polynum.com> <20110716080247.GA394@polynum.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To:
User-Agent: Mutt/1.4.2.3i
Subject: Re: [9fans] NUMA
Topicbox-Message-UUID: 028db38c-ead7-11e9-9d60-3106f5b1d025

On Sat, Jul 16, 2011 at 12:27:14PM -0400, erik quanstrom wrote:
> > The Itanium story, as guessed early by Hennessy and Patterson in
> > "Computer Architecture", shows that efficiency relying on too
> > complex knowledge, asking too much to the programmers and the
> > compilers, is likely to fail.
>
> another way of looking at itanium is that it's like a multicore
> processor that is programed with a single instruction stream.
> given a general-purpose workload, it stands to reason that
> independent threads are going to be scheduled more
> efficiently and independent threads can be added at will without
> changing the architechtural model.  so it's also easier to scale.

That's probably a legitimate view: the gains from pipelining in the
processors of the time were exhausted, and engineers were looking for
gains elsewhere.

But from what I remember of the description of the architecture's
aims---in CAQA---, since there was no panacea and no great gain to be
had easily, optimizations had to rely on special cases, on programmers
having great knowledge of low-level details, and on compilers having
some higher-level knowledge in order to do "the right thing(TM)", and
that seemed unlikely to work without a lot of pain.

If RISC succeeded, it is precisely because the elements were simple
enough to be implemented in hardware, and this simplicity made it
possible to work reliably on optimizations.

There is an English expression, IIRC: penny wise and pound foolish.
Getting the basis right is the main gain.

One can compare with Plan9, which can be viewed as achieving what MACH
was aiming to achieve, while Plan9 is really a micro-kernel (to start
with, by the size of the code), whereas the MACH-like microkernels seem
to have survived only in assembly, since that was the only means of
getting decent efficiency. But people continued to publish theses and
papers about it---some paragraph in the plan9 presentation paper is
about this, if my English is not totally at fault...---refusing to
conclude that the results showed there was definitely something wrong
to start with.

But in what is called "science" there are now fashions too.
Storytelling everywhere...
--
Thierry Laronde
http://www.kergis.com/
Key fingerprint = 0FF7 E906 FBAF FE95 FD89 250D 52B1 AE95 6006 F40C