Date: Tue, 27 Sep 2005 08:06:30 -0500 (CDT)
From: Brian Hurt
To: skaller
Cc: Stefan Monnier, caml-list
Subject: Re: [Caml-list] Re: Ant: Efficiency of let/and

On Tue, 27 Sep 2005, skaller wrote:

> On Mon, 2005-09-26 at 11:30 -0500, Brian Hurt wrote:
>
>> I'm not even sure how much extra efficiency there is to be had.
>> Obviously it'd be hard to "thread" calls to complex functions,
>
> Why? Hyperthreading allows two completely independent processes
> to execute on a hyperthread-enabled P4 .. the hardware can already
> do it .. even better with dual core.

Creating a new kernel-level process/thread (required to get code
executing on a different processor or pseudo-processor) is generally
expensive. You don't want to do it except for very large functions.
And then, once you do have the second thread of execution, you have
all the fun of multithreaded code: race conditions and deadlocks and
livelocks, oh my. I have contemplated writing a purely functional
(no imperative features) language that does micro-threads a la Cilk,
but it's more work than I really want to put into that project.
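To make the cost argument concrete, here's roughly what "threading a
call" looks like with the stock Thread module (a sketch of my own, not
anything the compiler does for you; par_pair is a made-up name, and
remember the OCaml runtime lock means the two halves don't actually
execute Caml code simultaneously anyway):

(* Run f x in a spawned kernel thread while g y runs in the current
   thread, then join.  Build with: ocamlc -thread unix.cma threads.cma *)
let par_pair f x g y =
  let result = ref None in
  let t = Thread.create (fun () -> result := Some (f x)) () in
  let b = g y in                 (* second computation, this thread *)
  Thread.join t;                 (* wait for the spawned thread *)
  match !result with
  | Some a -> (a, b)
  | None -> assert false         (* impossible: join ordered the write *)

For anything as small as an arithmetic expression, the
Thread.create/Thread.join round trip costs orders of magnitude more
than the work itself- which is why it only pays for very large
functions.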
> There is no lack of small scale low level parallelism in
> modern computing systems, just a lack of software that knows
> how to take advantage of it.

The benefit may be there in theory, but practical considerations may
make it not worth the effort to go after.

> There are plenty of places in an average program where one
> can determine parallel execution would be ok, so it is really
> a lack of capability in the software.
>
> I personally don't think of this as real parallelism,
> that's something you get on a machine with K's or M's
> of processing units .. eg the human eye.

Heh. We've hit the point where we have so many transistors on a chip
that we literally don't know what to do with them all: we have no idea
how to spend those transistors on anything more than very small,
incremental improvements to single-threaded performance. Which is why
the sudden interest in parallelism (Simultaneous Multi-Threading, aka
Hyperthreading, multi-core chips, etc.). The problem is that the
theory of how to write race-condition-, deadlock-, and livelock-free
code isn't there, to my knowledge (someone please prove me wrong).

Brian
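P.S. To pin down what "deadlock-free" has to rule out, here is the
canonical two-lock deadlock in OCaml (a toy example of my own, not
from any real code; the delay is only there to make the bad
interleaving reliable):

let a = Mutex.create ()
let b = Mutex.create ()

(* Take the two locks in the given order, then release them. *)
let locker first second () =
  Mutex.lock first;
  Thread.delay 0.1;        (* widen the race window *)
  Mutex.lock second;       (* blocks forever if the other thread got here *)
  Mutex.unlock second;
  Mutex.unlock first

let () =
  let t1 = Thread.create (locker a b) ()      (* takes a, then b *)
  and t2 = Thread.create (locker b a) () in   (* takes b, then a *)
  Thread.join t1;
  Thread.join t2                              (* never returns *)

A fixed global lock ordering avoids this particular trap, but that's a
coding discipline, not a theory that lets you prove a whole program
free of it.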