From: Eray Ozkural
To: Hugo Ferreira <hmf@inescporto.pt>
Cc: Martin Jambon, caml-list@inria.fr
Date: Tue, 19 Apr 2011 12:57:00 +0300
Subject: Re: [Caml-list] Efficient OCaml multicore -- roadmap?

On Fri, Mar 25, 2011 at 9:19 PM, Hugo Ferreira wrote:

> On 03/25/2011 06:24 PM, Martin Jambon wrote:
>
>> On 03/25/2011 01:10 PM, Fabrice Le Fessant wrote:
>>>
>>>> Of course, sharing structured mutable data between threads will not be
>>>> possible, but actually, it is a good thing if you want to write correct
>>>> programs ;-)
>>>
>> On 03/25/11 08:44, Hugo Ferreira replied:
>>
>>> I'll stick to my guns here. It simply makes solving certain problems
>>> infeasible. Case in point: I work on machine learning algorithms. I
>>> use large data structures that must be processed (altered)
>>> in order to learn. Because these data structures are large, it becomes
>>> impractical to copy them to a process every time I start off a new
>>> "thread".
>>
>> The solution would be to use get/set via a message-passing interface.
>
> I cannot see how this works. Say I want to share a balanced binary tree.
> Several processes/threads each take this tree and alter it by adding and
> deleting elements. Each (new) tree is then further processed by other
> processes/threads.
>
> How can get/set be used in this scenario?

I think it won't have good performance, it won't scale, and it will fail on
the truly delicate shared-memory architectures of the future with thousands
of cores. Nor will it support the on-chip message-passing facilities of
those future processors.

Shared-memory message passing never worked very well anyway: too many
redundant copies. It is not a fit for high-performance computing, and there
is no need for it at all except for embarrassingly parallel applications. I
suppose that's the target, right?

Best,

--
Eray Ozkural, PhD candidate. Comp. Sci. Dept., Bilkent University, Ankara
http://groups.yahoo.com/group/ai-philosophy
http://myspace.com/arizanesil http://myspace.com/malfunct
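
For concreteness, here is one minimal sketch of the kind of "get/set via a
message-passing interface" Martin suggests, applied to Hugo's shared-tree
scenario: a single owner thread holds the tree (a standard Set, itself a
balanced binary tree) and every other thread talks to it over OCaml's Event
channels instead of touching the tree directly. All names below (start_owner,
Add, Mem, ...) are illustrative, not from the thread, and with the current
runtime this gives concurrency under one runtime lock rather than multicore
parallelism, which is exactly the limitation being discussed.

(* Owner thread holding the tree; requests arrive on an Event channel.
   Build, illustratively: ocamlfind ocamlopt -package threads.posix -linkpkg owner.ml *)

module IntSet = Set.Make (struct type t = int let compare = compare end)

type request =
  | Add of int
  | Remove of int
  | Mem of int * bool Event.channel   (* private channel for the reply *)

(* Start the owner thread; return the channel on which it accepts requests. *)
let start_owner () : request Event.channel =
  let requests = Event.new_channel () in
  let rec serve tree =
    match Event.sync (Event.receive requests) with
    | Add x -> serve (IntSet.add x tree)
    | Remove x -> serve (IntSet.remove x tree)
    | Mem (x, reply) ->
        Event.sync (Event.send reply (IntSet.mem x tree));
        serve tree
  in
  ignore (Thread.create serve IntSet.empty);
  requests

(* Client-side wrappers: "set" operations hand a message to the owner
   (Event channels rendezvous, so the send returns once it is taken);
   "get" operations block until the owner replies on a private channel. *)
let add requests x = Event.sync (Event.send requests (Add x))
let remove requests x = Event.sync (Event.send requests (Remove x))
let mem requests x =
  let reply = Event.new_channel () in
  Event.sync (Event.send requests (Mem (x, reply)));
  Event.sync (Event.receive reply)

let () =
  let requests = start_owner () in
  add requests 1;
  add requests 2;
  remove requests 1;
  Printf.printf "mem 1 = %b, mem 2 = %b\n" (mem requests 1) (mem requests 2)

The same protocol shape carries over to separate processes: replace the Event
channel with a pipe or socket and let a small server process own the tree.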
