From: "Till Varoquaux"
To: "Gerd Stolpmann"
Cc: "Jon Harrop", caml-list@yquem.inria.fr
Subject: Re: [Caml-list] Re: Why OCaml rocks
Date: Fri, 9 May 2008 22:00:59 +0100

First of all, let's try to stop the squabbling and have some discussions
with actual content (trolling is very tempting and I am the first to fall
for it). OCaml is extremely nice but not perfect. Other languages have
other tradeoffs, and INRIA is not here to fulfill all our desires.

On Fri, May 9, 2008 at 9:40 PM, Gerd Stolpmann wrote:
>
> On Friday, 09.05.2008, 19:10 +0100, Jon Harrop wrote:
>> On Friday 09 May 2008 12:12:00 Gerd Stolpmann wrote:
>> > I think the parallelism capabilities are already excellent. We have
>> > been able to implement the application backend of Wink's people
>> > search in O'Caml, and it is of course a highly parallel system of
>> > programs. This is not the same class that raytracers or desktop
>> > parallelism fall into - this is highly professional supercomputing.
>> > I'm talking about a cluster of ~20 computers with something like
>> > 60 CPUs.
>> >
>> > Of course, we did not use multithreading very much. We are relying
>> > on multi-processing (both "fork"ed style and separately started
>> > programs), and multiplexing (i.e. application-driven
>> > micro-threading). I especially like the latter: doing multiplexing
>> > in O'Caml is fun, and a substitute for most applications of
>> > multithreading. For example, you want to query multiple remote
>> > servers in parallel: very easy with multiplexing, whereas the
>> > multithreaded counterpart would quickly run into scalability
>> > problems (threads are heavy-weight, and need a lot of resources).
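To make the multiplexing point concrete, here is a toy Unix.select loop
that sends one request to several servers and collects the replies as
they arrive. The host list, request string and read-until-EOF framing
are made up for the example (and it uses the current Unix/Bytes API); a
real multiplexer would also use non-blocking connects and proper error
handling.

  (* Send [request] to every (host, port) pair and gather the replies,
     using a single select loop instead of one thread per server. *)
  let query_all hosts request =
    let conns =
      List.map
        (fun (host, port) ->
           let addr =
             Unix.ADDR_INET
               ((Unix.gethostbyname host).Unix.h_addr_list.(0), port)
           in
           let s = Unix.socket Unix.PF_INET Unix.SOCK_STREAM 0 in
           Unix.connect s addr;
           (* Sketch only: a real client would loop until the whole
              request has been written. *)
           ignore (Unix.write_substring s request 0 (String.length request));
           (s, Buffer.create 1024))
        hosts
    in
    (* Wait on all sockets at once and read whichever ones are ready. *)
    let rec loop pending =
      if pending <> [] then begin
        let ready, _, _ = Unix.select (List.map fst pending) [] [] (-1.0) in
        let chunk = Bytes.create 4096 in
        let still_open (fd, buf) =
          if not (List.mem fd ready) then true
          else begin
            let n = Unix.read fd chunk 0 (Bytes.length chunk) in
            if n = 0 then (Unix.close fd; false)     (* this server is done *)
            else (Buffer.add_string buf (Bytes.sub_string chunk 0 n); true)
          end
        in
        loop (List.filter still_open pending)
      end
    in
    loop conns;
    List.map2 (fun (host, _) (_, buf) -> (host, Buffer.contents buf))
      hosts conns

This handles a lot of connections before you ever need a thread, and
there is no locking to get wrong.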
>>
>> If OCaml is good for concurrency on distributed systems, that is great,
>> but it is completely different to CPU-bound parallelism on multicores.
>
> You sound like somebody who tries to sell hardware :-)
>
> Well, our algorithms are quite easy to parallelize. I don't see a
> difference in whether they are CPU-bound or disk-bound - we also have
> lots of CPU-bound stuff, and the parallelization strategies are the
> same.
>
> The important thing is whether the algorithm can be formulated in a way
> so that state mutations are rare, or can at least be done in a
> "cache-friendly" way. Such algorithms exist for a lot of problems. I
> don't know which problems you want to solve, but it sounds as if they
> were special problems. Like for most industries, most of our problems
> are simply "do the same for N objects" where N is very large, and
> sometimes "sort data", also for large N.
>
>> > In our case, the mutable data structures that count are on disk.
>> > Everything else is only temporary state.
>>
>> Exactly. That is a completely different kettle of fish to writing
>> high-performance numerical codes for scientific computing.
>
> I don't understand. Relying on disk for sharing state is a big problem
> for us, but unavoidable. Disk is slow memory with a very special
> timing. Experience shows that even accessing state over the network is
> cheaper than over disk. Often, we end up designing our algorithms
> around the disk access characteristics. Compared to that, access to
> RAM-backed state over the network is fast and easy.
>

shm_open shares memory through file descriptors and, under Linux/glibc,
this is done using /dev/shm. You can mmap such a file as a Bigarray and,
voila, shared memory. This is quite nice for numerical computation, plus
you get closures etc. in your forks. Oh, and copy-on-write on modern
OSes makes this very cheap.
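Concretely, the whole trick fits in a few lines. The file name and array
size below are arbitrary and error handling is omitted; the mapping call
shown is Bigarray.Array1.map_file, which later OCaml releases moved to
Unix.map_file.

  (* Map a file living in /dev/shm as a float Bigarray, fork, and let
     parent and child fill different halves of the same memory. *)
  let () =
    let n = 1_000_000 in
    let fd =
      Unix.openfile "/dev/shm/shared_demo"
        [Unix.O_RDWR; Unix.O_CREAT; Unix.O_TRUNC] 0o600
    in
    Unix.ftruncate fd (n * 8);                       (* 8 bytes per float *)
    let a =
      (* [true] requests a shared (MAP_SHARED) mapping. *)
      Bigarray.Array1.map_file fd Bigarray.float64 Bigarray.c_layout true n
    in
    Unix.close fd;
    match Unix.fork () with
    | 0 ->
        (* Child: fill the lower half, then exit. *)
        for i = 0 to n / 2 - 1 do a.{i} <- float_of_int i done;
        exit 0
    | pid ->
        (* Parent: fill the upper half, wait for the child, then read
           back what the child wrote into the shared mapping. *)
        for i = n / 2 to n - 1 do a.{i} <- float_of_int i done;
        ignore (Unix.waitpid [] pid);
        Printf.printf "a.{1} = %g  a.{%d} = %g\n" a.{1} (n - 1) a.{n - 1}

Build it with something like "ocamlopt unix.cmxa bigarray.cmxa
shared_demo.ml". Since the workers are full processes, each fork keeps
its closures and the rest of the parent's heap, cheaply, thanks to
copy-on-write.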
>> > I admit that it is a challenge to structure programs in a way such
>> > that parallel programs not sharing memory profit from mutable state.
>> > Note that it is also a challenge to debug locks in a multithreaded
>> > program so that they run 24/7. Parallelism is not easy after all.
>>
>> Parallelism is easy in F#.
>
> Wonders must have happened that I'm not aware of. How does F# prevent
> deadlocks?
>
>> > This is a quite theoretical statement. We will rather see that most
>> > application programmers will not learn parallelism at all, and that
>> > consumers will start to question the sense of multicores, and that
>> > the chip industry will search for alternatives.
>>
>> On the contrary, that is not a theoretical statement at all: it already
>> happened. F# already makes it much easier to write high-performance
>> parallel algorithms, and its concurrent GC is the crux of that
>> capability.
>
> Don't misunderstand me, I'm not anti-F#. I just have no interest right
> now in taking advantage of multicores via concurrent GCs. I would
> rather have ultra-fast single-core execution. I can do the
> parallelization myself.
>
> Gerd
>
> --
> ------------------------------------------------------------
> Gerd Stolpmann * Viktoriastr. 45 * 64293 Darmstadt * Germany
> gerd@gerd-stolpmann.de          http://www.gerd-stolpmann.de
> Phone: +49-6151-153855          Fax: +49-6151-997714
> ------------------------------------------------------------
>
>
> _______________________________________________
> Caml-list mailing list. Subscription management:
> http://yquem.inria.fr/cgi-bin/mailman/listinfo/caml-list
> Archives: http://caml.inria.fr
> Beginner's list: http://groups.yahoo.com/group/ocaml_beginners
> Bug reports: http://caml.inria.fr/bin/caml-bugs
>

--
http://till-varoquaux.blogspot.com/