From: Gerd Stolpmann
To: Jon Harrop
Cc: caml-list@yquem.inria.fr
Subject: Re: [Caml-list] Re: Why OCaml rocks
Date: Fri, 09 May 2008 22:40:45 +0200

On Friday, 09.05.2008, at 19:10 +0100, Jon Harrop wrote:

> On Friday 09 May 2008 12:12:00 Gerd Stolpmann wrote:
> > I think the parallelism capabilities are already excellent. We have
> > been able to implement the application backend of Wink's people
> > search in O'Caml, and it is of course a highly parallel system of
> > programs. This is not the same class that raytracers or desktop
> > parallelism fall into - this is highly professional supercomputing.
> > I'm talking about a cluster of ~20 computers with something like 60
> > CPUs.
> >
> > Of course, we did not use multithreading very much. We are relying
> > on multi-processing (both "fork"ed style and separately started
> > programs) and multiplexing (i.e. application-driven micro-threading).
> > I especially like the latter: doing multiplexing in O'Caml is fun,
> > and a substitute for most applications of multithreading. For
> > example, you want to query multiple remote servers in parallel: very
> > easy with multiplexing, whereas the multithreaded counterpart would
> > quickly run into scalability problems (threads are heavy-weight and
> > need a lot of resources).
>
> If OCaml is good for concurrency on distributed systems that is
> great, but it is completely different to CPU-bound parallelism on
> multicores.

You sound like somebody who tries to sell hardware :-) Well, our
algorithms are quite easy to parallelize; the two techniques quoted
above look roughly like the two sketches below.
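First, a minimal sketch of the "fork"ed style of multi-processing:
partition N independent work items across a few worker processes, so
that the kernel supplies the parallelism and no memory is shared. This
is an illustration only, not the Wink backend's code; process_item is a
hypothetical placeholder for the real per-object work.

  (* fork_demo.ml - build (assumed):
       ocamlfind ocamlopt -package unix -linkpkg fork_demo.ml *)

  let process_item i =
    (* stand-in for the real per-object work *)
    Printf.printf "worker %d handled item %d\n%!" (Unix.getpid ()) i

  let run_workers n_workers n_items =
    let pids =
      List.init n_workers (fun w ->
        match Unix.fork () with
        | 0 ->
            (* child: handle every n_workers-th item, then exit *)
            let rec loop i =
              if i < n_items then (process_item i; loop (i + n_workers))
            in
            loop w;
            exit 0
        | pid -> pid)   (* parent: remember the child's pid *)
    in
    (* parent: wait for all children to finish *)
    List.iter (fun pid -> ignore (Unix.waitpid [] pid)) pids

  let () = run_workers 4 20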
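Second, a minimal sketch of the multiplexing idiom: query several
servers "in parallel" on a single OS thread. It is written against the
Lwt cooperative-threading library purely for illustration - the thread
does not say which event loop Wink used - and query_server is a
hypothetical stand-in that sleeps instead of doing real network I/O.

  (* mux_demo.ml - build (assumed):
       ocamlfind ocamlopt -package lwt.unix -linkpkg mux_demo.ml *)

  open Lwt.Infix  (* provides >>= *)

  (* stand-in for a remote query: sleep to simulate network latency,
     then return a result *)
  let query_server (host : string) : string Lwt.t =
    Lwt_unix.sleep (Random.float 1.0) >>= fun () ->
    Lwt.return (Printf.sprintf "result from %s" host)

  let () =
    let hosts = [ "server1"; "server2"; "server3" ] in
    (* map_p starts all queries at once; the Lwt scheduler multiplexes
       them over one OS thread until every result has arrived *)
    let results = Lwt_main.run (Lwt_list.map_p query_server hosts) in
    List.iter print_endline results

Note that no locks appear anywhere in either sketch, which is much of
the appeal.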
I don't see a difference in whether they are CPU-bound or disk-bound -
we also have lots of CPU-bound stuff, and the parallelization
strategies are the same. The important thing is whether the algorithm
can be formulated in a way so that state mutations are rare, or can at
least be done in a "cache-friendly" way. Such algorithms exist for a
lot of problems. I don't know which problems you want to solve, but it
sounds as if they were special ones. As in most industries, most of our
problems are simply "do the same for N objects" where N is very large,
and sometimes "sort data", also for large N.

> > In our case, the mutable data structures that count are on disk.
> > Everything else is only temporary state.
>
> Exactly. That is a completely different kettle of fish to writing
> high-performance numerical codes for scientific computing.

I don't understand. Relying on disk for sharing state is a big problem
for us, but unavoidable. Disk is slow memory with very special timing.
Experience shows that even accessing state over the network is cheaper
than over disk. Often we end up designing our algorithms around the
disk-access characteristics. Compared to that, access to RAM-backed
state over the network is fast and easy.

> > I admit that it is a challenge to structure programs in such a way
> > that parallel programs not sharing memory profit from mutable
> > state. Note that it is also a challenge to debug the locks in a
> > multithreaded program so that it runs 24/7. Parallelism is not easy
> > after all.
>
> Parallelism is easy in F#.

Then wonders must have happened that I'm not aware of. How does F#
prevent deadlocks?

> > This is a quite theoretical statement. We will rather see that most
> > application programmers will not learn parallelism at all, that
> > consumers will start to question the sense of multicores, and that
> > the chip industry will search for alternatives.
>
> On the contrary, that is not a theoretical statement at all: it
> already happened. F# already makes it much easier to write
> high-performance parallel algorithms, and its concurrent GC is the
> crux of that capability.

Don't misunderstand me, I'm not anti-F#. I simply have no interest
right now in taking advantage of multicores via a concurrent GC. I
would rather have ultra-fast single-core execution; I can do the
parallelization myself.

Gerd

-- 
------------------------------------------------------------
Gerd Stolpmann * Viktoriastr. 45 * 64293 Darmstadt * Germany
gerd@gerd-stolpmann.de          http://www.gerd-stolpmann.de
Phone: +49-6151-153855                  Fax: +49-6151-997714
------------------------------------------------------------