From: Yaron Minsky
Reply-To: yminsky@gmail.com
To: Erik Rigtorp
Cc: caml-list
Date: Sun, 20 Dec 2009 08:47:31 -0500
Subject: Re: [Caml-list] Re: OCaml is broken

On Sun, Dec 20, 2009 at 7:21 AM, Erik Rigtorp <erik@rigtorp.com> wrote:

> The first step for OCaml would be to be able to run multiple
> communicating instances of the runtime bound to one core each in one
> process and have them communicate via lock-free queues.

We've done some experiments in this direction at Jane Street. On Linux, we've been able to get IPC channels that are fast enough for our purposes, so slamming everything into the same memory space has not, in the end, been necessary. (There is, I agree, some pain associated with running multiple runtimes in the same process. If you're interested, contact me off-list and I can try to get you some of the details of what we ran into.)

But have you tried using shared-memory segments to communicate between different processes? You say the latencies are too high, but do you have any measurements you could share? In particular, have you tried queues built on shared-memory segments? Inter-thread communication has latency as well, and the performance depends on many things, the OS and hardware platform included. Concrete numbers would help in understanding the tradeoffs.
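To make the question concrete, here is a rough sketch (mine, invented for this mail, not something we ship) of the simplest shared-memory channel between two OCaml processes: a file under /dev/shm mapped into parent and child and used as a one-slot mailbox. It relies on the Unix.map_file binding from current OCaml releases, the path and layout are made up, and a real lock-free queue would need a proper ring buffer plus memory barriers, which plain Bigarray stores do not give you.

(* A crude one-slot mailbox in a shared mapping -- illustration only.
   map.{0} is a "message ready" flag, map.{1} is the payload. *)
let () =
  (* Back the segment with a file under /dev/shm so it stays in RAM. *)
  let path = Filename.temp_file ~temp_dir:"/dev/shm" "caml_shm" ".buf" in
  let fd = Unix.openfile path [ Unix.O_RDWR ] 0o600 in
  Unix.ftruncate fd 16;
  let map =
    Bigarray.array1_of_genarray
      (Unix.map_file fd Bigarray.int64 Bigarray.c_layout true [| 2 |])
  in
  match Unix.fork () with
  | 0 ->
    (* Child: publish one value, then raise the flag. *)
    map.{1} <- 42L;
    map.{0} <- 1L;
    exit 0
  | pid ->
    (* Parent: spin until the flag is up, then read the payload. *)
    while map.{0} = 0L do () done;
    Printf.printf "parent read %Ld from the child\n" map.{1};
    ignore (Unix.waitpid [] pid);
    Unix.close fd;
    Sys.remove path

Even something this naive would let you measure the round-trip latency you are worried about and compare it against going through the kernel.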
As we go to higher and higher core counts, I suspect that message-passing solutions are likely to scale better than shared memory, so I'm not so sure that OCaml is on the wrong path here. I think most of the work that's needed will come in the form of libraries, with only a little work in the compiler and the runtime. Given that, I think this is an issue for the community to solve, not INRIA.
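To illustrate what I mean by library-level message passing, here is an equally hedged sketch: a forked worker process that receives requests over one pipe and answers over another, with values serialized via Marshal. The request and reply types and all the names are invented for the example; a real library would add batching, non-blocking I/O, supervision, and so on.

(* Library-style message passing between processes: requests go down
   one pipe, replies come back on another, values travel via Marshal. *)
type request = Square of int
type reply = Result of int

let () =
  let req_r, req_w = Unix.pipe () in
  let rep_r, rep_w = Unix.pipe () in
  match Unix.fork () with
  | 0 ->
    (* Worker: serve requests until the parent closes its end. *)
    Unix.close req_w; Unix.close rep_r;
    let ic = Unix.in_channel_of_descr req_r in
    let oc = Unix.out_channel_of_descr rep_w in
    (try
       while true do
         let (Square n : request) = Marshal.from_channel ic in
         Marshal.to_channel oc (Result (n * n) : reply) [];
         flush oc
       done
     with End_of_file -> ());
    exit 0
  | pid ->
    (* Parent: send one request, block on the reply. *)
    Unix.close req_r; Unix.close rep_w;
    let oc = Unix.out_channel_of_descr req_w in
    let ic = Unix.in_channel_of_descr rep_r in
    Marshal.to_channel oc (Square 7 : request) [];
    flush oc;
    let (Result r : reply) = Marshal.from_channel ic in
    Printf.printf "7 squared is %d\n" r;
    close_out oc;
    ignore (Unix.waitpid [] pid)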
y
