From: Yotam Barnoy <yotambarnoy@gmail.com>
Date: Mon, 7 Mar 2016 16:54:02 -0500
To: Malcolm Matalka
Cc: Yaron Minsky, Jesper Louis Andersen, Ocaml Mailing List
Subject: Re: [Caml-list] Question about Lwt/Async

Out of curiosity, what polling mechanism is available on the lwt side?

On Mon, Mar 7, 2016 at 3:06 PM, Malcolm Matalka <mmatalka@gmail.com> wrote:
Yaron Minsky <yminsky@janestreet.com> writes:

> Right now, only select and epoll are supported, but adding support for
> something else isn't hard. The Async_unix library has an interface
> called File_descr_watcher_intf.S, which both select and epoll go
> through. Adding support for another shouldn't be difficult if someone
> with the right OS expertise wants to do it.
>
> Is there a particular kernel API you want support for?

kqueue, I run most things on FreeBSD and select is sadly mostly useless
for anything serious. I've played with the idea of adding kqueue
support myself but haven't had the time.
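
For illustration, the interface such a back-end has to satisfy is small: register interest in file descriptors and report which of them became ready. A rough OCaml sketch of a pluggable watcher, with a trivial select-based instance, is below. The names are illustrative only, not the actual File_descr_watcher_intf.S signature; a kqueue back-end would fill in the same shape on top of kqueue/kevent bindings.

  (* Sketch of a pluggable file-descriptor watcher, roughly the shape a
     select/epoll/kqueue back-end has to provide. Illustrative only; the
     real Async_unix interface (File_descr_watcher_intf.S) differs. *)
  module type FD_WATCHER = sig
    type t

    (* Create the watcher (e.g. open an epoll or kqueue handle). *)
    val create : unit -> t

    (* Register interest in readability/writability of a descriptor. *)
    val watch : t -> Unix.file_descr -> read:bool -> write:bool -> unit

    (* Stop watching a descriptor. *)
    val unwatch : t -> Unix.file_descr -> unit

    (* Block for at most [timeout] seconds; return the descriptors that
       became ready for reading and for writing. *)
    val wait :
      t -> timeout:float -> Unix.file_descr list * Unix.file_descr list
  end

  (* A trivial back-end over Unix.select, just to show the shape. *)
  module Select_watcher : FD_WATCHER = struct
    type t = {
      mutable readers : Unix.file_descr list;
      mutable writers : Unix.file_descr list;
    }

    let create () = { readers = []; writers = [] }

    let watch t fd ~read ~write =
      if read && not (List.mem fd t.readers) then t.readers <- fd :: t.readers;
      if write && not (List.mem fd t.writers) then t.writers <- fd :: t.writers

    let unwatch t fd =
      t.readers <- List.filter (fun fd' -> fd' <> fd) t.readers;
      t.writers <- List.filter (fun fd' -> fd' <> fd) t.writers

    let wait t ~timeout =
      let readable, writable, _ = Unix.select t.readers t.writers [] timeout in
      (readable, writable)
  end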

>
> y
>
> On Mon, Mar 7, 2016 at 1:16 PM, Malcolm Matalka <mmatalka@gmail.com> wrote:
>> Yaron Minsky <yminsky@janestreet.com> writes:
>>
>>> This is definitely a fraught topic, and it's unfortunate that there's
>>> no clear solution.
>>>
>>> To add a bit more information:
>>>
>>> - Async is more portable than it once was. There's now Core_kernel,
>>>   Async_kernel and Async_rpc_kernel, which allows us to do things like
>>>   run Async applications in the browser. I would think Windows
>>>   support would be pretty doable by someone who understands that world
>>>   well.
>>>
>>>   That said, the chain of dependencies brought in by Async is still
>>>   quite big. This is something that could perhaps be improved, either
>>>   with better dead code analysis in OCaml, or some tweaks to
>>>   Async_kernel and Core_kernel themselves.
>>
>> When I last looked at the scheduler it was limited to [select] or
>> [epoll], is this still the case? How difficult would it be to expand on
>> those?
>>
>>>
>>> - There are things we could contemplate to make it easier to bridge
>>>   the divide. Jeremie Dimino did a proof of concept rewrite of lwt to
>>>   use async as its implementation, where an Lwt.t and a Deferred.t are
>>>   equal at the type level.
>>>
>>>     https://github.com/janestreet/lwt-async
>>>
>>>   Another possibility, and one that might be easier to write, would be
>>>   to allow Lwt code to run using the Async scheduler as another
>>>   possible back-end. This would allow one to have programs that used
>>>   both Async and Lwt together in one program, without running on
>>>   different threads.
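
For illustration, the lighter-weight way many libraries bridge the divide today, short of sharing a scheduler, is to functorize over a minimal concurrency-monad signature that both Lwt and Async can satisfy. A rough sketch (the names are illustrative and not taken from either library):

  (* A minimal monad signature that library code can be written against. *)
  module type CONCURRENCY = sig
    type 'a t
    val return : 'a -> 'a t
    val bind : 'a t -> ('a -> 'b t) -> 'b t
  end

  (* Library code that only depends on the signature. *)
  module Make_echo (C : CONCURRENCY) = struct
    let echo_twice (read : unit -> string C.t) (write : string -> unit C.t) =
      C.bind (read ()) (fun line ->
        C.bind (write line) (fun () ->
          C.bind (read ()) write))
  end

  (* Instantiating with Lwt is a one-liner per operation... *)
  module Lwt_c : CONCURRENCY with type 'a t = 'a Lwt.t = struct
    type 'a t = 'a Lwt.t
    let return = Lwt.return
    let bind = Lwt.bind
  end

  module Echo_lwt = Make_echo (Lwt_c)

  (* ...and an Async instance would look much the same, e.g.
     [type 'a t = 'a Deferred.t], [let return = Deferred.return] and a
     small [bind] wrapper, modulo labelled arguments. *)

The scheduler-sharing idea above goes further, of course, since it would let existing, non-functorized Lwt and Async code coexist in a single program.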
>>>
>>> It's worth mentioning that if there is interest in making Async more
>>> suitable for a wider variety of goals, we're happy to work with
>>> outside contributors on it. For example, if someone wanted to work on
>>> Windows support for Async, we'd be happy to help out on integrating
>>> that work.
>>>
>>> Probably the biggest issue is executable size. That will get better
>>> when we release an unpacked version of our external libraries. But
>>> even then, the module-level granularity captures more things than
>>> would be ideal.
>>>
>>> y
>>>
>>> On Mon, Mar 7, 2016 at 10:16 AM, Jesper Louis Andersen
>>> <jesper.louis.andersen@gmail.com> wrote:
>>>>
>>>> On Mon, Mar 7, 2016 at 2:38 AM, Yotam Barnoy <yotambarnoy@gmail.com> wrote:
>>>>>
>>>>> Also, what happens to general utility functions that aren't rewritten for
>>>>> Async/Lwt -- as far as I can tell, being in non-monadic code, they will
>>>>> always starve other threads, since they cannot yield to another Async/Lwt
>>>>> thread. Is this perception correct?
>>>>
>>>>
>>>> Yes.
>>>>
>>>> On one hand, your observation is negative in the sense that now your code
>>>> has "color" in the sense that it is written for one library only. And you
>>>> have to transform code to having the right color before it can be used. This
>>>> is not the case if the concurrency model is at a lower level[0].
>>>>
>>>> On the other hand, your observation is positive: cooperative scheduling
>>>> makes the points in which the code can switch explicit. This gives the
>>>> programmer far more control over when you are done with a task and start to
>>>> process the next task. You can also avoid the preemption check in the code
>>>> all the time. If your code manipulates lots of shared data, it also
>>>> simplifies things since you don't usually have to protect data with a mutex
>>>> in a single-threaded context as much[1]. Cooperative models, if carefully
>>>> managed, can exploit structure in the problem domain, whereas a preemptive
>>>> model needs to fit all.
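
For illustration, here is what an explicit switch point looks like with Lwt (Async's scheduler has an analogous yield operation). The function names and chunk size are made up for the example: the first version is plain OCaml and starves every other thread for as long as it runs, while the second chunks the work and calls Lwt.pause so the scheduler can interleave other ready threads.

  (* Ordinary, non-monadic code: while this runs, no other Lwt thread
     can make progress. *)
  let checksum_blocking (data : bytes) =
    let acc = ref 0 in
    Bytes.iter (fun c -> acc := !acc + Char.code c) data;
    !acc

  (* Cooperative variant: do the work in chunks and insert an explicit
     yield point (Lwt.pause) between chunks. *)
  let checksum_cooperative ?(chunk = 64 * 1024) (data : bytes) : int Lwt.t =
    let len = Bytes.length data in
    let rec go pos acc =
      if pos >= len then Lwt.return acc
      else begin
        let stop = min len (pos + chunk) in
        let acc = ref acc in
        for i = pos to stop - 1 do
          acc := !acc + Char.code (Bytes.get data i)
        done;
        (* Let the scheduler run other ready threads before continuing. *)
        Lwt.bind (Lwt.pause ()) (fun () -> go stop !acc)
      end
    in
    go 0 0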
>>>>
>>>> My personal opinion is that the preemptive model eventually wins over the
>>>> cooperative model, much like it has in most (all popular) operating systems.
>>>> It is simply more productive to take an up-front performance hit as a
>>>> sacrifice for a system which is more robust against stray code misbehaving.
>>>> If a cooperative system fails, it fails catastrophically. If a preemptive
>>>> system fails, it degrades in performance.
>>>>
>>>> But given I have more than 10 years of Erlang programming behind me by now,
>>>> I'm obviously biased toward certain computational models :)
>>>>
>>>> [0] Erlang would be one such example, where the system is preemptively
>>>> scheduling for you and you can use any code in any place without having to
>>>> worry about blocking for latency. Go is quasi-preemptive because it checks
>>>> on function calls, but in contrast to Erlang a loop is not forced to factor
>>>> through a recursion, so it can in principle run indefinitely. Haskell (GHC)
>>>> is quasi-preemptive as well, checking on memory allocation boundaries. So
>>>> the thing to look out for in GHC is latency from processing large arrays
>>>> with no allocation, say.
>>>>
>>>> [1] Erlang has two VM runtimes for this reason. One is single-threaded and
>>>> can avoid lots of locks which is far faster for certain workloads, or on
>>>> embedded devices with a single core only.
>>>>
>>>> --
>>>> J.
