Out of curiosity, what polling mechanism is available on the lwt side?

On Mon, Mar 7, 2016 at 3:06 PM, Malcolm Matalka wrote:
> Yaron Minsky writes:
>
> > Right now, only select and epoll are supported, but adding support for
> > something else isn't hard. The Async_unix library has an interface
> > called File_descr_watcher_intf.S, which both select and epoll go
> > through. Adding support for another shouldn't be difficult if someone
> > with the right OS expertise wants to do it.
> >
> > Is there a particular kernel API you want support for?
>
> kqueue, I run most things on FreeBSD and select is sadly mostly useless
> for anything serious. I've played with the idea of adding kqueue
> support myself but haven't had the time.
>
> > y
> >
> > On Mon, Mar 7, 2016 at 1:16 PM, Malcolm Matalka wrote:
> >> Yaron Minsky writes:
> >>
> >>> This is definitely a fraught topic, and it's unfortunate that there's
> >>> no clear solution.
> >>>
> >>> To add a bit more information:
> >>>
> >>> - Async is more portable than it once was. There's now Core_kernel,
> >>> Async_kernel and Async_rpc_kernel, which allows us to do things like
> >>> run Async applications in the browser. I would think Windows
> >>> support would be pretty doable by someone who understands that world
> >>> well.
> >>>
> >>> That said, the chain of dependencies brought in by Async is still
> >>> quite big. This is something that could perhaps be improved, either
> >>> with better dead code analysis in OCaml, or some tweaks to
> >>> Async_kernel and Core_kernel themselves.
> >>
> >> When I last looked at the scheduler it was limited to [select] or
> >> [epoll], is this still the case? How difficult would it be to expand on
> >> those?
> >>
> >>> - There are things we could contemplate to make it easier to bridge
> >>> the divide. Jeremie Dimino did a proof of concept rewrite of lwt to
> >>> use async as its implementation, where an Lwt.t and a Deferred.t are
> >>> equal at the type level.
> >>>
> >>> https://github.com/janestreet/lwt-async
> >>>
> >>> Another possibility, and one that might be easier to write, would be
> >>> to allow Lwt code to run using the Async scheduler as another
> >>> possible back-end. This would allow one to have programs that used
> >>> both Async and Lwt together in one program, without running on
> >>> different threads.
> >>>
> >>> It's worth mentioning that if there is interest in making Async more
> >>> suitable for a wider variety of goals, we're happy to work with
> >>> outside contributors on it. For example, if someone wanted to work on
> >>> Windows support for Async, we'd be happy to help out on integrating
> >>> that work.
> >>>
> >>> Probably the biggest issue is executable size. That will get better
> >>> when we release an unpacked version of our external libraries. But
> >>> even then, the module-level granularity captures more things than
> >>> would be ideal.
> >>>
> >>> y
> >>>
> >>> On Mon, Mar 7, 2016 at 10:16 AM, Jesper Louis Andersen wrote:
> >>>>
> >>>> On Mon, Mar 7, 2016 at 2:38 AM, Yotam Barnoy wrote:
> >>>>>
> >>>>> Also, what happens to general utility functions that aren't
> >>>>> rewritten for Async/Lwt -- as far as I can tell, being in
> >>>>> non-monadic code, they will always starve other threads, since
> >>>>> they cannot yield to another Async/Lwt thread. Is this perception
> >>>>> correct?
> >>>>
> >>>> Yes.
> >>>>
> >>>> On one hand, your observation is negative in the sense that now
> >>>> your code has "color": it is written for one library only, and you
> >>>> have to transform code to the right color before it can be used.
> >>>> This is not the case if the concurrency model is at a lower
> >>>> level[0].
> >>>>
> >>>> On the other hand, your observation is positive: cooperative
> >>>> scheduling makes the points at which the code can switch explicit.
> >>>> This gives the programmer far more control over when one task is
> >>>> finished and the next one starts. You also avoid paying for a
> >>>> preemption check in the code all the time. If your code manipulates
> >>>> lots of shared data, it also simplifies things, since in a
> >>>> single-threaded context you don't usually have to protect data with
> >>>> a mutex as much[1]. Cooperative models, if carefully managed, can
> >>>> exploit structure in the problem domain, whereas a preemptive model
> >>>> needs to fit all cases.
> >>>>
> >>>> My personal opinion is that the preemptive model eventually wins
> >>>> over the cooperative model, much like it has in most (all popular)
> >>>> operating systems. It is simply more productive to take an up-front
> >>>> performance hit as a sacrifice for a system which is more robust
> >>>> against stray code misbehaving. If a cooperative system fails, it
> >>>> fails catastrophically. If a preemptive system fails, it degrades
> >>>> in performance.
> >>>>
> >>>> But given I have more than 10 years of Erlang programming behind me
> >>>> by now, I'm obviously biased toward certain computational models :)
> >>>>
> >>>> [0] Erlang would be one such example, where the system is
> >>>> preemptively scheduling for you and you can use any code in any
> >>>> place without having to worry about blocking for latency. Go is
> >>>> quasi-preemptive because it checks on function calls, but in
> >>>> contrast to Erlang a loop is not forced to factor through a
> >>>> recursion, so it can in principle run indefinitely. Haskell (GHC)
> >>>> is quasi-preemptive as well, checking on memory allocation
> >>>> boundaries. So the thing to look out for in GHC is latency from
> >>>> processing large arrays with no allocation, say.
> >>>>
> >>>> [1] Erlang has two VM runtimes for this reason.
> >>>> One is single-threaded and can avoid lots of locks, which is far
> >>>> faster for certain workloads, or on embedded devices with a single
> >>>> core only.
> >>>>
> >>>> --
> >>>> J.
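[For readers following along: Yaron mentions that select and epoll both go through a common watcher interface in Async_unix, so a kqueue backend "just" has to implement the same signature. The sketch below is a heavily simplified, hypothetical stand-in for that idea — the real File_descr_watcher_intf.S is considerably richer (pre/post-check phases, timeouts, read/write interest), and the names here are illustrative, not the actual API. It shows the shape of the abstraction: one module type, multiple OS-specific implementations.]

```ocaml
(* Hypothetical, simplified fd-watcher signature; a kqueue backend would
   implement the same signature via kevent(2) through C stubs. *)
module type Fd_watcher = sig
  type t
  val create : unit -> t
  val watch_read : t -> Unix.file_descr -> unit
  val unwatch : t -> Unix.file_descr -> unit
  (* Block for at most [timeout] seconds; return the fds ready to read. *)
  val wait : t -> timeout:float -> Unix.file_descr list
end

(* A select-based instance, built on the stdlib Unix library. *)
module Select_watcher : Fd_watcher = struct
  type t = { mutable fds : Unix.file_descr list }
  let create () = { fds = [] }
  let watch_read t fd = if not (List.mem fd t.fds) then t.fds <- fd :: t.fds
  let unwatch t fd = t.fds <- List.filter (fun f -> f <> fd) t.fds
  let wait t ~timeout =
    let ready, _, _ = Unix.select t.fds [] [] timeout in
    ready
end

let () =
  let r, w = Unix.pipe () in
  let t = Select_watcher.create () in
  Select_watcher.watch_read t r;
  (* Nothing written yet, so a zero-timeout poll finds nothing ready. *)
  assert (Select_watcher.wait t ~timeout:0.0 = []);
  ignore (Unix.write w (Bytes.of_string "x") 0 1);
  (* After the write, the read end shows up as ready. *)
  assert (Select_watcher.wait t ~timeout:1.0 = [r]);
  print_endline "ok"
```

The point of funneling every backend through one signature is exactly what Yaron describes: the scheduler never needs to know whether it is sitting on select, epoll, or kqueue.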
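[Jesper's point about explicit switch points — and Yotam's observation that non-yielding code starves everything else — can be made concrete with a toy cooperative scheduler in plain OCaml. This is a didactic sketch, not how Lwt or Async are actually implemented: both use a monadic interface rather than explicit continuation-passing, but the run-queue-of-thunks core is the same idea.]

```ocaml
(* Toy cooperative scheduler: a FIFO queue of ready-to-run thunks.
   A task runs until it explicitly hands back control; a task that
   never yields starves every other task. *)
let ready : (unit -> unit) Queue.t = Queue.create ()

let spawn f = Queue.add f ready

(* [yield k] is the explicit switch point: instead of running the
   continuation [k] now, put it at the back of the run queue. *)
let yield k = Queue.add k ready

let run () =
  while not (Queue.is_empty ready) do
    (Queue.pop ready) ()
  done

(* Two tasks, each yielding once; they interleave only at the yields. *)
let trace () =
  let log = Buffer.create 16 in
  spawn (fun () ->
      Buffer.add_string log "a1 ";
      yield (fun () -> Buffer.add_string log "a2 "));
  spawn (fun () ->
      Buffer.add_string log "b1 ";
      yield (fun () -> Buffer.add_string log "b2 "));
  run ();
  Buffer.contents log

let () = print_string (trace ())
(* prints "a1 b1 a2 b2 " *)
```

Replace either task body with a plain non-yielding loop and the other task never runs again — which is exactly why ordinary utility functions, compiled with no yield points, block an Async/Lwt program for their full duration.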