On Mon, Mar 7, 2016 at 2:38 AM, Yotam Barnoy wrote:

> Also, what happens to general utility functions that aren't rewritten for
> Async/Lwt -- as far as I can tell, being in non-monadic code, they will
> always starve other threads, since they cannot yield to another Async/Lwt
> thread. Is this perception correct?

Yes.

On one hand, your observation is negative: your code now has "color", in the sense that it is written for one library only, and you have to transform code to the right color before it can be used. This is not the case if the concurrency model sits at a lower level[0].

On the other hand, your observation is positive: cooperative scheduling makes the points at which the code can switch explicit. This gives the programmer far more control over when one task is finished and the next one starts to be processed. You also avoid paying for a preemption check throughout the code. And if your code manipulates lots of shared data, it simplifies things, since in a single-threaded context you usually don't have to protect that data with a mutex to the same extent[1]. (A small Lwt sketch of the difference is appended below.) A cooperative model, if carefully managed, can exploit structure in the problem domain, whereas a preemptive model has to fit all problems.

My personal opinion is that the preemptive model eventually wins over the cooperative model, much as it has in most (all popular) operating systems. It is simply more productive to take an up-front performance hit as a sacrifice for a system that is more robust against stray, misbehaving code. If a cooperative system fails, it fails catastrophically. If a preemptive system fails, it degrades in performance. But given that I have more than 10 years of Erlang programming behind me by now, I'm obviously biased toward certain computational models :)

[0] Erlang would be one such example, where the system schedules preemptively for you and you can use any code in any place without having to worry about blocking for latency. Go is quasi-preemptive because it checks on function calls, but in contrast to Erlang a loop is not forced to factor through a recursion, so it can in principle run indefinitely. Haskell (GHC) is quasi-preemptive as well, checking on memory allocation boundaries; so the thing to look out for in GHC is latency from processing large arrays with no allocation, say.

[1] Erlang has two VM runtimes for this reason. One is single-threaded and can avoid lots of locks, which makes it far faster for certain workloads, or on embedded devices with only a single core.

-- J.
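
[Editorial sketch, not part of the original message.] For concreteness, here is a minimal illustration of the point about non-monadic code starving other threads versus code "colored" for the library. It assumes the Lwt library (Lwt, Lwt_main, Lwt_io); busy_sum and cooperative_sum are made-up names for illustration only.

    (* Ordinary, non-monadic code: once the Lwt scheduler enters this
       function, no other Lwt thread can run until it returns. *)
    let busy_sum n =
      let rec go acc i = if i > n then acc else go (acc + i) (i + 1) in
      go 0 1

    (* The "colored" rewrite: the loop is factored through the Lwt monad
       and yields every so often via Lwt.pause, so other threads can run
       at those explicit points, and only there. *)
    let cooperative_sum n =
      let open Lwt.Infix in
      let rec go acc i =
        if i > n then Lwt.return acc
        else if i mod 10_000 = 0 then
          Lwt.pause () >>= fun () -> go (acc + i) (i + 1)
        else go (acc + i) (i + 1)
      in
      go 0 1

    let () =
      Lwt_main.run
        (let open Lwt.Infix in
         cooperative_sum 1_000_000 >>= fun s ->
         Lwt_io.printlf "sum = %d (busy_sum gives %d)" s (busy_sum 1_000_000))

The explicit Lwt.pause calls are exactly the scheduling points the message describes: the programmer chooses where a task may be suspended, at the cost of having to rewrite the utility function in the monadic style before it can cooperate.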