caml-list - the Caml user's mailing list
* [Caml-list] Thread behaviour
@ 2013-09-27 10:10 Tom Ridge
  2013-09-27 10:22 ` Simon Cruanes
                   ` (2 more replies)
  0 siblings, 3 replies; 26+ messages in thread
From: Tom Ridge @ 2013-09-27 10:10 UTC (permalink / raw)
  To: caml-list

Dear caml-list,

I have a little program which creates a thread, and then sits in a loop:

--

let f () =
  let _ = ignore (print_endline "3") in
  let _ = ignore (print_endline "hello") in
  let _ = ignore (print_endline "4") in
  ()

let main () =
  let _ = ignore (print_endline "1") in
  let t = Thread.create f () in
  (* let _ = Thread.join t in *)
  let _ = ignore (print_endline "2") in
  while true do
    flush stdout;
  done

let _ = main ()

--

I compile the program with the following Makefile clause:

test.byte: test.ml FORCE
ocamlc -o $@ -thread unix.cma threads.cma $<

When I run the program I get the output:

1
2

and the program then sits in the loop. I was expecting the output from
f to show up as well. If you wait a while, it does. But you have to
wait quite a while.

What am I doing wrong here? I notice that if I put Thread.yield in the
while loop then f's output gets printed pretty quickly. But why should
the while loop affect scheduling of f's thread?

Thanks

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Caml-list] Thread behaviour
  2013-09-27 10:10 [Caml-list] Thread behaviour Tom Ridge
@ 2013-09-27 10:22 ` Simon Cruanes
  2013-09-27 10:27 ` Romain Bardou
  2013-09-30  9:18 ` Xavier Leroy
  2 siblings, 0 replies; 26+ messages in thread
From: Simon Cruanes @ 2013-09-27 10:22 UTC (permalink / raw)
  To: Tom Ridge; +Cc: caml-list

[-- Attachment #1: Type: text/plain, Size: 1447 bytes --]

Le Fri, 27 Sep 2013, Tom Ridge a écrit :
> Dear caml-list,
> 
> I have a little program which creates a thread, and then sits in a loop:
> 
> --
> 
> let f () =
>   let _ = ignore (print_endline "3") in
>   let _ = ignore (print_endline "hello") in
>   let _ = ignore (print_endline "4") in
>   ()
> 
> let main () =
>   let _ = ignore (print_endline "1") in
>   let t = Thread.create f () in
>   (* let _ = Thread.join t in *)
>   let _ = ignore (print_endline "2") in
>   while true do
>     flush stdout;
>   done
> 
> let _ = main ()
> 
> --
> 
> I compile the program with the following Makefile clause:
> 
> test.byte: test.ml FORCE
> ocamlc -o $@ -thread unix.cma threads.cma $<
> 
> When I run the program I get the output:
> 
> 1
> 2
> 
> and the program then sits in the loop. I was expecting the output from
> f to show up as well. If you wait a while, it does. But you have to
> wait quite a while.
> 
> What am I doing wrong here? I notice that if I put Thread.yield in the
> while loop then f's output gets printed pretty quickly. But why should
> the while loop affect scheduling of f's thread?

I believe the main lock on the GC can only be released if you allocate
or call yield() explicitly. In this case, I suspect that flush does
neither, so the loop goes on and the f thread starves.
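
For instance, a minimal variant of the loop with an explicit yield
(untested, just to illustrate the idea) should let f run promptly:

  while true do
    Thread.yield ();   (* give the spawned thread a chance to run *)
    flush stdout
  done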

Please let more knowledgeable people correct me if I'm confused.

Cheers,

-- 
Simon

[-- Attachment #2: Type: application/pgp-signature, Size: 836 bytes --]

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Caml-list] Thread behaviour
  2013-09-27 10:10 [Caml-list] Thread behaviour Tom Ridge
  2013-09-27 10:22 ` Simon Cruanes
@ 2013-09-27 10:27 ` Romain Bardou
  2013-09-27 10:51   ` Benedikt Grundmann
  2013-09-30  9:18 ` Xavier Leroy
  2 siblings, 1 reply; 26+ messages in thread
From: Romain Bardou @ 2013-09-27 10:27 UTC (permalink / raw)
  To: caml-list

Le 27/09/2013 12:10, Tom Ridge a écrit :
> Dear caml-list,
> 
> I have a little program which creates a thread, and then sits in a loop:
> 
> --
> 
> let f () =
>   let _ = ignore (print_endline "3") in
>   let _ = ignore (print_endline "hello") in
>   let _ = ignore (print_endline "4") in
>   ()
> 
> let main () =
>   let _ = ignore (print_endline "1") in
>   let t = Thread.create f () in
>   (* let _ = Thread.join t in *)
>   let _ = ignore (print_endline "2") in
>   while true do
>     flush stdout;
>   done
> 
> let _ = main ()
> 
> --
> 
> I compile the program with the following Makefile clause:
> 
> test.byte: test.ml FORCE
> ocamlc -o $@ -thread unix.cma threads.cma $<
> 
> When I run the program I get the output:
> 
> 1
> 2
> 
> and the program then sits in the loop. I was expecting the output from
> f to show up as well. If you wait a while, it does. But you have to
> wait quite a while.
> 
> What am I doing wrong here? I notice that if I put Thread.yield in the
> while loop then f's output gets printed pretty quickly. But why should
> the while loop affect scheduling of f's thread?
> 
> Thanks
> 

OCaml's threads, unfortunately, are kind of cooperative: you need to
yield explicitly. Note that you will obtain an even worse result with a
native-code program. I observed this myself without looking at the
thread code itself, so maybe there is actually a way to "automatically
yield", but as far as I know there is no way to obtain the behavior you
want without using either yields, or processes instead of threads. This
is the reason for the Procord library I am developing (first version to
be released before the next OUPS meeting).

Also, you don't need to ignore the result of print_endline, as
print_endline already returns unit. And using let _ = ... in is the
same as using ignore, so using both is redundant.

Cheers,

-- 
Romain Bardou

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Caml-list] Thread behaviour
  2013-09-27 10:27 ` Romain Bardou
@ 2013-09-27 10:51   ` Benedikt Grundmann
  2013-09-28 19:09     ` Tom Ridge
  0 siblings, 1 reply; 26+ messages in thread
From: Benedikt Grundmann @ 2013-09-27 10:51 UTC (permalink / raw)
  To: Romain Bardou; +Cc: caml-list

[-- Attachment #1: Type: text/plain, Size: 2626 bytes --]

The ticker thread will cause yields, which will be honored at the next
allocation by the thread that currently holds the caml lock.  That said,
we have seen that sometimes the lock is reacquired by the same thread
again, so there are some fairness issues.


On Fri, Sep 27, 2013 at 11:27 AM, Romain Bardou <romain.bardou@inria.fr> wrote:

> Le 27/09/2013 12:10, Tom Ridge a écrit :
> > Dear caml-list,
> >
> > I have a little program which creates a thread, and then sits in a loop:
> >
> > --
> >
> > let f () =
> >   let _ = ignore (print_endline "3") in
> >   let _ = ignore (print_endline "hello") in
> >   let _ = ignore (print_endline "4") in
> >   ()
> >
> > let main () =
> >   let _ = ignore (print_endline "1") in
> >   let t = Thread.create f () in
> >   (* let _ = Thread.join t in *)
> >   let _ = ignore (print_endline "2") in
> >   while true do
> >     flush stdout;
> >   done
> >
> > let _ = main ()
> >
> > --
> >
> > I compile the program with the following Makefile clause:
> >
> > test.byte: test.ml FORCE
> > ocamlc -o $@ -thread unix.cma threads.cma $<
> >
> > When I run the program I get the output:
> >
> > 1
> > 2
> >
> > and the program then sits in the loop. I was expecting the output from
> > f to show up as well. If you wait a while, it does. But you have to
> > wait quite a while.
> >
> > What am I doing wrong here? I notice that if I put Thread.yield in the
> > while loop then f's output gets printed pretty quickly. But why should
> > the while loop affect scheduling of f's thread?
> >
> > Thanks
> >
>
> OCaml's thread, unfortunately, are kind of cooperative: you need to
> yield explicitly. Note that you will obtain an even different (worse)
> result with a native program. I observed this myself without looking at
> the thread code itself so maybe there is actually a way to
> "automatically yield" but as far as I know there is no way to obtain the
> behavior you want without using either yields or processes instead of
> threads. This is the reason for the Procord library I am developing
> (first version to be released before the next OUPS meeting).
>
> Also, you don't need to ignore the result of print_endline, as
> print_endline returns unit. And using let _ = ... in is the same as
> using ignore, so using both is not needed.
>
> Cheers,
>
> --
> Romain Bardou
>
> --
> Caml-list mailing list.  Subscription management and archives:
> https://sympa.inria.fr/sympa/arc/caml-list
> Beginner's list: http://groups.yahoo.com/group/ocaml_beginners
> Bug reports: http://caml.inria.fr/bin/caml-bugs
>

[-- Attachment #2: Type: text/html, Size: 3685 bytes --]

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Caml-list] Thread behaviour
  2013-09-27 10:51   ` Benedikt Grundmann
@ 2013-09-28 19:09     ` Tom Ridge
  2013-09-29  7:54       ` Tom Ridge
                         ` (2 more replies)
  0 siblings, 3 replies; 26+ messages in thread
From: Tom Ridge @ 2013-09-28 19:09 UTC (permalink / raw)
  To: caml-list; +Cc: Romain Bardou, Benedikt Grundmann

Would it be fair to say that OCaml does not currently support
pre-emptively scheduled threads?

I have read the lecture from Xavier archived here:

http://alan.petitepomme.net/cwn/2002.11.26.html#8

I would like to implement a library to handle messaging between
possibly-distributed OCaml processes. Alas, my design naively requires
pre-emptively scheduled threads (although it may be possible to change
the design, e.g. to work with Lwt): each message queue is accompanied
by a thread which reinitializes connections when they go down, hiding
this complexity from the user.
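
Roughly, the shape I have in mind is something like the following
sketch (just an illustration, with print_endline standing in for the
real reconnect-and-resend logic; compile with -thread unix.cma
threads.cma):

let q : string Queue.t = Queue.create ()
let m = Mutex.create ()
let c = Condition.create ()

let enqueue msg =
  Mutex.lock m;
  Queue.push msg q;
  Condition.signal c;
  Mutex.unlock m

(* the companion thread: in the real library this would also detect
   dropped connections and reconnect before resending *)
let rec worker () =
  Mutex.lock m;
  while Queue.is_empty q do Condition.wait c m done;
  let msg = Queue.pop q in
  Mutex.unlock m;
  print_endline ("send: " ^ msg);
  worker ()

let _ =
  let _t = Thread.create worker () in
  List.iter enqueue ["a"; "b"; "c"];
  Thread.delay 0.1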

Quoting Xavier:

"Scheduling I/O and computation concurrently, and managing process
stacks, is the job of the operating system."

But what if you want to implement a messaging library in OCaml? It
seems unlikely that all operating systems would fix on a standard
implementation of distributed message passing (or, even more funky,
distributed persistent message queues).


On 27 September 2013 11:51, Benedikt Grundmann
<bgrundmann@janestreet.com> wrote:
> The ticker thread will cause yields which will be honored on the next
> allocation of the thread that currently has the caml lock.  That said we
> have seen that sometimes the lock is reacquired by the same thread again.
> So there are some fairness issues.
>
>
> On Fri, Sep 27, 2013 at 11:27 AM, Romain Bardou <romain.bardou@inria.fr>
> wrote:
>>
>> Le 27/09/2013 12:10, Tom Ridge a écrit :
>> > Dear caml-list,
>> >
>> > I have a little program which creates a thread, and then sits in a loop:
>> >
>> > --
>> >
>> > let f () =
>> >   let _ = ignore (print_endline "3") in
>> >   let _ = ignore (print_endline "hello") in
>> >   let _ = ignore (print_endline "4") in
>> >   ()
>> >
>> > let main () =
>> >   let _ = ignore (print_endline "1") in
>> >   let t = Thread.create f () in
>> >   (* let _ = Thread.join t in *)
>> >   let _ = ignore (print_endline "2") in
>> >   while true do
>> >     flush stdout;
>> >   done
>> >
>> > let _ = main ()
>> >
>> > --
>> >
>> > I compile the program with the following Makefile clause:
>> >
>> > test.byte: test.ml FORCE
>> > ocamlc -o $@ -thread unix.cma threads.cma $<
>> >
>> > When I run the program I get the output:
>> >
>> > 1
>> > 2
>> >
>> > and the program then sits in the loop. I was expecting the output from
>> > f to show up as well. If you wait a while, it does. But you have to
>> > wait quite a while.
>> >
>> > What am I doing wrong here? I notice that if I put Thread.yield in the
>> > while loop then f's output gets printed pretty quickly. But why should
>> > the while loop affect scheduling of f's thread?
>> >
>> > Thanks
>> >
>>
>> OCaml's thread, unfortunately, are kind of cooperative: you need to
>> yield explicitly. Note that you will obtain an even different (worse)
>> result with a native program. I observed this myself without looking at
>> the thread code itself so maybe there is actually a way to
>> "automatically yield" but as far as I know there is no way to obtain the
>> behavior you want without using either yields or processes instead of
>> threads. This is the reason for the Procord library I am developing
>> (first version to be released before the next OUPS meeting).
>>
>> Also, you don't need to ignore the result of print_endline, as
>> print_endline returns unit. And using let _ = ... in is the same as
>> using ignore, so using both is not needed.
>>
>> Cheers,
>>
>> --
>> Romain Bardou
>>
>> --
>> Caml-list mailing list.  Subscription management and archives:
>> https://sympa.inria.fr/sympa/arc/caml-list
>> Beginner's list: http://groups.yahoo.com/group/ocaml_beginners
>> Bug reports: http://caml.inria.fr/bin/caml-bugs
>
>

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Caml-list] Thread behaviour
  2013-09-28 19:09     ` Tom Ridge
@ 2013-09-29  7:54       ` Tom Ridge
  2013-09-29 12:37         ` Yaron Minsky
  2013-09-30  8:16       ` Romain Bardou
  2013-10-07 14:49       ` Goswin von Brederlow
  2 siblings, 1 reply; 26+ messages in thread
From: Tom Ridge @ 2013-09-29  7:54 UTC (permalink / raw)
  To: caml-list

[-- Attachment #1: Type: text/plain, Size: 4254 bytes --]

Having read that lecture again, I understand that I should be using a
message passing interface written in some other language, with bindings to
OCaml.

Thanks

On Saturday, 28 September 2013, Tom Ridge wrote:

> Would it be fair to say that OCaml does not currently support
> pre-emptively scheduled threads?
>
> I have read the lecture from Xavier archived here:
>
> http://alan.petitepomme.net/cwn/2002.11.26.html#8
>
> I would like to implement a library to handle messaging between
> possibly-distributed OCaml processes. Alas, my design naively requires
> pre-emptively scheduled threads (although it may be possible to change
> the design e.g. to work with Lwt) - each message queue is accompanied
> by a thread which reinitializes connections when connections go down
> etc., hiding this complexity from the user.
>
> Quoting Xavier:
>
> "Scheduling I/O and computation concurrently, and managing process
> stacks, is the job of the operating system."
>
> But what if you want to implement a messaging library in OCaml? It
> seems unlikely that all operating systems would fix on a standard
> implementation of distributed message passing (or, even more funky,
> distributed persistent message queues).
>
>
> On 27 September 2013 11:51, Benedikt Grundmann
> <bgrundmann@janestreet.com> wrote:
> > The ticker thread will cause yields which will be honored on the next
> > allocation of the thread that currently has the caml lock.  That said we
> > have seen that sometimes the lock is reacquired by the same thread again.
> > So there are some fairness issues.
> >
> >
> > On Fri, Sep 27, 2013 at 11:27 AM, Romain Bardou <romain.bardou@inria.fr>
> > wrote:
> >>
> >> Le 27/09/2013 12:10, Tom Ridge a écrit :
> >> > Dear caml-list,
> >> >
> >> > I have a little program which creates a thread, and then sits in a
> loop:
> >> >
> >> > --
> >> >
> >> > let f () =
> >> >   let _ = ignore (print_endline "3") in
> >> >   let _ = ignore (print_endline "hello") in
> >> >   let _ = ignore (print_endline "4") in
> >> >   ()
> >> >
> >> > let main () =
> >> >   let _ = ignore (print_endline "1") in
> >> >   let t = Thread.create f () in
> >> >   (* let _ = Thread.join t in *)
> >> >   let _ = ignore (print_endline "2") in
> >> >   while true do
> >> >     flush stdout;
> >> >   done
> >> >
> >> > let _ = main ()
> >> >
> >> > --
> >> >
> >> > I compile the program with the following Makefile clause:
> >> >
> >> > test.byte: test.ml FORCE
> >> > ocamlc -o $@ -thread unix.cma threads.cma $<
> >> >
> >> > When I run the program I get the output:
> >> >
> >> > 1
> >> > 2
> >> >
> >> > and the program then sits in the loop. I was expecting the output from
> >> > f to show up as well. If you wait a while, it does. But you have to
> >> > wait quite a while.
> >> >
> >> > What am I doing wrong here? I notice that if I put Thread.yield in the
> >> > while loop then f's output gets printed pretty quickly. But why should
> >> > the while loop affect scheduling of f's thread?
> >> >
> >> > Thanks
> >> >
> >>
> >> OCaml's thread, unfortunately, are kind of cooperative: you need to
> >> yield explicitly. Note that you will obtain an even different (worse)
> >> result with a native program. I observed this myself without looking at
> >> the thread code itself so maybe there is actually a way to
> >> "automatically yield" but as far as I know there is no way to obtain the
> >> behavior you want without using either yields or processes instead of
> >> threads. This is the reason for the Procord library I am developing
> >> (first version to be released before the next OUPS meeting).
> >>
> >> Also, you don't need to ignore the result of print_endline, as
> >> print_endline returns unit. And using let _ = ... in is the same as
> >> using ignore, so using both is not needed.
> >>
> >> Cheers,
> >>
> >> --
> >> Romain Bardou
> >>
> >> --
> >> Caml-list mailing list.  Subscription management and archives:
> >> https://sympa.inria.fr/sympa/arc/caml-list
> >> Beginner's list: http://groups.yahoo.com/group/ocaml_beginners
> >> Bug reports: http://caml.inria.fr/bin/caml-bugs
> >
> >
>

[-- Attachment #2: Type: text/html, Size: 5798 bytes --]

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Caml-list] Thread behaviour
  2013-09-29  7:54       ` Tom Ridge
@ 2013-09-29 12:37         ` Yaron Minsky
  2013-09-29 16:25           ` Tom Ridge
  0 siblings, 1 reply; 26+ messages in thread
From: Yaron Minsky @ 2013-09-29 12:37 UTC (permalink / raw)
  To: Tom Ridge; +Cc: caml-list

I think you've come away with the wrong conclusion.  Here's my summary
of the state of play.

OCaml has a single runtime lock, but _does_ support pre-emptive OS
threads.  You can use them for concurrency but not parallelism.  Code
that runs in a tight loop in one thread, though, can starve another
thread, particularly if it doesn't allocate, because OCaml only gives
up the runtime lock when it allocates.

You should probably avoid direct use of threads for concurrent
programming.  Monadic concurrency libraries like lwt or async are the
way to go.  You can definitely build message-passing libraries on top
of these, and many have.
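
To give a flavour of the style, here is a tiny Lwt sketch (assuming the
lwt and lwt.unix packages are installed; async is analogous): two
cooperative threads exchanging a message through an Lwt_mvar.

let () =
  let open Lwt in
  let box = Lwt_mvar.create_empty () in
  let producer = Lwt_mvar.put box "hello" in
  (* the consumer blocks cooperatively until the message arrives *)
  let consumer = Lwt_mvar.take box >>= fun msg -> Lwt_io.printl msg in
  Lwt_main.run (join [producer; consumer])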

You can also build such libraries against raw threads, but I wouldn't recommend it.

None of which is to argue that you shouldn't use a binding of some
non-OCaml message-passing library.  I've heard excellent things about
zeromq, which has bindings in opam.

y

On Sun, Sep 29, 2013 at 3:54 AM, Tom Ridge
<tom.j.ridge+caml@googlemail.com> wrote:
> Having read that lecture again, I understand that I should be using a
> message passing interface written in some other language, with bindings to
> OCaml.
>
> Thanks
>
>
> On Saturday, 28 September 2013, Tom Ridge wrote:
>>
>> Would it be fair to say that OCaml does not currently support
>> pre-emptively scheduled threads?
>>
>> I have read the lecture from Xavier archived here:
>>
>> http://alan.petitepomme.net/cwn/2002.11.26.html#8
>>
>> I would like to implement a library to handle messaging between
>> possibly-distributed OCaml processes. Alas, my design naively requires
>> pre-emptively scheduled threads (although it may be possible to change
>> the design e.g. to work with Lwt) - each message queue is accompanied
>> by a thread which reinitializes connections when connections go down
>> etc., hiding this complexity from the user.
>>
>> Quoting Xavier:
>>
>> "Scheduling I/O and computation concurrently, and managing process
>> stacks, is the job of the operating system."
>>
>> But what if you want to implement a messaging library in OCaml? It
>> seems unlikely that all operating systems would fix on a standard
>> implementation of distributed message passing (or, even more funky,
>> distributed persistent message queues).
>>
>>
>> On 27 September 2013 11:51, Benedikt Grundmann
>> <bgrundmann@janestreet.com> wrote:
>> > The ticker thread will cause yields which will be honored on the next
>> > allocation of the thread that currently has the caml lock.  That said we
>> > have seen that sometimes the lock is reacquired by the same thread
>> > again.
>> > So there are some fairness issues.
>> >
>> >
>> > On Fri, Sep 27, 2013 at 11:27 AM, Romain Bardou <romain.bardou@inria.fr>
>> > wrote:
>> >>
>> >> Le 27/09/2013 12:10, Tom Ridge a écrit :
>> >> > Dear caml-list,
>> >> >
>> >> > I have a little program which creates a thread, and then sits in a
>> >> > loop:
>> >> >
>> >> > --
>> >> >
>> >> > let f () =
>> >> >   let _ = ignore (print_endline "3") in
>> >> >   let _ = ignore (print_endline "hello") in
>> >> >   let _ = ignore (print_endline "4") in
>> >> >   ()
>> >> >
>> >> > let main () =
>> >> >   let _ = ignore (print_endline "1") in
>> >> >   let t = Thread.create f () in
>> >> >   (* let _ = Thread.join t in *)
>> >> >   let _ = ignore (print_endline "2") in
>> >> >   while true do
>> >> >     flush stdout;
>> >> >   done
>> >> >
>> >> > let _ = main ()
>> >> >
>> >> > --
>> >> >
>> >> > I compile the program with the following Makefile clause:
>> >> >
>> >> > test.byte: test.ml FORCE
>> >> > ocamlc -o $@ -thread unix.cma threads.cma $<
>> >> >
>> >> > When I run the program I get the output:
>> >> >
>> >> > 1
>> >> > 2
>> >> >
>> >> > and the program then sits in the loop. I was expecting the output
>> >> > from
>> >> > f to show up as well. If you wait a while, it does. But you have to
>> >> > wait quite a while.
>> >> >
>> >> > What am I doing wrong here? I notice that if I put Thread.yield in
>> >> > the
>> >> > while loop then f's output gets printed pretty quickly. But why
>> >> > should
>> >> > the while loop affect scheduling of f's thread?
>> >> >
>> >> > Thanks
>> >> >
>> >>
>> >> OCaml's thread, unfortunately, are kind of cooperative: you need to
>> >> yield explicitly. Note that you will obtain an even different (worse)
>> >> result with a native program. I observed this myself without looking at
>> >> the thread code itself so maybe there is actually a way to
>> >> "automatically yield" but as far as I know there is no way to obtain
>> >> the
>> >> behavior you want without using either yields or processes instead of
>> >> threads. This is the reason for the Procord library I am developing
>> >> (first version to be released before the next OUPS meeting).
>> >>
>> >> Also, you don't need to ignore the result of print_endline, as
>> >> print_endline returns unit. And using let _ = ... in is the same as
>> >> using ignore, so using both is not needed.
>> >>
>> >> Cheers,
>> >>
>> >> --
>> >> Romain Bardou
>> >>
>> >> --
>> >> Caml-list mailing list.  Subscription management and archives:
>> >> https://sympa.inria.fr/sympa/arc/caml-list
>> >> Beginner's list: http://groups.yahoo.com/group/ocaml_beginners
>> >> Bug reports: http://caml.inria.fr/bin/caml-bugs
>> >
>> >

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Caml-list] Thread behaviour
  2013-09-29 12:37         ` Yaron Minsky
@ 2013-09-29 16:25           ` Tom Ridge
  2013-09-29 16:46             ` Chet Murthy
  0 siblings, 1 reply; 26+ messages in thread
From: Tom Ridge @ 2013-09-29 16:25 UTC (permalink / raw)
  To: Yaron Minsky; +Cc: caml-list

On 29 September 2013 13:37, Yaron Minsky <yminsky@janestreet.com> wrote:
> I think you've come away with the wrong conclusion.  Here's my summary
> of the state of play.
>
> OCaml has a single runtime lock, but _does_ support pre-emptive OS
> threads.  You can use them for concurrency but not parallelism.  Code
> that runs in a tight loop in one thread, though, can starve another
> thread, particularly if it doesn't allocate, because OCaml only gives
> up the runtime lock when it allocates.

OK, but this means that I can't think of OCaml code running in
different threads in a simple way - I have to consider the single
runtime lock at least (if I care about performance). So maybe the
question is: how should I understand the functioning of the single
runtime lock? For example, is starvation the only thing I need to
think about?

>
> You should probably avoid direct use of threads for concurrent
> programming.  Monadic concurrency libraries like lwt or async are the
> way to go.  You can definitely build message-passing libraries on top
> of these, and many have.

Could you provide some pointers to these libraries? I would be very
interested in this. One thing I didn't immediately see how to do, when
reading the Lwt docs, was to provide a user interface (connect, send,
recv etc) that did not expose the fact that co-operative threads were
in the background. It also seems to me that if someone uses my library
with their code, and their code itself uses Lwt, then there is the
possibility for strange interactions (my Lwt threads are not isolated
from their Lwt threads; with system threads and system-provided
pre-emptive scheduling, you (maybe) do get some isolation/fairness
properties).

>
> You can also build such libraries against raw threads, but I would recommend.
>
> None of which is to argue that you shouldn't use a binding of some
> non-OCaml message-passing library.  I've heard excellent things about
> zeromq,which has bindings in opam.

OK! So +1 for zeromq. I will check it out. Any votes for any other
libraries? OCamlMPI?

BTW thanks to everyone for taking the time to respond to these
questions, and apologies for my ignorance.

T

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Caml-list] Thread behaviour
  2013-09-29 16:25           ` Tom Ridge
@ 2013-09-29 16:46             ` Chet Murthy
  2013-09-29 17:18               ` Tom Ridge
  0 siblings, 1 reply; 26+ messages in thread
From: Chet Murthy @ 2013-09-29 16:46 UTC (permalink / raw)
  To: caml-list, Tom Ridge; +Cc: Yaron Minsky


> OK, but this means that I can't think of OCaml code running in
> different threads in a simple way - I have to consider the single
> runtime lock at least (if I care about performance). So maybe the
> question is: how should I understand the functioning of the single
> runtime lock? For example, is starvation the only thing I need to
> think about?

Tom,

Yaron's note made a careful distinction between two things:
"concurrency" and "performance".  Let me explain a little more what
he's getting at.

"concurrency" means writing programs using constructs that look like
concurrent execution, for the purpose of .... well, for whatever
purpose you'd like, but usually modularity.

  --> in short, you don't care if your code's running on a single
      processor -- you just want to -write- it with multiple threads

  [of course, here you care about -starvation-, hence the worry about
  a tight loop with no possibility of suspension/"releasing the global
  lock"; but let's set that aside, since it's a different concern
  (even the LISPm, from what I remember, had issues here -- heck, so
  do some JVMs, for GC)]

"parallelism" means exploiting hardware facilities to -run- code in
parallel, usually for purposes of performance.

  --> you care very much if your code "properly" exploits
      SMP/multicore parallelism

When you speak of "performance", I'm going to guess that you're really
speaking of "parallelism" and not "concurrency".  If so, Ocaml's
threads are not for you, nor are monadic programming constructs.  Not
merely effectively, but -by- -construction- any threading system with
a global lock can only use a single CPU.  You can get more than that
from Ocaml, if Ocaml threads, spend a lot of time in C/C++ code that's
CPU-intensive.  But I'm assuming that that's not what you're looking
to do.

Does this help?
--chet--


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Caml-list] Thread behaviour
  2013-09-29 16:46             ` Chet Murthy
@ 2013-09-29 17:18               ` Tom Ridge
  2013-09-29 17:47                 ` Chet Murthy
  0 siblings, 1 reply; 26+ messages in thread
From: Tom Ridge @ 2013-09-29 17:18 UTC (permalink / raw)
  To: Chet Murthy, caml-list

Hi Chet,



On 29 September 2013 17:46, Chet Murthy <murthy.chet@gmail.com> wrote:
>
>> OK, but this means that I can't think of OCaml code running in
>> different threads in a simple way - I have to consider the single
>> runtime lock at least (if I care about performance). So maybe the
>> question is: how should I understand the functioning of the single
>> runtime lock? For example, is starvation the only thing I need to
>> think about?
>
> Tom,
>
> Yaron's note made a careful distinction between two things:
> "concurrency" and "performance".  Let me explain a little more what
> he's getting at.

I sort-of understand what people mean by concurrency and
*parallelism*. Parallelism is e.g. trying to use multiple CPUs.
Concurrency is using constructs like threads etc. I think these words
aren't the best choice if trying to make this distinction, but that is
another issue.

>
> "concurrency" means writing programs using constructs that look like
> concurrent execution, for the purpose of .... well, for whatever
> purpose you'd like, but usually modularity.
>
>   --> in short, you don't care if your code's running on a single
>       processor -- you just want to -write- it with multiple threads

This is my situation. I don't care if the code runs on a single core
(in fact, I hope it does), but I do want to use threads which are
scheduled reasonably independently and reasonably fairly. My first
example shows that one thread is effectively starved by the other
thread.

>
>   [of course, here you care about -starvation-, hence the worry about
>   a tight loop with no possibility of suspension/"releasing the global
>   lock"; but let's set that aside, since it's a different concern
>   (even the LISPm, from what I remember, had issues here -- heck, so
>   do some JVMs, for GC)]

This is my concern! I want to understand what the issues are with
running code which uses multiple threads (from OCaml's Thread lib), on
a single core, given that there is a global lock.

>
> "parallelism" means exploiting hardware facilities to -run- code in
> parallel, usually for purposes of performance.
>
>   --> you care very much if your code "properly" exploits
>       SMP/multicore parallelism
>
> When you speak of "performance", I'm going to guess that you're really
> speaking of "parallelism" and not "concurrency".

No! I really am talking about concurrency. I am a firm believer that
message-passing is a very good way to utilize multiple cores. What I
want is to implement a message passing library, using multi-threading,
on a single core. I want to understand the issues that the single
runtime lock can cause when trying to do this. My naive idea is that
different threads execute as they would, say, using POSIX threads in
C. But the presence of the single runtime lock means that threads do
not behave like this (the code I started this thread off with
illustrates this). So how do they behave?

> If so, Ocaml's
> threads are not for you, nor are monadic programming constructs.  Not
> merely effectively, but -by- -construction- any threading system with
> a global lock can only use a single CPU.  You can get more than that
> from Ocaml, if Ocaml threads, spend a lot of time in C/C++ code that's
> CPU-intensive.  But I'm assuming that that's not what you're looking
> to do.

I didn't quite get that last sentence, but yes, I understand what you
are saying. I really do want to understand how to think about my code,
which uses OCaml Thread threads, on a single core, given that there is
a global lock. Now, it may be that to understand this there is no more
abstract approach than looking at the OCaml implementation. That is
fine. But it may be that people will say "the only problem you can
have is starvation, and you can avoid that by ensuring that each of
your threads calls Thread.yield() sufficiently often". That would be a
nice answer to have!

Thanks

T

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Caml-list] Thread behaviour
  2013-09-29 17:18               ` Tom Ridge
@ 2013-09-29 17:47                 ` Chet Murthy
  2013-09-30  8:24                   ` Romain Bardou
  0 siblings, 1 reply; 26+ messages in thread
From: Chet Murthy @ 2013-09-29 17:47 UTC (permalink / raw)
  To: Tom Ridge; +Cc: caml-list


> This is my situation. I don't care if the code runs on a single core
> (in fact, I hope it does), but I do want to use threads which are
> scheduled reasonably independently and reasonably fairly. My first
> example shows that one thread is effectively starved by the other
> thread.

Ah.  ok.  In this case, it's easier.  You just need to ensure that in
every loop, in every recursive function, there's a call to something
that yield()s, on every path.  It's that simple, and that icky.  But
then, if you have code that literally doesn't do anything that yields,
in a loop, it's compute-intensive, and -that- means you're not really
asking for concurrency, are you?
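
Concretely, the discipline looks like this (just a sketch):

(* every path through the recursion passes through a yield *)
let rec crunch n =
  Thread.yield ();
  if n = 0 then () else crunch (n - 1)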

BTW, to your original question "why should the while loop affect
scheduling of f's thread": because there is a global operation
(scheduling) that needs cooperation from all threads in order to
execute.  And that requires explicit coding by the programmer.  Now,
the compiler -could- have inserted code to do the yield() (in some old
LISPms, it was done at every backward jump and return, I think).

I can't speculate as to why it wasn't done, but given that the goal of
ocaml's threads is concurrency, and not parallelism, it isn't common
(at least, in my experience) to write code that doesn't naturally
reach yield points frequently.

--chet--


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Caml-list] Thread behaviour
  2013-09-28 19:09     ` Tom Ridge
  2013-09-29  7:54       ` Tom Ridge
@ 2013-09-30  8:16       ` Romain Bardou
  2013-10-01  3:32         ` Ivan Gotovchits
  2013-10-07 14:49       ` Goswin von Brederlow
  2 siblings, 1 reply; 26+ messages in thread
From: Romain Bardou @ 2013-09-30  8:16 UTC (permalink / raw)
  To: Tom Ridge; +Cc: caml-list, Benedikt Grundmann

It happens that I implemented a library to handle messaging between
possibly-distributed OCaml processes myself. Well, for now one can only
send one message and receive one message: the goal is to be able to run
a function 'a -> 'b in another process, given marshaling functions for
'a and 'b and exceptions. I do plan to add the possibility of sending
more "intermediate" messages (I need it). It works on Linux and Windows.

I did not do any official release yet, but you can have a look here:

https://github.com/cryptosense/procord

You can clone the repository and run "make doc" to view the Ocamldoc
documentation in HTML. (I would have put it on my web page but I can't
access it right now.)

In the documentation directory there is a motivation.txt file which
explains the need for the library and compares it to other solutions.

I need this library for a commercial product, so I plan to maintain it
in the long run.

If you plan to build your own library, consider contributing to Procord
instead :) If you don't want to I would be happy to hear what you want
to do differently.

I will present Procord at the next OUPS meeting, and I plan to do an
official release before that. Before then, I need to check whether the
MIT license must be copied into every source file or whether a LICENSE
file in the root directory is enough.

Cheers,

-- 
Romain Bardou

Le 28/09/2013 21:09, Tom Ridge a écrit :
> Would it be fair to say that OCaml does not currently support
> pre-emptively scheduled threads?
> 
> I have read the lecture from Xavier archived here:
> 
> http://alan.petitepomme.net/cwn/2002.11.26.html#8
> 
> I would like to implement a library to handle messaging between
> possibly-distributed OCaml processes. Alas, my design naively requires
> pre-emptively scheduled threads (although it may be possible to change
> the design e.g. to work with Lwt) - each message queue is accompanied
> by a thread which reinitializes connections when connections go down
> etc., hiding this complexity from the user.
> 
> Quoting Xavier:
> 
> "Scheduling I/O and computation concurrently, and managing process
> stacks, is the job of the operating system."
> 
> But what if you want to implement a messaging library in OCaml? It
> seems unlikely that all operating systems would fix on a standard
> implementation of distributed message passing (or, even more funky,
> distributed persistent message queues).
> 
> 
> On 27 September 2013 11:51, Benedikt Grundmann
> <bgrundmann@janestreet.com> wrote:
>> The ticker thread will cause yields which will be honored on the next
>> allocation of the thread that currently has the caml lock.  That said we
>> have seen that sometimes the lock is reacquired by the same thread again.
>> So there are some fairness issues.
>>
>>
>> On Fri, Sep 27, 2013 at 11:27 AM, Romain Bardou <romain.bardou@inria.fr>
>> wrote:
>>>
>>> Le 27/09/2013 12:10, Tom Ridge a écrit :
>>>> Dear caml-list,
>>>>
>>>> I have a little program which creates a thread, and then sits in a loop:
>>>>
>>>> --
>>>>
>>>> let f () =
>>>>   let _ = ignore (print_endline "3") in
>>>>   let _ = ignore (print_endline "hello") in
>>>>   let _ = ignore (print_endline "4") in
>>>>   ()
>>>>
>>>> let main () =
>>>>   let _ = ignore (print_endline "1") in
>>>>   let t = Thread.create f () in
>>>>   (* let _ = Thread.join t in *)
>>>>   let _ = ignore (print_endline "2") in
>>>>   while true do
>>>>     flush stdout;
>>>>   done
>>>>
>>>> let _ = main ()
>>>>
>>>> --
>>>>
>>>> I compile the program with the following Makefile clause:
>>>>
>>>> test.byte: test.ml FORCE
>>>> ocamlc -o $@ -thread unix.cma threads.cma $<
>>>>
>>>> When I run the program I get the output:
>>>>
>>>> 1
>>>> 2
>>>>
>>>> and the program then sits in the loop. I was expecting the output from
>>>> f to show up as well. If you wait a while, it does. But you have to
>>>> wait quite a while.
>>>>
>>>> What am I doing wrong here? I notice that if I put Thread.yield in the
>>>> while loop then f's output gets printed pretty quickly. But why should
>>>> the while loop affect scheduling of f's thread?
>>>>
>>>> Thanks
>>>>
>>>
>>> OCaml's thread, unfortunately, are kind of cooperative: you need to
>>> yield explicitly. Note that you will obtain an even different (worse)
>>> result with a native program. I observed this myself without looking at
>>> the thread code itself so maybe there is actually a way to
>>> "automatically yield" but as far as I know there is no way to obtain the
>>> behavior you want without using either yields or processes instead of
>>> threads. This is the reason for the Procord library I am developing
>>> (first version to be released before the next OUPS meeting).
>>>
>>> Also, you don't need to ignore the result of print_endline, as
>>> print_endline returns unit. And using let _ = ... in is the same as
>>> using ignore, so using both is not needed.
>>>
>>> Cheers,
>>>
>>> --
>>> Romain Bardou
>>>
>>> --
>>> Caml-list mailing list.  Subscription management and archives:
>>> https://sympa.inria.fr/sympa/arc/caml-list
>>> Beginner's list: http://groups.yahoo.com/group/ocaml_beginners
>>> Bug reports: http://caml.inria.fr/bin/caml-bugs
>>
>>


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Caml-list] Thread behaviour
  2013-09-29 17:47                 ` Chet Murthy
@ 2013-09-30  8:24                   ` Romain Bardou
  2013-10-07 14:57                     ` Goswin von Brederlow
  0 siblings, 1 reply; 26+ messages in thread
From: Romain Bardou @ 2013-09-30  8:24 UTC (permalink / raw)
  To: caml-list

Le 29/09/2013 19:47, Chet Murthy a écrit :
> 
>> This is my situation. I don't care if the code runs on a single core
>> (in fact, I hope it does), but I do want to use threads which are
>> scheduled reasonably independently and reasonably fairly. My first
>> example shows that one thread is effectively starved by the other
>> thread.
> 
> Ah.  ok.  In this case, it's easier.  You just need to ensure that in
> every loop,in every recursive function, there's a call to something
> that yield()s, on every path.  It's that simple, and that icky.  But
> then, if you have code that literally doesn't do anything that yields,
> in a loop, it's compute-intensive, and -that- means you're not really
> asking for concurrency, are you?
> 
> BTW, to your original question "why should the while loop affect
> scheduling of f's thread": because there is a global operation
> (scheduling) that needs cooperation from all threads in order to
> execute.  And that requires explicit coding by the programmer.  Now,
> the compiler -could- have inserted code to do the yield() (in some old
> LISPms, it was done at every backward jump and return, I think).
> 
> I can't speculate as to why it wasn't done, but given that the goal of
> ocaml's threads is concurrency, and not parallelism, it isn't common
> (at least, in my experience) to write code that doesn't naturally
> reach yield points frequently.

Unfortunately there is a huge class of such code: code which uses
libraries that do not yield. For instance, code which loads a DLL to
communicate with hardware and which may block (or worse).

Cheers,

-- 
Romain Bardou

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Caml-list] Thread behaviour
  2013-09-27 10:10 [Caml-list] Thread behaviour Tom Ridge
  2013-09-27 10:22 ` Simon Cruanes
  2013-09-27 10:27 ` Romain Bardou
@ 2013-09-30  9:18 ` Xavier Leroy
  2013-09-30 15:12   ` Tom Ridge
  2013-10-01  5:01   ` Pierre Chambart
  2 siblings, 2 replies; 26+ messages in thread
From: Xavier Leroy @ 2013-09-30  9:18 UTC (permalink / raw)
  To: caml-list

On 2013-09-27 12:10, Tom Ridge wrote:
> I have a little program which creates a thread, and then sits in a loop:
> [...]
> When I run the program I get the output:
> 
> 1
> 2
> 
> and the program then sits in the loop.

On my machine (OCaml 4.01.0, Ubuntu 12.04 LTS), I sometimes see what
you see, and sometimes I see the expected output:

1
2
3
hello
4

It all depends on the whim of the OS scheduler.  OCaml has no control
over it.  And you shouldn't expect any kind of fairness from the OS
scheduler, esp. Linux's, which gladly jettisons any pretense of
fairness in the hope of getting better throughput.

> Would it be fair to say that OCaml does not currently support
> pre-emptively scheduled threads?

This would be inexact.  OCaml's threads yield the runtime lock, giving
other threads a chance to grab it and run, in the following cases:

1- when performing an I/O operation that may block or take a long time
("flush" in your example)
2- when the program calls Thread.yield (duh)
3- when a timer interrupt is detected at certain program points:
  3a- for native-code compilation: minor heap allocations
  3b- for bytecode compilation: function calls + beginning of every
      loop iteration.

What happens after yielding is the business of the OS scheduler,
though.
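
To make 3a concrete, compare the two loops below (a sketch; a clever
compiler may of course optimize the dead allocation away):

(* never allocates: under native compilation this loop can keep the
   runtime lock indefinitely, starving other threads *)
let spin_no_alloc () = while true do () done

(* allocates on every iteration, so pending timer interrupts are
   noticed and other threads get a chance to run *)
let spin_alloc () = while true do ignore (ref (Sys.time ())) done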

> I should be using a message passing interface written in some other
> language, with bindings to OCaml.

You have plenty of options depending on what you really need in the
end:

- For parallelism (getting speedups from a sequential program),
  consider the Parmap and Functory libraries.
- For concurrency with message passing and possible distribution,
  consider JoCaml.
- If you want to write your own message-passing library and want to
  control every bit of the scheduling, LWT (lightweight cooperative
  threads) is probably best.
- You can also use system threads, but only if you give up on any hope
  of fairness and write in a style that is resistant against whatever
  awful scheduling the OS will do for you.

Hope this helps,

- Xavier Leroy

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Caml-list] Thread behaviour
  2013-09-30  9:18 ` Xavier Leroy
@ 2013-09-30 15:12   ` Tom Ridge
  2013-09-30 16:01     ` Török Edwin
  2013-09-30 16:56     ` Gabriel Kerneis
  2013-10-01  5:01   ` Pierre Chambart
  1 sibling, 2 replies; 26+ messages in thread
From: Tom Ridge @ 2013-09-30 15:12 UTC (permalink / raw)
  To: Xavier Leroy; +Cc: caml-list

On 30 September 2013 10:18, Xavier Leroy <Xavier.Leroy@inria.fr> wrote:
> On 2013-09-27 12:10, Tom Ridge wrote:
>> I have a little program which creates a thread, and then sits in a loop:
>> [...]
>> When I run the program I get the output:
>>
>> 1
>> 2
>>
>> and the program then sits in the loop.
>
> On my machine (OCaml 4.01.0, Ubuntu 12.04 LTS), I sometimes see what
> you see, and sometimes I see the expected output:
>
> 1
> 2
> 3
> hello
> 4
>
> It all depends on the whim of the OS scheduler.  OCaml has no control
> over it.  And you shoudn't expect any kind of fairness from the OS
> scheduler, esp. Linux's, which gladly jettisons any pretense of
> fairness in the hope of getting better throughput.
>

Ah! You are saying that the problem (maybe) lies with the Linux scheduler.
This had never occurred to me. Probably because I assumed the Linux
phrase "completely fair scheduler" meant something (although
admittedly, I never tried to find out what).

Thanks

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Caml-list] Thread behaviour
  2013-09-30 15:12   ` Tom Ridge
@ 2013-09-30 16:01     ` Török Edwin
  2013-09-30 16:56     ` Gabriel Kerneis
  1 sibling, 0 replies; 26+ messages in thread
From: Török Edwin @ 2013-09-30 16:01 UTC (permalink / raw)
  To: caml-list

On 09/30/2013 06:12 PM, Tom Ridge wrote:
> On 30 September 2013 10:18, Xavier Leroy <Xavier.Leroy@inria.fr> wrote:
>> On 2013-09-27 12:10, Tom Ridge wrote:
>>> I have a little program which creates a thread, and then sits in a loop:
>>> [...]
>>> When I run the program I get the output:
>>>
>>> 1
>>> 2
>>>
>>> and the program then sits in the loop.
>>
>> On my machine (OCaml 4.01.0, Ubuntu 12.04 LTS), I sometimes see what
>> you see, and sometimes I see the expected output:
>>
>> 1
>> 2
>> 3
>> hello
>> 4
>>
>> It all depends on the whim of the OS scheduler.  OCaml has no control
>> over it.  And you shoudn't expect any kind of fairness from the OS
>> scheduler, esp. Linux's, which gladly jettisons any pretense of
>> fairness in the hope of getting better throughput.
>>
> 
> Ah! You are saying that the problem (maybe) lies with the Linux scheduler.
> This had never occurred to me. Probably because I assumed the Linux
> phrase "completely fair scheduler" meant something (although
> admittedly, I never tried to find out what).

I think there are two things to consider here:
 * deciding which process(es) to run on the CPU(s), when there are multiple runnable processes/threads (i.e. they are not blocked/sleeping). This is called CFS in Linux, and you can check how well it behaves with top %CPU, /proc/sched_debug, etc.
Although it has some heuristics too, for example related to pipe(2) handling.

 * deciding how to lock a mutex / what thread to wake up when a mutex is released / a condvar is signaled

This is handled first in libpthread (IIRC there is some limited user-space spinning involved), then handed off to the kernel via futex(7).
Depending on how it's implemented there can be fairness issues; see also the 'thundering herd' problem, etc.
AFAIK the kernel has used ticket locks for its spinlocks to get more fairness for quite some time (see http://lwn.net/Articles/267968/ and http://lwn.net/Articles/531254/), but I lost track of whether futexes actually use this mechanism or something else. If there is something wrong with the fairness in thread scheduling, I'd expect it to be in this part though.

As far as OCaml is concerned, one might try to implement something similar to ticket locks in userspace
(or have an array of condition variables and wake up the thread we want directly, and track their execution times?), but it won't be as efficient as if the kernel were (fixed) to do it properly.
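
To make the first idea concrete, here is a user-space sketch (purely
illustrative; whether it actually helps fairness would need measuring):
a "ticket lock" built from a mutex and a condition variable, serving
waiters strictly in the order they arrived.

type ticket_lock = {
  m : Mutex.t;
  c : Condition.t;
  mutable next_ticket : int;
  mutable now_serving : int;
}

let create () =
  { m = Mutex.create (); c = Condition.create ();
    next_ticket = 0; now_serving = 0 }

let lock t =
  Mutex.lock t.m;
  let my = t.next_ticket in            (* take a ticket *)
  t.next_ticket <- my + 1;
  while t.now_serving <> my do Condition.wait t.c t.m done;
  Mutex.unlock t.m

let unlock t =
  Mutex.lock t.m;
  t.now_serving <- t.now_serving + 1;  (* serve the next ticket *)
  Condition.broadcast t.c;
  Mutex.unlock t.m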

Best regards,
--Edwin

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Caml-list] Thread behaviour
  2013-09-30 15:12   ` Tom Ridge
  2013-09-30 16:01     ` Török Edwin
@ 2013-09-30 16:56     ` Gabriel Kerneis
  2013-09-30 18:18       ` Alain Frisch
  1 sibling, 1 reply; 26+ messages in thread
From: Gabriel Kerneis @ 2013-09-30 16:56 UTC (permalink / raw)
  To: Tom Ridge; +Cc: Xavier Leroy, caml-list

On Mon, Sep 30, 2013 at 04:12:23PM +0100, Tom Ridge wrote:
> > It all depends on the whim of the OS scheduler.  OCaml has no control
> > over it.  And you shoudn't expect any kind of fairness from the OS
> > scheduler, esp. Linux's, which gladly jettisons any pretense of
> > fairness in the hope of getting better throughput.
> 
> Ah! You are saying that the problem (maybe) lies with the Linux scheduler.
> This had never occurred to me. Probably because I assumed the Linux
> phrase "completely fair scheduler" meant something (although
> admittedly, I never tried to find out what).

A bit of off-topic Linux history for those interested:

The behaviour of sched_yield() changed in Linux 2.6.23 (with the
introduction of the CFS), to adopt a semantics where it does not yield,
but only reduces the priority of the calling thread.  This change has
been debated a lot, in particular because it broke many Java
applications.  As a work-around, a kernel variable (sched_compat_yield)
was introduced to optionally recover the original behaviour; the
rationale for the change was that sched_yield is arguably ill-specified
and programs should not rely on it but use (non-portable and tricky)
futexes, or (portable) mutexes instead.

The variable sched_compat_yield was removed in Linux 2.6.39, with no
clear rationale, during one of the many refactorings of the CFS.

As a matter of fact, this change also broke OCaml scheduling.  Nowadays,
sched_yield() is disabled in the ocaml runtime on Linux.  Have a look at
http://caml.inria.fr/mantis/view.php?id=2663 for more details on the
caml-side of the story (in French).

-- 
Gabriel

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Caml-list] Thread behaviour
  2013-09-30 16:56     ` Gabriel Kerneis
@ 2013-09-30 18:18       ` Alain Frisch
  0 siblings, 0 replies; 26+ messages in thread
From: Alain Frisch @ 2013-09-30 18:18 UTC (permalink / raw)
  To: 'caml-list@inria.fr'

On 9/30/2013 6:56 PM, Gabriel Kerneis wrote:
> A bit of off-topic Linux history for those interested:
 > ...
> As a matter of fact, this change also broke OCaml scheduling.  Nowadays,
> sched_yield() is disabled in the ocaml runtime on Linux.  Have a look at
> http://caml.inria.fr/mantis/view.php?id=2663 for more details on the
> caml-side of the story (in French).

And, more recently:

http://caml.inria.fr/mantis/view.php?id=5373

-- Alain

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Caml-list] Thread behaviour
  2013-09-30  8:16       ` Romain Bardou
@ 2013-10-01  3:32         ` Ivan Gotovchits
  0 siblings, 0 replies; 26+ messages in thread
From: Ivan Gotovchits @ 2013-10-01  3:32 UTC (permalink / raw)
  To: Romain Bardou; +Cc: Tom Ridge, caml-list, Benedikt Grundmann

Romain Bardou <romain.bardou@inria.fr> writes:

> It happens I implemented a library to handle messaging between
> possibly-distributed OCaml processes myself. Well, for now one can only
> send one message and receive one message: the goal is to be able to run
> a function 'a -> 'b in another process, given marshaling functions for
> 'a and 'b and exceptions. I do plan to add the possibility of sending
> more "intermediate" messages (I need it). It works on Linux and Windows.
>
> I did not do any official release yet, but you can have a look here:
>
> https://github.com/cryptosense/procord
>

And it happens that I implemented yet another such library. It is in
heavy use and battle-tested in a commercial product, but it is not
yet released, and may have some hidden bugs. Also, it doesn't work on
Windows (it uses fork, pipes and AF_UNIX sockets).

The main idea behind the library is to play the same role for Lwt as
Async_parallel plays for Async. Forking with lwt (or any other async
framework) is not easy, but it is doable. The second goal I pursued is
to make its interface as simple as possible: a task can be created as
easily as «Parallel.run (print_endline "Hello, world")».

I hope I will release it at the beginning of the next year.



-- 
         (__) 
         (oo) 
   /------\/ 
  / |    ||   
 *  /\---/\ 
    ~~   ~~   
...."Have you mooed today?"...

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Caml-list] Thread behaviour
  2013-09-30  9:18 ` Xavier Leroy
  2013-09-30 15:12   ` Tom Ridge
@ 2013-10-01  5:01   ` Pierre Chambart
  2013-10-01  7:21     ` Gabriel Kerneis
  2013-10-02 10:37     ` Wojciech Meyer
  1 sibling, 2 replies; 26+ messages in thread
From: Pierre Chambart @ 2013-10-01  5:01 UTC (permalink / raw)
  To: Xavier Leroy; +Cc: caml-list

On 30/09/2013 05:18, Xavier Leroy wrote:
> On 2013-09-27 12:10, Tom Ridge wrote:
>> I have a little program which creates a thread, and then sits in a loop:
>> [...]
>> When I run the program I get the output:
>>
>> 1
>> 2
>>
>> and the program then sits in the loop.
> It all depends on the whim of the OS scheduler.  OCaml has no control
> over it.  And you shoudn't expect any kind of fairness from the OS
> scheduler, esp. Linux's, which gladly jettisons any pretense of
> fairness in the hope of getting better throughput.
Usually, the scheduler is fair when you force all threads to run on the
same processor.
But I would still prefer the LWT way for doing message passing.
-- 
Pierre

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Caml-list] Thread behaviour
  2013-10-01  5:01   ` Pierre Chambart
@ 2013-10-01  7:21     ` Gabriel Kerneis
  2013-10-02 10:37     ` Wojciech Meyer
  1 sibling, 0 replies; 26+ messages in thread
From: Gabriel Kerneis @ 2013-10-01  7:21 UTC (permalink / raw)
  To: Pierre Chambart; +Cc: Xavier Leroy, caml-list

On Tue, Oct 01, 2013 at 01:01:47AM -0400, Pierre Chambart wrote:
> > It all depends on the whim of the OS scheduler.  OCaml has no control
> > over it.  And you shoudn't expect any kind of fairness from the OS
> > scheduler, esp. Linux's, which gladly jettisons any pretense of
> > fairness in the hope of getting better throughput.
> Usualy, the scheduler is fair when you force all threads to run on the
> same processor.

I wouldn't be so sure if I were you.  Have a look for example at the first line
of Table 6.2, page 141 of:
http://www.pps.univ-paris-diderot.fr/~kerneis/research/files/kerneis-phd-thesis.pdf

On a single core, creating a new thread, then calling sched_yield until the
CFS switches to it takes an absurd amount of time.  (This micro-benchmark is
arguably very bad style for POSIX threads, but it serves to illustrate how
confusing the CFS behaviour can be.)  On two cores, on the other hand, sane
behaviour is restored.

-- 
Gabriel

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Caml-list] Thread behaviour
  2013-10-01  5:01   ` Pierre Chambart
  2013-10-01  7:21     ` Gabriel Kerneis
@ 2013-10-02 10:37     ` Wojciech Meyer
  2013-10-02 11:52       ` Francois Berenger
  1 sibling, 1 reply; 26+ messages in thread
From: Wojciech Meyer @ 2013-10-02 10:37 UTC (permalink / raw)
  To: Pierre Chambart; +Cc: Xavier Leroy, Caml List

[-- Attachment #1: Type: text/plain, Size: 1278 bytes --]

Agreed here, but it does not preclude using a very lightweight
alternative based on system threads, like parmap or functory, for the
parts that have a high degree of parallelism (and of course only if
your program allows this).


On Tue, Oct 1, 2013 at 6:01 AM, Pierre Chambart <pierre.chambart@laposte.net> wrote:

> On 30/09/2013 05:18, Xavier Leroy wrote:
> > On 2013-09-27 12:10, Tom Ridge wrote:
> >> I have a little program which creates a thread, and then sits in a loop:
> >> [...]
> >> When I run the program I get the output:
> >>
> >> 1
> >> 2
> >>
> >> and the program then sits in the loop.
> > It all depends on the whim of the OS scheduler.  OCaml has no control
> > over it.  And you shoudn't expect any kind of fairness from the OS
> > scheduler, esp. Linux's, which gladly jettisons any pretense of
> > fairness in the hope of getting better throughput.
> Usualy, the scheduler is fair when you force all threads to run on the
> same processor.
> But I would still prefer the LWT way for doing message passing.
> --
> Pierre
>
> --
> Caml-list mailing list.  Subscription management and archives:
> https://sympa.inria.fr/sympa/arc/caml-list
> Beginner's list: http://groups.yahoo.com/group/ocaml_beginners
> Bug reports: http://caml.inria.fr/bin/caml-bugs
>

[-- Attachment #2: Type: text/html, Size: 2113 bytes --]

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Caml-list] Thread behaviour
  2013-10-02 10:37     ` Wojciech Meyer
@ 2013-10-02 11:52       ` Francois Berenger
  2013-10-02 11:58         ` Wojciech Meyer
  0 siblings, 1 reply; 26+ messages in thread
From: Francois Berenger @ 2013-10-02 11:52 UTC (permalink / raw)
  To: caml-list

On 10/2/13 7:37 PM, Wojciech Meyer wrote:
> Agreed here, but it does not preclude of using very lightweight
> alternative based on system threads like parmap

Just for the record, Parmap is fork-based, not thread-based.

 > or functory for the
> parts have a high degree of parallerism. (and of course only if your
> program allows to do this)
>
>
> On Tue, Oct 1, 2013 at 6:01 AM, Pierre Chambart
> <pierre.chambart@laposte.net <mailto:pierre.chambart@laposte.net>> wrote:
>
>     On 30/09/2013 05:18, Xavier Leroy wrote:
>      > On 2013-09-27 12:10, Tom Ridge wrote:
>      >> I have a little program which creates a thread, and then sits in
>     a loop:
>      >> [...]
>      >> When I run the program I get the output:
>      >>
>      >> 1
>      >> 2
>      >>
>      >> and the program then sits in the loop.
>      > It all depends on the whim of the OS scheduler.  OCaml has no control
>      > over it.  And you shoudn't expect any kind of fairness from the OS
>      > scheduler, esp. Linux's, which gladly jettisons any pretense of
>      > fairness in the hope of getting better throughput.
>     Usualy, the scheduler is fair when you force all threads to run on the
>     same processor.
>     But I would still prefer the LWT way for doing message passing.
>     --
>     Pierre
>
>     --
>     Caml-list mailing list.  Subscription management and archives:
>     https://sympa.inria.fr/sympa/arc/caml-list
>     Beginner's list: http://groups.yahoo.com/group/ocaml_beginners
>     Bug reports: http://caml.inria.fr/bin/caml-bugs
>
>


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Caml-list] Thread behaviour
  2013-10-02 11:52       ` Francois Berenger
@ 2013-10-02 11:58         ` Wojciech Meyer
  0 siblings, 0 replies; 26+ messages in thread
From: Wojciech Meyer @ 2013-10-02 11:58 UTC (permalink / raw)
  To: Francois Berenger; +Cc: Caml List

[-- Attachment #1: Type: text/plain, Size: 2241 bytes --]

Oh yes! Thanks.


On Wed, Oct 2, 2013 at 12:52 PM, Francois Berenger <berenger@riken.jp> wrote:

> On 10/2/13 7:37 PM, Wojciech Meyer wrote:
>
>> Agreed here, but it does not preclude of using very lightweight
>> alternative based on system threads like parmap
>>
>
> Just for the record, Parmap is fork-based, not thread-based.
>
> > or functory for the
>
>> parts have a high degree of parallerism. (and of course only if your
>> program allows to do this)
>>
>>
>> On Tue, Oct 1, 2013 at 6:01 AM, Pierre Chambart
>> <pierre.chambart@laposte.net <mailto:pierre.chambart@laposte.net>>
>> wrote:
>>
>>     On 30/09/2013 05:18, Xavier Leroy wrote:
>>      > On 2013-09-27 12:10, Tom Ridge wrote:
>>      >> I have a little program which creates a thread, and then sits in
>>     a loop:
>>      >> [...]
>>      >> When I run the program I get the output:
>>      >>
>>      >> 1
>>      >> 2
>>      >>
>>      >> and the program then sits in the loop.
>>      > It all depends on the whim of the OS scheduler.  OCaml has no
>> control
>>      > over it.  And you shoudn't expect any kind of fairness from the OS
>>      > scheduler, esp. Linux's, which gladly jettisons any pretense of
>>      > fairness in the hope of getting better throughput.
>>     Usualy, the scheduler is fair when you force all threads to run on the
>>     same processor.
>>     But I would still prefer the LWT way for doing message passing.
>>     --
>>     Pierre
>>
>>     --
>>     Caml-list mailing list.  Subscription management and archives:
>>     https://sympa.inria.fr/sympa/arc/caml-list
>>     Beginner's list: http://groups.yahoo.com/group/ocaml_beginners
>>     Bug reports: http://caml.inria.fr/bin/caml-bugs
>>
>>
>>
>
> --
> Caml-list mailing list.  Subscription management and archives:
> https://sympa.inria.fr/sympa/arc/caml-list
> Beginner's list: http://groups.yahoo.com/group/ocaml_beginners
> Bug reports: http://caml.inria.fr/bin/caml-bugs
>

[-- Attachment #2: Type: text/html, Size: 3413 bytes --]

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Caml-list] Thread behaviour
  2013-09-28 19:09     ` Tom Ridge
  2013-09-29  7:54       ` Tom Ridge
  2013-09-30  8:16       ` Romain Bardou
@ 2013-10-07 14:49       ` Goswin von Brederlow
  2 siblings, 0 replies; 26+ messages in thread
From: Goswin von Brederlow @ 2013-10-07 14:49 UTC (permalink / raw)
  To: caml-list

On Sat, Sep 28, 2013 at 08:09:12PM +0100, Tom Ridge wrote:
> Would it be fair to say that OCaml does not currently support
> pre-emptively scheduled threads?
> 
> I have read the lecture from Xavier archived here:
> 
> http://alan.petitepomme.net/cwn/2002.11.26.html#8
> 
> I would like to implement a library to handle messaging between
> possibly-distributed OCaml processes. Alas, my design naively requires
> pre-emptively scheduled threads (although it may be possible to change
> the design e.g. to work with Lwt) - each message queue is accompanied
> by a thread which reinitializes connections when connections go down
> etc., hiding this complexity from the user.
> 
> Quoting Xavier:
> 
> "Scheduling I/O and computation concurrently, and managing process
> stacks, is the job of the operating system."
> 
> But what if you want to implement a messaging library in OCaml? It
> seems unlikely that all operating systems would fix on a standard
> implementation of distributed message passing (or, even more funky,
> distributed persistent message queues).

Why do you need pre-emptively scheduled threads? That would mean some
threads are more important than others, and OCaml threads have no
priorities for that.

If you want to do I/O and computation concurrently then you probably
need to offload the computation into separate processes (with their
own runtime). Otherwise, because you can't give the I/O thread a
higher priority, the computation threads will block the I/O a lot of
the time.
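
A minimal sketch of what I mean, purely as an illustration (it assumes the
result can be sent through Marshal over a pipe):

let run_in_child (f : unit -> 'a) : 'a =
  let rd, wr = Unix.pipe () in
  match Unix.fork () with
  | 0 ->
      (* child: compute with its own runtime, send the result back *)
      Unix.close rd;
      let oc = Unix.out_channel_of_descr wr in
      Marshal.to_channel oc (f ()) [];
      close_out oc;
      exit 0
  | pid ->
      (* parent: stays free to do I/O while the child computes *)
      Unix.close wr;
      let ic = Unix.in_channel_of_descr rd in
      let res = Marshal.from_channel ic in
      ignore (Unix.waitpid [] pid);
      close_in ic;
      res

Something like run_in_child (fun () -> expensive_computation x) then keeps
the parent free to service I/O while the child burns CPU.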

One way around that is to do the computations in C, without the
runtime lock. That way the I/O thread can run in parallel with
computations and computations can run on multiple cores.

> On 27 September 2013 11:51, Benedikt Grundmann
> <bgrundmann@janestreet.com> wrote:
> > The ticker thread will cause yields which will be honored on the next
> > allocation of the thread that currently has the caml lock.  That said we
> > have seen that sometimes the lock is reacquired by the same thread again.
> > So there are some fairness issues.
> >
> >
> > On Fri, Sep 27, 2013 at 11:27 AM, Romain Bardou <romain.bardou@inria.fr>
> > wrote:
> >>
> >> Le 27/09/2013 12:10, Tom Ridge a écrit :
> >> > Dear caml-list,
> >> >
> >> > I have a little program which creates a thread, and then sits in a loop:
> >> >
> >> > --
> >> >
> >> > let f () =
> >> >   let _ = ignore (print_endline "3") in
> >> >   let _ = ignore (print_endline "hello") in
> >> >   let _ = ignore (print_endline "4") in
> >> >   ()
> >> >
> >> > let main () =
> >> >   let _ = ignore (print_endline "1") in
> >> >   let t = Thread.create f () in
> >> >   (* let _ = Thread.join t in *)
> >> >   let _ = ignore (print_endline "2") in
> >> >   while true do
> >> >     flush stdout;
> >> >   done

Never ever do that. You have a busy loop there that just eats up 100%
CPU time, all while holding the global runtime lock, so the other thread
is blocked trying to acquire it.
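
If you really want to spin there, something like this (just a sketch) at
least gives f's thread a chance to run:

while true do
  flush stdout;
  Thread.delay 0.01  (* sleep briefly; this releases the runtime lock so f's thread can run *)
done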

> >> >
> >> > let _ = main ()
> >> >
> >> > --
> >> >
> >> > I compile the program with the following Makefile clause:
> >> >
> >> > test.byte: test.ml FORCE
> >> > ocamlc -o $@ -thread unix.cma threads.cma $<
> >> >
> >> > When I run the program I get the output:
> >> >
> >> > 1
> >> > 2
> >> >
> >> > and the program then sits in the loop. I was expecting the output from
> >> > f to show up as well. If you wait a while, it does. But you have to
> >> > wait quite a while.
> >> >
> >> > What am I doing wrong here? I notice that if I put Thread.yield in the
> >> > while loop then f's output gets printed pretty quickly. But why should
> >> > the while loop affect scheduling of f's thread?
> >> >
> >> > Thanks
> >> >
> >>
> >> OCaml's thread, unfortunately, are kind of cooperative: you need to
> >> yield explicitly. Note that you will obtain an even different (worse)
> >> result with a native program. I observed this myself without looking at
> >> the thread code itself so maybe there is actually a way to
> >> "automatically yield" but as far as I know there is no way to obtain the
> >> behavior you want without using either yields or processes instead of
> >> threads. This is the reason for the Procord library I am developing
> >> (first version to be released before the next OUPS meeting).
> >>
> >> Also, you don't need to ignore the result of print_endline, as
> >> print_endline returns unit. And using let _ = ... in is the same as
> >> using ignore, so using both is not needed.

"let _ = ..." and "ignore" are dangerous since they don't type check.
Better would be:

let () = print_endline "3" in
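
For example, both lines below compile even though a function is silently
thrown away and nothing is ever printed, whereas the "let ()" form would be
rejected by the type checker:

let () =
  let _ = print_endline in   (* a function, silently dropped *)
  ignore print_endline;      (* same problem *)
  ()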

> >> Cheers,
> >>
> >> --
> >> Romain Bardou

MfG
	Goswin

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Caml-list] Thread behaviour
  2013-09-30  8:24                   ` Romain Bardou
@ 2013-10-07 14:57                     ` Goswin von Brederlow
  0 siblings, 0 replies; 26+ messages in thread
From: Goswin von Brederlow @ 2013-10-07 14:57 UTC (permalink / raw)
  To: caml-list

On Mon, Sep 30, 2013 at 10:24:01AM +0200, Romain Bardou wrote:
> Le 29/09/2013 19:47, Chet Murthy a écrit :
> > 
> >> This is my situation. I don't care if the code runs on a single core
> >> (in fact, I hope it does), but I do want to use threads which are
> >> scheduled reasonably independently and reasonably fairly. My first
> >> example shows that one thread is effectively starved by the other
> >> thread.
> > 
> > Ah.  ok.  In this case, it's easier.  You just need to ensure that in
> > every loop,in every recursive function, there's a call to something
> > that yield()s, on every path.  It's that simple, and that icky.  But
> > then, if you have code that literally doesn't do anything that yields,
> > in a loop, it's compute-intensive, and -that- means you're not really
> > asking for concurrency, are you?
> > 
> > BTW, to your original question "why should the while loop affect
> > scheduling of f's thread": because there is a global operation
> > (scheduling) that needs cooperation from all threads in order to
> > execute.  And that requires explicit coding by the programmer.  Now,
> > the compiler -could- have inserted code to do the yield() (in some old
> > LISPms, it was done at every backward jump and return, I think).
> > 
> > I can't speculate as to why it wasn't done, but given that the goal of
> > ocaml's threads is concurrency, and not parallelism, it isn't common
> > (at least, in my experience) to write code that doesn't naturally
> > reach yield points frequently.
> 
> Unfortunately there is a huge class of such kind of code: code which
> uses libraries which do not yield. For instance, code which loads a DLL
> to communicate with hardware and which may block (or worse).
> 
> Cheers,

But that code would be external, and you would release the global lock
before the blocking call. That's what e.g. Unix.write does. You can have
many threads running in parallel, but only one at a time can run OCaml
code; all the others can only run external code.
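
A small sketch of that behaviour (just an illustration, compiled with
-thread): the Unix.select below blocks its thread with the runtime lock
released, so the plain OCaml loop in the main thread keeps running in the
meantime.

let () =
  let t = Thread.create (fun () ->
      (* blocking call: the runtime lock is released for its duration *)
      ignore (Unix.select [] [] [] 1.0);
      print_endline "I/O thread done") ()
  in
  (* pure OCaml work goes on while the other thread is blocked *)
  let s = ref 0 in
  for i = 1 to 10_000_000 do s := !s + i done;
  Printf.printf "computation done (sum = %d)\n%!" !s;
  Thread.join t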

Also, your example was bad because in real code you wouldn't (shouldn't)
have a busy loop that does nothing, without at least a yield in it. You
assumed the flush would do something that triggers the scheduler, but that
doesn't seem to be the case if the buffer is empty. Which kind of makes
sense: if there is nothing to flush, the call just returns and nothing
happens. Only when there is something to flush does it write data,
releasing the runtime lock and scheduling as a side effect.

MfG
	Goswin

PS: Tip: Use Printf.printf "...%!"
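
For example (the "%!" conversion flushes the channel right at that point):

let () = Printf.printf "hello from f\n%!"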

^ permalink raw reply	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2013-10-07 14:57 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-09-27 10:10 [Caml-list] Thread behaviour Tom Ridge
2013-09-27 10:22 ` Simon Cruanes
2013-09-27 10:27 ` Romain Bardou
2013-09-27 10:51   ` Benedikt Grundmann
2013-09-28 19:09     ` Tom Ridge
2013-09-29  7:54       ` Tom Ridge
2013-09-29 12:37         ` Yaron Minsky
2013-09-29 16:25           ` Tom Ridge
2013-09-29 16:46             ` Chet Murthy
2013-09-29 17:18               ` Tom Ridge
2013-09-29 17:47                 ` Chet Murthy
2013-09-30  8:24                   ` Romain Bardou
2013-10-07 14:57                     ` Goswin von Brederlow
2013-09-30  8:16       ` Romain Bardou
2013-10-01  3:32         ` Ivan Gotovchits
2013-10-07 14:49       ` Goswin von Brederlow
2013-09-30  9:18 ` Xavier Leroy
2013-09-30 15:12   ` Tom Ridge
2013-09-30 16:01     ` Török Edwin
2013-09-30 16:56     ` Gabriel Kerneis
2013-09-30 18:18       ` Alain Frisch
2013-10-01  5:01   ` Pierre Chambart
2013-10-01  7:21     ` Gabriel Kerneis
2013-10-02 10:37     ` Wojciech Meyer
2013-10-02 11:52       ` Francois Berenger
2013-10-02 11:58         ` Wojciech Meyer

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).