caml-list - the Caml user's mailing list
* Smart ways to implement worker threads
@ 2010-07-14 16:09 Goswin von Brederlow
  2010-07-15 15:58 ` [Caml-list] " Rich Neswold
                   ` (2 more replies)
  0 siblings, 3 replies; 31+ messages in thread
From: Goswin von Brederlow @ 2010-07-14 16:09 UTC (permalink / raw)
  To: caml-list

Hi,

I'm writing a little program that, besides some minor other stuff,
computes a lot of blockwise checksums and blockwise Reed-Solomon
codes. Both are done on Bigarrays in external C functions
surrounded by enter/leave_blocking_section, so they would be ideal
for doing multiple blocks in parallel, one per core.

I'm probably not the first with such a problem, so what are the smart
ways to implement this?

I think it makes sense to start one thread per core at the start of the
program and keep them around until the end. They won't be needed all the
time, or even most of the time, but when they are needed the response
time matters. Or is Thread.create overhead negligible nowadays?
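
One way to check would be a quick (and admittedly unscientific)
measurement along these lines; link with the threads and unix libraries:

```ocaml
(* Rough sketch: measure the Thread.create/Thread.join round-trip cost.
   Numbers of course depend on the OS and OCaml version. *)
let time_create n =
  let t0 = Unix.gettimeofday () in
  for _ = 1 to n do
    Thread.join (Thread.create ignore ())
  done;
  (Unix.gettimeofday () -. t0) /. float_of_int n

let () =
  Printf.printf "~%.1f us per create/join\n" (1e6 *. time_create 1000)
```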

But then I can see multiple possibilities off the top of my head:

1) Have all threads call Unix.select. Whatever thread wakes up
parses the incoming request, does the checksum or Reed-Solomon as
needed, replies and goes back to select.

Problem 1: One of the FDs is a socket accepting new connections. If one
comes in, the number of FDs to listen on changes and I need to signal all
threads to refresh their FD lists for Unix.select.

Problem 2: I have several global structures that need locking against
multiple threads altering them. This might even run the danger of
deadlocks when structures are locked in different orders in different
threads.


2) Create pure worker threads that only do checksumming and Reed-Solomon
codes. I create a queue of jobs protected by a Mutex.t and a Condition.t
to signal worker threads when a job is added to the queue.

Problem: The main thread will usually be in a Unix.select call while the
worker threads work. How does a worker thread signal the main thread
that a job is done? Have a job_done FD and write 1 byte to it? Send a
signal so the main thread aborts the Unix.select?

Possible solution: The worker threads might be able to reply on their
own when they are done, without signaling the main thread. Not sure how
that would work with error handling, though.
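
A rough sketch of what I mean, with all the names made up, using a pipe
as the job_done FD so a worker can wake a main thread blocked in
Unix.select:

```ocaml
(* Sketch of option 2: a job queue guarded by a Mutex.t/Condition.t
   pair, plus a pipe so a finished worker can wake the main thread
   by writing one byte. *)
let jobs : (unit -> unit) Queue.t = Queue.create ()
let m = Mutex.create ()
let nonempty = Condition.create ()
let job_done_r, job_done_w = Unix.pipe ()

let submit job =
  Mutex.lock m;
  Queue.push job jobs;
  Condition.signal nonempty;
  Mutex.unlock m

let worker () =
  while true do
    Mutex.lock m;
    while Queue.is_empty jobs do Condition.wait nonempty m done;
    let job = Queue.pop jobs in
    Mutex.unlock m;
    job ();                                   (* checksum / RS coding *)
    ignore (Unix.write job_done_w (Bytes.of_string "!") 0 1)
  done

let () =
  (* one worker per core; 4 is a placeholder *)
  for _ = 1 to 4 do ignore (Thread.create worker ()) done
```

The main loop would then include job_done_r in its Unix.select read set
and drain one byte per completed job.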


3) Like 2, but with an extra input thread that runs Unix.select. Have
two queues: the input thread puts incoming requests into the first
queue; the main thread waits on the first queue and puts jobs for the
worker threads into the second queue; the worker threads wait on the
second queue and put finished requests back into the first queue for
finalization.


4) Do some magic with Event.t?

Problem: I've never used this and could use a small example of how to
use it.


5) Implement lock-free task-stealing queues?

Probably overkill for this, and I think it would involve a fair bit of
external C code with some inline asm, right?


So far I think option 3 might be the simplest approach. At least I don't
see any glaring problems with it.

What do you think? Other ideas? Ready-to-use modules for this?

MfG
        Goswin


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Caml-list] Smart ways to implement worker threads
  2010-07-14 16:09 Smart ways to implement worker threads Goswin von Brederlow
@ 2010-07-15 15:58 ` Rich Neswold
  2010-07-15 16:19   ` David McClain
                     ` (2 more replies)
  2010-07-15 16:32 ` Romain Beauxis
  2010-07-17  9:52 ` Goswin von Brederlow
  2 siblings, 3 replies; 31+ messages in thread
From: Rich Neswold @ 2010-07-15 15:58 UTC (permalink / raw)
  To: Goswin von Brederlow; +Cc: caml-list


On Wed, Jul 14, 2010 at 11:09 AM, Goswin von Brederlow <goswin-v-b@web.de> wrote:

> 4) Do some magic with Event.t?
>
> Problem: never used this and I could use a small example how to use
> this.
>

Event.t (and its associated library) *is* magical in that it provides an
absolutely beautiful concurrent programming model. Forget about select() and
mutexes and other ugly threading concepts. Event.t and friends is how it
should be done.
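
To give a taste of the model, here is a tiny sketch of my own (not from
the book): a counter "server" that owns its state and talks to clients
only over channels, so no mutex is needed:

```ocaml
(* Minimal Event example: one server thread owns the counter; clients
   communicate with it only through typed channels. *)
let requests : int Event.channel = Event.new_channel ()
let replies : int Event.channel = Event.new_channel ()

let server () =
  let total = ref 0 in
  while true do
    let n = Event.sync (Event.receive requests) in
    total := !total + n;
    Event.sync (Event.send replies !total)
  done

let add n =
  Event.sync (Event.send requests n);
  Event.sync (Event.receive replies)

let () =
  ignore (Thread.create server ());
  let a = add 3 in
  let b = add 4 in
  Printf.printf "%d %d\n" a b   (* prints "3 7" *)
```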

John H. Reppy's "Concurrent Programming in ML"
<http://sharingcentre.net/87081-concurrent-programming-ml.html>
provides a thorough understanding of how to use this module effectively.
The book presents the material in a very understandable way: it discusses
the deficiencies of current threading models and how CML addresses their
limitations and constraints. The book can be purchased or downloaded
free online.

In the few applications I've written that use CML, I found it more than
sufficient speed-wise. Whether your application is more demanding, I
can't tell.

Hopefully someone on the list with more experience can comment on
whether there are caveats (performance-related or otherwise) in OCaml's
support of CML. If there are, there should be an effort to fix them (I
would help in any way I could). I'd also recommend incorporating Satoshi
Ogasawara's Concurrent Cell
<http://caml.inria.fr/cgi-bin/hump.en.cgi?contrib=654> project into the
standard library. (It adds IVars, MVars and other constructs described
in John H. Reppy's book but not available in the OCaml standard
library.)

Hope this helps!

-- 
Rich

Google Reader: https://www.google.com/reader/shared/rich.neswold
Jabber ID: rich@neswold.homeunix.net



* Re: [Caml-list] Smart ways to implement worker threads
  2010-07-15 15:58 ` [Caml-list] " Rich Neswold
@ 2010-07-15 16:19   ` David McClain
  2010-07-15 17:16   ` Ashish Agarwal
  2010-07-15 18:24   ` Goswin von Brederlow
  2 siblings, 0 replies; 31+ messages in thread
From: David McClain @ 2010-07-15 16:19 UTC (permalink / raw)
  To: caml-list


I have to second the motion to read and understand Reppy's fine book.
However, be aware that CML was based on an architecture of spaghetti
threads, where new tasks can be spawned with little or no cost and the
GC was responsible for reclaiming dead threads.

That is not the same as OCaml's architecture, so OCaml can implement
about 90% of CML, but that last 10% might kill your dreams. CML likes
to spawn potential handlers, of which only one will get the go-ahead.
The others are expected to die, after possibly cleaning up state. In
this regard, CML is more akin to Erlang.

Dr. David McClain
Chief Technical Officer
Refined Audiometrics Laboratory
4391 N. Camino Ferreo
Tucson, AZ  85750

email: dbm@refined-audiometrics.com
phone: 1.520.390.3995
web: http://refined-audiometrics.com





* Re: [Caml-list] Smart ways to implement worker threads
  2010-07-14 16:09 Smart ways to implement worker threads Goswin von Brederlow
  2010-07-15 15:58 ` [Caml-list] " Rich Neswold
@ 2010-07-15 16:32 ` Romain Beauxis
  2010-07-15 17:46   ` Goswin von Brederlow
  2010-07-17  9:52 ` Goswin von Brederlow
  2 siblings, 1 reply; 31+ messages in thread
From: Romain Beauxis @ 2010-07-15 16:32 UTC (permalink / raw)
  To: caml-list

	Hi !

On Wednesday, 14 July 2010 at 11:09:49, Goswin von Brederlow wrote:
> What do you think? Other Ideas? Ready-to-use modules for this?

I won't comment on CML, which I don't know at all, but the description
you give seems to match the requirements we had in liquidsoap when we
implemented our internal scheduler.

The module that implements it is there:
  http://www.rastageeks.org/ocaml-duppy/Duppy.html

Basically, it provides a facility to create a select-based scheduler
where you can define custom priorities, create a queue for a certain
range of priorities and run them in threads.

The main scheduling is done from inside the tasks, so this is
transparent.

In short, to use it, you initiate a scheduler, create threads running
the associated queues and submit tasks to the scheduler.


Romain



* Re: [Caml-list] Smart ways to implement worker threads
  2010-07-15 15:58 ` [Caml-list] " Rich Neswold
  2010-07-15 16:19   ` David McClain
@ 2010-07-15 17:16   ` Ashish Agarwal
  2010-07-15 18:24   ` Goswin von Brederlow
  2 siblings, 0 replies; 31+ messages in thread
From: Ashish Agarwal @ 2010-07-15 17:16 UTC (permalink / raw)
  To: Rich Neswold; +Cc: Goswin von Brederlow, caml-list


The link given for Reppy's book leads to either a direct download
option, which does not work, or a sponsored option, which goes to sites
that seem unreliable and require registration. Is the download available
somewhere more convenient? I didn't get anywhere with Google. Thanks.


On Thu, Jul 15, 2010 at 11:58 AM, Rich Neswold <rich.neswold@gmail.com> wrote:

> On Wed, Jul 14, 2010 at 11:09 AM, Goswin von Brederlow <goswin-v-b@web.de> wrote:
>
>> 4) Do some magic with Event.t?
>>
>> Problem: never used this and I could use a small example how to use
>> this.
>>
>
> Event.t (and its associated library) *is* magical in that it provides an
> absolutely beautiful concurrent programming model. Forget about select() and
> mutexes and other ugly threading concepts. Event.t and friends is how it
> should be done.
>
> John H. Reppy's "Concurrent Programming in ML" <http://sharingcentre.net/87081-concurrent-programming-ml.html>
> provides a thorough understanding of how to use this module effectively.
> This book presents the material in a very understandable
> way: deficiencies in current threading models are discussed as well as how
> CML solves the limitations and constraints. The book can be purchased or
> downloaded free online.
>
> The few applications I've written that use CML, I found it was more than
> sufficient (speed-wise). Whether your application is more demanding, I can't
> tell.
>
> Hopefully someone on the list with more experience can comment whether
> there are caveats (performance-related or others) in OCaml's support of CML.
> If there are, there should be an effort in fixing the problems (I would help
> in any way I could.) I'd also recommend incorporating Satoshi Ogasawara's Concurrent
> Cell <http://caml.inria.fr/cgi-bin/hump.en.cgi?contrib=654> project into
> the standard library. (This project adds IVars and MVars and other
> constructs described in John H. Reppy's book, but not available in the OCaml
> standard library.)
>
> Hope this helps!
>
> --
> Rich
>
> Google Reader: https://www.google.com/reader/shared/rich.neswold
> Jabber ID: rich@neswold.homeunix.net
>
> _______________________________________________
> Caml-list mailing list. Subscription management:
> http://yquem.inria.fr/cgi-bin/mailman/listinfo/caml-list
> Archives: http://caml.inria.fr
> Beginner's list: http://groups.yahoo.com/group/ocaml_beginners
> Bug reports: http://caml.inria.fr/bin/caml-bugs
>
>



* Re: [Caml-list] Smart ways to implement worker threads
  2010-07-15 16:32 ` Romain Beauxis
@ 2010-07-15 17:46   ` Goswin von Brederlow
  2010-07-15 18:44     ` Romain Beauxis
  0 siblings, 1 reply; 31+ messages in thread
From: Goswin von Brederlow @ 2010-07-15 17:46 UTC (permalink / raw)
  To: Romain Beauxis; +Cc: caml-list

Romain Beauxis <toots@rastageeks.org> writes:

> 	Hi !
>
> On Wednesday, 14 July 2010 at 11:09:49, Goswin von Brederlow wrote:
>> What do you think? Other Ideas? Ready-to-use modules for this?
>
> I won't comment on CML, which I don't know at all, but the description
> you give seems to match the requirements we had in liquidsoap when we
> implemented our internal scheduler.
>
> The module that implements it is there:
>   http://www.rastageeks.org/ocaml-duppy/Duppy.html
>
> Basically, it provides a facility to create a select-based scheduler
> where you can define custom priorities, create a queue for a certain
> range of priorities and run them in threads.
>
> The main scheduling is done from inside the tasks, so this is
> transparent.
>
> In short, to use it, you initiate a scheduler, create threads running
> the associated queues and submit tasks to the scheduler.
>
>
> Romain

I don't see where that helps at all. I don't want to offload the IO into
threads and schedule them, and Duppy seems to handle only IO tasks.

Except if I pick solution 1, and even then it doesn't help, since I can
already run select in every thread. The IO shouldn't be scheduled by
priorities and isn't the bottleneck anyway. It seems this would just add
overhead.

MfG
        Goswin



* Re: [Caml-list] Smart ways to implement worker threads
  2010-07-15 15:58 ` [Caml-list] " Rich Neswold
  2010-07-15 16:19   ` David McClain
  2010-07-15 17:16   ` Ashish Agarwal
@ 2010-07-15 18:24   ` Goswin von Brederlow
  2010-07-15 18:37     ` David McClain
                       ` (2 more replies)
  2 siblings, 3 replies; 31+ messages in thread
From: Goswin von Brederlow @ 2010-07-15 18:24 UTC (permalink / raw)
  To: Rich Neswold; +Cc: caml-list

Rich Neswold <rich.neswold@gmail.com> writes:

> On Wed, Jul 14, 2010 at 11:09 AM, Goswin von Brederlow <goswin-v-b@web.de>
> wrote:
>
>     4) Do some magic with Event.t?
>
>     Problem: never used this and I could use a small example how to use
>     this.
>
>
> Event.t (and its associated library) *is* magical in that it provides an
> absolutely beautiful concurrent programming model. Forget about select() and
> mutexes and other ugly threading concepts. Event.t and friends is how it should
> be done.
>
> John H. Reppy's "Concurrent Programming in ML" provides a thorough
> understanding of how to use this module effectively. This book presents the
> material in a very understandable way: deficiencies in current threading
> models are discussed as well as how CML solves the limitations and constraints.
> The book can be purchased or downloaded free online.

It is too bad that I don't want to learn CML but use OCaml. The CML
examples from the book don't translate into OCaml, since the interface
is just a little bit different, and those differences are what throws me
off. I figure spawn becomes Thread.create and I have to add Event.sync
or Event.select and Event.wrap at some points. At that point the book
becomes useless for understanding how OCaml's Event module is to be
used, I'm afraid. It also doesn't tell me what OCaml's limitations and
constraints are.

So if you have used this in OCaml, could you give a short example?
E.g. the merge sort from the book.

MfG
        Goswin



* Re: [Caml-list] Smart ways to implement worker threads
  2010-07-15 18:24   ` Goswin von Brederlow
@ 2010-07-15 18:37     ` David McClain
  2010-07-15 18:40     ` David McClain
  2010-07-15 19:56     ` Rich Neswold
  2 siblings, 0 replies; 31+ messages in thread
From: David McClain @ 2010-07-15 18:37 UTC (permalink / raw)
  To: caml-list


> It is too bad that I don't want to learn CML but use OCaml. The CML
> examples from the book don't translate into OCaml, since the interface
> is just a little bit different, and those differences are what throws
> me off.

That may appear to be the case from only a cursory review of CML. But
I find that OCaml's notions of Events, Channels, etc., correspond
quite closely to what John Reppy describes.

The whole point of Reppy's work was to show how "Events" could be
made into functional objects, with operations for combining them.

I don't have the "sort" routine translated, but here is some Lisp
code that attempts to provide the multiple-reader / single-writer
locks as might be used in a database application. It demonstrates the
use of wrap, sync, etc.

-----------------------------------
;; rwgate.lisp -- Multiple Reader/Single Writer using Reppy's Channels
;;
;; DM/MCFA  01/00
;; ----------------------------------------------------

(defpackage "RWGATE"
   (:use "USEFUL-MACROS" "COMMON-LISP" "REPPY-CHANNELS" "SPM" "LISPWORKS")
   (:export "MAKE-LOCK"
            "WRAP-RDLOCKEVT"
            "WRAP-WRLOCKEVT"
            "WITH-READLOCK"
            "WITH-WRITELOCK"))

(in-package "RWGATE")

;; ---------------------------------------------------------------
;; This package implements a multiple-reader/single-writer lock
;; protocol using the amazing capabilities of the Reppy channels.
;;
;; Rule of engagement:
;;
;; 1. A lock is available for reading if no write locks are in place,
;;    or else the read lock requestor is equal to the write lock holder.
;;
;; 2. A lock is available for writing if no read locks and no write locks
;;    are in place,
;;    or else the write lock requestor is equal to the write lock holder,
;;    or else the write lock requestor is equal to every read lock holder.
;;
;; These rules ensure that multiple readers can run, while only one
;; writer can run. No requirements for nesting of read/write lock
;; requests. That is, a writer can request a read lock and vice versa,
;; and issue lock releases in any order.
;;
;; A lock holder can request any number of additional locks. The lock will
;; actually be released when an equal number of releases of like kind
;; have been obtained. For every write lock there is a write release,
;; and for every read lock there is a read release.
;;
;; The Reppy protocol is protected with UNWIND-PROTECT to ensure that
;; locks held are released on exit from the function block being executed
;; within the province of a lock. Lock releases are handled transparently
;; to the user.
;;
;; ---------------------------------------------------------------
;; Lock server protocol with event combinators

(defclass rw-lock (<serviceable-protocol-mixin>)
   ((rdlocks :accessor  rw-lock-rdlocks  :initform 0)
    (wrlocks :accessor  rw-lock-wrlocks  :initform 0)
    (wrowner :accessor  rw-lock-wrowner  :initform nil)
    (rdqueue :accessor  rw-lock-rdqueue  :initform nil)
    (rdowners :accessor rw-lock-rdowners :initform nil)
    (wrqueue :accessor  rw-lock-wrqueue  :initform nil)))

(defun make-lock ()
   (make-instance
    'rw-lock
    :handlers (list

               :read
               #'(lambda (req gate who)
                   (declare (ignore req))
		  (labels ((take-it ()
			     (incf (rw-lock-rdlocks gate))
			     (push who (rw-lock-rdowners gate))
			     (spawn #'send who t)))
		    (cond ((eq who (rw-lock-wrowner gate))
			   ;; we own a write lock already so go ahead...
			   (take-it))
			
			  ((plusp (rw-lock-wrlocks gate))
			   ;; outstanding write lock so enqueue in
			   ;; pending readers queue...
			   (push who (rw-lock-rdqueue gate)))
			
			  (t
			   ;; no outstanding writer so take it...
			   (take-it))
			  )))
	
               :release-read
               #'(lambda (req gate who)
                   (declare (ignore req))
		  (removef (rw-lock-rdowners gate) who :count 1)
                   (if (and (zerop (decf (rw-lock-rdlocks gate)))
                            (zerop (rw-lock-wrlocks gate)))
		      ;; no more readers and no more writers
		      ;; (a writer might have been me...)
		      ;; so go ahead and start writers
		      ;; there should be no pending readers
		      ;; since there were no writers
                       (let ((writer (pop (rw-lock-wrqueue gate))))
                         (when writer
                           (incf (rw-lock-wrlocks gate))
                           (setf (rw-lock-wrowner gate) writer)
                           (spawn #'send writer t))
                         )))
	
               :write
               #'(lambda (req gate who)
                   (declare (ignore req))
		  (labels ((take-it ()
			     (incf (rw-lock-wrlocks gate))
			     (setf (rw-lock-wrowner gate) who)
			     (spawn #'send who t)))
		    (cond ((and (zerop (rw-lock-rdlocks gate))
				(zerop (rw-lock-wrlocks gate)))
			   ;; gate available so take it
			   (take-it))
			
			  ((eq who (rw-lock-wrowner gate))
			   ;; gate already owned by requestor
			   ;; so incr lock count and tell him its okay...
			   (take-it))
			
			  ((every #'(lambda (rdr)
				      (eq rdr who))
				  (rw-lock-rdowners gate))
			   ;; only one reader and it is me...
                            ;; but I may be in the list numerous times...
			   ;; so go ahead and grab a write lock.
			   (take-it))
			
			  (t
			   ;; gate not available -- put caller on
			   ;; waiting writers queue
			   (conc1f (rw-lock-wrqueue gate) who))
			  )))
	
               :release-write
               #'(lambda (req gate who)
                   (declare (ignore req who))
                   (labels
                       ((run-writer ()
                          (let ((writer (pop (rw-lock-wrqueue gate))))
                            (if writer
                                (progn
				 (incf (rw-lock-wrlocks gate))
				 (setf (rw-lock-wrowner gate) writer)
                                  (spawn #'send writer t)
                                  t)
			     nil)))
                        (run-readers ()
                          (let ((readers (rw-lock-rdqueue gate)))
                            (if readers
                                (progn
                                  (setf (rw-lock-rdqueue gate) nil)
				 (appendf (rw-lock-rdowners gate) readers)
                                  (incf (rw-lock-rdlocks gate) (length readers))
                                  (dolist (reader readers)
                                    (spawn #'send reader t))
                                  t)
                              nil))))
                     (when (zerop (decf (rw-lock-wrlocks gate)))
		      ;; no more writers (was only me anyway...)
                       (setf (rw-lock-wrowner gate) nil)
		      (if (zerop (rw-lock-rdlocks gate))
			  ;; if no active readers either
			  ;; then it is a toss up whether to
			  ;; start writers or readers
			  (if (zerop (random 2)) ;; add some non-determinism
			      (unless (run-writer)
				(run-readers))
			    (unless (run-readers)
			      (run-writer)))
			;; but if I was a reader too,
			;; then it is only safe to start other
			;; readers.
			(run-readers)))
		    ))
               )))

(defun wrap-lockEvt (lock fn args req rel)
   (guard
    #'(lambda ()
        (let ((replyCh (make-channel)))
          (labels
              ((acquire-lock ()
                 (service-request req lock replyCh))
               (release-lock ()
                 (service-request rel lock replyCh)))
            (spawn #'acquire-lock)
            (wrap-abort
             (wrap (recvEvt replyCh)
                   #'(lambda (reply)
                       (declare (ignore reply))
                       (unwind-protect
                           (apply fn args)
                         (spawn #'release-lock))))
             #'(lambda ()
                 (spawn #'(lambda ()
                            (recv replyCh)
                            (release-lock)))))
            )))
    ))

(defmethod wrap-rdLockEvt ((lock rw-lock) fn &rest args)
   (wrap-lockEvt lock fn args :read :release-read))

(defmethod wrap-wrLockEvt ((lock rw-lock) fn &rest args)
   (wrap-lockEvt lock fn args :write :release-write))

(defmethod with-readlock ((lock rw-lock) fn &rest args)
   (sync (apply #'wrap-rdLockEvt lock fn args)))

(defmethod with-writelock ((lock rw-lock) fn &rest args)
   (sync (apply #'wrap-wrLockEvt lock fn args)))
-----------------------------------
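
For what it's worth, OCaml's Event module has the same combinators, so
the guard/wrap/wrap-abort skeleton of wrap-lockEvt above might translate
roughly as follows (illustrative names only; 'acquire' and 'release'
stand in for the service requests to a lock server):

```ocaml
(* Sketch of wrap-lockEvt in OCaml: an event that, on sync, asks a lock
   server for the lock over a fresh reply channel, runs f under the
   lock, and releases on both the normal and the abort path. *)
let with_lock_evt acquire release f =
  Event.guard (fun () ->
    let reply = Event.new_channel () in
    (* ask for the lock in a separate thread, like (spawn #'acquire-lock) *)
    ignore (Thread.create (fun () -> acquire reply) ());
    Event.wrap_abort
      (Event.wrap (Event.receive reply) (fun () ->
         (* lock granted: run f, releasing even on exceptions *)
         match f () with
         | v -> release (); v
         | exception e -> release (); raise e))
      (fun () ->
         (* abort path: wait for the grant, then give the lock back *)
         ignore (Thread.create (fun () ->
           Event.sync (Event.receive reply); release ()) ())))
```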

Dr. David McClain
Chief Technical Officer
Refined Audiometrics Laboratory
4391 N. Camino Ferreo
Tucson, AZ  85750

email: dbm@refined-audiometrics.com
phone: 1.520.390.3995
web: http://refined-audiometrics.com



On Jul 15, 2010, at 11:24, Goswin von Brederlow wrote:

> Rich Neswold <rich.neswold@gmail.com> writes:
>
>> On Wed, Jul 14, 2010 at 11:09 AM, Goswin von Brederlow <goswin-v-b@web.de> wrote:
>>
>>     4) Do some magic with Event.t?
>>
>>     Problem: never used this and I could use a small example how  
>> to use
>>     this.
>>
>>
>> Event.t (and its associated library) *is* magical in that it  
>> provides an
>> absolutely beautiful concurrent programming model. Forget about  
>> select() and
>> mutexes and other ugly threading concepts. Event.t and friends is  
>> how it should
>> be done.
>>
>> John H. Reppy's "Concurrent Programming in ML" provides a thorough
>> understanding of how to use this module effectively. This book  
>> presents the
>> material in a very understandable way: deficiencies in current  
>> threading
>> models are discussed as well as how CML solves the limitations and  
>> constraints.
>> The book can be purchased or downloaded free online.
>
> It is too bad that I don't want to learn CML but use OCaml. The CML
> examples from the book don't translate into OCaml, since the interface
> is just a little bit different, and those differences are what throws
> me off. I figure spawn becomes Thread.create and I have to add
> Event.sync or Event.select and Event.wrap at some points. At that
> point the book becomes useless for understanding how OCaml's Event
> module is to be used, I'm afraid. It also doesn't tell me what OCaml's
> limitations and constraints are.
>
> So if you have used this in ocaml could you give a short example?
> E.g. the merge sort from the book.
>
> MfG
>         Goswin
>
> _______________________________________________
> Caml-list mailing list. Subscription management:
> http://yquem.inria.fr/cgi-bin/mailman/listinfo/caml-list
> Archives: http://caml.inria.fr
> Beginner's list: http://groups.yahoo.com/group/ocaml_beginners
> Bug reports: http://caml.inria.fr/bin/caml-bugs
>




* Re: [Caml-list] Smart ways to implement worker threads
  2010-07-15 18:24   ` Goswin von Brederlow
  2010-07-15 18:37     ` David McClain
@ 2010-07-15 18:40     ` David McClain
  2010-07-15 19:56     ` Rich Neswold
  2 siblings, 0 replies; 31+ messages in thread
From: David McClain @ 2010-07-15 18:40 UTC (permalink / raw)
  To: caml-list


> It is too bad that I don't want to learn CML but use OCaml. The CML
> examples from the book don't translate into OCaml, since the interface
> is just a little bit different, and those differences are what throws
> me off.

That may appear to be the case from only a cursory review of CML. But
I find that OCaml's notions of Events, Channels, etc., correspond
quite closely to what John Reppy describes.

The whole point of Reppy's work was to show how "Events" could be
made into functional objects, with operations for combining them.

I don't have the "sort" routine translated, but here is some Lisp
code that attempts to provide the multiple-reader / single-writer
locks as might be used in a database application. It demonstrates the
use of wrap, sync, etc.

-----------------------------------
;; rwgate.lisp -- Multiple Reader/Single Writer using Reppy's Channels
;;
;; DM/MCFA  01/00
;; ----------------------------------------------------

(defpackage "RWGATE"
   (:use "USEFUL-MACROS" "COMMON-LISP" "REPPY-CHANNELS" "SPM" "LISPWORKS")
   (:export "MAKE-LOCK"
            "WRAP-RDLOCKEVT"
            "WRAP-WRLOCKEVT"
            "WITH-READLOCK"
            "WITH-WRITELOCK"))

(in-package "RWGATE")

;; ---------------------------------------------------------------
;; This package implements a multiple-reader/single-writer lock
;; protocol using the amazing capabilities of the Reppy channels.
;;
;; Rule of engagement:
;;
;; 1. A lock is available for reading if no write locks are in place,
;;    or else the read lock requestor is equal to the write lock holder.
;;
;; 2. A lock is available for writing if no read locks and no write locks
;;    are in place,
;;    or else the write lock requestor is equal to the write lock holder,
;;    or else the write lock requestor is equal to every read lock holder.
;;
;; These rules ensure that multiple readers can run, while only one
;; writer can run. No requirements for nesting of read/write lock
;; requests. That is, a writer can request a read lock and vice versa,
;; and issue lock releases in any order.
;;
;; A lock holder can request any number of additional locks. The lock will
;; actually be released when an equal number of releases of like kind
;; have been obtained. For every write lock there is a write release,
;; and for every read lock there is a read release.
;;
;; The Reppy protocol is protected with UNWIND-PROTECT to ensure that
;; locks held are released on exit from the function block being executed
;; within the province of a lock. Lock releases are handled transparently
;; to the user.
;;
;; ---------------------------------------------------------------
;; Lock server protocol with event combinators

(defclass rw-lock (<serviceable-protocol-mixin>)
   ((rdlocks :accessor  rw-lock-rdlocks  :initform 0)
    (wrlocks :accessor  rw-lock-wrlocks  :initform 0)
    (wrowner :accessor  rw-lock-wrowner  :initform nil)
    (rdqueue :accessor  rw-lock-rdqueue  :initform nil)
    (rdowners :accessor rw-lock-rdowners :initform nil)
    (wrqueue :accessor  rw-lock-wrqueue  :initform nil)))

(defun make-lock ()
   (make-instance
    'rw-lock
    :handlers (list

               :read
               #'(lambda (req gate who)
                   (declare (ignore req))
		  (labels ((take-it ()
			     (incf (rw-lock-rdlocks gate))
			     (push who (rw-lock-rdowners gate))
			     (spawn #'send who t)))
		    (cond ((eq who (rw-lock-wrowner gate))
			   ;; we own a write lock already so go ahead...
			   (take-it))
			
			  ((plusp (rw-lock-wrlocks gate))
			   ;; outstanding write lock so enqueue in
			   ;; pending readers queue...
			   (push who (rw-lock-rdqueue gate)))
			
			  (t
			   ;; no outstanding writer so take it...
			   (take-it))
			  )))
	
               :release-read
               #'(lambda (req gate who)
                   (declare (ignore req))
		  (removef (rw-lock-rdowners gate) who :count 1)
                   (if (and (zerop (decf (rw-lock-rdlocks gate)))
                            (zerop (rw-lock-wrlocks gate)))
		      ;; no more readers and no more writers
		      ;; (a writer might have been me...)
		      ;; so go ahead and start writers
		      ;; there should be no pending readers
		      ;; since there were no writers
                       (let ((writer (pop (rw-lock-wrqueue gate))))
                         (when writer
                           (incf (rw-lock-wrlocks gate))
                           (setf (rw-lock-wrowner gate) writer)
                           (spawn #'send writer t))
                         )))
	
               :write
               #'(lambda (req gate who)
                   (declare (ignore req))
		  (labels ((take-it ()
			     (incf (rw-lock-wrlocks gate))
			     (setf (rw-lock-wrowner gate) who)
			     (spawn #'send who t)))
		    (cond ((and (zerop (rw-lock-rdlocks gate))
				(zerop (rw-lock-wrlocks gate)))
			   ;; gate available so take it
			   (take-it))
			
			  ((eq who (rw-lock-wrowner gate))
			   ;; gate already owned by requestor
			   ;; so incr lock count and tell him its okay...
			   (take-it))
			
			  ((every #'(lambda (rdr)
				      (eq rdr who))
				  (rw-lock-rdowners gate))
			   ;; only one reader and it is me...
                            ;; but I may be in the list numerous times...
			   ;; so go ahead and grab a write lock.
			   (take-it))
			
			  (t
			   ;; gate not available -- put caller on
			   ;; waiting writers queue
			   (conc1f (rw-lock-wrqueue gate) who))
			  )))
	
               :release-write
               #'(lambda (req gate who)
                   (declare (ignore req who))
                   (labels
                       ((run-writer ()
                          (let ((writer (pop (rw-lock-wrqueue gate))))
                            (if writer
                                (progn
				 (incf (rw-lock-wrlocks gate))
				 (setf (rw-lock-wrowner gate) writer)
                                  (spawn #'send writer t)
                                  t)
			     nil)))
                        (run-readers ()
                          (let ((readers (rw-lock-rdqueue gate)))
                            (if readers
                                (progn
                                  (setf (rw-lock-rdqueue gate) nil)
				 (appendf (rw-lock-rdowners gate) readers)
                                  (incf (rw-lock-rdlocks gate) (length readers))
                                  (dolist (reader readers)
                                    (spawn #'send reader t))
                                  t)
                              nil))))
                     (when (zerop (decf (rw-lock-wrlocks gate)))
		      ;; no more writers (was only me anyway...)
                       (setf (rw-lock-wrowner gate) nil)
		      (if (zerop (rw-lock-rdlocks gate))
			  ;; if no active readers either
			  ;; then it is a toss up whether to
			  ;; start writers or readers
			  (if (zerop (random 2)) ;; add some non-determinism
			      (unless (run-writer)
				(run-readers))
			    (unless (run-readers)
			      (run-writer)))
			;; but if I was a reader too,
			;; then it is only safe to start other
			;; readers.
			(run-readers)))
		    ))
               )))

(defun wrap-lockEvt (lock fn args req rel)
   (guard
    #'(lambda ()
        (let ((replyCh (make-channel)))
          (labels
              ((acquire-lock ()
                 (service-request req lock replyCh))
               (release-lock ()
                 (service-request rel lock replyCh)))
            (spawn #'acquire-lock)
            (wrap-abort
             (wrap (recvEvt replyCh)
                   #'(lambda (reply)
                       (declare (ignore reply))
                       (unwind-protect
                           (apply fn args)
                         (spawn #'release-lock))))
             #'(lambda ()
                 (spawn #'(lambda ()
                            (recv replyCh)
                            (release-lock)))))
            )))
    ))

(defmethod wrap-rdLockEvt ((lock rw-lock) fn &rest args)
   (wrap-lockEvt lock fn args :read :release-read))

(defmethod wrap-wrLockEvt ((lock rw-lock) fn &rest args)
   (wrap-lockEvt lock fn args :write :release-write))

(defmethod with-readlock ((lock rw-lock) fn &rest args)
   (sync (apply #'wrap-rdLockEvt lock fn args)))

(defmethod with-writelock ((lock rw-lock) fn &rest args)
   (sync (apply #'wrap-wrLockEvt lock fn args)))
-----------------------------------

Dr. David McClain
Chief Technical Officer
Refined Audiometrics Laboratory
4391 N. Camino Ferreo
Tucson, AZ  85750

email: dbm@refined-audiometrics.com
phone: 1.520.390.3995
web: http://refined-audiometrics.com



^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Caml-list] Smart ways to implement worker threads
  2010-07-15 17:46   ` Goswin von Brederlow
@ 2010-07-15 18:44     ` Romain Beauxis
  2010-07-16  3:52       ` Goswin von Brederlow
  0 siblings, 1 reply; 31+ messages in thread
From: Romain Beauxis @ 2010-07-15 18:44 UTC (permalink / raw)
  To: caml-list

Le jeudi 15 juillet 2010 12:46:53, Goswin von Brederlow a écrit :
> I don't see where that helps at all. I don't want to offload the IO into
> threads and schedule them and Duppy seems to only handle IO tasks.

I don't understand what you mean by IO tasks. Tasks in duppy are scheduled 
according to some events which, since it is select-based, are either an event 
on a socket or a timeout.

Once scheduled, the action that the task does is anything you programmed. Once 
finished, the tasks can return an array of new tasks which are then put in the 
queue.

In your case, you probably only need the timeout event, which means that 
as soon as you have a new task to perform, you submit it to the scheduler 
with timeout 0 and it will be processed by one of the threads as soon as 
possible.

> Except if I pick Solution 1 and then it still doesn't help anything
> since I can already run select in every thread. The IO should not be
> scheduled by priorities and isn't the bottleneck anyway. Seems this
> would just add overhead.

The idea of duppy was to have only one select running among the multiple queue 
threads.


Romain



* Re: [Caml-list] Smart ways to implement worker threads
  2010-07-15 18:24   ` Goswin von Brederlow
  2010-07-15 18:37     ` David McClain
  2010-07-15 18:40     ` David McClain
@ 2010-07-15 19:56     ` Rich Neswold
  2010-07-16  4:02       ` Goswin von Brederlow
  2 siblings, 1 reply; 31+ messages in thread
From: Rich Neswold @ 2010-07-15 19:56 UTC (permalink / raw)
  To: Goswin von Brederlow; +Cc: caml-list



On Thu, Jul 15, 2010 at 1:24 PM, Goswin von Brederlow <goswin-v-b@web.de> wrote:

> It is too bad I don't want to learn CML but use Ocaml. The CML examples
> from the book don't translate into ocaml since the interface is just a
> little bit different and those differences are what throws me off.
>


> So could you give a short example? E.g. the merge sort from the book.
>

You're right: OCaml's syntax differs enough from the text that it's annoying
to cut-and-paste the examples. Fortunately, most examples are only a few lines
of code, so I hadn't realized how difficult it could be for a larger example
(like the merge sort).

Attached is my translation of the mergeSort from the book. You get to play
with it and see if it works  :)

-- 
Rich

Google Reader: https://www.google.com/reader/shared/rich.neswold
Jabber ID: rich@neswold.homeunix.net


[-- Attachment #2: mergesort.ml --]
[-- Type: text/x-ocaml, Size: 1487 bytes --]

open Event

let mySend c d = sync (send c d)
and myRecv c = sync (receive c)

let spawn f =
  ignore (Thread.create f ())

let split (inCh, outCh1, outCh2) =
  let rec loop = function
    | (None, _, _) -> (mySend outCh1 None; mySend outCh2 None)
    | (x, out1, out2) -> (mySend out1 x; loop (myRecv inCh, out2, out1))
  in
    loop (myRecv inCh, outCh1, outCh2)

let merge (inCh1, inCh2, (outCh : int option channel)) =
  let rec copy (fromCh, toCh) =
    let rec loop v =
      begin
	mySend toCh v;
	match v with
	  | Some _ -> loop (myRecv fromCh)
	  | None -> ()
      end
    in
      loop (myRecv fromCh)
  and mergep (from1, from2) =
    match (from1, from2) with
      | (None, None) -> mySend outCh None
      | (_, None) -> (mySend outCh from1; copy (inCh1, outCh))
      | (None, _) -> (mySend outCh from2; copy (inCh2, outCh))
      | (Some a, Some b) ->
	  if a < b then
	    (mySend outCh from1; mergep (myRecv inCh1, from2))
	  else
	    (mySend outCh from2; mergep (from1, myRecv inCh2))
  in
    mergep (myRecv inCh1, myRecv inCh2)

let rec mergeSort () =
  let ch = new_channel() in
  let sort () =
    (match myRecv ch with
       | None -> ()
       | v1 ->
	   begin
	     (match myRecv ch with
		| None -> mySend ch v1
		| v2 ->
		    let ch1 = mergeSort()
		    and ch2 = mergeSort()
		    in
		      begin
			mySend ch1 v1;
			mySend ch2 v2;
			split (ch, ch1, ch2);
			merge (ch1, ch2, ch)
		      end);
	     mySend ch None
	   end)
  in
    (spawn sort; ch)
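
Not from the original attachment: a small driver for the mergeSort above
might look like this, assuming the book's stream protocol (send Some
values followed by None, then read the sorted stream back until None):

```ocaml
(* Hypothetical driver; the Some/None stream protocol is assumed from
   the book, not shown in the attachment itself. *)
let () =
  let ch = mergeSort () in
  List.iter (fun n -> mySend ch (Some n)) [3; 1; 4; 1; 5];
  mySend ch None;
  let rec drain () =
    match myRecv ch with
    | Some n -> Printf.printf "%d " n; drain ()
    | None -> print_newline ()
  in
  drain ()
```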


* Re: [Caml-list] Smart ways to implement worker threads
  2010-07-15 18:44     ` Romain Beauxis
@ 2010-07-16  3:52       ` Goswin von Brederlow
  2010-07-16  4:19         ` Romain Beauxis
  0 siblings, 1 reply; 31+ messages in thread
From: Goswin von Brederlow @ 2010-07-16  3:52 UTC (permalink / raw)
  To: Romain Beauxis; +Cc: caml-list

Romain Beauxis <toots@rastageeks.org> writes:

> Le jeudi 15 juillet 2010 12:46:53, Goswin von Brederlow a écrit :
>> I don't see where that helps at all. I don't want to offload the IO into
>> threads and schedule them and Duppy seems to only handle IO tasks.
>
> I don't understand what you mean by IO tasks. Tasks in duppy are scheduled 
> according to some events which, since it is select-based, are either an event 
> on a socket or a timeout.

But I don't have a socket. The main thread runs and at some point it has
a Buffer.t that it needs to have checksummed. So that leaves timeout.

> Once scheduled, the action that the task does is anything you programmed. Once 
> finished, the tasks can return an array of new tasks which are then put in the 
> queue.
>
> In your case, you probably only need the timeout event, which would mean that 
> as soon as you have a new tasks to perform, you submit it to the scheduler 
> with timeout 0 and it will be processed by one of the threads as soon as 
> possible..

So the code would be something like this?

let with_checksum buf sum _ =
  reply_request ();
  []

let do_checksum buf _ =
  let sum = ...
  in
    { priority = 0; events = [`Delay 0.]; handler = with_checksum buf sum; }

let main_task events =
  let rec loop tasks = function
      [] -> tasks
    | (`Read fd)::events ->
         let buf = parse_request fd in
         let task1 = { priority = 1; events = [`Delay 0.]; handler = do_checksum buf; } in
         let task2 = { priority = 0; events = [`Read fd]; handler = main_task; }
         in
           loop (task1::task2::tasks) events
  in
    loop events

let main () =
  let scheduler = Duppy.create ()
  in
    for i = 1 to num_cores do
      ignore (Thread.create (fun () -> Duppy.queue ~priorities:(fun x -> x = 1) scheduler "worker") ());
    done;
    Duppy.Task.add scheduler
      { priority = 0; events = [`Read server_socket]; handler = main_task; };
    Duppy.queue ~priorities:(fun x -> x = 0) scheduler "main"


The main task will only process priority 0 events and bounce between
main_task and with_checksum while the worker threads process priority 1
events and do_checksum.

Correct?

>> Except if I pick Solution 1 and then it still doesn't help anything
>> since I can already run select in every thread. The IO should not be
>> scheduled by priorities and isn't the bottleneck anyway. Seems this
>> would just add overhead.
>
> The idea of duppy was to have only one select running among the multiple queue 
> threads.
>
>
> Romain

MfG
        Goswin



* Re: [Caml-list] Smart ways to implement worker threads
  2010-07-15 19:56     ` Rich Neswold
@ 2010-07-16  4:02       ` Goswin von Brederlow
  2010-07-16  4:23         ` Rich Neswold
  2010-07-17 18:34         ` Eray Ozkural
  0 siblings, 2 replies; 31+ messages in thread
From: Goswin von Brederlow @ 2010-07-16  4:02 UTC (permalink / raw)
  To: Rich Neswold; +Cc: Goswin von Brederlow, caml-list

Rich Neswold <rich.neswold@gmail.com> writes:

> On Thu, Jul 15, 2010 at 1:24 PM, Goswin von Brederlow <goswin-v-b@web.de>
> wrote:
>
>     It is too bad I don't want to learn CML but use Ocaml. The CML examples
>     from the book don't translate into ocaml since the interface is just a
>     little bit different and those differences are what throws me off.
>
>  
>
>     So could you give a short example? E.g. the merge sort from the book.
>
>
> You're right: OCaml's syntax differs enough from the text that it's annoying to
> cut-n-paste the examples. Fortunately most example are a few lines of code, so
> I didn't realize how difficult it could be on a larger example (like the merge
> sort example.)
>
> Attached is my translation of the mergeSort from the book. You get to play with
> it and see if it works  :)

Thanks. That is about what I got so I do seem to understand the
differences right.

For my use case this would then come down to implementing solution 3 with
channels instead of my own queues. Well, channels are thread-safe queues
by another name. I think I see now how they make the code simpler to
write.

MfG
        Goswin



* Re: [Caml-list] Smart ways to implement worker threads
  2010-07-16  3:52       ` Goswin von Brederlow
@ 2010-07-16  4:19         ` Romain Beauxis
  2010-07-16 13:05           ` Goswin von Brederlow
  0 siblings, 1 reply; 31+ messages in thread
From: Romain Beauxis @ 2010-07-16  4:19 UTC (permalink / raw)
  To: Goswin von Brederlow; +Cc: caml-list

Le jeudi 15 juillet 2010 22:52:53, Goswin von Brederlow a écrit :
> The main task will only process priority 0 events and bounce between
> main_task and with_checksum while the worker threads process priority 1
> events and do_checksum.
> 
> Correct?

I think it should be like this:

let do_checksum buf _ =
  let sum = ...
  in
  reply_request ();
  []

let main () =
  let scheduler = Duppy.create ()
  in
    for i = 1 to num_cores do
>       ignore (Thread.create (fun () -> Duppy.queue ~priorities:(fun x -> x = 1) scheduler "worker") ());
    done;
    while (* New checksum need to be computed *) do
     Duppy.Task.add scheduler
       { priority = 1; events = [`Timeout 0.]; handler = fun _ -> do_checksum buf; }
     done

It seems that in your case you do not need finer-grained priorities, and all 
the workers can live at the same priority level.

Now, the main thread does not need to be a task.. 


Romain



* Re: [Caml-list] Smart ways to implement worker threads
  2010-07-16  4:02       ` Goswin von Brederlow
@ 2010-07-16  4:23         ` Rich Neswold
  2010-07-16 13:02           ` Goswin von Brederlow
  2010-07-17 18:34         ` Eray Ozkural
  1 sibling, 1 reply; 31+ messages in thread
From: Rich Neswold @ 2010-07-16  4:23 UTC (permalink / raw)
  To: Goswin von Brederlow; +Cc: caml-list


On Thu, Jul 15, 2010 at 11:02 PM, Goswin von Brederlow <goswin-v-b@web.de> wrote:

> Rich Neswold <rich.neswold@gmail.com> writes:
>
> Thanks. That is about what I got so I do seem to understand the
> differences right.
>
> For my use case this would then come down to implement solution 3 with
> channels instead of my own queues. Well, channels are thread safe queues
> just by another name. I think I see now how they make the code simpler
> to write.
>

Channels are a thread-safe communication channel of depth one (i.e. you can
only pass one item at a time.) The channel is a primitive that allows
reliable synchronized communication between threads. The Reppy book
describes in later chapters how to use the channel primitive to build up
queues and other, more complicated constructs (like RPCs and multicasting to
many processes.)

In fact, you might use the RPC ideas to pass your checksumming requests to
worker tasks and receive the results.
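
That RPC idea can be sketched in OCaml's Event module roughly like this;
the request record, the checksum stand-in, and all names here are mine,
not from the book:

```ocaml
open Event

(* Each request carries its own reply channel, so one shared request
   channel can serve any number of clients -- the core of the RPC idea. *)
type request = { data : int array; reply : int channel }

let checksum a = Array.fold_left (+) 0 a   (* stand-in for a real checksum *)

let worker requests () =
  while true do
    let r = sync (receive requests) in
    sync (send r.reply (checksum r.data))
  done

let () =
  let requests = new_channel () in
  ignore (Thread.create (worker requests) ());
  let reply = new_channel () in
  sync (send requests { data = [| 1; 2; 3 |]; reply });
  Printf.printf "checksum = %d\n" (sync (receive reply))
```

Compile with the threads library, e.g.
ocamlopt -thread unix.cmxa threads.cmxa rpc.ml.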

-- 
Rich

Google Reader: https://www.google.com/reader/shared/rich.neswold
Jabber ID: rich@neswold.homeunix.net



* Re: [Caml-list] Smart ways to implement worker threads
  2010-07-16  4:23         ` Rich Neswold
@ 2010-07-16 13:02           ` Goswin von Brederlow
  2010-07-16 14:40             ` Dawid Toton
                               ` (2 more replies)
  0 siblings, 3 replies; 31+ messages in thread
From: Goswin von Brederlow @ 2010-07-16 13:02 UTC (permalink / raw)
  To: Rich Neswold; +Cc: Goswin von Brederlow, caml-list

Rich Neswold <rich.neswold@gmail.com> writes:

> On Thu, Jul 15, 2010 at 11:02 PM, Goswin von Brederlow <goswin-v-b@web.de>
> wrote:
>
>     Rich Neswold <rich.neswold@gmail.com> writes:
>
>     Thanks. That is about what I got so I do seem to understand the
>     differences right.
>
>     For my use case this would then come down to implement solution 3 with
>     channels instead of my own queues. Well, channels are thread safe queues
>     just by another name. I think I see now how they make the code simpler
>     to write.
>
>
> Channels are a thread-safe communication channel of depth one (i.e. you can
> only pass one item at a time.) The channel is a primitive that allows reliable

Urgs, so what happens if I call "sync (send ...)" twice without the
other end calling receive? Let's test:

let ch = Event.new_channel ()

let receiver () =
  for i = 0 to 10 do
    Printf.printf "received %d\n" (Event.sync (Event.receive ch));
    flush_all ();
    Unix.sleep 2;
  done

let _ =
  ignore (Thread.create receiver ());
  for i = 0 to 10 do
    Printf.printf "sending %d\n" i;
    flush_all ();
    Event.sync (Event.send ch i);
    Unix.sleep 1;
  done

% ocamlopt -thread -o foo unix.cmxa threads.cmxa foo.ml && ./foo
sending 0
received 0
sending 1
received 1
sending 2
received 2
sending 3
received 3
...

So the send blocks until the event is received. That certainly isn't
helpful for me. One could say I want asynchronous remote function calls
(and returns).

> synchronized communication between threads. The Reppy book describes in later
> chapters how to use the channel primitive to build up queues and other, more
> complicated constructs (like RPCs and multicasting to many processes.)
>
> In fact, you might use the RPC ideas to pass your checksumming requests to
> worker tasks and receive the results.

Yeah. But then why not build it around the simple Mutex and Condition
modules instead of Event? At first glance the blocking aspect of Events
seems to be more of a hindrance than a help.


I find it odd that there is no Threaded_Queue module, a thread-safe
version of Queue with 2 extra functions:

val wait_and_take : 'a t -> 'a

   wait_and_take q waits for the queue q to become non-empty, then removes
   and returns the first element of q. Raises Empty when the queue is
   closed.

val close : 'a t -> unit

   close q closes the queue and wakes up waiting threads.
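
A minimal sketch of what such a module could look like, assuming the
semantics above (here wait_and_take drains remaining elements after a
close and only then raises Empty):

```ocaml
(* Sketch of the proposed Threaded_Queue: a Queue.t guarded by a Mutex,
   with a Condition to wake blocked takers and a closed flag. *)
type 'a t = {
  q : 'a Queue.t;
  m : Mutex.t;
  c : Condition.t;
  mutable closed : bool;
}

exception Empty  (* raised by wait_and_take once the queue is closed *)

let create () =
  { q = Queue.create (); m = Mutex.create ();
    c = Condition.create (); closed = false }

let add t x =
  Mutex.lock t.m;
  Queue.add x t.q;
  Condition.signal t.c;
  Mutex.unlock t.m

let wait_and_take t =
  Mutex.lock t.m;
  while Queue.is_empty t.q && not t.closed do
    Condition.wait t.c t.m
  done;
  if Queue.is_empty t.q then begin
    Mutex.unlock t.m;
    raise Empty            (* closed and drained *)
  end else begin
    let x = Queue.take t.q in
    Mutex.unlock t.m;
    x
  end

let close t =
  Mutex.lock t.m;
  t.closed <- true;
  Condition.broadcast t.c;  (* wake every blocked wait_and_take *)
  Mutex.unlock t.m
```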


MfG
        Goswin



* Re: [Caml-list] Smart ways to implement worker threads
  2010-07-16  4:19         ` Romain Beauxis
@ 2010-07-16 13:05           ` Goswin von Brederlow
  2010-07-16 13:20             ` Romain Beauxis
  0 siblings, 1 reply; 31+ messages in thread
From: Goswin von Brederlow @ 2010-07-16 13:05 UTC (permalink / raw)
  To: Romain Beauxis; +Cc: Goswin von Brederlow, caml-list

Romain Beauxis <toots@rastageeks.org> writes:

> Le jeudi 15 juillet 2010 22:52:53, Goswin von Brederlow a écrit :
>> The main task will only process priority 0 events and bounce between
>> main_task and with_checksum while the worker threads process priority 1
>> events and do_checksum.
>> 
>> Correct?
>
> I think it should be like this:
>
> let do_checksum buf _ =
>   let sum = ...
>   in
>   reply_request ();
>   []
>
> let main () =
>   let scheduler = Duppy.create ()
>   in
>     for i = 1 to num_cores do
>       Thread.create (Duppy.queue ~priorities=(fun x -> x = 1) scheduler "worker") ();
>     done;
>     while (* New checksum need to be computed *) do
>      Duppy.Task.add scheduler
>        { priority = 1; events = [`Timeout 0.]; handler = fun _ -> do_checksum buf; }
>      done
>
> It seems in your case you do not need finer-grained priority and all the 
> workers live at the same priority level..
>
> Now, the main thread does not need to be a task.. 

But then how does the main thread notice when a checksum is finished
computing? The information has to flow both ways.

MfG
        Goswin



* Re: [Caml-list] Smart ways to implement worker threads
  2010-07-16 13:05           ` Goswin von Brederlow
@ 2010-07-16 13:20             ` Romain Beauxis
  2010-07-17  9:07               ` Goswin von Brederlow
  0 siblings, 1 reply; 31+ messages in thread
From: Romain Beauxis @ 2010-07-16 13:20 UTC (permalink / raw)
  To: Goswin von Brederlow; +Cc: caml-list

Le vendredi 16 juillet 2010 08:05:10, vous avez écrit :
> > Now, the main thread does not need to be a task.. 
> 
> But then how does the main thread notice when a checksum is finished
> computing? The information has to flow both ways.

I would say it's implemented in the reply_request function. You can think of 
many structures: a shared Hashtbl, message sending... The loop in the main 
thread can send tasks and check for completed tasks as well...


Romain



* Re: Smart ways to implement worker threads
  2010-07-16 13:02           ` Goswin von Brederlow
@ 2010-07-16 14:40             ` Dawid Toton
  2010-07-16 16:18             ` [Caml-list] " Rich Neswold
  2010-07-20  4:54             ` Satoshi Ogasawara
  2 siblings, 0 replies; 31+ messages in thread
From: Dawid Toton @ 2010-07-16 14:40 UTC (permalink / raw)
  To: caml-list


> I find it odd that there is no Threaded_Queue module, a thread-safe
> version of Queue with 2 extra functions: (...)
>   
I use such a thread-safe queue a lot [1]. It is very simple, yet
universal enough. Honestly, I can hardly remember ever using more
elaborate constructs.

Dawid

[1] http://pfpleia.if.uj.edu.pl/projects/HLibrary/browser/HQueue.ml



* Re: [Caml-list] Smart ways to implement worker threads
  2010-07-16 13:02           ` Goswin von Brederlow
  2010-07-16 14:40             ` Dawid Toton
@ 2010-07-16 16:18             ` Rich Neswold
  2010-07-17 17:53               ` Eray Ozkural
  2010-07-20  4:54             ` Satoshi Ogasawara
  2 siblings, 1 reply; 31+ messages in thread
From: Rich Neswold @ 2010-07-16 16:18 UTC (permalink / raw)
  To: Goswin von Brederlow; +Cc: caml-list


On Fri, Jul 16, 2010 at 8:02 AM, Goswin von Brederlow <goswin-v-b@web.de> wrote:

> Yeah. But then why not build it around the simple Mutex and Condition
> modules instead of Event? At first glance the blocking aspect of Events
> seem to be more of a hindrance than help.
>

The problem with Mutex and Condition is all the management that goes with
them. You must make sure your unlocks match your locks -- even in the
presence of exceptions. None of this bookkeeping is required from the
programmer when using Event.

The bigger win for Events, though, is that they're composable.

Let's say you decide to have two queues to communicate with two workers.
With the mutex/queue scenario, you'd have to resort to polling each queue --
locking and unlocking each mutex in turn (or decide that a single mutex
protects both queues ... but this approach doesn't scale well.)

With events, you pass a list of events to 'Event.choose' and the process
will be awakened when one of the events returns a value. No need to widen
the protection of mutexes; adding more events to the list is enough.
Granted, if you mix and match event types (sends and receives, for instance)
you may have to use 'Event.wrap' to make sure they return a compatible type.
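
For instance, waiting on two worker channels carrying different result
types could look like this (channel and tag names invented):

```ocaml
open Event

(* choose waits on several events at once; wrap tags each result so
   both arms of the choice have the same type. *)
let await_either (ch1 : int channel) (ch2 : string channel) =
  sync (choose
          [ wrap (receive ch1) (fun n -> `Checksum n);
            wrap (receive ch2) (fun s -> `Status s) ])
```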

Event may not be the best solution for your problem, or it may even lie
outside your comfort zone. Hopefully this discussion has, at the very least,
made some OCaml users take another look at the Event module.

-- 
Rich

Google Reader: https://www.google.com/reader/shared/rich.neswold
Jabber ID: rich@neswold.homeunix.net



* Re: [Caml-list] Smart ways to implement worker threads
  2010-07-16 13:20             ` Romain Beauxis
@ 2010-07-17  9:07               ` Goswin von Brederlow
  2010-07-17 13:51                 ` Romain Beauxis
  0 siblings, 1 reply; 31+ messages in thread
From: Goswin von Brederlow @ 2010-07-17  9:07 UTC (permalink / raw)
  To: Romain Beauxis; +Cc: Goswin von Brederlow, caml-list

Romain Beauxis <toots@rastageeks.org> writes:

> Le vendredi 16 juillet 2010 08:05:10, vous avez écrit :
>> > Now, the main thread does not need to be a task.. 
>> 
>> But then how does the main thread notice when a checksum is finished
>> computing? The information has to flow both ways.
>
> I would say its implemented in the replay_request function. You can think of 
> many structures, a shared Hashtbl, message sending.. The loop in the main 
> thread can send tasks and check for completed tasks as well...
>
>
> Romain

No, it couldn't. The main thread must be blocked waiting for something.
That something is either select returning, or the main thread running a
queue while a separate select thread and the worker threads throw tasks
at it. I'm not having a main thread that checks every 0.001s whether a
task happens to be done.

MfG
        Goswin



* Re: [Caml-list] Smart ways to implement worker threads
  2010-07-14 16:09 Smart ways to implement worker threads Goswin von Brederlow
  2010-07-15 15:58 ` [Caml-list] " Rich Neswold
  2010-07-15 16:32 ` Romain Beauxis
@ 2010-07-17  9:52 ` Goswin von Brederlow
  2010-07-17 14:20   ` Romain Beauxis
  2 siblings, 1 reply; 31+ messages in thread
From: Goswin von Brederlow @ 2010-07-17  9:52 UTC (permalink / raw)
  To: caml-list

Hi,

so I've been thinking and testing some possibilities.

It is now clear that I need 2 queues. One queue for jobs the worker
threads are to do and one queue for results from the worker threads.

The queue for the worker thread seems easy. Combine a Queue.t, Mutex.t
and Condition.t:

type 'a t = {
  data : 'a Queue.t;
  mutex : Mutex.t;
  condition : Condition.t;
}

let take q =
  Mutex.lock q.mutex;
  while Queue.is_empty q.data do
    Condition.wait q.condition q.mutex
  done;
  let d = Queue.take q.data
  in
    Mutex.unlock q.mutex;
    d

let add q d =
  Mutex.lock q.mutex;
  Queue.add q.data d;
  Condition.signal q.condition;
  Mutex.unlock q.mutex

and so on. The standard mechanism for a thread-safe queue used in most
languages.
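
Wired up, the queue above could drive a worker pool roughly like this;
the two-queue layout and the checksum stand-in are assumptions of mine,
not code from this thread:

```ocaml
(* Workers block in `take` on the job queue and push results onto a
   second queue of the same kind; `checksum` is a hypothetical stand-in. *)
let start_workers ~num_cores jobs results =
  for _i = 1 to num_cores do
    ignore
      (Thread.create
         (fun () ->
            while true do
              let job = take jobs in
              add results (checksum job)
            done)
         ())
  done
```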


Now I have been thinking about the other queue, for reporting the
results from the worker thread. The thing is that the main thread is
stuck in a select waiting for new requests to come in or for the send
buffer to clear so it can send out more replies (or more of a large
reply). So wouldn't it be nice if I could use select on a queue?

Well, in Linux there is the eventfd() system call.

So instead of using Condition.t to signal something was put into the
queue I use the eventfd.

type 'a t = {
  data : 'a Queue.t;
  mutex : Mutex.t;
  fd : EventFD.t;
}

let take q =
  let num = EventFD.read q.fd
  in
    Mutex.lock q.mutex;
    let res = Queue.take q.data
    in
      Mutex.unlock q.mutex;
      (* We only take one element, restore counter *)
      EventFD.write q.fd (Int64.pred num);
      res

let take_all q =
  let num = Int64.to_int (EventFD.read q.fd) in
  let rec loop acc = function
      0 -> List.rev acc
    | n -> loop ((Queue.take q.data) :: acc) (n - 1)
  in
    Mutex.lock q.mutex;
    let res = loop [] num
    in
      Mutex.unlock q.mutex;
      res

let process q f =
  let num = Int64.to_int (EventFD.read q.fd) in
  let rec loop = function
      0 -> ()
    | n ->
        Mutex.lock q.mutex;
        let res = Queue.take q.data
        in
          Mutex.unlock q.mutex;
          f res;
          loop (n - 1)
  in
    loop num

let add q d =
    Mutex.lock q.mutex;
    Queue.add q.data d;
    EventFD.write q.fd 1L;
    Mutex.unlock q.mutex


Now I have a queue that I can include in select. The take function can
be used by the worker threads. The main thread can use either take_all
or process to save on syscalls or stick with take to ensure the queue
does not starve the other FDs.

Also I think I can use the EventFD to terminate a queue and wake up all
listeners. Closing the EventFD should wake up any thread stuck in read
and prevent any threads from becoming stuck in the future. Haven't
tested that yet though.
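
For comparison, the same selectable-queue idea can be sketched portably
with Unix.pipe instead of the Linux-only eventfd: write one byte per
queued element and hand the read end to Unix.select. This is my sketch,
not the EventFD code above:

```ocaml
(* A selectable queue built on a pipe: one byte in the pipe per queued
   element, so the read end can sit in Unix.select alongside sockets. *)
type 'a t = {
  data : 'a Queue.t;
  mutex : Mutex.t;
  rd : Unix.file_descr;   (* give this one to Unix.select *)
  wr : Unix.file_descr;
}

let create () =
  let rd, wr = Unix.pipe () in
  { data = Queue.create (); mutex = Mutex.create (); rd; wr }

let add q d =
  Mutex.lock q.mutex;
  Queue.add d q.data;
  ignore (Unix.write q.wr (Bytes.of_string "x") 0 1);
  Mutex.unlock q.mutex

let take q =
  ignore (Unix.read q.rd (Bytes.create 1) 0 1);  (* blocks until add *)
  Mutex.lock q.mutex;
  let d = Queue.take q.data in
  Mutex.unlock q.mutex;
  d
```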

MfG
        Goswin



* Re: [Caml-list] Smart ways to implement worker threads
  2010-07-17  9:07               ` Goswin von Brederlow
@ 2010-07-17 13:51                 ` Romain Beauxis
  2010-07-17 14:08                   ` Goswin von Brederlow
  0 siblings, 1 reply; 31+ messages in thread
From: Romain Beauxis @ 2010-07-17 13:51 UTC (permalink / raw)
  To: Goswin von Brederlow; +Cc: caml-list

Le samedi 17 juillet 2010 05:07:48, Goswin von Brederlow a écrit :
> No, it couldn't. The main thread must be blocked waiting for something.
> That something would either be waiting for select to return or the main
> thread runs a queue and a seperate select thread and worker threads
> throw tasks at it. I'm not having a main thread that checks every 0.001s
> if a task happens to be done.

Then use a Condition.wait in the main thread and a Condition.signal in each 
task. This is meant for exactly that.

I mean, there's not gonna be a ready-to-use solution for you. At some point 
you'll have to be a little bit creative.


Romain



* Re: [Caml-list] Smart ways to implement worker threads
  2010-07-17 13:51                 ` Romain Beauxis
@ 2010-07-17 14:08                   ` Goswin von Brederlow
  0 siblings, 0 replies; 31+ messages in thread
From: Goswin von Brederlow @ 2010-07-17 14:08 UTC (permalink / raw)
  To: Romain Beauxis; +Cc: Goswin von Brederlow, caml-list

Romain Beauxis <toots@rastageeks.org> writes:

> Le samedi 17 juillet 2010 05:07:48, Goswin von Brederlow a écrit :
>> No, it couldn't. The main thread must be blocked waiting for something.
>> That something would either be waiting for select to return or the main
>> thread runs a queue and a seperate select thread and worker threads
>> throw tasks at it. I'm not having a main thread that checks every 0.001s
>> if a task happens to be done.
>
> Then use a Condition.wait on the main thread and Condition.signal in each 
> tasks. This is meant for that.
>
> I mean, there's not gonna be a ready-to-use solution for you. At some point 
> you'll have to be a little bit creative.
>
>
> Romain

Sure. I was just explaining why I made the main thread run a queue and
wait for tasks as well, not just the worker threads.

MfG
        Goswin



* Re: [Caml-list] Smart ways to implement worker threads
  2010-07-17  9:52 ` Goswin von Brederlow
@ 2010-07-17 14:20   ` Romain Beauxis
  2010-07-17 15:52     ` Goswin von Brederlow
  0 siblings, 1 reply; 31+ messages in thread
From: Romain Beauxis @ 2010-07-17 14:20 UTC (permalink / raw)
  To: caml-list; +Cc: Goswin von Brederlow

Le samedi 17 juillet 2010 05:52:31, Goswin von Brederlow a écrit :
> Now I have a queue that I can include in select. The take function can
> be used by the worker threads. The main thread can use either take_all
> or process to save on syscalls or stick with take to ensure the queue
> does not starve the other FDs.
> 
> Also I think I can use the EventFD to terminate a queue and wake up all
> listeners. Closing the EventFD should wake up any thread stuck in read
> and prevent any threads from becoming stuck in the future. Haven't
> tested that yet though.

I don't see why you want to use EventFD and not Condition.wait/signal. In 
particular, Condition.broadcast does exactly the last thing you mention.

Now, I don't understand why you asked for related modules and then, at the 
end, discarded all of them and posted your own implementation. Your issue is 
not that complex; you could have done that in the first place...


Romain



* Re: [Caml-list] Smart ways to implement worker threads
  2010-07-17 14:20   ` Romain Beauxis
@ 2010-07-17 15:52     ` Goswin von Brederlow
  0 siblings, 0 replies; 31+ messages in thread
From: Goswin von Brederlow @ 2010-07-17 15:52 UTC (permalink / raw)
  To: Romain Beauxis; +Cc: caml-list, Goswin von Brederlow

Romain Beauxis <toots@rastageeks.org> writes:

> Le samedi 17 juillet 2010 05:52:31, Goswin von Brederlow a écrit :
>> Now I have a queue that I can include in select. The take function can
>> be used by the worker threads. The main thread can use either take_all
>> or process to save on syscalls or stick with take to ensure the queue
>> does not starve the other FDs.
>> 
>> Also I think I can use the EventFD to terminate a queue and wake up all
>> listeners. Closing the EventFD should wake up any thread stuck in read
>> and prevent any threads from becoming stuck in the future. Haven't
>> tested that yet though.
>
> I don't see why you want to use EventFD and not Condition.wait/signal. In 
> particular, Condition.broadcast does exactly the last thing you mention.

Not quite. With Condition.wait I can only wait on exactly one
condition. With the EventFD I can use its Unix.file_descr together with
other Unix.file_descrs in Unix.select. That means I can wait both for new
input to come in over the sockets and for results to become available in
the queue with a single thread, which gets rid of quite a few mutexes.
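The EventFD module discussed here is the poster's own; as a portable stand-in, the same select-compatible queue can be sketched with a pipe (names like `squeue` are illustrative, not from the poster's code). Every push writes one byte to the pipe, so the read end can sit in a Unix.select set alongside the socket FDs:

```ocaml
(* A job queue whose readiness is visible to Unix.select via a pipe. *)
type 'a squeue = {
  mutex : Mutex.t;
  jobs  : 'a Queue.t;
  rd    : Unix.file_descr;  (* add this FD to the Unix.select read set *)
  wr    : Unix.file_descr;
}

let make () =
  let rd, wr = Unix.pipe () in
  { mutex = Mutex.create (); jobs = Queue.create (); rd; wr }

let push q x =
  Mutex.lock q.mutex;
  Queue.push x q.jobs;
  Mutex.unlock q.mutex;
  (* One byte per job: wakes any thread blocked in select/read on q.rd. *)
  ignore (Unix.write q.wr (Bytes.of_string "!") 0 1)

let take q =
  let b = Bytes.create 1 in
  ignore (Unix.read q.rd b 0 1);  (* blocks until something was pushed *)
  Mutex.lock q.mutex;
  let x = Queue.pop q.jobs in
  Mutex.unlock q.mutex;
  x
```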

> Now, I don't understand why you asked for related modules and and the end 
> discard all of them and post your own implementation. Your issue is not that 
> complex you could have done that in the first place...

I was fishing for solutions. There were some good ones, and thanks for
all the replies; they were not wasted. But none of them seems like a
really good fit for the code as I have it in mind. Some of the reasons
why only became clear to me now, after testing the waters and trying
things out.

For example, and the reason why EventFD seems like the way to go NOW: I
noticed that replies to requests are sometimes too large to be written
out in a single go, or simply come too fast to be written out without
blocking. That means I have to write them out in multiple chunks, and
then the sockets need to be protected so that chunks of replies from
different threads don't get mixed together, and so on.

If I use a queue with an EventFD instead of a Condition.t, then I can
add the queue to the select loop and any thread can drop a reply job
into that queue. The select thread will wake up, take the reply job, add
the data to the socket's outgoing buffer, add the socket's FD to the
select call and wait for it to become ready for writing. All without
needing mutexes to protect the socket, or signals to wake up and restart
the select, since the select thread and only the select thread will be
handling the sockets.
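The single-writer design described above could look roughly like this. All names are illustrative: only the select thread ever calls these functions, so no mutex is needed around the sockets or their buffers.

```ocaml
(* Per-socket outgoing buffers, owned exclusively by the select thread. *)
let out_buffers : (Unix.file_descr, Buffer.t) Hashtbl.t = Hashtbl.create 16

(* Called by the select thread when it takes a reply job off the queue. *)
let queue_reply fd data =
  let buf =
    try Hashtbl.find out_buffers fd
    with Not_found ->
      let b = Buffer.create 1024 in
      Hashtbl.add out_buffers fd b;
      b
  in
  Buffer.add_string buf data

(* Called when select reports fd writable: write what the kernel
   accepts now and keep the remainder for the next round. *)
let flush_some fd =
  let buf = Hashtbl.find out_buffers fd in
  let s = Buffer.contents buf in
  let n = Unix.write_substring fd s 0 (String.length s) in
  Buffer.clear buf;
  Buffer.add_substring buf s n (String.length s - n)
```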

MfG
        Goswin

PS: I still haven't decided 100% what the best solution is, but EventFD
currently seems to be in the lead. So if anyone has other ideas, keep
them coming.


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Caml-list] Smart ways to implement worker threads
  2010-07-16 16:18             ` [Caml-list] " Rich Neswold
@ 2010-07-17 17:53               ` Eray Ozkural
  0 siblings, 0 replies; 31+ messages in thread
From: Eray Ozkural @ 2010-07-17 17:53 UTC (permalink / raw)
  To: Rich Neswold; +Cc: Goswin von Brederlow, caml-list

It does encourage me to take a shot at using the Event module.

One of the promised advantages of object orientation was that event-driven programming and concurrent processes would fit together well.

Yet years later, event-driven programming exists mostly as the main loop of GUI libraries and is implemented in cumbersome imperative languages.

It has been suggested that traditional synchronization primitives are deficient. As with message-passing libraries, it's too easy to write incorrect code with them (since they assume all programmers must be naive enough to still code in C). I think it's so much easier to make a buggy implementation with POSIX threads that I advise against (having to) use them in any language.

In my experience with shared-memory programming I found it a better option to write code in OpenMP. Charm++-like spawn directives aren't too bad either, and they fit functional languages.

I suppose with some compiler help we could have such parallelization; maybe just some camlp4 macros are enough. After all, it's not a bad idea to isolate synchronization in the program, and I bet it can be done in a safe way. What would be needed to adapt the OpenMP idea to OCaml, assuming we have proper multicore support (a lock-free parallel garbage collector, real threads, etc.)?

Considering that architectural details cannot be so easily ignored on a multicore architecture, I wonder how best to do this. Perhaps some way to specify fine-grained parallelism would be more flexible (like fine-grained kernels in CUDA); I think architectural details would be important for parallel code optimizations.

We had used a thread/critical-region-based intermediate program representation for C, but I think we could do better with functional languages, working towards a fully implicitly parallel PL.

Of course there are various existing approaches to implementing explicit parallelism on top of OCaml. I wonder if the INRIA team would consider adapting those to a shared-memory architecture. Short of the high technology we need (implicit parallelism), we can be satisfied with cool functional and architecture-agnostic parallelism primitives.

So my wishlist is:

1) low level shared memory programming primitives and lang support
2) high level parallel programming facility that is completely independent of target architecture
3) implicit parallelism that uses an automatic parallelization approach

Cheers,

Eray


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Caml-list] Smart ways to implement worker threads
  2010-07-16  4:02       ` Goswin von Brederlow
  2010-07-16  4:23         ` Rich Neswold
@ 2010-07-17 18:34         ` Eray Ozkural
  2010-07-17 19:35           ` Goswin von Brederlow
  1 sibling, 1 reply; 31+ messages in thread
From: Eray Ozkural @ 2010-07-17 18:34 UTC (permalink / raw)
  To: Goswin von Brederlow; +Cc: Rich Neswold, caml-list

When I'm implementing a parallel/distributed algorithm I take care to make the communication code abstract enough to be re-used. Since abstractions in C-derivative languages (function pointers, templates, etc.) are a joke, one need not bother with this, as expected future code re-use isn't much.

On the other hand, in OCaml we have the best module system, which can be used to create advanced parallel data structures and algorithms. A shared-memory queue would be among the most trivial of such constructs. If we think of world domination we have to see the whole picture. Parallel programming is considered difficult because common programmers don't understand enough algebra to see that most problems could be solved by substituting an operator in a generic parallel algorithm template, or that optimal algorithms could be re-used combinatorially (consider the relation of a mesh topology to a linear topology with store-and-forward routing).

The fact is that an efficient parallel algorithm need not be long (theory suggests that the fastest is the shortest). It is our collective lack of creativity that usually makes it long.

Cheers,

Eray

PS: Parallelism is not tangential here. I believe it is unnecessary to implement asynchronous processes just for the sake of overlapping I/O and computation. That's like parallelism for a high school programming class.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Caml-list] Smart ways to implement worker threads
  2010-07-17 18:34         ` Eray Ozkural
@ 2010-07-17 19:35           ` Goswin von Brederlow
  2010-07-17 22:00             ` Eray Ozkural
  0 siblings, 1 reply; 31+ messages in thread
From: Goswin von Brederlow @ 2010-07-17 19:35 UTC (permalink / raw)
  To: Eray Ozkural; +Cc: Goswin von Brederlow, Rich Neswold, caml-list

Eray Ozkural <examachine@gmail.com> writes:

> When I'm implementing a parallel/dist algorithm I take care of making
> the communication code abstract enough to be re-used. Since
> abstraction in C derivative languages (function pointers, templates
> etc) are a joke; one need not bother with this as expected future code
> re-use isn't much.
>
> On the other hand in ocaml we have the best module system which can be
> used to create advanced parallel data structures and algorithms. A
> shared mem queue would be among the most trivial of such
> constructs. If we think of world domination we have to see the whole
> picture. Parallel programming is considered difficult because common
> programmers don't understand enough algebra to see that most problems
> could be solved by substituting an operator in a generic parallel
> algorithm template. Or that optimal algorithms could be re-used
> combinatorially (consider the relation of a mesh topology to a linear
> topology with s/f routing)

Yeah. I would love to have all the basic OCaml modules available in a
thread-safe flavour as well.

> The fact is that an efficient parallel algorithm need not be long
> (theory suggests that fastest is shortest). It is our collective lack
> of creativity that usually makes it long.
>
> Cheers,
>
> Eray
>
> Ps: parallelism is not tangential here. I believe it is unnecessary to
> implement asynchronous processes just for the sake of handling
> overlapping I/O and computation. That's like parallelism for high
> school programming class.

I'm a big fan of doing IO asynchronously in a single thread. Given that
OCaml can only run one thread at a time anyway, there is usually no speed
gain in using multiple threads. The overhead of making the code
thread-safe is big in complexity (if only we had thread-safe modules :)
and all you end up doing is waiting on more cores.

The exception is when you offload work to C code that can run within
enter_blocking_section() / leave_blocking_section() and you have enough
work to keep multiple cores busy. For example, doing blockwise sha256
sums with 4 cores is 3-3.8 times as fast as single-threaded, depending
on blocksize.
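That pattern could be sketched as follows. This is not runnable as-is: "caml_sha256_block" is a hypothetical C stub, and the point is that the C side wraps its hash loop in caml_enter_blocking_section() / caml_leave_blocking_section() so other OCaml threads can run concurrently.

```ocaml
(* Hypothetical binding: the Bigarray data is safe to touch from C
   with the runtime lock released, because the GC never moves it. *)
external sha256_block :
  (char, Bigarray.int8_unsigned_elt, Bigarray.c_layout) Bigarray.Array1.t
  -> string
  = "caml_sha256_block"

(* Each worker thread calls the stub on its own block; the real
   parallelism happens in C while the OCaml runtime lock is released. *)
```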

MfG
        Goswin


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Caml-list] Smart ways to implement worker threads
  2010-07-17 19:35           ` Goswin von Brederlow
@ 2010-07-17 22:00             ` Eray Ozkural
  0 siblings, 0 replies; 31+ messages in thread
From: Eray Ozkural @ 2010-07-17 22:00 UTC (permalink / raw)
  To: Goswin von Brederlow; +Cc: Goswin von Brederlow, Rich Neswold, caml-list

It looks like multi-core architectures will be around for a while, only to be superseded by more advanced parallel micro-architectures.

Of course computation-bound apps will benefit more, but this does not mean that memory-intensive apps will not run well on multi-core architectures. Carefully optimized code will run great on multi-core architectures with high memory bandwidth, depending on the application of course.

Right now it seems to me that shared memory is more suitable for data-mining apps (and many others, including scientific computing apps). Data mining has never been embarrassingly parallel, and it's difficult to get good speedup on distributed architectures. I should know: I've worked on the parallel solution of two such problems. Provided that the space complexity is manageable, I think such network-bound cluster algorithms have more efficient counterparts on multi-core. In data-mining algorithms some data may have to be replicated or communicated. If the space doesn't blow up, the NVIDIA Fermi would be ideal for those problems. No more copying. On the new Tesla the maximum memory will be 6x4=24 GB.

For all I care Intel is obsolete until they give me a thousand cores.

Shared memory support is essential at any rate. I don't see why we can't have it in an excellent way!

Best,

Eray


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Caml-list] Smart ways to implement worker threads
  2010-07-16 13:02           ` Goswin von Brederlow
  2010-07-16 14:40             ` Dawid Toton
  2010-07-16 16:18             ` [Caml-list] " Rich Neswold
@ 2010-07-20  4:54             ` Satoshi Ogasawara
  2 siblings, 0 replies; 31+ messages in thread
From: Satoshi Ogasawara @ 2010-07-20  4:54 UTC (permalink / raw)
  To: Goswin von Brederlow; +Cc: Rich Neswold, caml-list


On 2010/07/16, at 22:02, Goswin von Brederlow wrote:
> Urgs, so what happens if I call "sync (send ...)" twice without the
> other end calling recieve? Lets test:
> 
> let ch = Event.new_channel ()
> ...

That's not a good use of synchronous channels. If you want asynchronous
behaviour, try the Mbox module in the ccell (concurrent cell) library:
https://forge.ocamlcore.org/scm/viewvc.php/trunk/mbox.mli?view=markup&root=ccell

open Printf
open Ccell
open Event

let mbox = Mbox.make ()

let receiver () =
  for i = 0 to 10 do
    printf "received %d\n%!" (sync (Mbox.pop mbox));
    Thread.delay 2.;
  done
    
let _ =
  ignore (Thread.create receiver ());
  for i = 0 to 10 do
    printf "sending %d\n%!" i;
    sync (Mbox.push mbox i);
    Thread.delay 1.;
  done

wednesday:tmp osiire$ ocamlc -thread unix.cma threads.cma -I +site-lib/ccell ccell.cma async.ml && ./a.out
sending 0
received 0
sending 1
received 1
sending 2
sending 3
received 2
sending 4
sending 5
received 3
sending 6
sending 7
received 4
sending 8
sending 9
received 5
sending 10

    
With the Mbox module you can also wait for, and select among, the results of long calculations, like this:

open Printf
open Ccell
open Event

let rec forever f x = 
  let v = f x in forever f v

let spawn_loop f x =
  ignore (Thread.create (forever f) x)

let make_worker f =
  let input, output = Mbox.make (), Mbox.make () in
  let work () =
    sync (Mbox.push output (f (sync (Mbox.pop input))));
  in
  spawn_loop work ();
  input, output

let request (input, _) p =
  sync (Mbox.push input p)

let response (_, output) =
  Mbox.pop output

let worker1 = make_worker (printf "action worker1 %d\n%!")
let worker2 = make_worker (printf "action worker2 %d\n%!")
    
let after_long_calc (e1, e2) =
  select [
    wrap e1 (fun _ -> printf "after work1\n%!"; (response worker1, e2));
    wrap e2 (fun _ -> printf "after work2\n%!"; (e1, response worker2));
  ]

let _ =
  spawn_loop after_long_calc (response worker1, response worker2);
  request worker1 1;
  Thread.delay 1.;
  request worker2 2;
  Thread.delay 1.;
  request worker2 3;
  Thread.delay 1.;
  request worker1 4;
  Thread.delay 2.

wednesday:tmp osiire$ ocamlc -thread unix.cma threads.cma -I +site-lib/ccell ccell.cma worker.ml && ./a.out
action worker1 1
after work1
action worker2 2
after work2
action worker2 3
after work2
action worker1 4
after work1


I hope this will be helpful for you.
  
---
  satoshi ogasawara


^ permalink raw reply	[flat|nested] 31+ messages in thread

end of thread, other threads:[~2010-07-20  4:54 UTC | newest]

Thread overview: 31+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2010-07-14 16:09 Smart ways to implement worker threads Goswin von Brederlow
2010-07-15 15:58 ` [Caml-list] " Rich Neswold
2010-07-15 16:19   ` David McClain
2010-07-15 17:16   ` Ashish Agarwal
2010-07-15 18:24   ` Goswin von Brederlow
2010-07-15 18:37     ` David McClain
2010-07-15 18:40     ` David McClain
2010-07-15 19:56     ` Rich Neswold
2010-07-16  4:02       ` Goswin von Brederlow
2010-07-16  4:23         ` Rich Neswold
2010-07-16 13:02           ` Goswin von Brederlow
2010-07-16 14:40             ` Dawid Toton
2010-07-16 16:18             ` [Caml-list] " Rich Neswold
2010-07-17 17:53               ` Eray Ozkural
2010-07-20  4:54             ` Satoshi Ogasawara
2010-07-17 18:34         ` Eray Ozkural
2010-07-17 19:35           ` Goswin von Brederlow
2010-07-17 22:00             ` Eray Ozkural
2010-07-15 16:32 ` Romain Beauxis
2010-07-15 17:46   ` Goswin von Brederlow
2010-07-15 18:44     ` Romain Beauxis
2010-07-16  3:52       ` Goswin von Brederlow
2010-07-16  4:19         ` Romain Beauxis
2010-07-16 13:05           ` Goswin von Brederlow
2010-07-16 13:20             ` Romain Beauxis
2010-07-17  9:07               ` Goswin von Brederlow
2010-07-17 13:51                 ` Romain Beauxis
2010-07-17 14:08                   ` Goswin von Brederlow
2010-07-17  9:52 ` Goswin von Brederlow
2010-07-17 14:20   ` Romain Beauxis
2010-07-17 15:52     ` Goswin von Brederlow
