caml-list - the Caml user's mailing list
From: "Jon Harrop" <jon@ffconsultancy.com>
To: <frederic.gava@univ-paris-est.fr>, <caml-list@inria.fr>
Subject: RE: [Caml-list] Parallel n-queens solver
Date: Mon, 25 Apr 2011 20:30:34 +0100
Message-ID: <017001cc037f$3fdb5350$bf91f9f0$@ffconsultancy.com>
In-Reply-To: <4DB5A327.90409@univ-paris-est.fr>

Frédéric wrote:
> It is the well-known "skeleton paradigm" and the BSP model (both around since 1990)
> 
> Skeletons:
> http://en.wikipedia.org/wiki/Algorithmic_skeleton
> http://homepages.inf.ed.ac.uk/mic/Skeletons/index.html
> 
> BSP:
> http://en.wikipedia.org/wiki/Bulk_Synchronous_Parallel
> http://www.bsp-worldwide.org/

Very true. Subsequent fiddling turned up the following F# solution, which uses a sequential pair of calls to Array.Parallel.init (i.e. bulk synchronous parallelism directly) instead of data flow parallelism:

  let run n =
    let timer = System.Diagnostics.Stopwatch.StartNew()
    // Phase 1: task k enumerates boards with the previously defined solve,
    // applies transformations and buckets each board by its first two entries.
    let solve k =
      let sols = Array.init (n*n) (fun _ -> ResizeArray())
      solve k n (fun b ->
        let b = transformations b
        sols.[b.[1]*n + b.[0]].Add b)
      sols |> Array.map (fun bucket -> bucket.ToArray())
    let sols =
      // First bulk-synchronous step: n tasks in parallel, one per k.
      let sols = Array.Parallel.init n solve
      // Second bulk-synchronous step: n*n tasks merge the per-task buckets and
      // count the distinct solutions using structural hashing.
      Array.Parallel.init (n*n) (fun i ->
        let s = System.Collections.Generic.HashSet(HashIdentity.Structural)
        for taskSols in sols do
          let bucket = taskSols.[i]
          for j = 0 to bucket.Length-1 do
            s.Add bucket.[j] |> ignore
        s.Count)
      |> Array.sum
    printfn "%d solutions to the %d-queens problem in %gs" sols n timer.Elapsed.TotalSeconds

That old "invoke" combinator, which evaluates a function application in a forked process and marshals the result back, could be used to implement a naïve Parallel.init, so this solution could be translated to OCaml quite easily. However, it would still lack load balancing. That turns out to be OK in this particular case because the generated data is quite evenly distributed (and we assume that nothing else is running on the machine), but real applications might not be so lucky...
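
For what it's worth, a minimal OCaml sketch of such an invoke combinator, assuming Unix.fork, a pipe and Marshal; the names invoke and parallel_init and the details are illustrative, not the original caml-list code, and a real implementation would also handle exceptions in the child and inherited descriptors:

  (* Run [f x] in a forked child and return a thunk that reads the
     marshalled result back from a pipe. *)
  let invoke (f : 'a -> 'b) (x : 'a) : unit -> 'b =
    let input, output = Unix.pipe () in
    match Unix.fork () with
    | 0 ->
        (* Child: compute, marshal the result to the parent, exit. *)
        Unix.close input;
        let oc = Unix.out_channel_of_descr output in
        Marshal.to_channel oc (f x) [];
        close_out oc;
        exit 0
    | pid ->
        (* Parent: return a thunk that unmarshals the child's result
           and reaps the child. *)
        Unix.close output;
        let ic = Unix.in_channel_of_descr input in
        fun () ->
          let v : 'b = Marshal.from_channel ic in
          close_in ic;
          ignore (Unix.waitpid [] pid);
          v

  (* Naive Parallel.init: fork one child per index, then force the
     results in order. No load balancing. *)
  let parallel_init n f =
    let pending = Array.init n (invoke f) in
    Array.map (fun force -> force ()) pending

The first phase above would then become something like parallel_init n solve, at the cost of marshalling the buckets between processes.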

It turns out you can also rely entirely on data flow parallelism by replacing Array.Parallel.init with PSeq.init. In theory, this allows the second phase to run concurrently with the first, consuming results as they are generated. In practice, I don't see a speedup.

Cheers,
Jon.




Thread overview: 5+ messages
2011-04-24 23:32 Jon Harrop
2011-04-25 10:36 ` Gerd Stolpmann
2011-04-25 16:36   ` Frédéric Gava
2011-04-25 19:30     ` Jon Harrop [this message]
2011-04-25 23:33       ` Eray Ozkural
