On Sat, Apr 23, 2011 at 4:47 PM, Alexy Khrabrov <deliverable@gmail.com> wrote:

> On Apr 23, 2011, at 6:17 AM, Eray Ozkural wrote:

>> I don't really care what others say, but to prove that this has any performance value, you should do the following:
>>
>> Compare your most "parallel" algorithm with the performance of a corresponding well-written MPI application using Open MPI's shared-memory transport. If there is a difference, then your system has some value.
>>
>> Of course, Open MPI's shared-memory transport is terribly buggy, but it should give a baseline of acceptable performance.
>>
>> If there is no comparison, we have no idea.

> The problem with "implement in MPI and compare" is that you have to re-architect a sequential program for a completely different model. By contrast, with shared-memory parallelism it's often just a matter of swapping map for pmap.

Incorrect. In parallel computing we always compare against sequential code; that comparison is called "speedup".

And doubly incorrect, because here we would not be comparing against sequential code but against another claimed shared-memory parallel implementation. It's only logical to compare the two approaches on the same hardware.
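
For concreteness, the usual definitions: with T_seq the time of the best sequential implementation and T_par(p) the time of the parallel version on p processors, measured on the same input and the same hardware,

    S(p) = T_seq / T_par(p)        (speedup)
    E(p) = S(p) / p                (parallel efficiency)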
 

> I really recommend that everybody interested in parallelism learn Clojure and try it on a small problem. You can replace a single map with pmap in a suitable setting and observe a not-quite-linear but proportional speedup.

Of course, functional programming fits such parallelism very well. It's a shame that OCaml does not have parallel functional primitives.
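
As a rough sketch of what such a primitive could look like (purely illustrative: it assumes OCaml 5's Domain module, which did not exist at the time of this thread, and parallel_map, the domains parameter, and the chunking scheme are made up for the example, not an existing API):

(* A naive chunked parallel map: split the input into one chunk per
   domain, map each chunk in its own domain, then join in order. *)
let parallel_map ?(domains = 4) f xs =
  let arr = Array.of_list xs in
  let n = Array.length arr in
  let chunk = (n + domains - 1) / domains in
  let work lo =
    let hi = min n (lo + chunk) in
    if lo >= hi then [||]
    else Array.init (hi - lo) (fun i -> f arr.(lo + i))
  in
  let handles =
    List.init domains (fun d -> Domain.spawn (fun () -> work (d * chunk)))
  in
  (* Joining in list order preserves the order of the input. *)
  handles |> List.map Domain.join |> Array.concat |> Array.to_list

(* Used like List.map, in the spirit of the map -> pmap swap above: *)
let _squares = parallel_map (fun x -> x * x) [1; 2; 3; 4; 5; 6; 7; 8]

The speedup from such a naive version is of course limited by chunk imbalance and by the cost of building the intermediate arrays.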
 

> I'd be really happy if OCaml got the mechanisms Clojure has.


It'd be even better if such explicit parallelism had good compiler support, too :) I don't know much about Clojure, but I wouldn't use anything that runs on the JVM for a parallel program. That might be like first turning your computer into a Commodore 64 and then getting some speedup.


Best, 


--
Eray Ozkural, PhD candidate.  Comp. Sci. Dept., Bilkent University, Ankara
http://groups.yahoo.com/group/ai-philosophy
http://myspace.com/arizanesil http://myspace.com/malfunct