caml-list - the Caml user's mailing list
From: Xavier Leroy <Xavier.Leroy@inria.fr>
To: Erick Matsen <matsen@berkeley.edu>
Cc: caml-list@inria.fr
Subject: Re: [Caml-list] speeding up matrix multiplication (newbie question)
Date: Fri, 20 Feb 2009 19:46:22 +0100
Message-ID: <499EFA7E.7010400@inria.fr>
In-Reply-To: <243054520902200740j1d1874adib27645f6e090b073@mail.gmail.com>

> I'm working on speeding up some code, and I wanted to check with
> someone before implementation.
> 
> As you can see below, the code primarily spends its time multiplying
> relatively small matrices. Precision is of course important but not
> an incredibly crucial issue, as the most important thing is relative
> comparison between things which *should* be pretty different.

You need to post your matrix multiplication code so that the regulars
on this list can tear it to pieces :-)

From the profile you gave, it looks like you parameterized your matrix
multiplication code over the + and * operations over matrix elements.
This is good for genericity but not so good for performance, as it
will result in more boxing (heap allocation) of floating-point values.
The first thing you should try is to write a version of matrix
multiplication that is specialized for type "float".
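For instance, something along these lines (just a sketch; since you
didn't post your code, I'm guessing at a float array array
representation):

(* Generic over the element operations, e.g.
     mat_mul_gen ~zero:0.0 ~add:(+.) ~mul:( *. ) a b
   With float elements, every read from the polymorphic arrays and every
   result of [add]/[mul] is a boxed float, so the inner loop allocates. *)
let mat_mul_gen ~zero ~add ~mul a b =
  let n = Array.length a and p = Array.length b.(0) and k = Array.length b in
  Array.init n (fun i ->
    Array.init p (fun j ->
      let s = ref zero in
      for l = 0 to k - 1 do
        s := add !s (mul a.(i).(l) b.(l).(j))
      done;
      !s))

(* Specialized to float: the arrays are flat, unboxed float arrays, so
   the reads, the arithmetic and the writes into [c] stay unboxed. *)
let mat_mul_float (a : float array array) (b : float array array) =
  let n = Array.length a and p = Array.length b.(0) and k = Array.length b in
  let c = Array.make_matrix n p 0.0 in
  for i = 0 to n - 1 do
    for j = 0 to p - 1 do
      for l = 0 to k - 1 do
        c.(i).(j) <- c.(i).(j) +. a.(i).(l) *. b.(l).(j)
      done
    done
  done;
  c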

Then, there are several ways to write the textbook matrix
multiplication algorithm, some of which perform less boxing than
others.  Again, post your code and we'll let you know.
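Purely as an illustration (same guessed float array array
representation as above), one common rewrite:

(* Swap the two inner loops and hoist the rows out of them.  The inner
   loop then runs over three flat float arrays with stride 1, avoiding
   repeated row indirections and behaving better in the cache.  (A
   variant that accumulates in a local ref can allocate more, since a
   ref cell is a polymorphic record; how much of that the compiler
   optimizes away depends on the version.) *)
let mat_mul_float_rows (a : float array array) (b : float array array) =
  let n = Array.length a and p = Array.length b.(0) and k = Array.length b in
  let c = Array.make_matrix n p 0.0 in
  for i = 0 to n - 1 do
    let ai = a.(i) and ci = c.(i) in
    for l = 0 to k - 1 do
      let ail = ai.(l) and bl = b.(l) in
      for j = 0 to p - 1 do
        ci.(j) <- ci.(j) +. ail *. bl.(j)
      done
    done
  done;
  c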

> Currently I'm just using native (double-precision) ocaml floats and
> the native ocaml arrays for a first pass on the problem.  Now I'm
> thinking about moving to using float32 bigarrays, and I'm hoping
> that the code will double in speed. I'd like to know: is that
> realistic? Any other suggestions?

It won't double in speed: arithmetic operations will take exactly the
same time in single or double precision.  What single-precision
bigarrays buy you is halving the memory footprint of your matrices.
That could result in better cache behavior and therefore slightly
better speed, but it depends very much on the sizes and number of your
matrices.
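To illustrate the difference (made-up sizes, not your code; link
against the bigarray library):

module A2 = Bigarray.Array2

let () =
  let m64 = A2.create Bigarray.float64 Bigarray.c_layout 100 100 in  (* 8 bytes/elt *)
  let m32 = A2.create Bigarray.float32 Bigarray.c_layout 100 100 in  (* 4 bytes/elt *)
  A2.fill m64 1.0;
  A2.fill m32 1.0;
  (* Reads of either kind come back as ordinary OCaml floats, i.e.
     doubles, so the arithmetic you perform on them is double precision
     either way; only the in-memory footprint of the float32 matrix is
     halved. *)
  Printf.printf "m64.{0,0} = %g   m32.{0,0} = %g\n" m64.{0,0} m32.{0,0}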

- Xavier Leroy


Thread overview: 12+ messages
2009-02-20 15:40 Erick Matsen
2009-02-20 15:45 ` [Caml-list] " RABIH.ELCHAAR
2009-02-20 17:46 ` Jon Harrop
2009-02-20 18:46 ` Xavier Leroy [this message]
2009-02-20 19:53   ` Erick Matsen
2009-02-20 21:21     ` Will M. Farr
2009-02-20 21:37     ` Martin Jambon
2009-02-20 22:23     ` Mike Lin
2009-02-20 22:30       ` Will M. Farr
2009-02-20 22:43         ` Markus Mottl
2009-02-23 22:59           ` Erick Matsen
2009-02-20 22:43         ` Mike Lin
