caml-list - the Caml user's mailing list
From: Christophe TROESTLER <debian00@tiscalinet.be>
To: caml-list@inria.fr
Subject: [Caml-list] Re: float boxing (was: matrix-matrix multiply)
Date: Wed, 23 Oct 2002 12:16:26 +0200 (CEST)
Message-ID: <20021023.121626.64579491.debian00@tiscalinet.be>
In-Reply-To: <20021020215847.GA8750@beech>

On Sun, 20 Oct 2002, Issac Trotts <ijtrotts@ucdavis.edu> wrote:
> 
> You might try converting your references to mutable fields.
>
> let x = ref 1.0 in
> let n = int_of_string Sys.argv.(1) in
> for i = 1 to n do x := !x +. 1.0 done
> 
> ./ref 100000000  2.51s user 0.00s system 99% cpu 2.515 total
> 
> type t = { mutable f:float };;
> let x = { f = 1.0 } in
> let n = int_of_string Sys.argv.(1) in
> for i = 1 to n do x.f <- x.f +. 1.0 done
>
> ./ref2 100000000  1.54s user 0.01s system 100% cpu 1.542 total

A few questions in view of this.  First, on my machine (AMD Athlon
1GHz running GNU/Linux), the timings actually favor ref.ml:

time ./ref 100000000
real    0m1.279s
user    0m1.280s
sys     0m0.000s

time ./ref2 100000000
real    0m1.411s
user    0m1.380s
sys     0m0.000s

What could be a reason for that?

Second, aren't references optimized when their type is statically
known to be float ref?  (I thought so; please confirm or correct.)
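
My (possibly wrong) understanding of the difference, sketched in code
(fref and its use below are made up for illustration): 'a ref is a
record with a polymorphic mutable field, so at type float ref its
contents stay boxed, whereas a record whose field is statically a
float benefits from the unboxed all-float record layout:

    (* 'a ref is declared as  type 'a ref = { mutable contents : 'a };
       the field has type 'a, so a float ref holds a pointer to a
       boxed double.  In a monomorphic clone the field is statically
       a float, so the record stores the raw double directly. *)
    type fref = { mutable v : float }

    let () =
      let x = { v = 1.0 } in
      let n = int_of_string Sys.argv.(1) in
      for _i = 1 to n do x.v <- x.v +. 1.0 done;
      print_float x.v; print_newline ()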

It seems to me there are three main issues concerning floats:
* storing       (avoid unnecessary indirections but take care of GC)
* comparisons   (= is not reflexive in IEEE 754 arithmetic; see below)
* conversions
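
For concreteness, the non-reflexivity I mean (a two-line illustration,
made up for this mail):

    let () =
      let x = 0.0 /. 0.0 in       (* a NaN *)
      assert (not (x = x))        (* (=) follows IEEE 754: NaN <> NaN *)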

About "conversions", float : int -> float seems to be slow (compared
to a cast in C at least).  Is there any way to optimize it when it
intervene in an algebraic expression?  (Frequently on has to write
things like: for i=0 to n do let x = float i / float n in ... done)
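
To make the pattern concrete, here is an illustrative loop (the
function name is made up); the only manual optimization I see is to
hoist the loop-invariant conversion of n, while the conversion of i
still has to be paid on every iteration:

    let sum_of_fractions n =
      let fn = float_of_int n in          (* hoisted: does not depend on i *)
      let acc = ref 0.0 in
      for i = 0 to n do
        let x = float_of_int i /. fn in   (* per-iteration conversion remains *)
        acc := !acc +. x
      done;
      !acc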

I understand that float values need to be boxed so that they can be
passed to polymorphic functions.  Let me picture it as

    +--------+  f : float -> float   +--------+
    |        | --------------------> |        |
    +--------+                       +--------+
        |                                |
        V                                V
    +--------+                       +--------+
    | double |                       | double |
    +--------+                       +--------+

However, couldn't we imagine that functions with float arguments or a
float return value have "two interfaces": the standard one (which
passes the pointer) and another one (which passes the value directly):

    +--------+  f : float -> float   +--------+
    |        | --------------------> |        |
    +--------+                       +--------+
        |                                |
        V                                V
    +--------+  f': float -> float   +--------+
    | double | --------------------> | double |
    +--------+                       +--------+

The _idea_, in C parlance, is to declare f(x) as &(f'(*x)).  Now, the
boxing should allow the GC to take care of these values.  But, if a
function returning a float feeds another function expecting a float,
the compiler could connect the "bottom lines" instead of passing
through the pointers:

     f : 'a -> float   +--------+      +--------+  g : float -> 'b
    -----------------> |        |      |        | -----------------> 
                       +--------+      +--------+                   
                           |               |                        
                           V               V                        
     f': 'a -> float   +--------+      +--------+  g': float -> 'b   
    -----------------> | double |----->| double | -----------------> 
                       +--------+      +--------+ 
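
To make this concrete at the source level (only an analogy; the real
mechanism would live in the compiler's calling conventions, and the
names box, f', g', etc. are made up), one can model the box explicitly:

    (* [box] stands for the heap-allocated double *)
    type box = { value : float }

    (* the "bottom line" versions work on raw floats *)
    let f' x = x +. 1.0
    let g' x = x *. 2.0

    (* the standard interfaces wrap them, allocating a box per result *)
    let f b = { value = f' b.value }
    let g b = { value = g' b.value }

    (* going through the boxed interfaces allocates an intermediate box... *)
    let boxed_pipeline b = g (f b)

    (* ...while connecting the bottom lines boxes only the final result *)
    let unboxed_pipeline b = { value = g' (f' b.value) }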

This kind of idea could also apply to recursive functions passing
float values along the recursion...
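
For instance (again just an illustration, with made-up names), a
tail-recursive loop whose float accumulator is passed along the
recursion:

    (* with an unboxed calling convention, acc could stay a raw double
       instead of being boxed at every recursive call *)
    let rec sum acc i n =
      if i > n then acc
      else sum (acc +. float_of_int i) (i + 1) n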

My question is: is this kind of idea workable?  Is it difficult to
implement?  (In a way it just generalizes the special treatment of
arithmetic expressions.)  Maybe it can be generalized further to put
float references & equality under the same umbrella?

Bear in mind I am not a compiler expert (and even the people giving
compiler courses here are not very helpful), so my questions are also
a way for me to learn a little something...

Cheers,
ChriS


Thread overview: 8+ messages
2002-10-18 20:09 [Caml-list] matrix-matrix multiply - O'Caml is 6 times slower than C Paul Stodghill
2002-10-20  9:42 ` Xavier Leroy
2002-10-20 18:06   ` Paul Stodghill
2002-10-20 21:58     ` Issac Trotts
2002-10-23 10:16       ` Christophe TROESTLER [this message]
2002-10-23 12:08         ` [Caml-list] Re: float boxing (was: matrix-matrix multiply) malc
2002-10-24  0:00           ` malc
2002-10-21 12:53     ` [Caml-list] matrix-matrix multiply - O'Caml is 6 times slower than C Xavier Leroy
