caml-list - the Caml user's mailing list
From: Brian Hurt <bhurt@spnz.org>
To: Jacques Garrigue <garrigue@kurims.kyoto-u.ac.jp>,
	Ocaml Mailing List <caml-list@inria.fr>
Subject: Re: [Caml-list] Object-oriented access bottleneck
Date: Mon, 8 Dec 2003 11:51:35 -0600 (CST)
Message-ID: <Pine.LNX.4.44.0312081136040.5009-100000@localhost.localdomain>
In-Reply-To: <20031208100725E.garrigue@kurims.kyoto-u.ac.jp>

On Mon, 8 Dec 2003, Jacques Garrigue wrote:

> OK, let's make a few things clear.
> 
> First to Brian Hurt:
> I do not fully understand the details of your hashing approach, but be
> assured that virtual method calls are already fast enough. 

There's less there than meets the eye.  It's basically a single-level hash 
table.  The compiler can compute the hash of the method name at compile 
time.  Every object stores a vtable pointer.  The vtable holds its own 
size followed by the hash table: an array of pairs of hash values and 
function pointers.  Note that I require all tables to have a power-of-two 
size, so that I can store size - 1 and the modulo becomes a bitwise and.
An object having "extra" methods is no problem: the extras are simply 
ignored.  Many error conditions are ruled out by the type checker, and so 
are never checked for at runtime.  Actually, instead of storing the hash 
value, the method name should be stored as a string; this handles the case 
of two methods whose names hash to the same value.
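A rough C sketch of the scheme, assuming open addressing to resolve collisions (all names and sizes here are hypothetical, not from any real implementation):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Sketch of a single-level hashed vtable.  The table size is a power
 * of two, so "hash mod size" reduces to "hash & (size - 1)". */

typedef void (*method_fn)(void *self);

typedef struct {
    const char *name;   /* storing the name, not the hash, resolves collisions */
    method_fn   fn;
} slot;

typedef struct {
    uint32_t mask;      /* table size - 1; size must be a power of two */
    slot     slots[8];  /* toy fixed size, itself a power of two */
} vtable;

/* A toy string hash; a compiler would fold this at compile time. */
static uint32_t hash_name(const char *s) {
    uint32_t h = 5381;
    while (*s) h = h * 33u + (uint8_t)*s++;
    return h;
}

static void vtable_add(vtable *vt, const char *name, method_fn fn) {
    uint32_t i = hash_name(name) & vt->mask;
    while (vt->slots[i].name != NULL)      /* linear probing on collision */
        i = (i + 1) & vt->mask;
    vt->slots[i].name = name;
    vt->slots[i].fn = fn;
}

static method_fn lookup(const vtable *vt, const char *name) {
    uint32_t i = hash_name(name) & vt->mask;
    /* No missing-method check: the type checker guarantees presence. */
    while (strcmp(vt->slots[i].name, name) != 0)
        i = (i + 1) & vt->mask;
    return vt->slots[i].fn;
}

static void get_x(void *self) { (void)self; }
static void set_x(void *self) { (void)self; }
```

An object with "extra" methods just carries a larger table; callers that never look those names up are unaffected.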

> Indeed
> there is some hashing going around, and currently a two-level sparse
> array vtable, such that you can find the method code just with 4 load
> instructions starting from the object (one of them is to get the
> method label from the environment). This is pretty optimal in terms of
> performance.
> 
> When I said no serious optimization, I meant no known function call
> (meaning that all method calls with arguments must go through an
> indirection through caml_applyN, which adjusts the number of arguments
> to the closure, and of course a big loss in path prediction), and no
> inlining.
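For concreteness, here is a hedged C sketch of the four-load path described above; the two-level layout and the constants are illustrative guesses, not the actual OCaml runtime structures:

```c
#include <assert.h>

/* Hypothetical two-level sparse method table.  The label is split
 * into a level-1 index and an offset within a level-2 row. */

#define BUCKET_BITS 5
#define BUCKET_MASK ((1 << BUCKET_BITS) - 1)

typedef struct { void *code; } method;
typedef struct { method **buckets; } vtbl;   /* level-1 array of rows */
typedef struct { vtbl *vt; } object;

static void *method_lookup(const object *o, int label) {
    /* Four dependent loads: the label from the closure environment
     * (modelled here as the parameter), the object's vtable pointer,
     * the level-1 row, and the method's code pointer. */
    return o->vt->buckets[label >> BUCKET_BITS][label & BUCKET_MASK].code;
}
```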

How much slower is a member function call compared to a non-member 
function call?  I was given the impression that it was significantly 
slower, but you're making it sound like it isn't.

> Depending on the architecture, a micro-benchmark gives between 30 and
> 50 cycles for a one-argument call (i.e. including the call to
> caml_apply2).
> 
> And inlining matters, if you think of all these methods that are just
> getting or setting an instance variable.

That's about the only place inlining matters.

> That's where we are a bit in a chicken and egg problem: if there is no
> program, there is nothing to optimize, but if it is not optimized,
> nobody will write performance critical code to start with...
> (Some people have sent me pointers to code I should look at)

I'm writing a place and route program in OCaml, and hitting a lot of 
places where I'd like to express things as objects.  The code is only 
mildly sensitive to the cost of member function calls.  If member 
function calls aren't much more expensive than ordinary calls (and 4 
loads qualifies as not much more expensive), then I'm going to be using a 
lot more objects.

But it sounds like this is already the case.

-- 
"Usenet is like a herd of performing elephants with diarrhea -- massive,
difficult to redirect, awe-inspiring, entertaining, and a source of
mind-boggling amounts of excrement when you least expect it."
                                - Gene Spafford 
Brian

-------------------
To unsubscribe, mail caml-list-request@inria.fr Archives: http://caml.inria.fr
Bug reports: http://caml.inria.fr/bin/caml-bugs FAQ: http://caml.inria.fr/FAQ/
Beginner's list: http://groups.yahoo.com/group/ocaml_beginners


Thread overview: 29+ messages
2003-12-07  2:39 Nuutti Kotivuori
2003-12-07  2:59 ` Nicolas Cannasse
2003-12-07 11:22   ` Benjamin Geer
2003-12-07 14:12     ` Nicolas Cannasse
2003-12-07 18:04   ` Nuutti Kotivuori
2003-12-07 10:27 ` Jacques Garrigue
2003-12-07 19:46   ` Nuutti Kotivuori
2003-12-08  1:07     ` Jacques Garrigue
2003-12-08 15:08       ` Nuutti Kotivuori
2003-12-08 15:42         ` Richard Jones
2003-12-09  0:26           ` Nicolas Cannasse
2003-12-09 12:10             ` Nuutti Kotivuori
2003-12-09 13:17               ` Olivier Andrieu
2003-12-09 13:53                 ` Nuutti Kotivuori
2003-12-08 17:51       ` Brian Hurt [this message]
2003-12-08 18:19         ` brogoff
2003-12-08 20:09           ` Brian Hurt
2003-12-08 19:02         ` Xavier Leroy
2003-12-08 21:37           ` Brian Hurt
2003-12-08 21:06             ` Nuutti Kotivuori
2003-12-08 22:30             ` malc
2003-12-07 18:23 ` Brian Hurt
2003-12-07 18:14   ` Nuutti Kotivuori
2003-12-07 19:30     ` Brian Hurt
2003-12-07 23:50       ` Abdulaziz Ghuloum
2003-12-08 17:29         ` Brian Hurt
2003-12-08 18:48           ` Nuutti Kotivuori
2003-12-08 10:17       ` Nuutti Kotivuori
2003-12-08 19:51       ` skaller
