caml-list - the Caml user's mailing list
From: Robert Roessler <roessler@rftp.com>
To: Caml-list <caml-list@inria.fr>
Subject: Re: [Caml-list] caml_oldify_local_roots takes 50% of the total runtime
Date: Fri, 27 Oct 2006 13:05:14 -0700
Message-ID: <4542667A.4040700@rftp.com>
In-Reply-To: <454206CF.1080306@janestcapital.com>

Brian Hurt wrote:
> With respect to 4146 (minor heap size adjusts to memory size)- I'm not 
> sure this is a good idea.  32K is small enough to fit into L1 cache with 
> space left over on pretty much all systems these days (64K L1 cache 
> seems to be standard).  Having the minor heap small enough to fit into 
> L1 cache, and thus having the minor heap live in L1 cache, seems to me 
> to be a big advantage.  If for no other reason than it makes minor 
> collections a lot faster- the collector doesn't have to fault memory 
> into cache, it's already there.
> 
> If anything, I'd be inclined to add more generations.  I haven't done 
> any timings, but my first guess for reasonable values would be: Gen1 
> (youngest): 32K (fits into L1 cache), Gen2: 256K (fits into L2 cache), 
> Gen3: 8M (fits into memory), Gen4 (oldest): as large as necessary.

I have not yet looked at the details of OCaml's memory management, but 
in my commercial Smalltalk-80 implementation, I used a simple scheme:

Twin 32K "young" spaces... allocations come from one of these (like 
OCaml's minor heap, using trivial and fast allocation code) until it 
is exhausted.  Then all of the reachable [young] objects are copied 
to the other [empty] "young" space.  Eventually, a more or less 
conventional mark-and-sweep is performed on the "old" space.

Two adjustable parameters control the policy: 1) how many times an 
object ping-pongs between the two "young" spaces before it is 
promoted to the "old" space (4 was a reasonable number), and 2) how 
large an allocation must be before it is created directly as "old" 
(this limits consumption of the "young" spaces at the expense of 
some premature "old" promotions).

This approach allowed an 8MHz 286 to perform at close to Xerox 
"Dolphin" speeds, while using [typically] less than 1% of total time 
for memory management.

Humorously, the 32K was chosen not because it would fit into cache 
(there wasn't any), but because it would allow both "young" spaces to 
fit into a single x86 64K "segment" - remember those? :)

Robert Roessler
robertr@rftp.com
http://www.rftp.com


Thread overview: 13+ messages
2006-10-25 15:49 Hendrik Tews
2006-10-25 18:21 ` [Caml-list] " Markus Mottl
2006-10-25 19:01   ` skaller
2006-10-26  9:50     ` Hendrik Tews
2006-10-26 13:48     ` Gerd Stolpmann
2006-10-27 11:36       ` Hendrik Tews
2006-10-27 13:17         ` Brian Hurt
2006-10-27 20:05           ` Robert Roessler [this message]
2006-10-27 21:16           ` Pierre Etchemaïté
2006-10-30  7:50           ` Hendrik Tews
2006-10-26  9:47   ` Hendrik Tews
2006-10-26 14:54     ` Markus Mottl
2006-12-15 10:56 ` Hendrik Tews
