caml-list - the Caml user's mailing list
From: Christopher Kauffman <kauffman@cs.umn.edu>
To: caml-list@yquem.inria.fr
Subject: Garbage collection and caml_adjust_gc_speed
Date: Mon, 28 Aug 2006 23:16:39 -0500
Message-ID: <44F3BFA7.20202@cs.umn.edu>

I am finishing up my first major research application written in OCaml. The code is a scientific 
application that does a moderate amount of floating-point computation, for which I have employed 
the Bigarray library and the Lacaml package.

I am attempting to tune the performance of the code, and to that end I have examined the native-code 
performance using gprof (OCaml manual section 17.4). The first thing that struck me when analyzing the 
profile is that the function 'caml_adjust_gc_speed' is called a lot. The first few lines of the 
profile are:

Flat profile:

Each sample counts as 0.01 seconds.
   %   cumulative   self              self     total
  time   seconds   seconds    calls   s/call   s/call  name
  14.55      6.80     6.80  4841768     0.00     0.00  caml_adjust_gc_speed
  14.41     13.53     6.73                             bigarray_fill
   6.87     16.74     3.21                             lacaml_Dssqr_zero_stub
   6.30     19.68     2.94                             bigarray_offset
   4.30     21.69     2.01                             bigarray_slice
   4.03     23.57     1.88                             bigarray_get_N
   2.68     24.82     1.25     1612     0.00     0.00  sweep_slice
   2.63     26.05     1.23 24714254     0.00     0.00  caml_c_call
   2.10     27.03     0.98  4972572     0.00     0.00  caml_alloc_shr
...
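For anyone wanting to reproduce this kind of profile, the gprof workflow from manual section 17.4 might look roughly like the following (the file and package names here are placeholders, not taken from my actual build):

```shell
# Hypothetical sketch of profiling an OCaml native program with gprof.
# -p instruments the generated code for profiling.
ocamlfind ocamlopt -p -package bigarray,lacaml -linkpkg main.ml -o main
./main                       # running it writes gmon.out in the current directory
gprof ./main gmon.out > profile.txt
head -n 20 profile.txt       # the flat profile appears at the top
```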

The actual runtime of the program is about 18 seconds, so the gprof cumulative time is off by quite a 
bit. What concerns me is the large overhead I seem to be getting from the first function, 
'caml_adjust_gc_speed', which I assume is related to the garbage collector. Over 4 million calls to 
this function seems a little much. I attempted to play with a garbage-collection parameter, the 
space_overhead field of the control record in the Gc module. According to the manual, this affects the 
major GC speed, and increasing the value is supposed to cut down on the aggressiveness of the GC. 
Setting space_overhead to 99 did not change the number of calls to 'caml_adjust_gc_speed'.
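For reference, the tweak I tried might be sketched as follows (a minimal sketch; the get-then-override pattern and the print statement are mine, the surrounding program is hypothetical):

```ocaml
(* Sketch: raising Gc.space_overhead to 99, as described above.
   Gc.get returns the current control record; Gc.set installs a new one. *)
let () =
  let params = Gc.get () in
  Gc.set { params with Gc.space_overhead = 99 };
  (* ... rest of the program ... *)
  Printf.printf "space_overhead now %d\n" (Gc.get ()).Gc.space_overhead
```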

I'm looking for someone with a bit more knowledge of garbage collection in OCaml to enlighten me 
on whether this overhead can be reduced, or whether it is an unavoidable side effect of relying on the 
garbage collector. I'd be happy to provide more details on the code if that would be helpful.

Cheers,
Chris



Thread overview: 6+ messages
2006-08-29  4:16 Christopher Kauffman [this message]
2006-08-29 15:20 ` [Caml-list] " Damien Doligez
2006-08-29 18:37   ` Christopher Kauffman
2006-08-31 15:34     ` Christopher Kauffman
2006-08-31 21:08       ` [Caml-list] Garbage collection and caml_adjust_gc_speed - Problem in Bigarray Christopher Kauffman
2006-09-01 17:06         ` David MENTRE
