caml-list - the Caml user's mailing list
From: Anthony Tavener <anthony.tavener@gmail.com>
To: oliver <oliver@first.in-berlin.de>
Cc: "caml-list@inria.fr" <caml-list@inria.fr>
Subject: Re: [Caml-list] Early GC'ing
Date: Sun, 18 Aug 2013 15:14:26 -0600
Message-ID: <CAN=ouMQKr10L87_xWzR5j+YNH63Vzw+RiWciEEOhA9BQDzrUPw@mail.gmail.com>
In-Reply-To: <20130818205305.GA7841@siouxsie>

I run tight loops (game engine -- 60fps with a lot of memory activity and
allocations each frame), and the GC works remarkably well at keeping things
sane. I did have a problem with runaway allocations once, and tracked it
down to a source of allocations which was effectively never
un-referenced... so a legitimate leak.

If you do a Gc.full_major (), is your memory returned? If not, then I think
that's evidence that there's still some handle on it -- be sure the
appropriate values have fallen out of scope and aren't referenced in some
other way!
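
For illustration, a minimal sketch of that check (the function name is just a
placeholder): force a full major collection and compare live words before and
after. If live_words stays high, something is still holding a reference.

  (* Force a full major collection and report live words before/after. *)
  let check_for_leak () =
    let before = Gc.stat () in
    Gc.full_major ();
    let after = Gc.stat () in
    Printf.printf "live words: %d -> %d\n"
      before.Gc.live_words after.Gc.live_words

Calling this right after your reading loop should show whether a full major
collection actually frees the memory, or whether it is still reachable.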

On the other hand, if this is just the GC not cleaning up quickly enough for
your case... I'm sorry, I have no help to offer for tuning the GC. The
documentation in gc.mli seemed pretty sensible when I looked at it (a while
ago), though!
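
If you do experiment with tuning, here is a rough sketch of the kind of knob
gc.mli describes (the value 40 is only an example, not a recommendation):
lowering space_overhead makes the major collector more aggressive, so memory
is reclaimed sooner at the cost of more CPU time.

  (* Make the major GC more eager by lowering space_overhead (default 80). *)
  let () =
    let c = Gc.get () in
    Gc.set { c with Gc.space_overhead = 40 }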



On Sun, Aug 18, 2013 at 2:53 PM, oliver <oliver@first.in-berlin.de> wrote:

> With the subset of the files I want to read,
> I get these two GC stats outputs (Gc.print_stat) at the beginning and
> the end of the reading loop:
>
>
> --------------------
> minor_words: 176917
> promoted_words: 28265
> major_words: 83313
> minor_collections: 5
> major_collections: 1
> heap_words: 126976
> heap_chunks: 1
> top_heap_words: 126976
> live_words: 82608
> live_blocks: 7325
> free_words: 44368
> free_blocks: 33
> largest_free: 43663
> fragments: 0
> compactions: 0
> --------------------
> minor_words: 119944974
> promoted_words: 18927217
> major_words: 55119706
> minor_collections: 3672
> major_collections: 71
> heap_words: 14221824
> heap_chunks: 112
> top_heap_words: 14221824
> live_words: 10194963
> live_blocks: 1184994
> free_words: 4024031
> free_blocks: 46685
> largest_free: 65811
> fragments: 2830
> compactions: 6
> --------------------
>
>
> What does this tell you?
> How can I free the memory?
>
>
> Ciao,
>    Oliver
>
> --
> Caml-list mailing list.  Subscription management and archives:
> https://sympa.inria.fr/sympa/arc/caml-list
> Beginner's list: http://groups.yahoo.com/group/ocaml_beginners
> Bug reports: http://caml.inria.fr/bin/caml-bugs
>

Thread overview: 10+ messages
2013-08-18 20:42 oliver
2013-08-18 20:53 ` oliver
2013-08-18 21:14   ` Anthony Tavener [this message]
2013-08-18 22:40     ` oliver
2013-08-19  0:20       ` oliver
2013-08-19  6:43         ` Anthony Tavener
2013-08-19 10:34           ` oliver
2013-08-19 11:39           ` Mark Shinwell
2013-08-19 11:51 ` Adrien Nader
2013-08-19 12:36   ` oliver
