Date: Mon, 19 Aug 2013 00:40:24 +0200
From: oliver
To: Anthony Tavener
Cc: "caml-list@inria.fr"
Message-ID: <20130818224024.GA10193@siouxsie>
References: <20130818204213.GA7482@siouxsie> <20130818205305.GA7841@siouxsie>
Subject: Re: [Caml-list] Early GC'ing

Hi,

thanks for your hints.

On Sun, Aug 18, 2013 at 03:14:26PM -0600, Anthony Tavener wrote:
> I run tight loops (game engine -- 60fps with a lot of memory activity and
> allocations each frame), and the GC works remarkably well at keeping things
> sane. I did have a problem with runaway allocations once, and tracked it
> down to a source of allocations which was effectively never
> un-referenced... so a legitimate leak.
>
> If you do a Gc.full_major (), is your memory returned? If not, then I think
> that's evidence that there's still some handle on it -- be sure the
> appropriate values have fallen out of scope and aren't referenced in some
> other way!
[...]

It did not help much. I'm not sure whether it's a GC issue at all, or
just the overhead of the data structures.
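For reference, here is a minimal sketch of that kind of check, using only
the stdlib Gc module (not my actual code, just the general shape -- the
figures come from Gc.stat rather than from top):

  (* Compare the GC's view of live data with the heap it actually holds,
     before and after a full major collection and a compaction. *)
  let words_to_mb w =
    float_of_int w *. float_of_int (Sys.word_size / 8) /. 1024. /. 1024.

  let report label =
    let s = Gc.stat () in
    Printf.printf "%s: live = %.1f MB, heap = %.1f MB\n%!"
      label (words_to_mb s.Gc.live_words) (words_to_mb s.Gc.heap_words)

  let () =
    report "before";
    Gc.full_major ();   (* collect everything that is unreachable *)
    Gc.compact ();      (* also defragment and shrink the heap *)
    report "after"

If "live" stays high after the full_major, something still references the
data; if "live" drops but top's RSS does not, the space went back to the
runtime's free list, not (necessarily) to the OS.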
I used my full data set now, and top showed memory usage above 50%
(about 50...55%). With 1.9 GB of free memory and a resulting file of
254 MB, that means roughly 1000 MB of memory used for 254 MB of output
data. Maybe that's just the normal overhead I have to accept. (?)
The input data was 1.4 GB.

(It seemed the Gc invocation made the program run faster, but I'm not
sure whether that is an artifact -- caching by the kernel, or my
imprecise measurement. It's about 9 minutes vs. 10.5 minutes, measured
rather roughly with "top" and by eye.)

> On the other hand, if this is just the GC not cleaning up quick enough for
> your case... I'm sorry I have no help for tuning the GC. The documentation
> in gc.mli seemed pretty sensible when I looked at it (a while ago) though!

Cleaning up fast enough was not the problem; the limited memory on my
machine was. Looks like I need a new / bigger machine... The expected
worst case, swapping, did not occur at all.

Ciao,
   Oliver
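P.S.: One thing I have not tried yet, so this is only a sketch from
reading gc.mli, not something I have measured: lowering space_overhead
should trade extra GC work for a smaller major heap, e.g.

  (* Untested sketch: make the major GC keep the heap closer to the
     live data size, at the cost of more collection work. *)
  let () =
    let c = Gc.get () in
    Gc.set { c with Gc.space_overhead = 40 }

The same parameter can also be set from the environment via OCAMLRUNPARAM
(the "o" option), without recompiling.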