On Mon, Nov 22, 2010 at 6:42 PM, Sylvain Le Gall wrote:
> On 22-11-2010, Damien Doligez wrote:
> >
> > On 2010-11-21, at 20:26, Eray Ozkural wrote:
> >
> >> I've been thinking whether some kind of doubling strategy would work
> >> for the minor heap size. What do you think?
> >
> > Sounds like an interesting idea, but what heuristic would you use?
> > When everything is smooth, the running time decreases something like
> > exponentially with the minor heap size, so you'd always want to
> > increase the size. How do you tell when to stop? And then, if the
> > program is not behaving uniformly, when do you decide to reduce
> > the size?
>
> > How do you tell when to stop?
> ->
> Maybe you can stop when you reach (the size of the L2/L3 cache of the
> processor) / number of cores.
>
> Both pieces of information are quite straightforward to read from
> /proc/cpuinfo.

Yeah, that's what I had in mind: determine a sensible upper bound to grow
to. Cache size makes some sense, though as was recently mentioned, I think
the "working set size" is what's really relevant; if the garbage collector
could deduce it, it could be used, and the other suggestion is also
sensible. You could also set the bound to something like 1/4 of physical
RAM; that kind of logic is used in some out-of-core data mining algorithms.
The objective here is to amortize the cost of copying until the working set
size is reached, since beyond that there will be disk thrashing anyway! A
rough sketch of the cache-capped doubling idea is below, after my sig.

Best,

--
Eray Ozkural, PhD candidate. Comp. Sci. Dept., Bilkent University, Ankara
http://groups.yahoo.com/group/ai-philosophy
http://myspace.com/arizanesil http://myspace.com/malfunct
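(Here is a rough, untested sketch of that cache-capped doubling idea in
OCaml, done from user code with Gc.set rather than inside the runtime. The
/proc/cpuinfo parsing is simplified and assumes the usual Linux/x86 layout
of the "processor" and "cache size" fields; read_cpuinfo and
grow_minor_heap are names I made up, and doubling once per major collection
is just one possible trigger.)

let read_cpuinfo () =
  (* Returns (core count, cache size in bytes). Simplified: counts
     "processor" lines and keeps the last "cache size : N KB" value. *)
  let ic = open_in "/proc/cpuinfo" in
  let cores = ref 0 and cache_kb = ref 0 in
  (try
     while true do
       let line = input_line ic in
       (try Scanf.sscanf line "processor : %d" (fun _ -> incr cores)
        with _ -> ());
       (try Scanf.sscanf line "cache size : %d KB" (fun kb -> cache_kb := kb)
        with _ -> ())
     done
   with End_of_file -> close_in ic);
  (max 1 !cores, !cache_kb * 1024)

let grow_minor_heap () =
  (* One growth step: double minor_heap_size, capped at cache / cores.
     minor_heap_size is counted in words, hence the word-size division. *)
  let cores, cache_bytes = read_cpuinfo () in
  let cap_words = cache_bytes / cores / (Sys.word_size / 8) in
  let g = Gc.get () in
  if g.Gc.minor_heap_size < cap_words then
    Gc.set { g with Gc.minor_heap_size =
               min cap_words (2 * g.Gc.minor_heap_size) }

Hooking it to the collector, e.g. with

  ignore (Gc.create_alarm grow_minor_heap)

would re-evaluate the size after every major collection, so the minor heap
climbs by doublings until it hits the cap. When to shrink again for
programs that don't behave uniformly is still the open question.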