From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: 
From: erik quanstrom
Date: Mon, 3 Sep 2007 18:01:54 -0400
To: 9fans@cse.psu.edu
Subject: Re: [9fans] plan 9 overcommits memory?
In-Reply-To: <462e5c2c6d902e4a2f54b6f41c2e4101@plan9.bell-labs.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: 7bit

On Mon Sep 3 16:48:59 EDT 2007, geoff@plan9.bell-labs.com wrote:
> If your machines are regularly running out of VM, something is wrong
> in your environment.  I would argue that we'd be better off fixing
> upas/fs to be less greedy with memory than contorting the system to
> try to avoid overcommitting memory.

well, yes.  the problem is 400MB mailboxes.  but i'll let you tell
folk with mailboxes that large that they're too large.  ;-)

it'd be nice to be able to use more than (3.75 - pcispace) GB of
memory, but i don't see this as a "fix upas/fs" problem.  i see it as
a general problem that upas/fs's huge memory usage highlights.  this
can happen to any process: suppose i start a program that mallocs 8k,
and between the malloc and the memset another program takes the last
available page in memory; then my original program faults, even
though its malloc succeeded.

> If one did change the system to
> enforce a limit of 16MB for the aggregate of all system stacks, what
> would happen when a process needed to grow its stack and the 16MB were
> full?  Checking malloc returns cannot suffice.

no, it wouldn't suffice.  obviously one doesn't malloc the stack --
at least not today.  but this would be no worse than the current
situation for stacks, and an improvement for the heap.  if one made
the limit settable at runtime, one could verify reasonable stack
usage while testing.

here i think ron's idea of pre-faulting makes even more sense for the
stack than for the heap, as stack allocation is implicit.

- erik
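
a minimal sketch of the race described above, in plan 9 c (the 8k
figure is from the message; everything else is illustrative):

#include <u.h>
#include <libc.h>

void
main(void)
{
	char *p;

	/* under overcommit this succeeds even when no physical
	   page is left: only address space is reserved */
	p = malloc(8192);
	if(p == nil)
		sysfatal("malloc: %r");

	/* if another process takes the last free page between the
	   malloc and this store, the first touch faults and the
	   kernel kills this process; there is no return value left
	   to check at this point */
	memset(p, 0, 8192);
	exits(nil);
}

the only checkable failure is the nil return from malloc, and
overcommit moves the real failure past it, to the first store.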
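
and a hedged sketch of what pre-faulting could look like from user
space; the helper name and page size are invented for illustration,
and ron's proposal would presumably live in the kernel, where the
commitment could fail softly:

#include <u.h>
#include <libc.h>

enum { Pgsize = 4096 };		/* assumed page size */

/* hypothetical helper, not a library routine: touch every page of a
   fresh allocation so that, under overcommit, any fault happens here,
   at one predictable point, rather than at an arbitrary later store */
void*
prefaultmalloc(ulong n)
{
	char *p, *q;

	p = malloc(n);
	if(p == nil)
		return nil;
	for(q = p; q < p+n; q += Pgsize)
		*q = 0;		/* fault the page in now */
	return p;
}

done in user space this only narrows the window -- the store in the
loop can still draw a fatal fault instead of an error -- so the
kernel would have to commit pages at allocation time to turn it into
a soft failure.  the stack is harder still: its growth is implicit,
so there is no call site at which a program could pre-fault, which is
why pre-faulting there has to be the kernel's job.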