From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <2c68bbb34242a932e4b3e9be554c6f18@quanstro.net>
To: 9fans@cse.psu.edu
Subject: Re: [9fans] plan 9 overcommits memory?
From: erik quanstrom
Date: Mon, 3 Sep 2007 16:17:32 -0400
In-Reply-To: 
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: 7bit
Topicbox-Message-UUID: b6733b10-ead2-11e9-9d60-3106f5b1d025

> One might allocate at least 3.2GB of swap for a 4GB machine, but many
> of our machines run with no swap, and we're probably not alone. And
> 200 processes are not a lot. Would you really have over 32GB of swap
> allocated for a 4GB machine with 2,000 processes?
>
> Programs can use a surprising amount of stack space. A recent notable
> example is venti/copy when copying from a nightly fossil dump score.
> I think we want to be generous about maximum stack sizes.
>
> I don't think that estimates of VM usage would be an improvement. If
> we can't get it exactly right, there will always be complaints.

venti/copy's current behavior could be worked around by allocating
on the heap instead of using the stack; we don't have to base the
design around what venti/copy does today.

why would it be unacceptable to have a maximum stack allocation
system-wide? say 16MB. this would allow us not to overcommit memory.

if we allow overcommitted memory, *any* access of brk'd memory might
page fault. this seems like a real step backwards in error recovery,
as most programs assume that malloc either returns n bytes of valid
memory or fails. since this assumption is false, either we need to
make it true or fix most programs.

upas/fs fails in this way for us all the time. this would have more
serious consequences if, say, venti or fossil suffered a similar
fate.

- erik
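
[editor's note: a minimal sketch, not from the original thread, of the
two points above: a large on-stack buffer moved to a checked heap
allocation, and the contract most programs assume malloc honors. plan 9
c is assumed; Bufsize and the variable names are illustrative, not
taken from venti/copy.]

	#include <u.h>
	#include <libc.h>

	enum {
		Bufsize = 8*1024*1024	/* large enough to strain a 16MB stack cap */
	};

	void
	main(void)
	{
		uchar *buf;

		/*
		 * instead of `uchar buf[Bufsize];' on the stack,
		 * allocate and check. the contract programs rely on:
		 * malloc returns Bufsize usable bytes or nil.
		 */
		buf = malloc(Bufsize);
		if(buf == nil)
			sysfatal("malloc: %r");

		/*
		 * under strict accounting, touching buf cannot fault;
		 * under overcommit, even this write might kill the
		 * process with no chance of recovery.
		 */
		memset(buf, 0, Bufsize);

		free(buf);
		exits(nil);
	}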