From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <45ca7714d57ed4cc0c590e031559678d@quanstro.net>
To: 9fans@cse.psu.edu
Subject: Re: [9fans] plan 9 overcommits memory?
From: erik quanstrom
Date: Mon, 3 Sep 2007 09:21:09 -0400
In-Reply-To: <01b719eaabe004a9073ccb4b3425e1d0@plan9.bell-labs.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: 7bit
Topicbox-Message-UUID: b52468f6-ead2-11e9-9d60-3106f5b1d025

> If system calls were the only way to change memory allocation, one
> could probably keep a strict accounting of pages allocated and fail
> system calls that require more VM than is available.  But neither Plan
> 9 nor Unix works that way.  The big exception is stack growth.  The
> kernel automatically extends a process's stack segment as needed.  On
> the pc, Plan 9 currently limits user-mode stacks to 16MB.  On a CPU
> server with 200 processes (fairly typical), that's 3.2GB of VM one
> would have to commit just for stacks.  With 2,000 processes, that
> would rise to 32GB just for stacks.

16MB for stacks seems awfully high to me.  are there any programs that
need even 1/32nd of that?  512k still allows 32k levels of recursion
for a function taking 4 long arguments.  a quick count on my home
machine and some coraid servers doesn't show any process using more
than 1 page of stack.

doing strict accounting on the pages allocated would, i think, be an
improvement.  i also don't see a reason not to shrink the maximum
stack size.  the current behavior seems pretty exploitable to me,
even remotely, if one can force stack/brk allocation via smtp, telnet
or whatnot.

- erik