From: erik quanstrom
Date: Sun, 25 Jan 2015 09:41:35 -0800
To: 9fans@9fans.net
Message-ID: <44900c0d4896622fa8a9411b05efe730@brasstown.quanstro.net>
In-Reply-To: <0F748B67-FB11-464C-84FA-66B4E0B29918@9.offblast.org>
References: <0F748B67-FB11-464C-84FA-66B4E0B29918@9.offblast.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: 7bit
Subject: Re: [9fans] protection against resource exhaustion

On Sat Jan 24 23:16:07 PST 2015, mischief@9.offblast.org wrote:
> You would need to implement something like ulimits to fix process
> exhaustion, or some kind of rate limit. There are several forms of
> resource exhaustion in plan 9. One other that comes to mind is that
> cat /dev/random prevents other clients from making progress reading
> /dev/random, such as cpu.

this is the same route as the oom killer, which can be depended on to
kill exactly the wrong thing at exactly the wrong time. fixing things
to be oom-killer-proof tends to be an exercise in whack-a-mole.

about the tail calls mentioned elsewhere: rc simply executes the file
through a system call and does not inspect its shebang line, so there
is no possibility of optimization; rc doesn't know it's an rc script
(a concrete sketch is in the p.s. below). and this problem doesn't
depend on the executable that recursively executes itself being an rc
script.

i have eliminated the resrcwait() in the process allocation loop in my
kernels (sketched in the p.p.s. below). this sounds much worse than it
is. the advantage is that it breaks the loop and gets the machine
running again in a bounded amount of time; the disadvantage is that
one loses program state. i've found it a win overall. instead of
getting paged in the middle of the night, a few days later someone
might mention that the machine rebooted. the logs are almost always
intact, so debugging the issue isn't too hard /during the middle of
the day/. :-)

- erik
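
p.s. a hypothetical two-line script illustrating the recursion above
(an illustration, not from the original thread): each invocation forks
a new rc that waits on its child, so the proc table fills. to bound
this, rc would have to turn the last line into an exec, and it can't,
because it never learns that the file it is executing is an rc script.

	#!/bin/rc
	# hypothetical example: $0 re-runs this same file; each
	# level of recursion is another rc waiting on its child,
	# until the kernel has no free procs left
	$0

(the author of the script can of course write exec $0 by hand; the
point is that rc can't do it for you.)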
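
p.p.s. a minimal sketch of the newproc() change, assuming the stock
allocation loop in port/proc.c; the actual diff may well differ. the
idea is to fail hard when the proc table is empty instead of sleeping
in resrcwait(), so the machine comes back in bounded time at the cost
of program state:

	/* inside newproc(): */
	lock(&procalloc);
	if((p = procalloc.free) == nil){
		unlock(&procalloc);
		/* was: loop calling resrcwait() until a proc freed up */
		panic("newproc: out of procs");
	}
	procalloc.free = p->qnext;
	unlock(&procalloc);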