From mboxrd@z Thu Jan 1 00:00:00 1970
From: erik quanstrom
Date: Wed, 28 Jan 2015 22:42:33 -0800
To: 9fans@9fans.net
Message-ID:
In-Reply-To:
References: <0F748B67-FB11-464C-84FA-66B4E0B29918@9.offblast.org>
 <44900c0d4896622fa8a9411b05efe730@brasstown.quanstro.net>
 <7A132462-4747-471A-A4BF-D9381E38A4EA@ar.aichi-u.ac.jp>
 <4c37cf728d5b0e7ae4ebd3c2e0c2cee4@brasstown.quanstro.net>
 <3a6fb8671dae34eb5b4e0ebe3992bcfd@brasstown.quanstro.net>
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: 7bit
Subject: Re: [9fans] protection against resource exhaustion
Topicbox-Message-UUID: 3d6e267e-ead9-11e9-9d60-3106f5b1d025

> You might also be running services on it, and it's reasonable that it
> should manage its resources.  It's best to refuse a new load if it
> won't fit, but that requires a fairly careful system design and works
> best in closed systems.  In a capability system you'd have to present
> a capability to create a process, and it's easy to ration them.
> Inside Plan 9, one could have a nested resource allocation scheme for
> the main things without too much trouble.  Memory is a little
> different because you either under-use (by reserving space) or
> over-commit, as usually happens with file-system quotas.
>
> With memory, I had some success with a hybrid scheme, still not
> intended for multi-user, but it could be extended to do that.  The
> system accounts in advance for probable demands of things like
> fork/exec and segment growth (including stack growth).  If that
> preallocation is then not used (eg, because of copy-on-write fork
> before exec) it is gradually written off.  When that fails, the system
> looks at the most recently created process groups (note groups) and
> kills them.  It differs from the OOM killer because it doesn't assume
> that the biggest thing is actually the problem.  Instead it looks back
> through the most recently-started applications, since that corresponded
> to my usage.
> It's quite common to have long-running components that
> soak up lots of memory (eg, a cache process or fossil) and they often
> aren't the ones that caused the problem.  Instead I assume something
> will have started recently that was ill-advised.  It would be better
> to track allocation history across process groups and use that instead,
> but I needed something simple, and it was reasonably effective.
>
> For resources such as process (slots), network connections, etc I

thanks, charles.  i hope i haven't overplayed my argument.  i am for
real solutions to this issue.  i'm not for the current solution, or
more complicated variants.

how does one account/adjust for long-running, important processes,
say forking for a small but important task?  perhaps an edf-like
scheme predeclaring the expected usage for "important" processes
might work?

- erik
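[editor's note] the hybrid scheme charles describes above (charge
probable demand up front, write unused reservations off, and on
exhaustion blame the most recently created process group rather than
the largest one) can be sketched as a toy model.  this is an
illustrative sketch only, not the actual kernel code; every name and
number below is invented:

```python
# Toy model of the hybrid memory-accounting scheme described in the
# quoted message.  All class/method names and figures are hypothetical.

class Pgrp:
    """A process (note) group with its accounting state."""
    def __init__(self, name, created):
        self.name = name
        self.created = created   # creation order; newest is blamed first
        self.reserved = 0        # preallocated (fork/exec, stack growth)
        self.used = 0            # memory actually touched

class Accountant:
    def __init__(self, total):
        self.total = total       # total memory available
        self.pgrps = []

    def committed(self):
        return sum(p.reserved + p.used for p in self.pgrps)

    def reserve(self, pg, amount):
        """Charge probable demand in advance.  If it doesn't fit,
        kill most-recently-created groups until it does (or until
        the requester itself is the victim)."""
        while self.committed() + amount > self.total:
            victim = self.kill_newest()
            if victim is None or victim is pg:
                return False     # refuse the load
        pg.reserved += amount
        return True

    def writeoff(self, pg, fraction=0.5):
        """Gradually write off a reservation that went unused
        (eg, copy-on-write fork followed by a small exec)."""
        pg.reserved -= int(pg.reserved * fraction)

    def kill_newest(self):
        """Unlike a biggest-first OOM killer, blame the most
        recently started process group."""
        if not self.pgrps:
            return None
        victim = max(self.pgrps, key=lambda p: p.created)
        self.pgrps.remove(victim)
        return victim
```

usage: a long-running cache that has soaked up most of memory
survives; the newly started group that pushes the system over the
edge is the one refused:

```python
acct = Accountant(total=100)
cache = Pgrp("cache", created=0)
cache.used = 60                  # long-running, large, but not the problem
hog = Pgrp("hog", created=1)     # recently started
acct.pgrps = [cache, hog]
acct.reserve(hog, 30)            # fits: 60 + 30 <= 100
acct.reserve(hog, 30)            # exceeds total; hog, not cache, is killed
```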