From mboxrd@z Thu Jan 1 00:00:00 1970
MIME-Version: 1.0
In-Reply-To: 
References: <0F748B67-FB11-464C-84FA-66B4E0B29918@9.offblast.org> <44900c0d4896622fa8a9411b05efe730@brasstown.quanstro.net> <7A132462-4747-471A-A4BF-D9381E38A4EA@ar.aichi-u.ac.jp> <4c37cf728d5b0e7ae4ebd3c2e0c2cee4@brasstown.quanstro.net> <3a6fb8671dae34eb5b4e0ebe3992bcfd@brasstown.quanstro.net> <1fccf1df5e46d5fa5235a40a90a001e3@brasstown.quanstro.net>
Date: Wed, 28 Jan 2015 17:28:03 +0000
Message-ID: 
From: Charles Forsyth
To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
Content-Type: multipart/alternative; boundary=001a11c37e2e9d7e75050db9b138
Subject: Re: [9fans] protection against resource exhaustion
Topicbox-Message-UUID: 3d4a5cee-ead9-11e9-9d60-3106f5b1d025

--001a11c37e2e9d7e75050db9b138
Content-Type: text/plain; charset=UTF-8

On 28 January 2015 at 06:50, arisawa wrote:

> Eric's users are all gentlemen, all careful people and all skillful
> programmers.
> If your system serves university students, you will have different
> thoughts.

You might also be running services on it, and it's reasonable that it
should manage its resources. It's best to refuse a new load if it won't
fit, but that requires a fairly careful system design and works best in
closed systems. In a capability system you'd have to present a capability
to create a process, and it's easy to ration them. Inside Plan 9, one
could have a nested resource-allocation scheme for the main things
without too much trouble. Memory is a little different, because you
either under-use (by reserving space) or over-commit, as usually happens
with file-system quotas.

With memory, I had some success with a hybrid scheme, still not intended
for multi-user use, but it could be extended to do that. The system
accounts in advance for the probable demands of things like fork/exec and
segment growth (including stack growth).
If that preallocation is then not used (eg, because of a copy-on-write
fork followed by an exec), it is gradually written off. When an
allocation fails anyway, the system looks at the most recently created
process groups (note groups) and kills them. It differs from the OOM
killer because it doesn't assume that the biggest thing is actually the
problem. Instead it looks back through the most recently started
applications, since that corresponded to my usage. It's quite common to
have long-running components that soak up lots of memory (eg, a cache
process or fossil), and they often aren't the ones that caused the
problem. Instead I assume that something ill-advised will have started
recently. It would be better to track allocation history across process
groups and use that instead, but I needed something simple, and it was
reasonably effective.

For resources such as processes (slots), network connections, etc, I think a
--001a11c37e2e9d7e75050db9b138--