9fans - fans of the OS Plan 9 from Bell Labs
From: Charles Forsyth <charles.forsyth@gmail.com>
To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
Subject: Re: [9fans] protection against resource exhaustion
Date: Wed, 28 Jan 2015 17:28:03 +0000	[thread overview]
Message-ID: <CAOw7k5hocVshTBBvXA86x91PpZ1hBBdAbMUBao+vCGL2wH9ryw@mail.gmail.com> (raw)
In-Reply-To: <A97D4AC3-3160-448A-BAAE-562FD1529B6C@ar.aichi-u.ac.jp>

On 28 January 2015 at 06:50, arisawa <arisawa@ar.aichi-u.ac.jp> wrote:

> Eric’s users are all gentlemen, all careful people, and all skillful
> programmers.
> If your system serves university students, you will think differently.
>

You might also be running services on it, and it's reasonable that it
should manage its resources. It's best to refuse a new load if it won't
fit, but that requires a fairly careful system design and works best in
closed systems. In a capability system you'd have to present a capability
to create a process, and it's easy to ration them. Inside Plan 9, one
could have a nested resource allocation scheme for the main things
without too much trouble. Memory is a little different because you either
under-use (by reserving space) or over-commit, as usually happens with
file-system quotas.
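
As a very rough sketch of that sort of rationing (nothing that exists in
Plan 9; the Rsrc type and the rsrcuse/rsrcfree names below are invented
for illustration), a counted permit per user or per process group is
enough:

/*
 * Sketch only: a counted ration on process creation (or network
 * connections, etc.).  Rsrc, rsrcuse and rsrcfree are invented names,
 * not Plan 9 interfaces; a kernel version would also need locking.
 */
#include <stdio.h>

typedef struct Rsrc Rsrc;
struct Rsrc {
	int limit;	/* units this user or group may hold at once */
	int used;	/* units currently handed out */
};

/* claim one unit; returns 0 on success, -1 if the ration is spent */
int
rsrcuse(Rsrc *r)
{
	if(r->used >= r->limit)
		return -1;
	r->used++;
	return 0;
}

/* give one unit back when the process (or connection) goes away */
void
rsrcfree(Rsrc *r)
{
	if(r->used > 0)
		r->used--;
}

int
main(void)
{
	Rsrc procs = {3, 0};	/* a ration of three process slots */
	int i;

	for(i = 0; i < 5; i++)
		printf("fork %d: %s\n", i, rsrcuse(&procs)==0? "granted": "refused");
	return 0;
}

Nesting would then just mean each group drawing its limit from its
parent's ration, so a server can subdivide whatever it was granted.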

With memory, I had some success with a hybrid scheme, still not intended
for multi-user use, but it could be extended to do that. The system
accounts in advance for the probable demands of things like fork/exec and
segment growth (including stack growth). If that preallocation is then
not used (eg, because of a copy-on-write fork before exec) it is
gradually written off. When that accounting fails, the system looks at
the most recently created process groups (note groups) and kills them. It
differs from the OOM killer because it doesn't assume that the biggest
thing is actually the problem. Instead it looks back through the most
recently started applications, since that corresponded to my usage. It's
quite common to have long-running components that soak up lots of memory
(eg, a cache process or fossil), and they often aren't the ones that
caused the problem. Instead I assume that something ill-advised will have
started recently. It would be better to track allocation history across
process groups and use that instead, but I needed something simple, and
it was reasonably effective.
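
To make the shape of that concrete, here is a toy user-space model (all
names and numbers are invented; the real thing would live in the kernel,
hooked into fork/exec, segment growth and the fault path, with proper
locking):

/*
 * Toy model of the hybrid scheme described above.  All names and
 * numbers are invented for illustration only.
 */
#include <stdio.h>

typedef struct Pgrp Pgrp;
struct Pgrp {
	int	id;		/* creation order: larger is newer */
	long	reserved;	/* bytes promised but not yet touched */
	long	used;		/* bytes actually in use */
};

enum { MAXPG = 8 };
static Pgrp pg[MAXPG];
static int npg;
static long total = 64*1024*1024;	/* pretend physical memory */
static long committed;			/* sum of used+reserved over all groups */

/* account in advance for what a fork/exec or segment growth will probably need */
static int
reserve(Pgrp *p, long n)
{
	if(committed+n > total)
		return -1;
	p->reserved += n;
	committed += n;
	return 0;
}

/* a page is really touched: turn reservation into use */
static void
touch(Pgrp *p, long n)
{
	if(n > p->reserved)
		n = p->reserved;
	p->reserved -= n;
	p->used += n;
}

/* periodically write off reservations that were never used,
 * eg after a copy-on-write fork that went straight to exec */
static void
writeoff(void)
{
	int i;

	for(i = 0; i < npg; i++){
		long give = pg[i].reserved/4;	/* decay 25% per pass */
		pg[i].reserved -= give;
		committed -= give;
	}
}

/* out of memory: kill the newest process group, not the biggest one */
static void
killnewest(void)
{
	int i, newest;

	if(npg == 0)
		return;
	newest = 0;
	for(i = 1; i < npg; i++)
		if(pg[i].id > pg[newest].id)
			newest = i;
	printf("killing process group %d\n", pg[newest].id);
	committed -= pg[newest].reserved + pg[newest].used;
	pg[newest] = pg[--npg];
}

int
main(void)
{
	Pgrp *cache = &pg[npg++];	/* long-running, memory-hungry, innocent */
	Pgrp *newcomer;

	cache->id = 1;
	reserve(cache, 48*1024*1024);
	touch(cache, 48*1024*1024);

	newcomer = &pg[npg++];		/* recently started, probably the culprit */
	newcomer->id = 2;
	if(reserve(newcomer, 32*1024*1024) < 0){
		writeoff();		/* first reclaim stale reservations */
		if(reserve(newcomer, 32*1024*1024) < 0)
			killnewest();	/* then sacrifice the newest group */
	}
	return 0;
}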

For resources such as processes (slots), network connections, etc. I think a


Thread overview: 39+ messages
2015-01-25  6:16 arisawa
2015-01-25  6:59 ` mischief
2015-01-25 17:41   ` erik quanstrom
2015-01-26 11:47     ` arisawa
2015-01-26 12:46       ` cinap_lenrek
2015-01-26 14:13       ` erik quanstrom
2015-01-27  0:33         ` arisawa
2015-01-27  1:30           ` Lyndon Nerenberg
2015-01-27  4:13             ` erik quanstrom
2015-01-27  4:22           ` erik quanstrom
2015-01-27  7:03             ` arisawa
2015-01-27  7:10               ` Ori Bernstein
2015-01-27  7:15                 ` lucio
2015-01-27 14:05                 ` erik quanstrom
2015-01-27  7:12               ` lucio
2015-01-27 14:10               ` erik quanstrom
2015-01-28  0:10                 ` arisawa
2015-01-28  3:38                   ` erik quanstrom
2015-01-28  6:50                     ` arisawa
2015-01-28  7:22                       ` lucio
2015-01-28  7:48                       ` Quintile
2015-01-28 13:13                       ` cinap_lenrek
2015-01-28 14:03                         ` erik quanstrom
2015-01-28 14:09                           ` lucio
2015-01-28 14:14                             ` erik quanstrom
2015-01-28 14:53                               ` lucio
2015-01-28 17:02                                 ` Skip Tavakkolian
2015-01-28 14:16                       ` erik quanstrom
2015-01-28 17:28                       ` Charles Forsyth [this message]
2015-01-28 17:39                         ` cinap_lenrek
2015-01-28 18:51                           ` Charles Forsyth
2015-01-29  3:57                             ` arisawa
2015-01-29  6:34                               ` erik quanstrom
2015-01-29  6:42                         ` erik quanstrom
2015-01-29  8:11                           ` arisawa
2015-01-27 10:53             ` Charles Forsyth
2015-01-27 14:01               ` erik quanstrom
2015-01-25  9:04 ` arisawa
2015-01-25 11:06   ` Bence Fábián
