From mboxrd@z Thu Jan 1 00:00:00 1970
MIME-Version: 1.0
Date: Thu, 16 Apr 2009 13:47:13 -0400
Message-ID: <9ab217670904161047w56b70b74ke25a0280b0f70cc2@mail.gmail.com>
From: "Devon H. O'Dell"
To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
Subject: [9fans] security questions
Topicbox-Message-UUID: dd166376-ead4-11e9-9d60-3106f5b1d025

In the interests of academia (and from the idea of setting up a public
Plan 9 cluster) comes the following mail. I'm sure people will brush
some of this off as a non-issue, but I'm curious what others think.

It doesn't seem that Plan 9 does much to protect the kernel from memory
/ resource exhaustion. When it comes to kernel memory, the standard
philosophy of ``add more memory'' doesn't quite cut it: there's a
limited amount available to the kernel, and if a user can exhaust that,
it's not a Good Thing. (Another argument I heard today was ``deal with
the offending user swiftly,'' but that does little against full
disclosure.)

There are two potential ways to combat this (though there are
additional advantages to having both):

1) Introduce more memory pools with tunable limits. The idea here would
be to make malloc() default to its current behavior: just allocate
space from available arenas in mainmem. An additional interface
(talloc?) would be provided for type-based allocations; see the sketch
after this message for one possible shape. These would be additional
pools that store specific kernel data structures (Blocks, Chans, Procs,
etc.). This provides two benefits:

 o Protection against kernel memory starvation through exhaustion of a
   specific resource
 o Some level of debugalloc-style memory information without all of the
   overhead

I suppose it would also be possible to allow for tunable settings by
providing an FS to set e.g. minarea or maxsize. The benefit of this
approach is that we would have an extremely easy way to add new
constraints as needed (simply create another tunable pool) without
changing the API or interfering with multiple subsystems, apart from
changing malloc calls where needed. The limits could be checked on a
per-process or per-user (or both) basis. We already have a pool for
kernel memory and a pool for kernel draw memory; it seems reasonable
that we could have pools for network buffers, files, processes and the
like.

2) Introduce a `devlimit' device, which imposes limits on specific
kernel resources. The limits would be set on either a per-process or
per-user basis (or both, depending on the nature of the limit).

#2 seems more like the unixy rlimit model, and the more I think about
it, the less I like it. It's a bit more difficult to `get right', it
doesn't `feel' very Plan 9-ish, and adding new limits requires more
incestuous code. However, the limits can be more finely tuned.

I'm just wondering what people think about this: which approach seems
more feasible, whether anybody feels it's a `good idea,' and the like.
I got mixed feedback (though mostly positive from those who understood
the issue) on IRC when I brought up the problem. I don't have any
sample cases in which it would be possible to starve the kernel of
memory.

--dho
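
For illustration only, a minimal sketch of what a typed-pool interface
along the lines of the proposed talloc might look like. This is plain,
self-contained C rather than actual Plan 9 kernel code: the TPool
structure and the talloc/tfree names are hypothetical, malloc() stands
in for carving space out of a dedicated arena, and a real version would
sit on the kernel's existing pool allocator with maxsize exposed as a
tunable (e.g. through a ctl file).

#include <stdlib.h>
#include <string.h>

/* Hypothetical typed pool: one per kernel data structure (Block, Chan, Proc, ...). */
typedef struct TPool TPool;
struct TPool {
	char	*name;		/* e.g. "chan", "proc" */
	size_t	maxsize;	/* tunable ceiling on total bytes for this type */
	size_t	cursize;	/* bytes currently allocated from this pool */
};

/*
 * Allocate n zeroed bytes charged against pool p; fail once the pool's
 * limit would be exceeded, so exhausting one kind of resource cannot
 * starve the rest of kernel memory.
 */
void*
talloc(TPool *p, size_t n)
{
	void *v;

	if(p->cursize + n > p->maxsize)
		return NULL;
	v = malloc(n);		/* stand-in for allocating from a dedicated arena */
	if(v == NULL)
		return NULL;
	memset(v, 0, n);
	p->cursize += n;
	return v;
}

/* Release an allocation of n bytes previously obtained from pool p. */
void
tfree(TPool *p, void *v, size_t n)
{
	if(v == NULL)
		return;
	free(v);
	p->cursize -= n;
}

In the scheme described in (1), each pool's maxsize would be adjustable
at run time, and the per-pool counters would provide the
debugalloc-style accounting mentioned above without its overhead.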