Message-ID:
From: erik quanstrom
Date: Thu, 16 Apr 2009 14:30:24 -0400
To: 9fans@9fans.net
In-Reply-To: <9ab217670904161047w56b70b74ke25a0280b0f70cc2@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
Subject: Re: [9fans] security questions

> The benefit to this approach is that we would have an extremely easy
> way to add new constraints as needed (simply create another tunable
> pool), without changing the API or interfering with multiple
> subsystems, outside of changing malloc calls if needed. The limits
> could be checked on a per-process or per-user (or both) basis.
>
> We already have a pool for kernel memory, and a pool for kernel draw
> memory. Seems reasonable that we could have some for network buffers,
> files, processes and the like.

the network buffers are limited by the size of the network queues,
or they are pre-allocated.

you're right, though.  there are a number of dynamically allocated
resources in the kernel that are not managed.  in the old days, the
kernel didn't dynamically allocate anything.  when things are
statically allocated, you don't have this problem, but everyone
complains about running out of $whatit structures.  if you do dynamic
allocation, then people complain about $unimportant resource starving
$important resource.

assuming a careful kernel, and assuming that hostile users are booted,
resource contention is just about picking who loses.  and you're not
going to get agreement that you're doing a good job when you're
picking losers.  ☺

- erik
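
p.s.  for the record, the quoted proposal needs very little per pool:
a tunable cap, a usage counter, and a check at allocation time.  an
untested sketch in portable c (names are made up, not any real kernel
api, and locking is omitted):

	#include <stdlib.h>

	/* one tunable pool per resource class */
	typedef struct Pool Pool;
	struct Pool {
		char	*name;
		long	max;	/* tunable cap, in bytes */
		long	used;	/* bytes currently charged */
	};

	/* hypothetical pool for network buffers; cap set at boot */
	Pool netbufs = { "netbufs", 4*1024*1024, 0 };

	/* charge an allocation to a pool; refuse rather than
	 * let this pool starve the others */
	void*
	quotaalloc(Pool *p, long n)
	{
		void *v;

		if(p->used + n > p->max)
			return NULL;	/* over quota; caller must cope */
		v = malloc(n);
		if(v != NULL)
			p->used += n;
		return v;
	}

	void
	quotafree(Pool *p, void *v, long n)
	{
		free(v);
		p->used -= n;
	}

per-user accounting would just mean a Pool per user instead of one
global one; either way, the hard part isn't the mechanism, it's
agreeing on the caps.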