* [9fans] security questions
From: Devon H. O'Dell @ 2009-04-16 17:47 UTC
To: Fans of the OS Plan 9 from Bell Labs
In the interests of academia (and from the idea of setting up a public
Plan 9 cluster) comes the following mail. I'm sure people will brush
some of this off as a non-issue, but I'm curious what others think.
It doesn't seem that Plan 9 does much to protect the kernel from
memory / resource exhaustion. When it comes to kernel memory, the
standard philosophy of ``add more memory'' doesn't quite cut it:
there's a limited amount for the kernel, and if a user can exhaust
that, it's not a Good Thing. (Another argument I heard today was
``deal with the offending user swiftly,'' but that does little against
full disclosure). There are two potential ways to combat this (though
there are additional advantages to the existence of both):
1) Introduce more memory pools with tunable limits.
The idea here would be to make malloc() default to its current
behavior: just allocate space from available arenas in
mainmem. An additional interface (talloc?) would be provided for
type-based allocations. These would be additional pools that serve to
store specific kernel data structures (Blocks, Chans, Procs, etc.).
This provides two benefits:
o Protection against kernel memory starvation by exhaustion of a
specific resource
o Some level of debugalloc-style memory information without all of the overhead
I suppose it would be possible to allow for tunable settings as well
by providing a FS to set e.g. minarea or maxsize.
The benefit to this approach is that we would have an extremely easy
way to add new constraints as needed (simply create another tunable
pool), without changing the API or interfering with multiple
subsystems, outside of changing malloc calls if needed. The limits
could be checked on a per-process or per-user (or both) basis.
We already have a pool for kernel memory, and a pool for kernel draw
memory. Seems reasonable that we could have some for network buffers,
files, processes and the like.
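To sketch what I mean (this is kernel-style C, but Typepool, talloc,
and all the details here are hypothetical, not working code):

typedef struct Typepool Typepool;
struct Typepool
{
    Lock;
    char *name;     /* e.g. "chan", "block", "proc" */
    ulong size;     /* bytes currently charged to this pool */
    ulong maxsize;  /* tunable ceiling; 0 means unlimited */
};

void*
talloc(Typepool *p, ulong n)
{
    void *v;

    lock(p);
    if(p->maxsize != 0 && p->size+n > p->maxsize){
        unlock(p);
        return nil;     /* caller decides how to fail */
    }
    p->size += n;
    unlock(p);
    v = malloc(n);
    if(v == nil){
        lock(p);
        p->size -= n;   /* undo the charge */
        unlock(p);
    }
    return v;
}

A small FS exposing name, size and maxsize for each pool would then
provide the tunables mentioned above.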
2) Introduce a `devlimit' device, which imposes limits on specific
kernel resources. The limits would be set on either a per-process or
per-user basis (or both, depending on the nature of the limit).
#2 seems more like the unixy rlimit model, and the more I think about
it, the less I like it. It's a bit more difficult to `get right', it
doesn't `feel' very Plan 9-ish, and adding new limits requires more
incestuous code. However, the limits are more finely tuned.
Just wondering what thoughts people have on this: which approach seems
more feasible, whether anybody feels it's a `good idea,' and the like. I got mixed
(though mostly positive from those who understood the issue) feedback
on IRC when I brought up the problem. I don't have any sample cases in
which it would be possible to starve the kernel of memory.
--dho
* Re: [9fans] security questions
From: erik quanstrom @ 2009-04-16 18:30 UTC
To: 9fans
> The benefit to this approach is that we would have an extremely easy
> way to add new constraints as needed (simply create another tunable
> pool), without changing the API or interfering with multiple
> subsystems, outside of changing malloc calls if needed. The limits
> could be checked on a per-process or per-user (or both) basis.
>
> We already have a pool for kernel memory, and a pool for kernel draw
> memory. Seems reasonable that we could have some for network buffers,
> files, processes and the like.
the network buffers are limited by the size of the network queues,
or they are pre-allocated.
you're right though. there are a number of dynamically allocated
resources in the kernel that are not managed. in the old days,
the kernel didn't dynamically allocate anything. when things are
statically allocated you don't have this problem, but everyone
complains about running out of $whatit structures. if you
do dynamic allocation, then people complain about $unimportant
resource starving $important resource.
assuming a careful kernel and assuming that hostile users are
booted, resource contention is just about picking who loses.
and you're not going to get agreement that you're doing a good
job when you're picking losers. ☺
- erik
* Re: [9fans] security questions
From: Venkatesh Srinivas @ 2009-04-16 19:14 UTC
To: Fans of the OS Plan 9 from Bell Labs
Devlimit / Rlimit is less than ideal - the resource limits aren't
adaptive to program needs and to resource availability. They would be
describing resources that user programs have very little visible
control over (kernel resources), except by changing their syscall mix
or giving up a segment or so. Or failing outright.
Prohibitions per-user are kinda bad in general - what if you want to
run a potentially hostile (or more likely buggy) program? You can
already run it in its own ns, but having it be able to stab you via
kernel resources, about which you can do nothing, is bad.
The typed allocator is worth looking at for speed reasons - the slab
allocator and customalloc have shown that it's faster (from the
perspective of allocation time, fragmentation) to do things that way.
But I don't really see it addressing the problem? Now the constraints
are per-resource, but they're still magic constraints.
Something that might be interesting would be for each primitive pgroup
to be born with a maximum percentage 'under pressure' associated with
each interesting resource. Child pgroups would allocate out of their
parent's pgroup limits. Until there is pressure, limits could be
unenforced, leading to higher utilization than magic constants in
rlimit.
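Very roughly, and purely as a sketch (none of these names exist):

/* charge a resource up the pgroup chain; enforce each ceiling
   only while that resource is under pressure */
typedef struct Plimit Plimit;
struct Plimit
{
    Plimit *parent;
    long used;
    long max;   /* derived from the pgroup's percentage */
};

int
charge(Plimit *l, long n, int pressure)
{
    Plimit *p;

    for(p = l; p != nil; p = p->parent)
        if(pressure && p->used+n > p->max)
            return 0;   /* refuse only under pressure */
    for(p = l; p != nil; p = p->parent)
        p->used += n;
    return 1;
}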
To give a chance for a process to give up some of its resources
(caches, recomputable data) under pressure, there could be a
per-process /dev/halp device, which a program could read; reads would
block until a process was over its limit and something else needed
some more soup. If the app responds, great. Otherwise, something like
culling it or swapping it out (assuming that swap still worked :)) or
even slowing down its allocation rate artificially might be answers...
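The program's side of that might look like this, assuming such a
device existed (/dev/halp and freecaches() are made up):

#include <u.h>
#include <libc.h>

void freecaches(void);  /* app-specific: drop recomputable data */

/* helper proc: block reading /dev/halp; a read returning means
   the kernel says we're over a limit, so shed what we can */
void
halpproc(void)
{
    char buf[64];
    int fd;

    fd = open("/dev/halp", OREAD);
    if(fd < 0)
        return;
    while(read(fd, buf, sizeof buf) > 0)
        freecaches();
    close(fd);
}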
If people are interested, there is some work going on in a research
kernel called Viengoos
(http://www.gnu.org/software/hurd/microkernel/viengoos.html) (gasp!
the hurd!) trying to address pretty much the same problem...
-- vs
* Re: [9fans] security questions
From: Devon H. O'Dell @ 2009-04-16 20:10 UTC
To: Fans of the OS Plan 9 from Bell Labs
2009/4/16 Venkatesh Srinivas <me@acm.jhu.edu>:
> Devlimit / Rlimit is less than ideal - the resource limits aren't
> adaptive to program needs and to resource availability. They would be
> describing resources that user programs have very little visible
> control over (kernel resources), except by changing their syscall mix
> or giving up a segment or so. Or failing outright.
Right, but that's part of the point. They have very little visible
control over said resources, but the kernel does need to impose
limitations on the allowable resources, more below. Either way, I
agree. This form of resource limitation sucks. My reasoning is
different, however: what worries me is the number of places where you
need to add new code and verify that any future changes respect these
sorts of limitations. That is much more difficult to do than when the
limitation is built into the allocator.
> Prohibitions per-user are kinda bad in general - what if you want to
> run a potentially hostile (or more likely buggy) program? You can
> already run it in its own ns, but having it be able to stab you via
> kernel resources, about which you can do nothing, is bad.
This depends on your perspective. If you are an administrator, and you
are running a system that provides access to a plethora of users, your
viewpoint is different than if you are a programmer, writing an
application with expensive needs. A user who wants to run a
potentially hostile program should not be able to affect the system to
the point that other users have their own negative experiences. A user
running a buggy program that hogs a ton of memory is not such a big
deal. Buy more memory. A user running a buggy program that runs the
kernel out of resources, causing it to halt, is a big deal.
> The typed allocator is worth looking at for speed reasons - the slab
> allocator and customalloc have shown that it's faster (from the
> perspective of allocation time, fragmentation) to do things that way.
> But I don't really see it addressing the problem? Now the constraints
> are per-resource, but they're still magic constraints.
This is the pitfall of all tunables. Unless someone pulls out a
calculator (or is mathematically brilliant), figuring out exact
numbers for X number of R resources spread between N users on a system
with Y amount of memory is silly.
Fortunately, an administrator makes educated best guesses based upon
these same factors:
1) The expected needs of the users of the system (and in other cases,
the absolute maximum needs of the users)
2) The limitations of the system itself. My laptop has 2GB of RAM; the
Plan 9 kernel only sits in 256MB of that. Now, assuming I have a
program that's able to allocate 10,000 64-byte structures a second
(640KB/s), it can exhaust that 256MB and panic the system in a matter
of minutes. Does that mean I want to limit my users to under 10,000 of
those structures? Not necessarily, but if I expect 40 users at a given
time, I might want to make sure that number is well under 100,000.
While it may not be perfectly ideal, it allows the administrator to
maintain control over the system. Additionally, there's typically a
way to turn the limits off (echo 0 > /dev/constraint/resource,
hypothetically speaking). In this respect, I don't see how it doesn't
address the problem...
...Unless you consider that using pools for granular limitations isn't
the best idea. In that light, perhaps additional pools aren't the
correct answer: while creating an individual pool per limited resource
implements a hard limit on that resource, each pool is also required
to have a fixed maximum size. So if you ditch that idea and just make
typed allocations, the problem is `solved':
When an allocation takes place, we check various heuristics to
determine whether or not the allocation is valid. Does the user have
over X Fids? Does the process hold open more than Y ports?
If these heuristics fail, the memory is not allocated, and the program
takes whatever direction it takes when it does not have resources.
(Crash, complain, happily move forward, whatever).
If they pass, the program gets what it needs, and {crashes, complains,
happily moves forward}.
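Illustratively (newfid(), nfids(), Maxfid and fidpool are invented
names for this sketch, not a real patch):

/* gate in the allocation path: refuse before allocating */
Fid*
newfid(void)
{
    if(nfids(up->user) >= Maxfid)
        error("fid limit exceeded");    /* unwound by waserror() */
    return talloc(&fidpool, sizeof(Fid));
}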
One can indirectly (and more consistently) limit the number of
allocated resources in this fashion (indeed, the number of open file
descriptors) by determining the amount of memory consumed by that
resource as proportional to the size of the resource. If I as a user
have 64,000 allocations of type Foo, and struct Foo is 64 bytes, then
I hold 1,000 Foos.
The one unfortunate downside to this is that implementing this as an
allocation limit does not make it `provably safe.' That is to say, if
I create a kernel subsystem that allocates Foos, and I allocate
everything with T_MANAGED, there is no protection on the number of
Foos I allocate (assuming T_MANAGED means that this is a memory
allocation I manage myself). It's therefore not provably safer, in
terms of crashing the kernel, and I haven't been able to come up with
an idea that is provably safe (unless type determination is done using
getcallerpc, which would result in an insanely large number of
tunables and would be completely impractical).
Extending the API in this fashion, however, almost ensures that this
case will not occur. Since the programmer must specify a type for
allocation, they must be aware of the API and the reasoning (or at
least we'd all like to hope so). If a malicious person is able to load
unsafe code into the kernel, you're screwed anyway. So really, this
project is more to protect the diligent and less to help the lazy.
(The lazy are all going to have for (i in `{ls /dev/constraint}) {
echo 0 > $i } in their {term,cpu}rc anyway.)
> Something that might be interesting would be for each primitive pgroup
> to be born with a maximum percentage 'under pressure' associated with
> each interesting resource. Child pgroups would allocate out of their
> parent's pgroup limits. Until there is pressure, limits could be
> unenforced, leading to higher utilization than magic constants in
> rlimit.
>
> To give a chance for a process to give up some of its resources
> (caches, recomputable data) under pressure, there could be a
> per-process /dev/halp device, which a program could read; reads would
> block until a process was over its limit and something else needed
> some more soup. If the app responds, great. Otherwise, something like
> culling it or swapping it out (assuming that swap still worked :)) or
> even slowing down its allocation rate artificially might be answers...
Programs consuming user memory aren't an issue, in case that wasn't
clear. It's programs that consume kernel memory indirectly due to
their own behavior. I think you get this based on some of the points
you raised earlier, but I just wanted to make sure. For instance,
reading from a resource is extremely unlikely to cause issues inside
the kernel -- to read the data, all the structures for passing the
data must already be allocated. If the data came over a socket (as
erik pointed out), that memory is pre-allocated or of a fixed size.
> If people are interested, there is some work going on in a research
> kernel called Viengoos
> (http://www.gnu.org/software/hurd/microkernel/viengoos.html) (gasp!
> the hurd!) trying to address pretty much the same problem...
I read the paper, but it's hard to see how exactly this would apply to
this problem. There's a strong bias there towards providing programs
with better scheduling / more memory / more allowable resources if the
program is well behaved. This is interesting for improving user
experience of programs, but I feel like there are two drawbacks to
that outcome, given the problem I'm trying to solve:
1) Programmers are required to do more work to guarantee that their
programs won't be affected by the system
2) You still have to have hard limits (in this case, arbitrarily based
on percentage) to avoid a user program running the kernel out of
resources. At the end of the day, you must have a limit that is lower
than the maximum amount minus any overhead from managed resources. Any
solution will overcommit, but a percentage-based solution seems more
difficult to tune.
Additionally, it's much more complex (and thus much more prone to
error in multiple cases). Since the heuristics for determining the
resource limits would be automated, it's not necessarily provable that
someone couldn't find a way to subvert the limitations. It adds
complexity to the scheduler, to the slab allocator, and to any area of
code that would need to check resources (someone adding a new resource
then needs to do much more work than registering a new memory type and
using that for allocation). Quite frankly, the added complexity scares
me a bit.
Perhaps I am missing something, so if you can address those points,
that would be good.
> -- vs
--dho
* Re: [9fans] security questions
From: Devon H. O'Dell @ 2009-04-16 20:19 UTC
To: Fans of the OS Plan 9 from Bell Labs
> One can indirectly (and more consistently) limit the number of
> allocated resources in this fashion (indeed, the number of open file
> descriptors) by determining the amount of memory consumed by that
> resource as proportional to the size of the resource. If I as a user
> have 64,000 allocations of type Foo, and struct Foo is 64 bytes, then
> I hold 1,000 Foos.
And by this, I clearly mean 64,000 bytes of allocated Foos.
--dho
* Re: [9fans] security questions
From: erik quanstrom @ 2009-04-16 20:51 UTC
To: 9fans
have you taken a look at the protection measures already
built into the kernel like smalloc?
> While it may not be perfectly ideal, it allows the administrator to
> maintain control over the system.
being a system administrator, i dislike any ideas that require
extra administration. for the same reason, content-based email
filters are impossible to maintain yourself. you can't keep up
with the threats.
otoh, if you panic because you run out of resources or whatever, the
consequence should be a reboot. at most you lose some editor
state and need to spend a few minutes logging back in.
if you allow users to log into your fileserver, you're asking
for trouble.
the one resource that does need some fancier management is
procs. the problem is that out-of-processes tends to act like
livelock. it would probably be an improvement to set an
ooprocs variable when out of procs and clear it when some
fraction (say 1/10) becomes free. then one could panic if
the out-of-processes condition persists for, say, 30 seconds.
(i haven't had this problem since i added some better spam
filtering.)
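roughly this (names and numbers are illustrative, not a patch):

/* called periodically, e.g. from the clock routine */
static int ooprocs;
static ulong oopticks;

void
oopcheck(ulong nused, ulong nmax, ulong ticks)
{
    if(nused == nmax){
        if(!ooprocs){
            ooprocs = 1;
            oopticks = ticks;
        }else if(TK2SEC(ticks - oopticks) >= 30)
            panic("out of procs for 30 seconds");
    }else if(nused <= nmax - nmax/10)
        ooprocs = 0;
}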
- erik
* Re: [9fans] security questions
From: Devon H. O'Dell @ 2009-04-16 21:49 UTC
To: Fans of the OS Plan 9 from Bell Labs
2009/4/16 erik quanstrom <quanstro@quanstro.net>:
> have you taken a look at the protection measures already
> built into the kernel like smalloc?
At least in FreeBSD, you can't sleep in an interrupt thread. I suppose
that's probably also the case in Plan 9 interrupt handlers, and this
would mitigate that situation.
>> While it may not be perfectly ideal, it allows the administrator to
>> maintain control over the system.
>
> being a system administrator, i dislike any ideas that require
> extra administration. for the same reason, content-based email
> filters are impossible to maintain yourself. you can't keep up
> with the threats.
There is that sentiment as well. I'm writing the code anyway, because
at least the API implementation is smaller than the amount of text in
emails I've already written about it (not to mention IRC discussion).
I tend to feel like there should be reasonably large defaults so that
you don't have to muck with it. Or maybe it's turned off by default. I
don't know. Either way, I'm not particularly expecting it to be
accepted as a patch, but I'm willing to be pleasantly surprised.
Besides, I haven't touched my sources tree for a good two years. It
needs something fresh :).
> otoh, if you panic because you run out of resources or whatever, the
> consequence should be a reboot. at most you lose some editor
> state and need to spend a few minutes logging back in.
Depends again on the application. If you're talking about a terminal,
yes. If you're talking about a CPU server where someone is working on
code, someone else is writing a presentation, and yet another person
is in the middle of a video transcode, you're talking about a lot of
wasted time, potentially.
> if you allow users to log into your fileserver, you're asking
> for trouble.
Fileserver, I agree. I don't think a sane person would allow
`untrusted' user access, let alone anonymous access. On a CPU server,
you kind of have to allow access to potentially untrusted users.
That's kind of the definition. (Hell, even paying users aren't
necessarily trustworthy. Tons of web hosts get exploited each year by
paying customers).
> the one resource that does need some fancier management is
> procs. the problem is that out-of-processes tends to act like
> livelock. it would probably be an improvement to set an
> ooprocs variable when out of procs and clear it when some
> fraction (say 1/10) becomes free. then one could panic if
> the out-of-processes condition persists for, say, 30 seconds.
Maybe I'll look at that after this.
> (i haven't had this problem since i added some better spam
> filtering.)
>
> - erik
Thanks for your insight so far, in any case :)
--dho
* Re: [9fans] security questions
From: erik quanstrom @ 2009-04-16 22:19 UTC
To: 9fans
On Thu Apr 16 17:51:42 EDT 2009, devon.odell@gmail.com wrote:
> 2009/4/16 erik quanstrom <quanstro@quanstro.net>:
> > have you taken a look at the protection measures already
> > built into the kernel like smalloc?
>
> At least in FreeBSD, you can't sleep in an interrupt thread. I suppose
> that's probably also the case in Plan 9 interrupt handlers, and this
> would mitigate that situation.
plan 9 doesn't have interrupt threads, but that's beside the point.
interrupts are driven by the hardware, not users. so smalloc, which
is used to allow user space to wait for memory if it is not currently
available, doesn't make any sense.
having the potential for running out of memory in an interrupt
handler might be a sign that a little code reorg is in order, if you
are worried about this sort of thing. (and even if you're not.)
in any event, i think there is more code to deal with these problems
in the kernel than first meets the eye. much of it is small and, if you're
not looking for it, easy to miss.
> Depends again on the application. If you're talking about a terminal,
> yes. If you're talking about a CPU server where someone is working on
> code, someone else is writing a presentation, and yet another person
> is in the middle of a video transcode, you're talking about a lot of
> wasted time, potentially.
and potentially, i could win the lottery. ☺.
i have had exactly 1 out-of-resource reboot in the last 18 months.
without real data on what and where the problems are, i would
think this would become a difficult issue.
- erik
* Re: [9fans] security questions
From: Devon H. O'Dell @ 2009-04-16 23:36 UTC
To: Fans of the OS Plan 9 from Bell Labs
2009/4/16 erik quanstrom <quanstro@quanstro.net>:
> On Thu Apr 16 17:51:42 EDT 2009, devon.odell@gmail.com wrote:
>> 2009/4/16 erik quanstrom <quanstro@quanstro.net>:
>> > have you taken a look at the protection measures already
>> > built into the kernel like smalloc?
>>
>> At least in FreeBSD, you can't sleep in an interrupt thread. I suppose
>> that's probably also the case in Plan 9 interrupt handlers, and this
>> would mitigate that situation.
>
> plan 9 doesn't have interrupt threads, but that's beside the point.
>
> interrupts are driven by the hardware, not users. so smalloc, which
> is used to allow user space to wait for memory if it is not currently
> available, doesn't make any sense.
My misunderstanding then, as smalloc is available in port/alloc.c,
which is also compiled into the kernel. I'm not concerned about oom
conditions in userland.
> having the potential for running out of memory in an interrupt
> handler might be a sign that a little code reorg is in order, if you
> are worried about this sort of thing. (and even if you're not.)
The potential for running out of memory in an interrupt handler exists
if a user has found a way to consume kernel resources from userland
and the interrupt needs to allocate that extra 1 byte.
> in any event, i think there is more code to deal with these problems
> in the kernel than first meets the eye. much of it is small and, if you're
> not looking for it, easy to miss.
I don't think so. You can get very specific about the problem or you
can get very generic. This is a generic implementation that would
allow for dealing with such problems as they arise, and actually
dealing with them in an easy, extensible fashion, without having to
add huge support code diffs for calculation of every resource you're
tracking (as is the case with rlimits, which require you to add
functions for each limit, hooks everywhere you want to track them,
shell support, and sysctls).
From a codebase perspective, it's not a lot more code than you'd
think. It's very little code so far, and I'm about halfway finished
with the support part. Initial implementation will probably be
suboptimal, but I think useful for proof of concept.
Identifying the areas that need attention is the difficult part, but
I'll address that further down.
>> Depends again on the application. If you're talking about a terminal,
>> yes. If you're talking about a CPU server where someone is working on
>> code, someone else is writing a presentation, and yet another person
>> is in the middle of a video transcode, you're talking about a lot of
>> wasted time, potentially.
>
> and potentially, i could win the lottery. ☺.
And potentially, this code would go into Plan 9. ☺ ☺ ☺.
I think those chances are a little smaller. Fileservers and terminal
servers, I don't think this is a big issue. CPU servers, I think it
is. I'm considering setting up a public Plan 9 cluster, though how
useful it will be remains to be seen. Part of the purpose of
this endeavor is to find exactly these kinds of conditions. Whether
anybody is interested remains to be seen, but perhaps some incentives
can be provided. I don't know.
> i have had exactly 1 out-of-resource reboot in the last 18 months.
> without real data on what and where the problems are, i would
> think this would become a difficult issue.
I don't know what your use case is, though I know that you probably
use the system more heavily than I. I think with people trying to find
issues, it could be a much easier endeavor, and I think it's a fun one
to address.
The fact that there was one proves that these issues can occur
theoretically. All it needs is identification and reproduction.
We're shielded in part by our small userbase and relative lack of
interest in examining code and auditing. But that's not security.
> - erik
--dho
* Re: [9fans] security questions
From: erik quanstrom @ 2009-04-17 0:00 UTC
To: 9fans
> > plan 9 doesn't have interrupt threads, but that's beside the point.
> >
> > interrupts are driven by the hardware, not users. so smalloc, which
> > is used to allow user space to wait for memory if it is not currently
> > available, doesn't make any sense.
>
> My misunderstanding then, as smalloc is available in port/alloc.c,
> which is also compiled into the kernel. I'm not concerned about oom
> conditions in userland.
smalloc is used in the kernel, but only when running with up (user
process) and only when dealing with a "user's" data. for example,
when gathering up the response to a read on /proc/%d/regs or
whatever.
interrupts are quite different. there are lots of things that are
a bad idea in interrupt context. but one can wakeup a kernel
proc that's sitting there waiting to deal with all the hair.
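from memory, smalloc is roughly this (see port/alloc.c for the
real thing):

void*
smalloc(ulong size)
{
    void *v;

    for(;;){
        v = malloc(size);
        if(v != nil)
            break;
        /* no 'up' in interrupt context, so this can't run there */
        tsleep(&up->sleep, return0, 0, 100);
    }
    memset(v, 0, size);
    return v;
}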
> > having the potential for running out of memory in an interrupt
> > handler might be a sign that a little code reorg is in order, if you
> > are worried about this sort of thing. (and even if you're not.)
>
> The potential for running out of memory in an interrupt handler exists
> if a user has found a way to consume kernel resources from userland
> and the interrupt needs to allocate that extra 1 byte.
doctor, doctor it hurts when i do this! there's a simple solution.
never allocate in interrupt context. i haven't ever felt the need
to allocate in interrupt context. in fact drivers like 82598 or
the myricom driver never allocate during operation.
> > in any event, i think there is more code to deal with these problems
> > in the kernel than first meets the eye. much of it is small and, if you're
> > not looking for it, easy to miss.
>
> I don't think so.
are you telling me this code is not there, or that you've seen all of it?
> > i have had exactly 1 out-of-resource reboot in the last 18 months.
> > without real data on what and where the problems are, i would
> > think this would become a difficult issue.
>
> I don't know what your use case is, though I know that you probably
> use the system more heavily than I. I think with people trying to find
> issues, it could be a much easier endeavor, and I think it's a fun one
> to address.
i have 35 people working hard to kill the systems with 700MB mailboxes,
and other tricks.
but what they aren't doing is writing fork bomb programs or programs
that fuzz device drivers.
> We're shielded in part by our small userbase and relative lack of
> interest in examining code and auditing. But that's not security.
i could argue that what you're looking into isn't the type of
security that plan 9 has ever provided. i think the assumption
has been that the environment has been a work group and
there aren't any hostile users. there are lots of problems when
you move away from that assumption.
i would think the *first* order of business would be to encrypt
all traffic to the fs so users can't use snoopy to capture other
users' fs traffic.
- erik
* Re: [9fans] security questions
From: Devon H. O'Dell @ 2009-04-17 1:25 UTC
To: Fans of the OS Plan 9 from Bell Labs
2009/4/16 erik quanstrom <quanstro@quanstro.net>:
>>
>> My misunderstanding then, as smalloc is available in port/alloc.c,
>> which is also compiled into the kernel. I'm not concerned about oom
>> conditions in userland.
>
> smalloc is used in the kernel, but only when running with up (user
> process) and only when dealing with a "user's" data. for example,
> when gathering up the response to a read on /proc/%d/regs or
> whatever.
>
> interrupts are quite different. there are lots of things that are
> a bad idea in interrupt context. but one can wakeup a kernel
> proc that's sitting there waiting to deal with all the hair.
Right, we're saying the same thing backwards. I just am not sure why
smalloc was brought up. Yes, it is able to sleep until memory is
available for the operation, but it's not used *everywhere*.
>> > having the potential for running out of memory in an interrupt
>> > handler might be a sign that a little code reorg is in order, if you
>> > are worried about this sort of thing. (and even if you're not.)
>>
>> The potential for running out of memory in an interrupt handler exists
>> if a user has found a way to consume kernel resources from userland
>> and the interrupt needs to allocate that extra 1 byte.
>
> doctor, doctor it hurts when i do this! there's a simple solution.
> never allocate in interrupt context. i haven't ever felt the need
> to allocate in interrupt context. in fact drivers like 82598 or
> the myricom driver never allocate during operation.
It was a point, not an argument. You're dismissing it altogether, I'm
saying that it's a potential issue. No, I haven't read the interrupt
code for every driver. It could be a complete non-issue, and it
probably is. I'm just discussing it :).
>> > in any event, i think there is more code to deal with these problems
>> > in the kernel than first meets the eye. much of it is small and, if you're
>> > not looking for it, easy to miss.
>>
>> I don't think so.
>
> are you telling me this code is not there, or that you've seen all of it?
I think we have a miscommunication, because you cut out the rest of
the context of my explanation, so I don't know how to address your
question. Neither; it wasn't the point I was making.
>> > i have had exactly 1 out-of-resource reboot in the last 18 months.
>> > without real data on what and where the problems are, i would
>> > think this would become a difficult issue.
>>
>> I don't know what your use case is, though I know that you probably
>> use the system more heavily than I. I think with people trying to find
>> issues, it could be a much easier endeavor, and I think it's a fun one
>> to address.
>
> i have 35 people working hard to kill the systems with 700MB mailboxes,
> and other tricks.
Which is more heavily than I, as I thought :)
> but what they aren't doing is writing fork bomb programs or programs
> that fuzz device drivers.
Right, and that's a real threat.
>> We're shielded in part by our small userbase and relative lack of
>> interest in examining code and auditing. But that's not security.
>
> i could argue that what you're looking into isn't the type of
> security that plan 9 has ever provided. i think the assumption
> has been that the environment has been a work group and
> there aren't any hostile users. there are lots of problems when
> you move away from that assumption.
Yes, there are, but it doesn't mean that it's an invalid assumption.
If you're arguing that my point is invalid because it's not a proper
application of Plan 9, I'd argue that Plan 9 isn't fit for the
Internet, where there are malicious users and script kiddies.
That said, I don't disagree. Perhaps Plan 9's environment hasn't been
assumed to contain malicious users. Which brings up the question: Can
Plan 9 be safely run in a potentially malicious environment? Based on
this argument, no, it cannot. Since I want to run Plan 9 in this sort
of environment (and thus move away from that assumption), I want to
address these problems, and I kind of feel like it's weird to be
essentially told, ``Don't do that.''
If you don't want to run Plan 9 there, ok. Maybe I'm the only one on
the list who does. Maybe someone will come out later who also wants
to.
Either way, the code is in development, and I'm going to finish it.
Don't use it if you don't want to :)
> i would think the *first* order of business would be to encrypt
> all traffic to the fs so users can't use snoopy to capture other
> users' fs traffic.
This is another serious issue. Perhaps I'll tackle it next.
> - erik
--dho
* Re: [9fans] security questions
From: erik quanstrom @ 2009-04-17 1:54 UTC
To: 9fans
> > interrupts are quite different. there are lots of things that are
> > a bad idea in interrupt context. but one can wakeup a kernel
> > proc that's sitting there waiting to deal with all the hair.
>
> Right, we're saying the same thing backwards. I just am not sure why
> smalloc was brought up. Yes, it is able to sleep until memory is
> available for the operation, but it's not used *everywhere*.
that's part of my point. sometimes smalloc is appropriate,
sometimes it is not. it depends on other things than just
what you are going to use the allocation for.
do you have a particular, concrete example?
> > but what they aren't doing is writing fork bomb programs or programs
> > that fuzz device drivers.
>
> Right, and that's a real threat.
but you can't defend against it unless you decide you
can have exactly n processes per system and each one can
have 1/n * userlandsize bytes of memory. you could divvy
up memory by user to be a little bit more flexable, but
providing hard guarantees is going to be very limiting.
don't forget about the stack overcommitment issue.
> Yes, there are, but it doesn't mean that it's an invalid assumption.
> If you're arguing that my point is invalid because it's not a proper
> application of Plan 9, I'd argue that Plan 9 isn't fit for the
> Internet, where there are malicious users and script kiddies.
i just stated what i thought the historical situation was. the
point was only that changing direction will be difficult.
i don't think the second part of your argument holds.
defending from a local threat (like a fork bomb) is much different
from defending against script kiddies. it's also much harder.
also, script kiddies don't do a good job of targeting plan 9.
> If you don't want to run Plan 9 there, ok. Maybe I'm the only one on
> the list who does. Maybe someone will come out later who also wants
> to.
if i didn't think this could be useful, i wouldn't
bother replying.
- erik
* Re: [9fans] security questions
From: Russ Cox @ 2009-04-17 1:59 UTC
To: Fans of the OS Plan 9 from Bell Labs
> That said, I don't disagree. Perhaps Plan 9's environment hasn't been
> assumed to contain malicious users. Which brings up the question: Can
> Plan 9 be safely run in a potentially malicious environment? Based on
> this argument, no, it cannot. Since I want to run Plan 9 in this sort
> of environment (and thus move away from that assumption), I want to
> address these problems, and I kind of feel like it's weird to be
> essentially told, ``Don't do that.''
If you were trying to run Plan 9 on systems that were allowed
to flip 1% of the bits in memory at random each day, we'd tell
you "don't do that" too.
Linux and FreeBSD and OS X can't be run in the kind of
environment you describe either. If people are being malicious
and trying to take down the system, the right fix is to kick them off.
If you want true isolation between the users you should give
them each a VM, not a Plan 9 account.
Russ
* Re: [9fans] security questions
From: Bakul Shah @ 2009-04-17 2:07 UTC
To: Fans of the OS Plan 9 from Bell Labs
On Thu, 16 Apr 2009 21:25:06 EDT "Devon H. O'Dell" <devon.odell@gmail.com> wrote:
> That said, I don't disagree. Perhaps Plan 9's environment hasn't been
> assumed to contain malicious users. Which brings up the question: Can
> Plan 9 be safely run in a potentially malicious environment? Based on
> this argument, no, it cannot. Since I want to run Plan 9 in this sort
> of environment (and thus move away from that assumption), I want to
> address these problems, and I kind of feel like it's weird to be
> essentially told, ``Don't do that.''
Why not give each user a virtual plan9? Not like vmware/qemu
but more like FreeBSD's jail(8), "done more elegantly"[TM]!
To deal with potentially malicious users you can virtualize
resources, backed by limited/configurable real resources.
The other thought that comes to mind is to consider something
like class based queuing (from the networking world). That
is, allow choice of different allocation/scheduling/resource
use policies and allow further subdivision. Then you can give
preferential treatment to known good guys. Other users can
still experiment to their heart's content within the
resources allowed them.
My point being: think of a consistent high-level model that
you like and then worry about implementation details.
* Re: [9fans] security questions
From: Devon H. O'Dell @ 2009-04-17 2:17 UTC
To: Fans of the OS Plan 9 from Bell Labs
2009/4/16 erik quanstrom <quanstro@quanstro.net>:
>> Right, we're saying the same thing backwards. I just am not sure why
>> smalloc was brought up. Yes, it is able to sleep until memory is
>> available for the operation, but it's not used *everywhere*.
>
> that's part of my point. sometimes smalloc is appropriate,
> sometimes it is not. it depends on other things than just
> what you are going to use the allocation for.
>
> do you have a particular, concrete example?
No, you brought it up first, asking if I had looked at it. I think
this particular thread of discussion is agreed upon and tangential.
>> > but what they aren't doing is writing fork bomb programs or programs
>> > that fuzz device drivers.
>>
>> Right, and that's a real threat.
>
> but you can't defend against it unless you decide you
> can have exactly n processes per system and each one can
> have 1/n * userlandsize bytes of memory. you could divvy
> up memory by user to be a little bit more flexible, but
> providing hard guarantees is going to be very limiting.
Yeah, that's part of the reason I posted here: to see if other
people had different ideas. This usually isn't a bad place to pull in
thoughts and solutions. And of course processes aren't the only
`potential evil.'
I agree that my solution may not be the best one. But I can't come up
with anything better right now, and at least this seems
semi-Plan-9-ish.
> don't forget about the stack overcommitment issue.
Heh.
>> Yes, there are, but it doesn't mean that it's an invalid assumption.
>> If you're arguing that my point is invalid because it's not a proper
>> application of Plan 9, I'd argue that Plan 9 isn't fit for the
>> Internet, where there are malicious users and script kiddies.
>
> i just stated what i thought the historical situation was. the
> point was only that changing direction will be difficult.
This thread certainly proves that :)
> i don't think the second part of your argument holds.
> defending from a local threat (like a fork bomb) is much different
> from defending against script kiddies. it's also much harder.
>
> also, script kiddies don't do a good job of targeting plan 9.
Point taken.
>> If you don't want to run Plan 9 there, ok. Maybe I'm the only one on
>> the list who does. Maybe someone will come out later who also wants
>> to.
>
> if i didn't think this could be useful, i wouldn't
> bother replying.
Point taken :). So are there other / better ideas? Did anybody else
read that paper about Viengoos?
> - erik
--dho
* Re: [9fans] security questions
From: Devon H. O'Dell @ 2009-04-17 2:19 UTC
To: Fans of the OS Plan 9 from Bell Labs
2009/4/16 Bakul Shah <bakul+plan9@bitblocks.com>:
> On Thu, 16 Apr 2009 21:25:06 EDT "Devon H. O'Dell" <devon.odell@gmail.com> wrote:
>> That said, I don't disagree. Perhaps Plan 9's environment hasn't been
>> assumed to contain malicious users. Which brings up the question: Can
>> Plan 9 be safely run in a potentially malicious environment? Based on
>> this argument, no, it cannot. Since I want to run Plan 9 in this sort
>> of environment (and thus move away from that assumption), I want to
>> address these problems, and I kind of feel like it's weird to be
>> essentially told, ``Don't do that.''
>
> Why not give each user a virtual plan9? Not like vmware/qemu
> but more like FreeBSD's jail(8), "done more elegantly"[TM]!
> To deal with potentially malicious users you can virtualize
> resources, backed by limited/configurable real resources.
I saw a talk about Mult at DCBSDCon. I think it's a much better idea
than FreeBSD jail(8), and its security is provable.
See also: http://mult.bsd.lv/
I do like this idea.
> The other thought that comes to mind is to consider something
> like class based queuing (from the networking world). That
> is, allow choice of different allocation/scheduling/resource
> use policies and allow further subdivision. Then you can give
> preferential treatment to known good guys. Other users can
> still experiment to their heart's content within the
> resources allowed them.
>
> My point being think of a consistent high level model that
> you like and then worry about implementation details.
That's also another interesting idea I hadn't considered.
Anybody else with thoughts on these?
--dho
* Re: [9fans] security questions
From: erik quanstrom @ 2009-04-17 2:23 UTC
To: 9fans
On Thu Apr 16 22:18:35 EDT 2009, devon.odell@gmail.com wrote:
> > i just stated what i thought the historical situation was. the
> > point was only that changing direction will be difficult.
>
> This thread certainly proves that :)
a 9fans thread proves nothing.
- erik
* Re: [9fans] security questions
From: Devon H. O'Dell @ 2009-04-17 2:33 UTC
To: Fans of the OS Plan 9 from Bell Labs
2009/4/16 erik quanstrom <quanstro@quanstro.net>:
> On Thu Apr 16 22:18:35 EDT 2009, devon.odell@gmail.com wrote:
>> > i just stated what i thought the historical situation was. the
>> > point was only that changing direction will be difficult.
>>
>> This thread certainly proves that :)
>
> a 9fans thread proves nothing.
Conceptually, anyway. Why is everyone always so hell-bent on hair-splitting? :P
> - erik
>
>
* Re: [9fans] security questions
From: J.R. Mauro @ 2009-04-17 2:43 UTC
To: Fans of the OS Plan 9 from Bell Labs
On Thu, Apr 16, 2009 at 10:33 PM, Devon H. O'Dell <devon.odell@gmail.com> wrote:
> 2009/4/16 erik quanstrom <quanstro@quanstro.net>:
>> On Thu Apr 16 22:18:35 EDT 2009, devon.odell@gmail.com wrote:
>>> > i just stated what i thought the historical situation was. the
>>> > point was only that changing direction will be difficult.
>>>
>>> This thread certainly proves that :)
>>
>> a 9fans thread proves nothing.
>
> Conceptually, anyway. Why is everyone always so hell-bent on hair-splitting? :P
So much code, so much precision, so much specificity in dealing with
computers. It often spreads outside of the computational context and
into one's personality.
>
>> - erik
>>
>>
>
>
* Re: [9fans] security questions
From: lucio @ 2009-04-17 4:48 UTC
To: 9fans
>> One can indirectly (and more consistently) limit the number of
>> allocated resources in this fashion (indeed, the number of open file
>> descriptors) by determining the amount of memory consumed by that
>> resource as proportional to the size of the resource. If I as a user
>> have 64,000 allocations of type Foo, and struct Foo is 64 bytes, then
>> I hold 1,000 Foos.
>
> And by this, I clearly mean 64,000 bytes of allocated Foos.
From purely a spectator's perspective, I believe that if one needs to
add considerable complexity to Plan 9 in the form of user-based kernel
resource management, one may as well look carefully at the option of
adding self-virtualisation to the Plan 9 kernel and manage resources
in the virtualisation layer.
Plan 9 has provided a wide range of sophisticated, yet simple
techniques to solve a wide range of computer/system problems, but I'm
of the opinion that it missed virtualisation as one of these
techniques. I may be dreaming, but I've long been of the opinion that
Plan 9 itself makes a great platform on which to construct
virtualisation.
++L
* Re: [9fans] security questions
From: Eris Discordia @ 2009-04-17 5:03 UTC
To: Fans of the OS Plan 9 from Bell Labs
> Plan 9 itself makes a great platform on which to construct
> virtualisation.
I don't know what Inferno is but the phrase 'virtual machine' appears
somewhere in the product description. Isn't Inferno the 'it' you're
searching for?
--On Friday, April 17, 2009 6:48 AM +0200 lucio@proxima.alt.za wrote:
>>> One can indirectly (and more consistently) limit the number of
>>> allocated resources in this fashion (indeed, the number of open file
>>> descriptors) by determining the amount of memory consumed by that
>>> resource as proportional to the size of the resource. If I as a user
>>> have 64,000 allocations of type Foo, and struct Foo is 64 bytes, then
>>> I hold 1,000 Foos.
>>
>> And by this, I clearly mean 64,000 bytes of allocated Foos.
>
> From purely a spectator's perspective, I believe that if one needs to
> add considerable complexity to Plan 9 in the form of user-based kernel
> resource management, one may as well look carefully at the option of
> adding self-virtualisation to the Plan 9 kernel and manage resources
> in the virtualisation layer.
>
> Plan 9 has provided a wide range of sophisticated, yet simple
> techniques to solve a wide range of computer/system problems, but I'm
> of the opinion that it missed virtualisation as one of these
> techniques. I may be dreaming, but I've long been of the opinion that
> Plan 9 itself makes a great platform on which to construct
> virtualisation.
>
> ++L
>
>
* Re: [9fans] security questions
From: Eris Discordia @ 2009-04-17 5:06 UTC
To: Fans of the OS Plan 9 from Bell Labs
> The other thought that comes to mind is to consider something
> like class based queuing (from the networking world). That
> is, allow choice of different allocation/scheduling/resource
> use policies and allow further subdivision.
As with jail, this is also present in FreeBSD, I believe. It's called
'login classes.' Although it's probably not as flexible as you'd want it to
be.
--On Thursday, April 16, 2009 7:07 PM -0700 Bakul Shah
<bakul+plan9@bitblocks.com> wrote:
> On Thu, 16 Apr 2009 21:25:06 EDT "Devon H. O'Dell"
> <devon.odell@gmail.com> wrote:
>> That said, I don't disagree. Perhaps Plan 9's environment hasn't been
>> assumed to contain malicious users. Which brings up the question: Can
>> Plan 9 be safely run in a potentially malicious environment? Based on
>> this argument, no, it cannot. Since I want to run Plan 9 in this sort
>> of environment (and thus move away from that assumption), I want to
>> address these problems, and I kind of feel like it's weird to be
>> essentially told, ``Don't do that.''
>
> Why not give each user a virtual plan9? Not like vmware/qemu
> but more like FreeBSD's jail(8), "done more elegantly"[TM]!
> To deal with potentially malicious users you can virtualize
> resources, backed by limited/configurable real resources.
>
> The other thought that comes to mind is to consider something
> like class based queuing (from the networking world). That
> is, allow choice of different allocation/scheduling/resource
> use policies and allow further subdivision. Then you can give
> preferential treatment to known good guys. Other users can
> still experiment to their heart's content within the
> resources allowed them.
>
> My point being: think of a consistent high-level model that
> you like and then worry about implementation details.
>
* Re: [9fans] security questions
From: john @ 2009-04-17 5:48 UTC
To: 9fans
> On Thu, Apr 16, 2009 at 10:33 PM, Devon H. O'Dell <devon.odell@gmail.com> wrote:
>> 2009/4/16 erik quanstrom <quanstro@quanstro.net>:
>>> On Thu Apr 16 22:18:35 EDT 2009, devon.odell@gmail.com wrote:
>>>> > i just stated what i thought the historical situation was. the
>>>> > point was only that changing direction will be difficult.
>>>>
>>>> This thread certainly proves that :)
>>>
>>> a 9fans thread proves nothing.
>>
>> Conceptually, anyway. Why is everyone always so hell-bent on hair-splitting? :P
>
> So much code, so much precision, so much specificity in dealing with
> computers. It often spreads outside of the computational context and
> into one's personality.
>
I'd think less that than the well-documented 9fans discussion process:
1. Someone brings up an idea
2. Half the list explains why it's bad
3. Trolling
4. Deciding on what is most "plan 9-ish"
5. No code is ever implemented by anyone
John
* Re: [9fans] security questions
From: Bruce Ellis @ 2009-04-17 5:52 UTC
To: Fans of the OS Plan 9 from Bell Labs
Not productive, huh? That's why not even Tiger reads the list anymore.
But I read mail from you.
brucee
On Fri, Apr 17, 2009 at 3:48 PM, <john@csplan9.rit.edu> wrote:
>> On Thu, Apr 16, 2009 at 10:33 PM, Devon H. O'Dell <devon.odell@gmail.com> wrote:
>>> 2009/4/16 erik quanstrom <quanstro@quanstro.net>:
>>>> On Thu Apr 16 22:18:35 EDT 2009, devon.odell@gmail.com wrote:
>>>>> > i just stated what i thought the historical situation was. the
>>>>> > point was only that changing direction will be difficult.
>>>>>
>>>>> This thread certainly proves that :)
>>>>
>>>> a 9fans thread proves nothing.
>>>
>>> Conceptually, anyway. Why is everyone always so hell-bent on hair-splitting? :P
>>
>> So much code, so much precision, so much specificity in dealing with
>> computers. It often spreads outside of the computational context and
>> into one's personality.
>>
>
> I'd think less that than the well-documented 9fans discussion process:
> 1. Someone brings up an idea
> 2. Half the list explains why it's bad
> 3. Trolling
> 4. Deciding on what is most "plan 9-ish"
> 5. No code is ever implemented by anyone
>
>
>
> John
>
>
>
* Re: [9fans] security questions
From: andrey mirtchovski @ 2009-04-17 5:52 UTC
To: Fans of the OS Plan 9 from Bell Labs
> 5. No code is ever implemented by anyone
extremely efficient, from a SLOC point of view, no?
it also leaves a lot of time for drinking belgian beer, which is nice.
* Re: [9fans] security questions
From: Bruce Ellis @ 2009-04-17 5:57 UTC
To: Fans of the OS Plan 9 from Bell Labs
As another data point I'll offer IW9P2009-Bondi - involved a lot of
beer and beach/camping but we wrote a shit-load of code. And it was
fun. Not much sleep. Had to eat too but time sharing coding and
cooking went well.
brucee
On Fri, Apr 17, 2009 at 3:52 PM, andrey mirtchovski
<mirtchovski@gmail.com> wrote:
>> 5. No code is ever implemented by anyone
>
> extremely efficient, from a SLOC point of view, no?
>
> it also leaves a lot of time for drinking belgian beer, which is nice.
>
>
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] security questions
2009-04-17 2:19 ` Devon H. O'Dell
@ 2009-04-17 6:33 ` Bakul Shah
2009-04-17 9:51 ` lucio
` (2 more replies)
0 siblings, 3 replies; 94+ messages in thread
From: Bakul Shah @ 2009-04-17 6:33 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs
On Thu, 16 Apr 2009 22:19:21 EDT "Devon H. O'Dell" <devon.odell@gmail.com> wrote:
> 2009/4/16 Bakul Shah <bakul+plan9@bitblocks.com>:
> > Why not give each user a virtual plan9? Not like vmware/qemu
> > but more like FreeBSD's jail(8), "done more elegantly"[TM]!
> > To deal with potentially malicious users you can virtualize
> > resources, backed by limited/configurable real resources.
>
> I saw a talk about Mult at DCBSDCon. I think it's a much better idea
> than FreeBSD jail(8), and its security is provable.
>
> See also: http://mult.bsd.lv/
But is it elegant?
[Interviewer: What do you think the analog for software is?
Arthur Whitney: Poetry.
Interviewer: Poetry captures the aesthetics, but not the precision.
Arthur Whitney: I don't know, maybe it does.
-- ACM Queue Feb/Mar 2009, page 18.
http://mags.acm.org/queue/20090203]
Perhaps Plan9's model would be easier (and more fun) to
extend to accomplish this. One can already have a private
namespace. How about changing proc(3) to show only your
login process and its descendents? What if each user can have
a separate IP stack, separate (virtualized) interfaces and so
on? But you'd have to implement some sort of limits on
oversubscribing (ratio of virtual to real resources). Unlike
securitization in the hedge fund world.
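To make the proc(3) idea concrete, here is a minimal user-space
sketch of the ownership filter a modified devproc might apply; the
record layout and the names are invented for illustration:

#include <stdio.h>
#include <string.h>

/* hypothetical process records; a real devproc keeps far more state */
typedef struct {
	int pid;
	char owner[16];
} Ps;

static Ps tab[] = {
	{1, "bootes"}, {20, "glenda"}, {21, "glenda"}, {22, "none"},
};

/* list only the processes owned by user, as a filtered proc(3) might */
static void
psfor(char *user)
{
	int i;

	for(i = 0; i < (int)(sizeof tab/sizeof tab[0]); i++)
		if(strcmp(tab[i].owner, user) == 0)
			printf("%d\t%s\n", tab[i].pid, tab[i].owner);
}

int
main(void)
{
	psfor("glenda");	/* shows pids 20 and 21 only */
	return 0;
}

In the kernel the equivalent check would presumably go wherever
devproc walks the process table, skipping entries the caller doesn't own.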
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] security questions
2009-04-16 22:19 ` erik quanstrom
2009-04-16 23:36 ` Devon H. O'Dell
@ 2009-04-17 8:36 ` Richard Miller
1 sibling, 0 replies; 94+ messages in thread
From: Richard Miller @ 2009-04-17 8:36 UTC (permalink / raw)
To: 9fans
> having the potential for running out of memory in an interrupt
> handler might be a sign that a little code reorg is in order, if you
> are worried about this sort of thing. (and even if you're not.)
To begin with:
grep -n '.((iallocb)|(qproduce))' /sys/src/9/^(port pc)^/*.c
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] security questions
2009-04-17 2:33 ` Devon H. O'Dell
2009-04-17 2:43 ` J.R. Mauro
@ 2009-04-17 9:26 ` Charles Forsyth
2009-04-17 10:29 ` Steve Simon
1 sibling, 1 reply; 94+ messages in thread
From: Charles Forsyth @ 2009-04-17 9:26 UTC (permalink / raw)
To: 9fans
>Conceptually, anyway. Why is everyone always so hell-bent on hair-splitting? :P
probably the other options suggested by the careers advisor were theology and hairdressing.
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] security questions
2009-04-17 5:03 ` Eris Discordia
@ 2009-04-17 9:47 ` lucio
2009-04-17 10:24 ` Eris Discordia
2009-04-17 16:32 ` [9fans] VMs, etc. (was: Re: security questions) blstuart
0 siblings, 2 replies; 94+ messages in thread
From: lucio @ 2009-04-17 9:47 UTC (permalink / raw)
To: 9fans
> I don't know what Inferno is but the phrase 'virtual machine' appears
> somewhere in the product description. Isn't Inferno the 'it' you're
> searching for?
No, Inferno resembles - very superficially, as you will discover if
you study the literature - a JAVA interpreter surrounded by its own
operating system. There are so many clever things about Inferno, it
is hard to do it justice. But it is not a virtualiser. More's the
pity, of course. A virtualiser with Inferno's good features would be
a very useful device.
Actually, I have long had a feeling that there is a convergence of
VNC, Drawterm, Inferno and the many virtualising tools (VMware, Xen,
Lguest, etc.), but it's one of these intuition things that I cannot
turn into anything concrete.
++L
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] security questions
2009-04-17 6:33 ` Bakul Shah
@ 2009-04-17 9:51 ` lucio
2009-04-17 11:34 ` erik quanstrom
2009-04-17 11:59 ` Devon H. O'Dell
2 siblings, 0 replies; 94+ messages in thread
From: lucio @ 2009-04-17 9:51 UTC (permalink / raw)
To: 9fans
> Unlike
> securitization in the hedge fund world.
Actually, it is a lot safer to provide something like securitisation
(hm, make that "s" a "z", it is no doubt a native, American word) in a
virtualised environment, you're much less likely to bring down the
entire system's economy, then.
++L
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] security questions
2009-04-17 9:47 ` lucio
@ 2009-04-17 10:24 ` Eris Discordia
2009-04-17 11:55 ` lucio
2009-04-17 16:32 ` [9fans] VMs, etc. (was: Re: security questions) blstuart
1 sibling, 1 reply; 94+ messages in thread
From: Eris Discordia @ 2009-04-17 10:24 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs
I see. Thanks for the edification :-) I found--still find--it hard to
understand what Inferno is/does. Actually read
<http://www.vitanuova.com/inferno/papers/bltj.html> but it isn't very
direct about what it is that Inferno does for a user or what a user can do
with it; what distinguishes it from other (operating?) systems. I've
decided to try it because documentation says it will readily run on Windows.
As a side note, I found a short passage in the Inferno paper that confirmed
something I had pointed out previously on this list in almost identical
wording (and been ridiculed for):
> The Styx protocol lies above and is independent of the communications
> transport layer; it is readily carried over TCP/IP, PPP, ATM or various
> modem transport protocols.
--On Friday, April 17, 2009 11:47 AM +0200 lucio@proxima.alt.za wrote:
>> I don't know what Inferno is but the phrase 'virtual machine' appears
>> somewhere in the product description. Isn't Inferno the 'it' you're
>> searching for?
>
> No, Inferno resembles - very superficially, as you will discover if
> you study the literature - a JAVA interpreter surrounded by its own
> operating system. There are so many clever things about Inferno, it
> is hard to do it justice. But it is not a virtualiser. More's the
> pity, of course. A virtualiser with Inferno's good features would be
> a very useful device.
>
> Actually, I have long had a feeling that there is a convergence of
> VNC, Drawterm, Inferno and the many virtualising tools (VMware, Xen,
> Lguest, etc.), but it's one of these intuition things that I cannot
> turn into anything concrete.
>
> ++L
>
>
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] security questions
2009-04-17 9:26 ` Charles Forsyth
@ 2009-04-17 10:29 ` Steve Simon
2009-04-17 11:04 ` Mechiel Lukkien
` (3 more replies)
0 siblings, 4 replies; 94+ messages in thread
From: Steve Simon @ 2009-04-17 10:29 UTC (permalink / raw)
To: 9fans
I am interested in the idea of adding some kind of resource limits
to plan9. If they existed I would probably open it up to external
users, however different things would worry me:
CPU use
Implement the Fair share scheduler
User memory
Working swap would do me to fix this, but sadly rlimits would probably
be easier to implement.
Network bandwidth
Again a FSS type algorithm delaying or dropping packets could rate
control the network well I think.
Dialing remote ports
I don't want to become a spam relay, so some restriction must be in place,
I guess this would require a minor modification to the IP stack.
Fork bombs
Erik's mod would help, but add a second threshold where after 15 seconds
you kill the proc that failed the most fork() calls (see the sketch below) -
the danger here is that a spam storm may cause listen(1) to be killed.
Running out of kernel memory
I don't perceive this as a problem, though this could be my lack of vision.
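For the fork bomb case above, a minimal sketch of the two-threshold
idea in plain C; the counters and the names are invented, and the real
accounting would have to live in the kernel's fork path:

#include <stdio.h>
#include <time.h>

enum { KILLAFTER = 15 };	/* seconds: the second threshold */

/* hypothetical per-proc record; real state would hang off the kernel Proc */
typedef struct {
	int pid;
	int denied;	/* fork()s denied to this proc so far */
	time_t since;	/* when the denials started */
} Fork;

/* called each time a fork is denied; returns a pid to kill, or 0 */
int
forkdenied(Fork *f, time_t now)
{
	static Fork worst;

	if(f->denied++ == 0)
		f->since = now;
	if(f->denied > worst.denied)
		worst = *f;
	if(worst.denied > 0 && now - worst.since >= KILLAFTER)
		return worst.pid;	/* kill the worst offender */
	return 0;
}

int
main(void)
{
	Fork p = {42, 0, 0};
	time_t t = time(0);
	int i, victim = 0;

	for(i = 0; i < 100; i++)
		if((victim = forkdenied(&p, t+i)) != 0)
			break;
	printf("would kill pid %d\n", victim);
	return 0;
}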
My 2¢ worth.
-Steve
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] security questions
2009-04-17 10:29 ` Steve Simon
@ 2009-04-17 11:04 ` Mechiel Lukkien
2009-04-17 11:36 ` lucio
` (2 subsequent siblings)
3 siblings, 0 replies; 94+ messages in thread
From: Mechiel Lukkien @ 2009-04-17 11:04 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs
On Fri, Apr 17, 2009 at 11:29:47AM +0100, Steve Simon wrote:
> I am interested in the idea of adding some kind of resource limits
> to plan9. If they existed I would probably open it up to external
> users, however different things would worry me:
>
> CPU use
> Implement the Fair share scheduler
>
> User memory
> Working swap would do me to fix this, but sadly rlimits would probably
> be easier to implement.
>
> Network bandwidth
> Again a FSS type algorithm delaying or dropping packets could rate
> control the network well I think.
>
> Dialing remote ports
> I don't want to become a spam relay, so some restriction must be in place,
> I guess this would require a minor modification to the IP stack.
>
> Fork bombs
> Erik's mod would help, but add a second threshold where after 15 seconds
> you kill the proc that failed the most fork() calls - the danger here is
> that a spam storm may cause listen(1) to be killed.
>
> Running out of kernel memory
> I don't perceive this as a problem, though this could be my lack of vision.
of all the resource capping on a public plan 9 server, i would say the
limits should be per user. not per-process (group) limits or similar.
i don't know how feasible that (accounting) is.
e.g. make sure a single user gets at most 50% of all available
resources (memory, procs, cpu time). seems fairest to me. leftover cpu
time can be given to active users. leftover memory should probably just
go unused (unless you want to start with swap, which lets you scale a
bit further but has limits too). if the per-user memory is too low,
just add more memory so it won't be. then at least multiple users can
use the system and a single one cannot lock it up.
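a minimal sketch of such a per-user cap on one resource (memory) in
plain c; the pool size and the names are made up, and the real
accounting would sit in the kernel allocator:

#include <stdio.h>
#include <stdlib.h>

enum { POOLSIZE = 64*1024*1024 };	/* pretend pool: 64MB */
enum { MAXSHARE = POOLSIZE/2 };		/* no user may hold over 50% */

/* hypothetical per-user account */
typedef struct {
	char name[16];
	unsigned long held;	/* bytes currently charged to this user */
} Uacct;

/* charge an allocation to a user; deny instead of exhausting the pool */
void*
umalloc(Uacct *u, unsigned long n)
{
	if(u->held + n > MAXSHARE)
		return NULL;	/* over this user's share */
	u->held += n;
	return malloc(n);
}

int
main(void)
{
	Uacct u = {"glenda", 0};

	if(umalloc(&u, MAXSHARE + 1) == NULL)
		printf("%s: denied, over 50%% share\n", u.name);
	return 0;
}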
dialing to the outside is perhaps easiest with an external firewall
(e.g. on the adsl modem, they all have one nowadays). same for bandwidth
limiting. that won't fairly share the network bandwidth among the users
of the cpu servers, though, but will leave your home connection usable.
then there is "none". anyone can become none, and services run as
none (at least initially). with per-user limits, anyone can hog none's
resources, leaving none left for network services (which other users
need to login). perhaps this is the reason per-user limits won't work?
or what would be the impact of disallowing becoming none for
non-hostowners? normal users might not need it?
mjl
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] security questions
2009-04-17 6:33 ` Bakul Shah
2009-04-17 9:51 ` lucio
@ 2009-04-17 11:34 ` erik quanstrom
2009-04-17 12:14 ` Devon H. O'Dell
2009-04-17 11:59 ` Devon H. O'Dell
2 siblings, 1 reply; 94+ messages in thread
From: erik quanstrom @ 2009-04-17 11:34 UTC (permalink / raw)
To: 9fans
> What if each user can have a separate IP stack, separate
> (virtualized) interfaces and so on?
already possible, but you do need 1 physical ethernet
per ip stack if you want to talk to the outside world.
> But you'd have to implement some sort of limits on
> oversubscribing (ratio of virtual to real resources). Unlike
> securitization in the hedge fund world.
this would add a lot of code and result in the same problem
as today — you can be run out of a critical resource.
- erik
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] security questions
2009-04-17 10:29 ` Steve Simon
2009-04-17 11:04 ` Mechiel Lukkien
@ 2009-04-17 11:36 ` lucio
2009-04-17 11:40 ` lucio
2009-04-17 12:06 ` erik quanstrom
3 siblings, 0 replies; 94+ messages in thread
From: lucio @ 2009-04-17 11:36 UTC (permalink / raw)
To: 9fans
> Erik's mod would help, but add a second threshold where after 15 seconds
> you kill the proc that failed the most fork() calls - the danger here is
> that a spam storm may cause listen(1) to be killed.
You could put the rate limiting in listen(8) first; you may have
noticed that inetd(8) has this feature, at least in NetBSD, enabled by
default, in contravention of the POLA.
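A minimal sketch of what such a limiter inside listen(8) might look
like; the one-second window and the threshold are invented:

#include <stdio.h>
#include <time.h>

enum { MAXPERSEC = 20 };	/* invented threshold */

/* accept-side throttle: returns 1 if the incoming call may proceed */
int
allowcall(void)
{
	static time_t window;
	static int calls;
	time_t now;

	now = time(0);
	if(now != window){	/* a new one-second window opens */
		window = now;
		calls = 0;
	}
	if(++calls > MAXPERSEC)
		return 0;	/* drop or delay this call */
	return 1;
}

int
main(void)
{
	int i, ok = 0;

	for(i = 0; i < 100; i++)
		ok += allowcall();
	printf("allowed %d of 100 calls this second\n", ok);
	return 0;
}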
++L
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] security questions
2009-04-17 10:29 ` Steve Simon
2009-04-17 11:04 ` Mechiel Lukkien
2009-04-17 11:36 ` lucio
@ 2009-04-17 11:40 ` lucio
2009-04-17 11:51 ` erik quanstrom
2009-04-17 12:06 ` erik quanstrom
3 siblings, 1 reply; 94+ messages in thread
From: lucio @ 2009-04-17 11:40 UTC (permalink / raw)
To: 9fans
> Working swap would do me to fix this, but sadly rlimits would probably
> be easier to implement.
There's an intrinsic belief that there cannot be anything wrong with
Plan 9's swap. Having encountered the rather tightly embedded use of
swap/segmentation/etc. in the Plan 9 kernel, but without having
explored it to any extent, I'm beginning to see where the faith
principle comes from. Before anyone can be convinced to "fix" swap,
it is imperative to be able to supply a reproducible error case. The
virtual memory management is too persuasive to be broken in any
significant way.
++L
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] security questions
2009-04-17 11:40 ` lucio
@ 2009-04-17 11:51 ` erik quanstrom
0 siblings, 0 replies; 94+ messages in thread
From: erik quanstrom @ 2009-04-17 11:51 UTC (permalink / raw)
To: lucio, 9fans
> The
> virtual memory management is too persuasive to be broken in any
> significant way.
do you mean pervasive? if you do, i don't buy the argument.
it's easy to get lucky when doing concurrent programming with
locks, as in the plan 9 kernel. it's easy to get lucky in many cases,
and yet have completely bogus locking. (as i rediscovered this
morning.)
- erik
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] security questions
2009-04-17 10:24 ` Eris Discordia
@ 2009-04-17 11:55 ` lucio
2009-04-17 13:08 ` Eris Discordia
[not found] ` <6FD675BC714D323BF959A53B@192.168.1.2>
0 siblings, 2 replies; 94+ messages in thread
From: lucio @ 2009-04-17 11:55 UTC (permalink / raw)
To: 9fans
> what it is that Inferno does for a user or what a user can do
> with it; what distinguishes it from other (operating?) systems. I've
> decided to try it because documentation says it will readily run on Windows.
Let's start with the fact that Inferno is a small-footprint, hosted
operating environment with its own, complete development tool set. As
such it is strictly portable across many architectures with all the
advantages of such portability as well as all the useful features
Inferno inherited from Plan 9. Not least of these is Limbo, a
programming language based on the mourned Alef and, conveniently,
interpreted by the Dis virtual machine, not dissimilar from, but
much better thought out than the JAVA virtual machine.
You can pile on any number of additional great attributes of Inferno
and Limbo that make them highly useful. There is also the option to
run Inferno natively on some architectures (I've never dug any deeper
than the PC for this, so off the top of my head I can provide no
exciting examples) with all the drawbacks of needing device drivers
for all sorts of inconsiderate platforms.
In a way, I guess Inferno is a slightly different Plan 9 with built-in
virtualisation for a wide range of platforms. But the differences are
notable even if the philosophy is the same between the two
environments.
++L
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] security questions
2009-04-17 6:33 ` Bakul Shah
2009-04-17 9:51 ` lucio
2009-04-17 11:34 ` erik quanstrom
@ 2009-04-17 11:59 ` Devon H. O'Dell
2 siblings, 0 replies; 94+ messages in thread
From: Devon H. O'Dell @ 2009-04-17 11:59 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs
2009/4/17 Bakul Shah <bakul+plan9@bitblocks.com>:
> On Thu, 16 Apr 2009 22:19:21 EDT "Devon H. O'Dell" <devon.odell@gmail.com> wrote:
>> 2009/4/16 Bakul Shah <bakul+plan9@bitblocks.com>:
>> > Why not give each user a virtual plan9? Not like vmware/qemu
>> > but more like FreeBSD's jail(8), "done more elegantly"[TM]!
>> > To deal with potentially malicious users you can virtualize
>> > resources, backed by limited/configurable real resources.
>>
>> I saw a talk about Mult at DCBSDCon. I think it's a much better idea
>> than FreeBSD jail(8), and its security is provable.
>>
>> See also: http://mult.bsd.lv/
>
> But is it elegant?
Rather.
> [Interviewer: What do you think the analog for software is?
> Arthur Whitney: Poetry.
> Interviewer: Poetry captures the aesthetics, but not the precision.
> Arthur Whitney: I don't know, maybe it does.
> -- ACM Queue Feb/Mar 2009, page 18.
> http://mags.acm.org/queue/20090203]
>
> Perhaps Plan9's model would be easier (and more fun) to
> extend to accomplish this. One can already have a private
> namespace. How about changing proc(3) to show only your
> login process and its descendants? What if each user can have
> a separate IP stack, separate (virtualized) interfaces and so
> on? But you'd have to implement some sort of limits on
> oversubscribing (ratio of virtual to real resources). Unlike
> securitization in the hedge fund world.
>
>
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] security questions
2009-04-17 10:29 ` Steve Simon
` (2 preceding siblings ...)
2009-04-17 11:40 ` lucio
@ 2009-04-17 12:06 ` erik quanstrom
2009-04-17 13:52 ` Steve Simon
3 siblings, 1 reply; 94+ messages in thread
From: erik quanstrom @ 2009-04-17 12:06 UTC (permalink / raw)
To: 9fans
> Dialing remote ports
> I don't want to become a spam relay, so some restriction must be in place,
> I guess this would require a minor modification to the IP stack.
does ip/hogports solve your problem?
- erik
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] security questions
2009-04-17 1:59 ` Russ Cox
@ 2009-04-17 12:07 ` maht
0 siblings, 0 replies; 94+ messages in thread
From: maht @ 2009-04-17 12:07 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs
> If you want true isolation between the users you should give
> them each a VM, not a Plan 9 account.
>
> Russ
>
>
So we chose to use a VM, now we have two problems
* http://tinyurl.com/cuul2m
or
* http://www.computerworld.com/action/article.do?command=viewArticleBasic&taxonomyName=operating_systems&articleId=9131647&taxonomyId=89&intsrc=kc_top
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] security questions
2009-04-17 11:34 ` erik quanstrom
@ 2009-04-17 12:14 ` Devon H. O'Dell
2009-04-17 18:29 ` Bakul Shah
0 siblings, 1 reply; 94+ messages in thread
From: Devon H. O'Dell @ 2009-04-17 12:14 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs
2009/4/17 erik quanstrom <quanstro@quanstro.net>:
>> What if each user can have a separate IP stack, separate
>> (virtualized) interfaces and so on?
>
> already possible, but you do need 1 physical ethernet
> per ip stack if you want to talk to the outside world.
I'm sure it wouldn't be hard to add a virtual ``physical'' interface,
even though that seems a little bit pervasive, given the already
semi-virtual nature due to namespaces. Not sure how much of a hassle
it would be to make multiple stacks bindable to a single interface...
but perhaps that's the better way to go?
>> But you'd have to implement some sort of limits on
>> oversubscribing (ratio of virtual to real resources). Unlike
>> securitization in the hedge fund world.
>
> this would add a lot of code and result in the same problem
> as today — you can be run out of a criticial resource.
Oversubscribing is the root of the problem. In fact, even if it was
already done, on a terminal server, imagmem is also set to kpages. So
if someone found a way to blow up the kernel's draw buffer, boom. I
don't know how far reaching that is, as I've never really seen the
draw code.
Unfortunately, that's what you have to do unless you can afford to
invest in more hardware, or have a small userbase. Or find some middle
ground -- and maybe that's what the `virtualization' would address.
> - erik
--dho
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] security questions
2009-04-17 11:55 ` lucio
@ 2009-04-17 13:08 ` Eris Discordia
2009-04-17 14:15 ` gdiaz
2009-04-17 16:39 ` lucio
[not found] ` <6FD675BC714D323BF959A53B@192.168.1.2>
1 sibling, 2 replies; 94+ messages in thread
From: Eris Discordia @ 2009-04-17 13:08 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs
Very nice of you to go to lengths for describing Inferno to a non-techie.
Thank you. Just got the Fourth Edition ISO and will try it. Maybe even
learn some Limbo in long term.
--On Friday, April 17, 2009 1:55 PM +0200 lucio@proxima.alt.za wrote:
>> what it is that Inferno does for a user or what a user can do
>> with it; what distinguishes it from other (operating?) systems. I've
>> decided to try it because documentation says it will readily run on
>> Windows.
>
> Let's start with the fact that Inferno is a small-footprint, hosted
> operating environment with its own, complete development tool set. As
> such it is strictly portable across many architectures with all the
> advantages of such portability as well as all the useful features
> Inferno inherited from Plan 9. Not least of these is Limbo, a
> programming language based on the mourned Alef and, conveniently,
> interpreted by the Dis virtual machine, not dissimilar from, but
> much better thought out than the JAVA virtual machine.
>
> You can pile on any number of additional great attributes of Inferno
> and Limbo that make them highly useful. There is also the option to
> run Inferno natively on some architectures (I've never dug any deeper
> than the PC for this, so off the top of my head I can provide no
> exciting examples) with all the drawbacks of needing device drivers
> for all sorts of inconsiderate platforms.
>
> In a way, I guess Inferno is a slightly different Plan 9 with built-in
> virtualisation for a wide range of platforms. But the differences are
> notable even if the philosophy is the same between the two
> environments.
>
> ++L
>
>
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] security questions
2009-04-17 12:06 ` erik quanstrom
@ 2009-04-17 13:52 ` Steve Simon
0 siblings, 0 replies; 94+ messages in thread
From: Steve Simon @ 2009-04-17 13:52 UTC (permalink / raw)
To: 9fans
My understanding is that would prevent people listening and pretending to
offer services on my behalf, but would not stop them dialing SMTP ports
on other machines and sending them spam.
-Steve
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] security questions
2009-04-17 13:08 ` Eris Discordia
@ 2009-04-17 14:15 ` gdiaz
2009-04-17 16:39 ` lucio
1 sibling, 0 replies; 94+ messages in thread
From: gdiaz @ 2009-04-17 14:15 UTC (permalink / raw)
To: 9fans
hello
you might want to take a look at the vitanuova resources page for inferno flavours other than the official release.
inferno-os.googlecode.com
acme-sac.googlecode.com
slds.
gabi
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] security questions
[not found] ` <6FD675BC714D323BF959A53B@192.168.1.2>
@ 2009-04-17 16:15 ` Robert Raschke
2009-04-17 20:12 ` John Barham
0 siblings, 1 reply; 94+ messages in thread
From: Robert Raschke @ 2009-04-17 16:15 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs
On Fri, Apr 17, 2009 at 2:08 PM, Eris Discordia
<eris.discordia@gmail.com> wrote:
> Very nice of you to go to lengths for describing Inferno to a non-techie.
> Thank you. Just got the Fourth Edition ISO and will try it. Maybe even learn
> some Limbo in long term.
Also note there's a new book out that includes Inferno as a major
example, essentially explaining OS principles in general, in Inferno,
and in Linux:
Principles of Operating Systems: Design and Applications
by Brian Stuart
( http://www.amazon.co.uk/exec/obidos/ASIN/1418837695 )
I've only just started reading it, so can't really comment on how good
it is yet. Looks promising so far though.
Robby
^ permalink raw reply [flat|nested] 94+ messages in thread
* [9fans] VMs, etc. (was: Re: security questions)
2009-04-17 9:47 ` lucio
2009-04-17 10:24 ` Eris Discordia
@ 2009-04-17 16:32 ` blstuart
2009-04-17 17:11 ` tlaronde
` (2 more replies)
1 sibling, 3 replies; 94+ messages in thread
From: blstuart @ 2009-04-17 16:32 UTC (permalink / raw)
To: 9fans
> Actually, I have long had a feeling that there is a convergence of
> VNC, Drawterm, Inferno and the many virtualising tools (VMware, Xen,
> Lguest, etc.), but it's one of these intuition things that I cannot
> turn into anything concrete.
This brings to mind something that's been rolling around
in the back of my head for a while. It was about 20 years
between the earliest UNIX and early Plan 9, and it's been
about 20 years since early Plan 9. One of the major drivers
of Plan 9 was the change in the computing landscape that
UNIX had adapted to at best clunkily. In particular, unlike
1969, 1) networks were nearly universal, 2) significant computing
was done at the terminal rather than back at the mini
at the other end of the RS-232 line, and 3) graphical interfaces
were quite common. None of these were part of the UNIX
model and the ways they were accommodated by 1989
were, in allusion to Kidder, like paper bags taped on the
side of the machine. Don't get me wrong, given the constraints
and uncertainties of the times, the early networking, GUI,
and distributed techniques were pretty good first cuts.
But by 1989, they were well-understood enough that
it made sense to reconsider them from scratch.
So what about today? It seems to me there are also three
major aspects of the milieu that have changed since 1989.
- First, the gap between the computational power at the
terminal and the computational power in the machine room
has shrunk to the point where it might no longer be significant.
It may be worth rethinking the separation of CPU and terminal.
For example, I'm typing this in acme running in a 9vx terminal
booted using a combined fs/cpu/auth server for the
file system. But I rarely use the cpu server capability of
that machine.
- Second, network access has since become both ubiquitous
and sporadic. In 1989 being on the network meant sitting
at a fixed machine tethered to the wall by Ethernet. Today,
one of the most common modes of use is the laptop that
we use to carry our computing world around with us. We
might be on the network at home, at work, at a hotel, at
Starbucks, or not at all, even all in the same day. So how
can a laptop and a file server play nice?
- Third, virtualization is no longer the domain of IBM big
iron (VM) and low-performance experiments (e.g. P-machines).
The current multi-core CPUs practically beg for virtualized
environments.
Am I suggesting another start-from-scratch project? Not
necessarily, but I don't want to reject that out of hand,
either. I tend to think that Plan 9 and Inferno can be a
good base that can adapt well to these changes. Though
I'm inclined to think that there's an opportunity to create
a better hypervisor, inspired by these systems we know
and love. As an example of the kind of rumination that
would be part of this process, is it possible to create a
hypervisor where the resources of one VM can be imported
by another with minimal (or better, no) modification to
the mainstream guests? This would allow any OS to
leverage the device drivers written for another.
I've gone on long enough. Those of you who have not
recently been laid off don't need to spend too much time
on my musings. But the question in my mind for a while
has been, is it time for another step back and rethinking
the big picture?
BLS
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] security questions
2009-04-17 13:08 ` Eris Discordia
2009-04-17 14:15 ` gdiaz
@ 2009-04-17 16:39 ` lucio
1 sibling, 0 replies; 94+ messages in thread
From: lucio @ 2009-04-17 16:39 UTC (permalink / raw)
To: 9fans
> Very nice of you to go to lengths for describing Inferno to a non-techie.
> Thank you. Just got the Fourth Edition ISO and will try it. Maybe even
> learn some Limbo in long term.
My pleasure. I just hope no one decides to confront me on all the
inaccuracies that are likely to have crept in :-)
++L
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] VMs, etc. (was: Re: security questions)
2009-04-17 16:32 ` [9fans] VMs, etc. (was: Re: security questions) blstuart
@ 2009-04-17 17:11 ` tlaronde
2009-04-17 17:29 ` erik quanstrom
` (3 more replies)
2009-04-17 19:02 ` lucio
2009-04-17 19:16 ` [9fans] Plan9 - the next 20 years Steve Simon
2 siblings, 4 replies; 94+ messages in thread
From: tlaronde @ 2009-04-17 17:11 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs
On Fri, Apr 17, 2009 at 11:32:33AM -0500, blstuart@bellsouth.net wrote:
> - First, the gap between the computational power at the
> terminal and the computational power in the machine room
> has shrunk to the point where it might no longer be significant.
> It may be worth rethinking the separation of CPU and terminal.
> For example, I'm typing this in acme running in a 9vx terminal
> booted using using a combined fs/cpu/auth server for the
> file system. But I rarely use the cpu server capability of
> that machine.
I'm afraid I don't quite agree with you.
The definition of a terminal has changed. In Unix, the graphical
interface (X11) was a graphical variant of the text terminal interface,
i.e. the articulation (link, network) was put in the wrong place,
the graphical terminal (X11 server) being a kind of dumb terminal (a
little above a frame buffer), leaving all the processing, including the
handling of the graphical interface (generating the image,
administrating the UI, the menus) on the CPU (Xlib and toolkits run on
the CPU, not the Xserver).
A terminal is not a no-processing-capabilities device (a dumb terminal):
it can be a full terminal, that is, able to handle the interface,
the representation of data and commands (wandering in a menu should
be terminal stuff; other users should not be impacted by a user's
wandering through the UI).
More and more, for administration, using light terminals, without
software installations, is the way to go (fewer resources in TCO). "Green"
technology. Dataless terminals for security (one loses a terminal, not
the data), and dataless for safety (data is centralized and protected).
Secondly, one is accustomed to a physical user being several distinct
logical users (accounts), for managing different tasks, or accessing
different kind of data.
But (to my surprise), the converse is true: a collection of individuals
can be a single logical user, having to handle concurrently the very
same rw data. Terminals are then just distinct views of the same data
(imagine in a CAD program having different windows, different views of a
file ; this is the same, except that the windows are on different
terminals, with different "instances" of the logical user in front of
them).
The processing is then better kept on a single CPU, handling the
concurrency (and not the fileserver trying to accommodate). The views are
multiplexed, but not the handling of the data.
Thirdly, you can have a slow/loose link between a CPU and a terminal
since the commands are only a small fraction of the processing done.
You must have a fast or tight link between the CPU and the fileserver.
In some sense, logically (but not efficiently: read the caveats in the
Plan9 papers; a processor is nothing without tightly coupled memory, so
memory is not a remote pool sharable---Mach!), even today on an
average computer one has this articulation: a CPU (with a FPU
perhaps) ; tightly or loosely connected storage (?ATA or SAN) ;
graphical capacities (terminal) : GPU.
--
Thierry Laronde (Alceste) <tlaronde +AT+ polynum +dot+ com>
http://www.kergis.com/
Key fingerprint = 0FF7 E906 FBAF FE95 FD89 250D 52B1 AE95 6006 F40C
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] VMs, etc. (was: Re: security questions)
2009-04-17 17:11 ` tlaronde
@ 2009-04-17 17:29 ` erik quanstrom
2009-04-17 18:18 ` tlaronde
2009-04-17 18:50 ` blstuart
2009-04-17 18:31 ` blstuart
` (2 subsequent siblings)
3 siblings, 2 replies; 94+ messages in thread
From: erik quanstrom @ 2009-04-17 17:29 UTC (permalink / raw)
To: 9fans
> In some sense, logically (but not efficiently: read the caveats in the
> Plan9 papers; a processor is nothing without tightly coupled memory, so
> memory is not a remote pool sharable---Mach!),
if you look closely enough, this kind of breaks down. numa
machines are pretty popular these days (opteron, intel qpi-based
processors). it's possible with a modest loss of performance to
share memory across processors and not worry about it.
there is such an enormous difference in network speeds
(4 orders of magnitude 1mbps dsl/wireless up to 10gbps)
that it's hard to generalize but i don't see why tightly
coupled memory is absolutely necessary. you could
think of the network as 1/10th to 1/10000th speed quickpath.
it may still be a big win.
> even today on an
> average computer one has this articulation: a CPU (with a FPU
> perhaps) ; tightly or loosely connected storage (?ATA or SAN) ;
> graphical capacities (terminal) : GPU.
plan 9 can make the nas/dasd dichotomy disappear.
import -E ssl storage.coraid.com '#S' /n/bigdisks
- erik
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] VMs, etc. (was: Re: security questions)
2009-04-17 17:29 ` erik quanstrom
@ 2009-04-17 18:18 ` tlaronde
2009-04-17 19:00 ` erik quanstrom
2009-04-17 18:50 ` blstuart
1 sibling, 1 reply; 94+ messages in thread
From: tlaronde @ 2009-04-17 18:18 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs
On Fri, Apr 17, 2009 at 01:29:09PM -0400, erik quanstrom wrote:
> > In some sense, logically (but not efficiently: read the caveats in the
> > Plan9 papers; a processor is nothing without tightly coupled memory, so
> > memory is not a remote pool sharable---Mach!),
>
> if you look closely enough, this kind of breaks down. numa
> machines are pretty popular these days (opteron, intel qpi-based
> processors). it's possible with a modest loss of performance to
> share memory across processors and not worry about it.
NUMA machines are, from my point of view, "tightly" connected.
By loosely, I mean a memory accessed by non dedicated processor
hardware means (if this makes sense). Moving data from different
memories via some IP based protocol or worse. But all in all,
finally a copy is put in the tightly connected memory, whether huge
caches, or dedicated main memory.
The disaster of Mach (I don't know if my bad English is responsible for
this, but in the Plan9 paper the "research" or "university" OS that is
implicitly gibed at is Mach) is a kind of example.
NUMA machines are sufficiently special beasts that the majority of huge
computing facilities have been built as clusters (because it was easier
for software-only organizations).
This definitely doesn't mean NUMA has no raison d'être. On the
contrary, this is an argument supplementary to the distinction
between the UI (terminals) and the CPU.
--
Thierry Laronde (Alceste) <tlaronde +AT+ polynum +dot+ com>
http://www.kergis.com/
Key fingerprint = 0FF7 E906 FBAF FE95 FD89 250D 52B1 AE95 6006 F40C
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] security questions
2009-04-17 12:14 ` Devon H. O'Dell
@ 2009-04-17 18:29 ` Bakul Shah
0 siblings, 0 replies; 94+ messages in thread
From: Bakul Shah @ 2009-04-17 18:29 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs
On Fri, 17 Apr 2009 08:14:12 EDT "Devon H. O'Dell" <devon.odell@gmail.com> wrote:
> 2009/4/17 erik quanstrom <quanstro@quanstro.net>:
> >> What if each user can have a separate IP stack, separate
> >> (virtualized) interfaces and so on?
> >
> > already possible, but you do need 1 physical ethernet
> > per ip stack if you want to talk to the outside world.
>
> I'm sure it wouldn't be hard to add a virtual ``physical'' interface,
> even though that seems a little bit pervasive, given the already
> semi-virtual nature due to namespaces. Not sure how much of a hassle
> it would be to make multiple stacks bindable to a single interface...
> but perhaps that's the better way to go?
You'd have to add a packet classifier of some sort. Packets
to host A get delivered to logical interface #1, host B get
delivered to #2 and so on. Going out is not a problem.
Alternatively put each virtual host on a different VLAN (if
your ethernet controller does VLANs).
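A minimal sketch of such a classifier, keyed on destination address;
the table, the addresses and the names are all invented:

#include <stdio.h>
#include <stdint.h>

/* invented mapping: destination address -> logical interface */
typedef struct {
	uint32_t dst;	/* IPv4 address as a host-order word */
	int ifc;	/* logical interface number */
} Class;

static Class ctab[] = {
	{0x0a000001, 1},	/* 10.0.0.1 -> logical ifc #1 (host A) */
	{0x0a000002, 2},	/* 10.0.0.2 -> logical ifc #2 (host B) */
};

/* pick the logical interface for an incoming packet */
int
classify(uint32_t dst)
{
	int i;

	for(i = 0; i < (int)(sizeof ctab/sizeof ctab[0]); i++)
		if(ctab[i].dst == dst)
			return ctab[i].ifc;
	return 0;	/* unknown destination: default stack */
}

int
main(void)
{
	printf("10.0.0.2 -> ifc #%d\n", classify(0x0a000002));
	return 0;
}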
> >> But you'd have to implement some sort of limits on
> >> oversubscribing (ratio of virtual to real resources). Unlike
> >> securitization in the hedge fund world.
> >
> > this would add a lot of code and result in the same problem
> > as today — you can be run out of a critical resource.
>
> Oversubscribing is the root of the problem. In fact, even if it was
> already done, on a terminal server, imagmem is also set to kpages. So
> if someone found a way to blow up the kernel's draw buffer, boom. I
> don't know how far reaching that is, as I've never really seen the
> draw code.
If you are planning to open up a system to the public, then
provisioning for the peak use of your system will result in a
lot of waste (even if you had the resources to so provision).
Even your ISP uses oversubscription (probably by a factor of
100, if not more. If his upstream data pipes give him N bps,
he will give out 100N bps of total bandwidth to his
customers. If you want guaranteed bandwidth, you have to
shell out a lot more for a "gold" service level agreement).
What I meant is
a) you need to ensure that a single user can't exceed his resource limits,
b) enforce a sensible oversubscription limit (if you oversubscribe
by a factor of 30, don't let in the 31st concurrent user; see the
sketch below), and
c) very likely you also want to put these users in different
login classes (ala *BSD) and disallow each class to
cumulatively exceed configured resource limit (*BSD
doesn't do this) -- this is where I was thinking of CBQ.
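For (b), a minimal sketch of the admission check; the factor and the
names are invented:

#include <stdio.h>

enum {
	REALSLOTS = 1,	/* users the hardware could truly serve at once */
	OVERSUB = 30,	/* chosen oversubscription factor */
};

static int active;	/* currently admitted users */

/* admit a login only while under the oversubscription limit */
int
admituser(void)
{
	if(active >= REALSLOTS*OVERSUB)
		return 0;	/* the 31st concurrent user is turned away */
	active++;
	return 1;
}

int
main(void)
{
	int i, admitted = 0;

	for(i = 0; i < 31; i++)
		admitted += admituser();
	printf("admitted %d of 31\n", admitted);	/* prints 30 */
	return 0;
}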
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] VMs, etc. (was: Re: security questions)
2009-04-17 17:11 ` tlaronde
2009-04-17 17:29 ` erik quanstrom
@ 2009-04-17 18:31 ` blstuart
2009-04-17 18:45 ` erik quanstrom
2009-04-17 19:39 ` [9fans] VMs, etc. (was: Re: security questions) tlaronde
2009-04-17 18:59 ` Eris Discordia
[not found] ` <1322FA0842063D3D53C712DC@192.168.1.2>
3 siblings, 2 replies; 94+ messages in thread
From: blstuart @ 2009-04-17 18:31 UTC (permalink / raw)
To: 9fans
> The definition of a terminal has changed. In Unix, the graphical
In the broader sense of terminal, I don't disagree. I was
being somewhat clumsy in talking about terminals in
the Plan 9 sense of the processing power local to my
fingers.
> A terminal is not a no-processing capabilities (a dumb terminal):
> it can be a full terminal, that is able to handle the interface,
> the representation of data and commands (wandering in a menu shall
> be terminal stuff; other users have not to be impacted by an user's
> wandering through the UI).
Absolutely, but part of what has changed over the past 20
years is that the rate at which this local processing power
has grown has been faster than the rate at which the processing
power of the rack-mount box in the machine room has
grown (large clusters notwithstanding, that is). So the
gap between them has narrowed.
> The processing is then better kept on a single CPU, handling the
> concurrency (and not the fileserver trying to accommodate). The views are
> multiplexed, but not the handling of the data....
That is part of the conversation the question is meant
to raise. If cycles/second isn't as strong a justification
for separate CPU servers, then are there other reasons
we should still have the separation? If so, do we need
to think differently about the model?
> In some sense, logically (but not efficiently: read the caveats in the
> Plan9 papers; a processor is nothing without tightly coupled memory, so
The flip side is actually what intrigues me more, namely
machines where the connection to the file system is
even more loosely coupled than sharing Ethernet. I'd
like to have my usage on the laptop sitting in Starbucks
to be as much a part of the model as using one of
the BlueGene machines as an enormous CPU server
while sitting in the next room.
BLS
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] VMs, etc. (was: Re: security questions)
2009-04-17 18:31 ` blstuart
@ 2009-04-17 18:45 ` erik quanstrom
2009-04-17 18:59 ` blstuart
2009-04-17 19:39 ` [9fans] VMs, etc. (was: Re: security questions) tlaronde
1 sibling, 1 reply; 94+ messages in thread
From: erik quanstrom @ 2009-04-17 18:45 UTC (permalink / raw)
To: 9fans
> Absolutely, but part of what has changed over the past 20
> years is that the rate at which this local processing power
> has grown has been faster than the rate at which the processing
> power of the rack-mount box in the machine room has
> grown (large clusters notwithstanding, that is). So the
> gap between them has narrowed.
or, we have miserably failed as of late in putting every cycle we
can dream about to good use; we'd care more about the cycles
of a cpu server if we were better at using them up.
every cycle's perfect, every cycle's great
if one cycle's wasted, god gets quite irate
that, plus the fact that the MHz wars are dead and
gone.
- erik
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] VMs, etc. (was: Re: security questions)
2009-04-17 17:29 ` erik quanstrom
2009-04-17 18:18 ` tlaronde
@ 2009-04-17 18:50 ` blstuart
1 sibling, 0 replies; 94+ messages in thread
From: blstuart @ 2009-04-17 18:50 UTC (permalink / raw)
To: 9fans
> if you look closely enough, this kind of breaks down. numa
> machines are pretty popular these days (opteron, intel qpi-based
> processors). it's possible with a modest loss of performance to
> share memory across processors and not worry about it.
Way back in the dim times when hypercubes roamed
the earth, I played around a bit with parallel machines.
When I was writing my master's thesis, I tried to find
a way to dispel the idea that shared-memory vs interconnection
network was as bipolar as the terms multiprocessor and
multicomputer would suggest. One of the few things
in that work that I think still makes sense is characterizing
the degree of coupling as a continuum based on the ratio
of bytes transferred between CPUs to bytes accessed in
local memory. So C.mmp would have a very high degree
of coupling and SETI@home would have a very low degree
of coupling. The upshot is that if I have a fast enough
network, my degree of coupling is high enough that I
don't really care whether or how much memory is local
and how much is on the other side of the building.
Of course, until recently, the rate at which CPU fetches
must occur to keep the pipeline full has grown much
faster than network speeds. So the idea of remote
memory hasn't been all that useful. However, I wouldn't
be surprised to see that change over the next 10 to 20
years. So maybe my local CPU will gain access to most
of its memory by importing /dev/memctl from a memory
server (1/2 :))
BLS
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] VMs, etc. (was: Re: security questions)
2009-04-17 18:45 ` erik quanstrom
@ 2009-04-17 18:59 ` blstuart
2009-04-17 19:05 ` erik quanstrom
0 siblings, 1 reply; 94+ messages in thread
From: blstuart @ 2009-04-17 18:59 UTC (permalink / raw)
To: 9fans
>> Absolutely, but part of what has changed over the past 20
>> years is that the rate at which this local processing power
>> has grown has been faster than the rate at which the processing
>> power of the rack-mount box in the machine room has
>> grown (large clusters notwithstanding, that is). So the
>> gap between them has narrowed.
>
> or, we have miserably failed as of late in putting every cycle we
> can dream about to good use; we'd care more about the cycles
> of a cpu server if we were better at using them up.
What? Dancing icons and sound effects for menu selections
are good use of cycles? :)
> every cycle's perfect, every cycle's great
> if one cycle's wasted, god gets quite irate
I often tell my students that every cycle used by overhead
(kernel, UI, etc) is a cycle taken away from doing the work
of applications. I'd much rather have my DNA sequencing
application finish in 25 days instead of 30 than to have
the system look pretty during those 30 days.
> > that, plus the fact that the MHz wars are dead and
> gone.
Does that mean we're all playing core wars now? :)
BLS
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] VMs, etc. (was: Re: security questions)
2009-04-17 17:11 ` tlaronde
2009-04-17 17:29 ` erik quanstrom
2009-04-17 18:31 ` blstuart
@ 2009-04-17 18:59 ` Eris Discordia
2009-04-17 21:38 ` blstuart
[not found] ` <1322FA0842063D3D53C712DC@192.168.1.2>
3 siblings, 1 reply; 94+ messages in thread
From: Eris Discordia @ 2009-04-17 18:59 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs
> even today on an average computer one has this articulation: a CPU (with
> a FPU perhaps) ; tightly or loosely connected storage (?ATA or SAN) ;
> graphical capacities (terminal) : GPU.
It so happens that a reversal of specialization has really taken place, as
Brian Stuart suggests. These "terminals" you speak of, GPUs, contain such
vast untapped general processing capabilities that new uses and a new
framework for using them are being defined: GPGPU and OpenCL.
<http://en.wikipedia.org/wiki/OpenCL>
<http://en.wikipedia.org/wiki/GPGPU>
Right now, the GPU on my low-end video card takes a huge burden off of the
CPU when leveraged by the right H.264 decoder. Two high definition AVC
streams would significantly slow down my computer before I began using a
CUDA-enabled decoder. Now I can easily play four in parallel.
Similarly, the GPUs in PS3 boxes are being integrated into one of the
largest loosely-coupled clusters on the planet.
<http://folding.stanford.edu/English/FAQ-highperformance>
Today, even a mere cellphone may contain enough processing power to run a
low-traffic web server or a 3D video game. This processing power comes
cheap so it is mostly wasted.
I'd like to add to Brian Stuart's comments the point that previous
specialization of various "boxes" is mostly disappearing. At some point in
the near future all boxes may contain identical or very similar powerful
hardware--even probably all integrated into one "black box." So cheap that
it doesn't matter if one or another hardware resource is wasted. To put
such a computational environment to good use, system software should stop
incorporating a role-based model of various installations. All boxes,
except the costliest most special ones, shall be peers.
--On Friday, April 17, 2009 7:11 PM +0200 tlaronde@polynum.com wrote:
> On Fri, Apr 17, 2009 at 11:32:33AM -0500, blstuart@bellsouth.net wrote:
>> - First, the gap between the computational power at the
>> terminal and the computational power in the machine room
>> has shrunk to the point where it might no longer be significant.
>> It may be worth rethinking the separation of CPU and terminal.
>> For example, I'm typing this in acme running in a 9vx terminal
>> booted using a combined fs/cpu/auth server for the
>> file system. But I rarely use the cpu server capability of
>> that machine.
>
> I'm afraid I don't quite agree with you.
>
> The definition of a terminal has changed. In Unix, the graphical
> interface (X11) was a graphical variant of the text terminal interface,
> i.e. the articulation (link, network) was put in the wrong place,
> the graphical terminal (X11 server) being a kind of dumb terminal (a
> little above a frame buffer), leaving all the processing, including the
> handling of the graphical interface (generating the image,
> administrating the UI, the menus) on the CPU (Xlib and toolkits run on
> the CPU, not the Xserver).
>
> A terminal is not a no-processing-capabilities device (a dumb terminal):
> it can be a full terminal, that is, able to handle the interface,
> the representation of data and commands (wandering in a menu should
> be terminal stuff; other users should not be impacted by a user's
> wandering through the UI).
>
> More and more, for administration, using light terminals, without
> software installations, is the way to go (fewer resources in TCO). "Green"
> technology. Dataless terminals for security (one loses a terminal, not
> the data), and dataless for safety (data is centralized and protected).
>
>
> Secondly, one is accustomed to a physical user being several distinct
> logical users (accounts), for managing different tasks, or accessing
> different kind of data.
>
> But (to my surprise), the converse is true: a collection of individuals
> can be a single logical user, having to handle concurrently the very
> same rw data. Terminals are then just distinct views of the same data
> (imagine in a CAD program having different windows, different views of a
> file ; this is the same, except that the windows are on different
> terminals, with different "instances" of the logical user in front of
> them).
>
> The processing is then better kept on a single CPU, handling the
> concurrency (and not the fileserver trying to accommodate). The views are
> multiplexed, but not the handling of the data.
>
> Thirdly, you can have a slow/loose link between a CPU and a terminal
> since the commands are only a small fraction of the processing done.
> You must have a fast or tight link between the CPU and the fileserver.
>
> In some sense, logically (but not efficiently: read the caveats in the
> Plan9 papers; a processor is nothing without tightly coupled memory, so
> memory is not a remote pool sharable---Mach!), even today on an
> average computer one has this articulation: a CPU (with a FPU
> perhaps) ; tightly or loosely connected storage (?ATA or SAN) ;
> graphical capacities (terminal) : GPU.
>
> --
> Thierry Laronde (Alceste) <tlaronde +AT+ polynum +dot+ com>
> http://www.kergis.com/
> Key fingerprint = 0FF7 E906 FBAF FE95 FD89 250D 52B1 AE95 6006 F40C
>
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] VMs, etc. (was: Re: security questions)
2009-04-17 18:18 ` tlaronde
@ 2009-04-17 19:00 ` erik quanstrom
0 siblings, 0 replies; 94+ messages in thread
From: erik quanstrom @ 2009-04-17 19:00 UTC (permalink / raw)
To: 9fans
On Fri Apr 17 14:21:03 EDT 2009, tlaronde@polynum.com wrote:
> On Fri, Apr 17, 2009 at 01:29:09PM -0400, erik quanstrom wrote:
> > > In some sense, logically (but not efficiently: read the caveats in the
> > > Plan9 papers; a processor is nothing without tightly coupled memory, so
> > > memory is not a remote pool sharable---Mach!),
> >
> > if you look closely enough, this kind of breaks down. numa
> > machines are pretty popular these days (opteron, intel qpi-based
> > processors). it's possible with a modest loss of performance to
> > share memory across processors and not worry about it.
>
> NUMA machines are, from my point of view, "tightly" connected.
>
> By loosely, I mean a memory accessed by non dedicated processor
> hardware means (if this makes sense). Moving data from different
> memories via some IP based protocol or worse. But all in all,
> finally a copy is put in the tightly connected memory, whether huge
> caches, or dedicated main memory.
why do you care what gives you the illusion of a large, flat
address space? that is, what is special about having a quick
path network instead of, say, infiniband or ethernet?
why does networking imply ip networking?
my point is that i think we need to recognize that there are vast differences
in performance between, say, local memory, memory across the quickpath
bus, memory on the next machine, and these differences may vary
greatly between one set of machines and another.
then, the 64¢ question is, how does one use this to one's advantage
without assuming ahead of time what's faster than what.
(one could easily imagine a 40gbps ethernet connection being competitive
with a 3-hop numa connection.)
- erik
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] VMs, etc. (was: Re: security questions)
2009-04-17 16:32 ` [9fans] VMs, etc. (was: Re: security questions) blstuart
2009-04-17 17:11 ` tlaronde
@ 2009-04-17 19:02 ` lucio
2009-04-17 21:01 ` blstuart
2009-04-17 19:16 ` [9fans] Plan9 - the next 20 years Steve Simon
2 siblings, 1 reply; 94+ messages in thread
From: lucio @ 2009-04-17 19:02 UTC (permalink / raw)
To: 9fans
> But the question in my mind for a while
> has been, is it time for another step back and rethinking
> the big picture?
Maybe, and maybe what we ought to look at is precisely what Plan 9
skipped, with good reason, in its infancy: distributed "core"
resources or "the platform as a filesystem".
What struck me when first looking at Xen, long after I had decided
that there was real merit in VMware, was that it allowed migration as
well as checkpoint/restarting of guest OS images with the smallest
amount of administration. Today, to me, that means distributed
virtualisation. So, back to my first impression: Plan 9 would make a
much better foundation for a virtualiser than any of the other OSes
currently in use (limited to my experience, there may be something in
the league of IBM's 1960s VMS (do I remember right? sanctions made
IBM a little scarce in my formative years) out there that I don't know
about). Given a Plan 9 based virtualiser, are we far from using
long-running applications and migrating them in flight from whichever
equipment may have been useful yesterday to whatever is handy today?
The way I see it, we would progress from conventional utilities strung
together with Windows' crappy glue to having a single "profile"
application, itself a virtualiser's guest, which includes any
activities you may find useful online. It sits on the web and follows
you around, wherever you go. It is engineered against any possible
failures, including security-related ones and is always there for you.
Add Venti to its persistent objects and you can also rewind to a
better past state.
Do you not like it? It smacks of Inferno and o/mero on top of a
virtualiser-enhanced Plan 9. Those who might prefer the conventional
Windows/Linux platforms may have to wait a little longer before they
figure out how to catch up :-)
++L
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] VMs, etc. (was: Re: security questions)
2009-04-17 18:59 ` blstuart
@ 2009-04-17 19:05 ` erik quanstrom
2009-04-17 20:21 ` blstuart
0 siblings, 1 reply; 94+ messages in thread
From: erik quanstrom @ 2009-04-17 19:05 UTC (permalink / raw)
To: 9fans
> I often tell my students that every cycle used by overhead
> (kernel, UI, etc) is a cycle taken away from doing the work
> of applications. I'd much rather have my DNA sequencing
> application finish in 25 days instead of 30 than to have
> the system look pretty during those 30 days.
i didn't mean to imply we should not be frugal with cycles.
i ment to say that we simply don't have anything useful to
do with a vast majority of cycles, and that's just as wasteful
as doing bouncing icons. we need to work on that problem.
> > that, plus the fact that the MHz wars are dead and
> > gone.
>
> Does that mean we're all playing core wars now? :)
yes it does. i've got $50 that says that in 2011 we'll be
saying that this one goes to eleven (cores).
- erik
^ permalink raw reply [flat|nested] 94+ messages in thread
* [9fans] Plan9 - the next 20 years
2009-04-17 16:32 ` [9fans] VMs, etc. (was: Re: security questions) blstuart
2009-04-17 17:11 ` tlaronde
2009-04-17 19:02 ` lucio
@ 2009-04-17 19:16 ` Steve Simon
2009-04-17 19:39 ` J.R. Mauro
` (2 more replies)
2 siblings, 3 replies; 94+ messages in thread
From: Steve Simon @ 2009-04-17 19:16 UTC (permalink / raw)
To: 9fans
I cannot find the reference (sorry), but I read an interview with Ken
(Thompson) a while ago.
He was asked what he would change if he were working on plan9 now,
and his reply was something like "I would add support for cloud computing".
I admit I am not clear exactly what he meant by this.
-Steve
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] VMs, etc. (was: Re: security questions)
2009-04-17 18:31 ` blstuart
2009-04-17 18:45 ` erik quanstrom
@ 2009-04-17 19:39 ` tlaronde
2009-04-17 21:25 ` blstuart
1 sibling, 1 reply; 94+ messages in thread
From: tlaronde @ 2009-04-17 19:39 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs
On Fri, Apr 17, 2009 at 01:31:12PM -0500, blstuart@bellsouth.net wrote:
>
> Absolutely, but part of what has changed over the past 20
> years is that the rate at which this local processing power
> has grown has been faster than the rate at which the processing
> power of the rack-mount box in the machine room has
> grown (large clusters notwithstanding, that is). So the
> gap between them has narrowed.
This is a geek attitude ;) You say that since I can buy something more
powerful (if I do not change the programs for fatter ones...) for the
same amount of money or a little more, I have to find something to do
with that.
My point of view is: if my terminal works, I keep it. If not, I buy
something cheaper, including in TCO, for happily doing the work that has
to be done ;)
I don't have to buy expensive things and try to find something to do
with them.
I try to have hardware that matches my needs.
And I prefer to put money on a CPU, more powerful, "far" from average
user "creativity", and the only beast I have to manage.
>
> > The processing is then better kept on a single CPU, handling the
> > concurrency (and not the fileserver trying to accommodate). The views are
> > multiplexed, but not the handling of the data....
>
> That is part of the conversation the question is meant
> to raise. If cycles/second isn't as strong a justification
> for separate CPU servers, then are there other reasons
> we should still have the separation? If so, do we need
> to think differently about the model?
The main point I have discovered very recently is that giving access to
the "system" resources is a centralized thing, and that a logical user
can have several distinct sessions on several distinct terminals, but
these are just "views": the data opened, especially for random rw, is
opened by a single program.
Fileservers have only to provide what they do provide:
1) Random read/write for a unique user.
2) Append-only for shared data. (In KerGIS for example, some attributes
can be shared among users. So distinct (logical) users can open a file
rw, but they only append/write, and the semantics of the data are such
that appending record n+1 doesn't invalidate records [0,n]---the records
partition the data; there is no overlapping. Changing the records (random
access) is possible but the cases are rare, and the work is done by the
user manager (another logical user).)
So the semantics of the data and the handling of users are such that a
user can randomly read/write (not sharable), a group can append/write
but without modifying existing records, and others can only (perhaps) read.
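As a concrete illustration of the append/write rule, here is a minimal
sketch, assuming fixed-size records and Plan 9's append-only file bit;
addrec() and its record layout are invented for the example:

	#include <u.h>
	#include <libc.h>

	/*
	 * The shared attribute file is created once, by the user
	 * manager, with the append-only bit set:
	 *	create(file, OREAD, DMAPPEND|0664);
	 * Thereafter every group member adds records through
	 * addrec(); rec/nrec is one whole fixed-size record.
	 */
	void
	addrec(char *file, uchar *rec, long nrec)
	{
		int fd;

		fd = open(file, OWRITE);
		if(fd < 0)
			sysfatal("open %s: %r", file);
		/* DMAPPEND forces the write atomically to the end:
		 * records [0,n] are never disturbed by record n+1. */
		if(write(fd, rec, nrec) != nrec)
			sysfatal("append %s: %r", file);
		close(fd);
	}

The rare random-access corrections by the user manager do not fit the
append-only bit and would have to go through a separate administrative
path.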
--
Thierry Laronde (Alceste) <tlaronde +AT+ polynum +dot+ com>
http://www.kergis.com/
Key fingerprint = 0FF7 E906 FBAF FE95 FD89 250D 52B1 AE95 6006 F40C
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] Plan9 - the next 20 years
2009-04-17 19:16 ` [9fans] Plan9 - the next 20 years Steve Simon
@ 2009-04-17 19:39 ` J.R. Mauro
2009-04-17 19:43 ` tlaronde
2009-04-17 20:20 ` John Barham
2 siblings, 0 replies; 94+ messages in thread
From: J.R. Mauro @ 2009-04-17 19:39 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs
On Fri, Apr 17, 2009 at 3:16 PM, Steve Simon <steve@quintile.net> wrote:
> I cannot find the reference (sorry), but I read an interview with Ken
> (Thompson) a while ago.
>
> He was asked what he would change if he where working on plan9 now,
> and his reply was somthing like "I would add support for cloud computing".
>
> I admin I am not clear exactly what he meant by this.
>
> -Steve
>
He said that even as smooth as Plan 9 makes things seem, you can
still tell that there's more than one computer involved, and that
sometimes that is a bit bad. IIRC he didn't mention anything in
particular.
I imagine process migration via /proc would make things pretty nice,
as well as a better way to move namespaces around than cpu does. In
general, the ability to really blur the line when you want to could be
better, and letting the computer blur it for you when you want is also
something that I think would help. Plan 9 leaves it to the user to
pick where something should run, and then only on that machine. I'd
like to let the OS decide where to run certain things (and if that
computer isn't available, to pick another), and maybe when I go
someplace else, I'd like to bring that process back over to the
computer I'm at instead of talking to it over a (potentially slow)
network connection.
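To make the /proc idea concrete, here is a minimal sketch of the
checkpoint half, assuming only the standard Plan 9 per-process files
(ctl, segment, mem); save() is a hypothetical sink that would ship each
range to the destination machine, and the restore half (rebuilding
segments, registers, and the namespace on the target) is the genuinely
hard part left open:

	#include <u.h>
	#include <libc.h>
	#include <bio.h>

	void	save(char *type, uvlong off, void *buf, long n);	/* hypothetical sink */

	/*
	 * Freeze a process and walk its address space via /proc.
	 * Each line of /proc/n/segment names a segment and its
	 * bounds in hex; the bytes come out of /proc/n/mem.
	 */
	void
	checkpoint(int pid)
	{
		char path[64], buf[8192], *ln, *f[4];
		uvlong beg, end, off;
		long n;
		int ctl, mem;
		Biobuf *seg;

		snprint(path, sizeof path, "/proc/%d/ctl", pid);
		if((ctl = open(path, OWRITE)) < 0)
			sysfatal("open %s: %r", path);
		write(ctl, "stop", 4);		/* quiesce the target */

		snprint(path, sizeof path, "/proc/%d/mem", pid);
		mem = open(path, OREAD);
		snprint(path, sizeof path, "/proc/%d/segment", pid);
		seg = Bopen(path, OREAD);

		while((ln = Brdline(seg, '\n')) != nil){
			ln[Blinelen(seg)-1] = 0;
			if(getfields(ln, f, 4, 1, " \t") < 3)	/* type base top ref */
				continue;
			beg = strtoull(f[1], nil, 16);
			end = strtoull(f[2], nil, 16);
			for(off = beg; off < end; off += n){
				n = pread(mem, buf, sizeof buf, off);
				if(n <= 0)
					break;
				save(f[0], off, buf, n);
			}
		}

		Bterm(seg);
		close(mem);
		write(ctl, "start", 5);	/* or leave it stopped until the copy lands */
		close(ctl);
	}

Restoring would also need /proc/n/regs and a way to rebuild the
namespace on the target, which is where the better way to move
namespaces around comes in.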
On a slightly related note, I talked with Vint Cerf recently, and his
major concern is a standardized way to have different clouds
communicate their capabilities and the services they're willing to
provide. I mentioned Plan 9's concepts, and we basically came to the
conclusion that we need Plan 9's ideas brought into every major OS, as
well as protocols to provide non-homogeneous systems with a way
of communicating and cooperating.
Creating the tools is fairly straightforward. The tough nut to crack
is adoption (as we all know).
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] Plan9 - the next 20 years
2009-04-17 19:16 ` [9fans] Plan9 - the next 20 years Steve Simon
2009-04-17 19:39 ` J.R. Mauro
@ 2009-04-17 19:43 ` tlaronde
2009-04-17 19:56 ` J.R. Mauro
2009-04-17 20:14 ` Eric Van Hensbergen
2009-04-17 20:20 ` John Barham
2 siblings, 2 replies; 94+ messages in thread
From: tlaronde @ 2009-04-17 19:43 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs
On Fri, Apr 17, 2009 at 08:16:40PM +0100, Steve Simon wrote:
> I cannot find the reference (sorry), but I read an interview with Ken
> (Thompson) a while ago.
>
> He was asked what he would change if he were working on plan9 now,
> and his reply was something like "I would add support for cloud computing".
>
> I admit I am not clear exactly what he meant by this.
My interpretation of cloud computing is precisely the split done by
plan9 with terminal/CPU/FileServer: a UI running on a terminal, with
the actual computing done somewhere, on data stored somewhere.
Perhaps with tools for migrating tasks or managing the whole thing. But I have the
impression that the Plan 9 framework is the best fit for such a scheme.
--
Thierry Laronde (Alceste) <tlaronde +AT+ polynum +dot+ com>
http://www.kergis.com/
Key fingerprint = 0FF7 E906 FBAF FE95 FD89 250D 52B1 AE95 6006 F40C
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] Plan9 - the next 20 years
2009-04-17 19:43 ` tlaronde
@ 2009-04-17 19:56 ` J.R. Mauro
2009-04-17 20:14 ` Eric Van Hensbergen
1 sibling, 0 replies; 94+ messages in thread
From: J.R. Mauro @ 2009-04-17 19:56 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs
On Fri, Apr 17, 2009 at 3:43 PM, <tlaronde@polynum.com> wrote:
> On Fri, Apr 17, 2009 at 08:16:40PM +0100, Steve Simon wrote:
>> I cannot find the reference (sorry), but I read an interview with Ken
>> (Thompson) a while ago.
>>
>> He was asked what he would change if he were working on plan9 now,
>> and his reply was something like "I would add support for cloud computing".
>>
>> I admit I am not clear exactly what he meant by this.
>
> My interpretation of cloud computing is precisely the split done by
> plan9 with terminal/CPU/FileServer: a UI running on a terminal, with
> the actual computing done somewhere, on data stored somewhere.
The problem is that the CPU and Fileservers can't be assumed to be
static. Things can and will go down, move about, and become
temporarily unusable over time.
>
> Perhaps with tools for migrating tasks or managing the whole thing. But I have the
> impression that the Plan 9 framework is the best fit for such a scheme.
> --
> Thierry Laronde (Alceste) <tlaronde +AT+ polynum +dot+ com>
> http://www.kergis.com/
> Key fingerprint = 0FF7 E906 FBAF FE95 FD89 250D 52B1 AE95 6006 F40C
>
>
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] VMs, etc. (was: Re: security questions)
[not found] ` <1322FA0842063D3D53C712DC@192.168.1.2>
@ 2009-04-17 20:07 ` J.R. Mauro
0 siblings, 0 replies; 94+ messages in thread
From: J.R. Mauro @ 2009-04-17 20:07 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs
On Fri, Apr 17, 2009 at 2:59 PM, Eris Discordia
<eris.discordia@gmail.com> wrote:
>> even today on an average computer one has this articulation: a CPU (with
>> an FPU perhaps); tightly or loosely connected storage (?ATA or SAN);
>> graphical capacities (terminal): GPU.
>
> It so happens that a reversal of specialization has really taken place, as
> Brian Stuart suggests. These "terminals" you speak of, GPUs, contain such
> vast untapped general processing capabilities that new uses and a new
> framework for using them are being defined: GPGPU and OpenCL.
>
> <http://en.wikipedia.org/wiki/OpenCL>
> <http://en.wikipedia.org/wiki/GPGPU>
>
> Right now, the GPU on my low-end video card takes a huge burden off of the
> CPU when leveraged by the right H.264 decoder. Two high definition AVC
> streams would significantly slow down my computer before I began using a
> CUDA-enabled decoder. Now I can easily play four in parallel.
>
> Similarly, the Cell processors in PS3 boxes are being integrated into
> one of the largest loosely-coupled clusters on the planet.
>
> <http://folding.stanford.edu/English/FAQ-highperformance>
>
> Today, even a mere cellphone may contain enough processing power to run a
> low-traffic web server or a 3D video game. This processing power comes cheap
> so it is mostly wasted.
I can't find the link, but a recent article described someone's
efforts at CMU to develop what he calls "FAWN", a Fast Array of Wimpy
Nodes. He basically took a bunch of eeePC boards and turned them into
a single computer.
The performance per watt of such an array was staggeringly higher than
that of a monster computer with Xeons and disks.
So hopefully in the future, we will be able to have more fine-grained
control over such things and fewer cycles will be wasted. It's time
people realized that CPU cycles are a bit like employment. Sure
UNemployment is a problem, but so is UNDERemployment, and the latter
is sometimes harder to gauge.
>
> I'd like to add to Brian Stuart's comments the point that previous
> specialization of various "boxes" is mostly disappearing. At some point in
> the near future all boxes may contain identical or very similar powerful
> hardware--probably even all integrated into one "black box." So cheap that
> it doesn't matter if one or another hardware resource is wasted. To put
> such a computational environment to good use, system software should stop
> incorporating a role-based model of various installations. All boxes, except
> the costliest most special ones, shall be peers.
>
> --On Friday, April 17, 2009 7:11 PM +0200 tlaronde@polynum.com wrote:
>
>> On Fri, Apr 17, 2009 at 11:32:33AM -0500, blstuart@bellsouth.net wrote:
>>>
>>> - First, the gap between the computational power at the
>>> terminal and the computational power in the machine room
>>> has shrunk to the point where it might no longer be significant.
>>> It may be worth rethinking the separation of CPU and terminal.
>>> For example, I'm typing this in acme running in a 9vx terminal
>>> booted using using a combined fs/cpu/auth server for the
>>> file system. But I rarely use the cpu server capability of
>>> that machine.
>>
>> I'm afraid I don't quite agree with you.
>>
>> The definition of a terminal has changed. In Unix, the graphical
>> interface (X11) was a graphical variant of the text terminal interface,
>> i.e. the articulation (link, network) was put in the wrong place,
>> the graphical terminal (X11 server) being a kind of dumb terminal (a
>> little above a frame buffer), leaving all the processing, including the
>> handling of the graphical interface (generating the image,
>> administrating the UI, the menus) on the CPU (Xlib and toolkits run on
>> the CPU, not the Xserver).
>>
>> A terminal is not a machine with no processing capabilities (a dumb
>> terminal): it can be a full terminal, that is, able to handle the interface,
>> the representation of data and commands (wandering in a menu should
>> be terminal stuff; other users should not be impacted by a user's
>> wandering through the UI).
>>
>> More and more, for administration, using light terminals without
>> software installations is the way to go (fewer resources in TCO). "Green"
>> technology. Dataless terminals for security (one loses a terminal, not
>> the data), and dataless for safety (data is centralized and protected).
>>
>>
>> Secondly, one is accustomed to a physical user being several distinct
>> logical users (accounts), for managing different tasks, or accessing
>> different kind of data.
>>
>> But (to my surprise), the converse is true: a collection of individuals
>> can be a single logical user, having to handle concurrently the very
>> same rw data. Terminals are then just distinct views of the same data
>> (imagine in a CAD program having different windows, different views of a
>> file ; this is the same, except that the windows are on different
>> terminals, with different "instances" of the logical user in front of
>> them).
>>
>> The processing is then better kept on a single CPU, handling the
>> concurrency (and not the fileserver trying to accommodate). The views are
>> multiplexed, but not the handling of the data.
>>
>> Thirdly, you can have a slow/loose link between a CPU and a terminal
>> since the commands are only a small fraction of the processing done.
>> You must have a fast or tight link between the CPU and the fileserver.
>>
>> In some sense, logically (but not efficiently: read the caveats in the
>> Plan9 papers; a processor is nothing without tightly coupled memory, so
>> memory is not a sharable remote pool---Mach!), even today on an
>> average computer one has this articulation: a CPU (with an FPU
>> perhaps); tightly or loosely connected storage (?ATA or SAN);
>> graphical capacities (terminal): GPU.
>>
>> --
>> Thierry Laronde (Alceste) <tlaronde +AT+ polynum +dot+ com>
>> http://www.kergis.com/
>> Key fingerprint = 0FF7 E906 FBAF FE95 FD89 250D 52B1 AE95 6006 F40C
>>
>
>
>
>
>
>
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] security questions
2009-04-17 16:15 ` Robert Raschke
@ 2009-04-17 20:12 ` John Barham
2009-04-17 21:40 ` blstuart
0 siblings, 1 reply; 94+ messages in thread
From: John Barham @ 2009-04-17 20:12 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs
Robert Raschke wrote:
> Also note there's a new book out that includes Inferno as a major
> example, essentially explaining OS principles in general, in Inferno,
> and in Linux:
>
> Principles of Operating Systems: Design and Applications
> by Brian Stuart
>
> ( http://www.amazon.co.uk/exec/obidos/ASIN/1418837695 )
>
> I've only just started reading it, so can't really comment on how good
> it is yet. Looks promising so far though.
I recently bought this book and have read most of it. It's especially
good at bridging the gap between OS theory and the gritty details of
implementation with clear explanations of selected source code
extracts from the Inferno and Linux kernels. The chapter on Inferno
process management and its scheduler is especially illuminating.
Although it focuses on the implementation of Inferno I've also found
it helpful for understanding the Plan 9 kernel since it covers the
Inferno device driver model, viz. embedded 9p/Styx servers. It also
reviews the Inferno implementation of kfs, which is written in Limbo,
but the mental translation to C is easy.
John
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] Plan9 - the next 20 years
2009-04-17 19:43 ` tlaronde
2009-04-17 19:56 ` J.R. Mauro
@ 2009-04-17 20:14 ` Eric Van Hensbergen
2009-04-17 20:18 ` Benjamin Huntsman
` (2 more replies)
1 sibling, 3 replies; 94+ messages in thread
From: Eric Van Hensbergen @ 2009-04-17 20:14 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs
On Fri, Apr 17, 2009 at 2:43 PM, <tlaronde@polynum.com> wrote:
> On Fri, Apr 17, 2009 at 08:16:40PM +0100, Steve Simon wrote:
>> I cannot find the reference (sorry), but I read an interview with Ken
>> (Thompson) a while ago.
>>
>
> My interpretation of cloud computing is precisely the split done by
> plan9 with terminal/CPU/FileServer: a UI runing on a this Terminal, with
> actual computing done somewhere about data stored somewhere.
>
That misses the dynamic nature which clouds could enable -- something
we lack as well with our hardcoded /lib/ndb files -- there are no
provisions for cluster resources coming and going (or failing) and no
control facilities given for provisioning (or deprovisioning) those
resources in a dynamic fashion. Lucho's kvmfs (and to a certain
extent xcpu) seem like steps in the right direction -- but IMHO more
fundamental changes need to occur in the way we think about things. I
believe the file system interfaces need to evolve as well. While not
focused on "cloud computing" in particular, the work we are doing
under HARE aims to explore these directions further (both in the
context of Plan 9/Inferno as well as broader themes involving other
platforms).
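To make "hardcoded" concrete: a host's entry in /lib/ndb today is a
static, hand-edited tuple along these lines (the names and addresses
are invented for the example):

	sys=cpu1 dom=cpu1.example.net
		ip=192.168.0.10 ether=00809f123456

Nothing in that format can express that cpu1 has failed, that a new
node has joined, or how it should be provisioned; the database changes
only when an administrator edits it.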
For hints/ideas/whatnot you can check the current pubs (more coming
soon): http://www.research.ibm.com/hare
-eric
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] Plan9 - the next 20 years
2009-04-17 20:14 ` Eric Van Hensbergen
@ 2009-04-17 20:18 ` Benjamin Huntsman
2009-04-18 4:26 ` erik quanstrom
2009-04-17 20:29 ` J.R. Mauro
2009-04-18 12:52 ` Steve Simon
2 siblings, 1 reply; 94+ messages in thread
From: Benjamin Huntsman @ 2009-04-17 20:18 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs
Speaking of NUMA and such though, is there even any support for it in the kernel?
I know we have a 10gb Ethernet driver, but what about cluster interconnects such as InfiniBand, Quadrics, or Myrinet? Are such things even desired in Plan 9?
I'm glad to see process migration has been mentioned.
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] Plan9 - the next 20 years
2009-04-17 19:16 ` [9fans] Plan9 - the next 20 years Steve Simon
2009-04-17 19:39 ` J.R. Mauro
2009-04-17 19:43 ` tlaronde
@ 2009-04-17 20:20 ` John Barham
2 siblings, 0 replies; 94+ messages in thread
From: John Barham @ 2009-04-17 20:20 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs
Steve Simon wrote:
> I cannot find the reference (sorry), but I read an interview with Ken
> (Thompson) a while ago.
>
> He was asked what he would change if he where working on plan9 now,
> and his reply was somthing like "I would add support for cloud computing".
Perhaps you were thinking of his "Ask a Google engineer" answers at
http://moderator.appspot.com/#15/e=c9&t=2d, specifically the question
"If you could redesign Plan 9 now (and expect similar uptake to UNIX),
what would you do differently?"
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] VMs, etc. (was: Re: security questions)
2009-04-17 19:05 ` erik quanstrom
@ 2009-04-17 20:21 ` blstuart
2009-04-18 14:54 ` erik quanstrom
0 siblings, 1 reply; 94+ messages in thread
From: blstuart @ 2009-04-17 20:21 UTC (permalink / raw)
To: 9fans
>> I often tell my students that every cycle used by overhead
>> (kernel, UI, etc) is a cycle taken away from doing the work
>> of applications. I'd much rather have my DNA sequencing
>> application finish in 25 days instead of 30 than to have
>> the system look pretty during those 30 days.
>
> i didn't mean to imply we should not be frugal with cycles.
> i meant to say that we simply don't have anything useful to
> do with a vast majority of cycles, and that's just as wasteful
> as doing bouncing icons. we need to work on that problem.
I gotcha. I guess it depends on what you're doing. I remember
years ago running a simulation on the 11/750 we had. It
simulated a DSP chip running 2 seconds of real time. It
ran for over a week. (While it was running, I took the time
to write another, faster simulator that was able to run
the simulation in about 2 hours.) For something like that,
we can certainly use all the cycles we can get. On the other
hand, I might have looked for a faster way to compile a kernel
a while back, but now it compiles fast enough on most
any machine that I'm not too concerned about where to
use the cycles. (I'm speaking of a Plan 9 or Inferno kernel
here; not a *BSD or Linux kernel.) But I suspect that
virtualization and Dis-style VMs are a pretty good use of
cycles we have to spare.
>> > that, plus the fact that the the mhz wars are dead and
>> > gone.
>>
>> Does that mean we're all playing core wars now? :)
>
> yes it does. i've got $50 that says that in 2011 we'll be
> saying that this one goes to eleven (cores).
Excellent. I never expected to see core wars and Spinal
Tap in the same discussion about Plan 9.
BLS
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] Plan9 - the next 20 years
2009-04-17 20:14 ` Eric Van Hensbergen
2009-04-17 20:18 ` Benjamin Huntsman
@ 2009-04-17 20:29 ` J.R. Mauro
2009-04-18 3:56 ` erik quanstrom
2009-04-18 12:52 ` Steve Simon
2 siblings, 1 reply; 94+ messages in thread
From: J.R. Mauro @ 2009-04-17 20:29 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs
On Fri, Apr 17, 2009 at 4:14 PM, Eric Van Hensbergen <ericvh@gmail.com> wrote:
> On Fri, Apr 17, 2009 at 2:43 PM, <tlaronde@polynum.com> wrote:
>> On Fri, Apr 17, 2009 at 08:16:40PM +0100, Steve Simon wrote:
>>> I cannot find the reference (sorry), but I read an interview with Ken
>>> (Thompson) a while ago.
>>>
>>
>> My interpretation of cloud computing is precisely the split done by
>> plan9 with terminal/CPU/FileServer: a UI running on a terminal, with
>> the actual computing done somewhere, on data stored somewhere.
>>
>
> That misses the dynamic nature which clouds could enable -- something
> we lack as well with our hardcoded /lib/ndb files -- there is no
> provisions for cluster resources coming and going (or failing) and no
> control facilities given for provisioning (or deprovisioning) those
> resources in a dynamic fashion. Lucho's kvmfs (and to a certain
> extent xcpu) seem like steps in the right direction -- but IMHO more
> fundamental changes need to occur in the way we think about things. I
> believe the file system interfaces While not focused on "cloud
> computing" in particular, the work we are doing under HARE aims to
> explore these directions further (both in the context of Plan
> 9/Inferno as well as broader themes involving other platforms).
Vidi also seems to be an attempt to make Venti work in such a dynamic
environment. IMHO, the assumption that computers are always connected
to the network was a fundamental mistake in Plan 9
>
> For hints/ideas/whatnot you can check the current pubs (more coming
> soon): http://www.research.ibm.com/hare
>
> -eric
>
>
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] VMs, etc. (was: Re: security questions)
2009-04-17 19:02 ` lucio
@ 2009-04-17 21:01 ` blstuart
2009-04-18 5:25 ` lucio
0 siblings, 1 reply; 94+ messages in thread
From: blstuart @ 2009-04-17 21:01 UTC (permalink / raw)
To: lucio, 9fans
> What struck me when first looking at Xen, long after I had decided
> that there was real merit in VMware, was that it allowed migration as
> well as checkpoint/restarting of guest OS images with the smallest
>...
>
> The way I see it, we would progress from conventional utilities strung
> together with Windows' crappy glue to having a single "profile"
> application, itself a virtualiser's guest, which includes any
> activities you may find useful online. It sits on the web and follows
I guess I'm a little slow; it's taken me a little while to get
my head around this and understand it. Let me see if I've
got the right picture. When I "login" I basically look up a
previously saved session in much the same way that LISP
systems would save a whole environment. Then when I
"log off" my session is suspended and saved. Alternatively,
I could always log into the same previously saved state.
> you around, wherever you go. ...
>
> Do you not like it?
If I understand it, I at least find it interesting. (I think I'd
have to try using it before I decided on preference.) I can
easily see different saved environments that I use depending
on whether I'm at home or at work or wherever. But what
happens if I'm not on any network at all? The more I think
about it, the more I think this could be handled with the
same mechanism that handles better integration of laptops
and file servers.
> It smacks of Inferno and o/mero on top of a
> virtualiser-enhanced Plan 9.
Hmmm. It might be pretty easy to whip up a prototype
based on Inferno. I must give this some thought...
BLS
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] VMs, etc. (was: Re: security questions)
2009-04-17 19:39 ` [9fans] VMs, etc. (was: Re: security questions) tlaronde
@ 2009-04-17 21:25 ` blstuart
2009-04-17 21:59 ` tlaronde
2009-04-17 23:41 ` Mechiel Lukkien
0 siblings, 2 replies; 94+ messages in thread
From: blstuart @ 2009-04-17 21:25 UTC (permalink / raw)
To: 9fans
>> Absolutely, but part of what has changed over the past 20
>> years is that the rate at which this local processing power
>> has grown has been faster than the rate at which the processing
>> power of the rack-mount box in the machine room has
>> grown (large clusters notwithstanding, that is). So the
>> gap between them has narrowed.
>
> This is a geek attitude ;) You say that since I can buy something more
> powerful (if I do not change the programs for fatter ones...) for the
> same amount of money or a little more, I have to find something to do
> with that.
I'm not sure I follow. The point where I would do something
special to get a more powerful system is several years past.
For example, a little over a year ago, the hinges on my work
laptop broke. When ordering a new one, there was no need
to get a quote for one more powerful than the corporate standard
ones. The ones in the "catalog" were powerful enough to do
pretty much anything I needed. This is partly because the
performance has grown faster than my need and because the
performance gap with larger systems has closed. In '89, a
desktop box would be something along the lines of an early
SPARCstation. There was a pretty large gap between its
power and that of a large SGI machine one might use for a
CPU server. Today, the difference between a base-model
machine and a single machine CPU server isn't as big as it
once was.
> My point of view is: if my terminal works, I keep it. If not, I buy
> something cheaper, including in TCO, for happily doing the work that has
> to be done ;)
I don't disagree. For that matter, pretty much all the machines
I use here at home are ones that were surplus and I rescued.
But once you get to the point where the cheapest one you can
find has more than enough capability, performance ceases to
be a motivator for a separate CPU server.
Again, that's not to say that there aren't other valid motivators
for some centralized functionality. It's just that in my opinion,
we're at the point where if it's raw cycles we need, we'll have
to be looking at a large cluster and not a simple CPU server.
BLS
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] VMs, etc. (was: Re: security questions)
2009-04-17 18:59 ` Eris Discordia
@ 2009-04-17 21:38 ` blstuart
0 siblings, 0 replies; 94+ messages in thread
From: blstuart @ 2009-04-17 21:38 UTC (permalink / raw)
To: 9fans
> I'd like to add to Brian Stuart's comments the point that previous
> specialization of various "boxes" is mostly disappearing. At some point in
> the near future all boxes may contain identical or very similar powerful
> hardware--probably even all integrated into one "black box." So cheap that
The domination of the commodity reminds me a lot of the
parallel processing world. At one time, the big honkin'
machines had very custom interconnect designs and often
custom CPUs as well. But by the time commodity CPUs
got to the point where they were competitive with what
you could do custom, and Ethernet got to the point where
it was competitive with what you could do custom, it
became very rare that you could justify a custom machine.
It was much more cost-effective to build a large cluster
of commodity machines.
For me, personally, this is leading to a point where my
home network is converging on a collection of laptops,
some get used the way most laptops get used, and some
just sit closed on shelves in the rack. The primary hardware
differences between servers and terminals is that servers
have bigger disks and the lids on terminals tend to stay
open where on servers they tend to stay closed. It's
getting farther away from the blinkin lights I miss, but
it sure makes my office more comfortable in the summer
both in terms of heat and noise. Now if I could just get
that Cisco switch to be quieter...
BLS
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] security questions
2009-04-17 20:12 ` John Barham
@ 2009-04-17 21:40 ` blstuart
0 siblings, 0 replies; 94+ messages in thread
From: blstuart @ 2009-04-17 21:40 UTC (permalink / raw)
To: 9fans
>> Principles of Operating Systems: Design and Applications
>> by Brian Stuart
>>
>> ( http://www.amazon.co.uk/exec/obidos/ASIN/1418837695 )
>>
>> I've only just started reading it, so can't really comment on how good
>> it is yet. Looks promising so far though.
>
> I recently bought this book and have read most of it. It's especially
> good at bridging the gap between OS theory and the gritty details of
> implementation with clear explanations of selected source code
> extracts from the Inferno and Linux kernels. The chapter on Inferno
> process management and its scheduler is especially illuminating.
>
> Although it focuses on the implementation of Inferno I've also found
> it helpful for understanding the Plan 9 kernel since it covers the
> Inferno device driver model, viz. embedded 9p/Styx servers. It also
> reviews the Inferno implementation of kfs, which is written in Limbo,
> but the mental translation to C is easy.
Thank you. I'm glad you're finding it useful.
BLS
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] VMs, etc. (was: Re: security questions)
2009-04-17 21:25 ` blstuart
@ 2009-04-17 21:59 ` tlaronde
2009-04-17 23:41 ` Mechiel Lukkien
1 sibling, 0 replies; 94+ messages in thread
From: tlaronde @ 2009-04-17 21:59 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs
On Fri, Apr 17, 2009 at 04:25:40PM -0500, blstuart@bellsouth.net wrote:
>
> Again, that's not to say that there aren't other valid motivators
> for some centralized functionality. It's just that in my opinion,
> we're at the point where if it's raw cycles we need, we'll have
> to be looking at a large cluster and not a simple CPU server.
Well there is perhaps a hint about what we disagree about. I'm not using
CPU with the strict present meaning in Plan 9 but as a _logical_
processing unit (this can actually be, in this scheme, a cluster or
whatever).
This does not invalidate the logical difference between a terminal and
"a" CPU. A node can be both a CPU (resp. member of a CPU) and a terminal
etc. The plan 9 distinction, on the usage side and on the topology,
between FileServer, CPU and Terminal is sound and fundamental IMHO.
Enough for me at the moment since, even if I have some things on the
application side, for the rest my discussion of "cloud computing" could
be a discussion about "vapor computing" ;)
--
Thierry Laronde (Alceste) <tlaronde +AT+ polynum +dot+ com>
http://www.kergis.com/
Key fingerprint = 0FF7 E906 FBAF FE95 FD89 250D 52B1 AE95 6006 F40C
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] VMs, etc. (was: Re: security questions)
2009-04-17 21:25 ` blstuart
2009-04-17 21:59 ` tlaronde
@ 2009-04-17 23:41 ` Mechiel Lukkien
1 sibling, 0 replies; 94+ messages in thread
From: Mechiel Lukkien @ 2009-04-17 23:41 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs
On Fri, Apr 17, 2009 at 04:25:40PM -0500, blstuart@bellsouth.net wrote:
> Again, that's not to say that there aren't other valid motivators
> for some centralized functionality. It's just that in my opinion,
> we're at the point where if it's raw cycles we need, we'll have
> to be looking at a large cluster and not a simple CPU server.
exactly.
the main use of a cpu server for me (and many others i suspect) is running
network services. it's still nice to have a machine that's always on for
that (my terminals are not stable/always on enough for providing services
to others). perhaps "cpu server" is the wrong name. "service server"
anyone? ;)
mjl
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] Plan9 - the next 20 years
2009-04-17 20:29 ` J.R. Mauro
@ 2009-04-18 3:56 ` erik quanstrom
2009-04-18 4:12 ` J.R. Mauro
0 siblings, 1 reply; 94+ messages in thread
From: erik quanstrom @ 2009-04-18 3:56 UTC (permalink / raw)
To: 9fans
> Vidi also seems to be an attempt to make Venti work in such a dynamic
> environment. IMHO, the assumption that computers are always connected
> to the network was a fundamental mistake in Plan 9
on the other hand, without this assumption, we would not have 9p.
it was a real innovation to dispense with underpowered workstations
with full administrative burdens.
i think it is anachronistic to consider the type of mobile devices we
have today. in 1990 i knew exactly 0 people with a cell phone. i had
a toshiba orange screen laptop from work, but in those days a 9600
baud vt100 was still a step up.
ah, the good old days.
none of this is to detract from the obviously good idea of being
able to carry around a working set and sync up with the main server
later without some revision control junk. in fact, i was excited to
learn about fossil — i was under the impression from reading the
paper that that's how it worked.
speaking of vidi, do the vidi authors have an update on their work?
i'd really like to hear how it is working out.
- erik
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] Plan9 - the next 20 years
2009-04-18 3:56 ` erik quanstrom
@ 2009-04-18 4:12 ` J.R. Mauro
2009-04-18 4:16 ` erik quanstrom
0 siblings, 1 reply; 94+ messages in thread
From: J.R. Mauro @ 2009-04-18 4:12 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs
On Fri, Apr 17, 2009 at 11:56 PM, erik quanstrom <quanstro@quanstro.net> wrote:
>> Vidi also seems to be an attempt to make Venti work in such a dynamic
>> environment. IMHO, the assumption that computers are always connected
>> to the network was a fundamental mistake in Plan 9
>
> on the other hand, without this assumption, we would not have 9p.
> it was a real innovation to dispense with underpowered workstations
> with full adminstrative burdens.
>
> i think it is anachronistic to consider the type of mobile devices we
> have today. in 1990 i knew exactly 0 people with a cell phone. i had
> a toshiba orange screen laptop from work, but in those days a 9600
> baud vt100 was still a step up.
>
> ah, the good old days.
Of course it's easy to blame people for lack of vision 25 years later,
but with the rate at which computing moves in general, cell phones as
powerful as workstations should have been seen to be on their way
within the authors' lifetimes.
That said, Plan 9 was designed to furnish the needs of an environment
that might not ever have had iPhones and eeePCs attached to it even if
such things existed at the time it was made.
But I'll say that if anyone tries to solve these problems today, they
should not fall into the same trap, and look to the future. I hope
they'll consider how well their solution scales to computers so small
they're running through someone's bloodstream and so far away that
communication in one direction will take several light-minutes and be
subject to massive delay and loss.
It's not that ridiculous... teams are testing DTN, which hopes to
spread the internet to outer space, not only across this solar system,
but also to nearby stars. Now there's thinking forward!
>
> none of this is do detract from the obviously good idea of being
> able to carry around a working set and sync up with the main server
> later without some revision control junk. in fact, i was excited to
> learn about fossil — i was under the impression from reading the
> paper that that's how it worked.
>
> speaking of vidi, do the vidi authors have an update on their work?
> i'd really like to hear how it is working out.
>
> - erik
>
>
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] Plan9 - the next 20 years
2009-04-18 4:12 ` J.R. Mauro
@ 2009-04-18 4:16 ` erik quanstrom
2009-04-18 5:51 ` J.R. Mauro
0 siblings, 1 reply; 94+ messages in thread
From: erik quanstrom @ 2009-04-18 4:16 UTC (permalink / raw)
To: 9fans
> But I'll say that if anyone tries to solve these problems today, they
> should not fall into the same trap, [...]
yes. forward thinking was just the thing that made multics
what it is today.
it is equally a trap to try to prognosticate too far in advance.
one increases the likelihood of failure and the chances of being
dead wrong.
- erik
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] Plan9 - the next 20 years
2009-04-17 20:18 ` Benjamin Huntsman
@ 2009-04-18 4:26 ` erik quanstrom
0 siblings, 0 replies; 94+ messages in thread
From: erik quanstrom @ 2009-04-18 4:26 UTC (permalink / raw)
To: 9fans
> Speaking of NUMA and such though, is there even any support for it in the kernel?
> I know we have a 10gb Ethernet driver, but what about cluster interconnects such as InfiniBand, Quadrics, or Myrinet? Are such things even desired in Plan 9?
there is no explicit numa support in the pc kernel.
however it runs just fine on standard x86-64 numa
architectures like intel nehalem and amd opteron.
we have two 10gbe ethernet drivers: the myricom driver
and the intel 82598 driver. the blue gene folks have
support for a number of blue-gene-specific networks.
i don't know too much about myrinet, infiniband or
quadrics. i have nothing against any of them, but
10gbe has been a much better fit for the things i've
wanted to do.
- erik
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] VMs, etc. (was: Re: security questions)
2009-04-17 21:01 ` blstuart
@ 2009-04-18 5:25 ` lucio
2009-04-19 20:19 ` blstuart
0 siblings, 1 reply; 94+ messages in thread
From: lucio @ 2009-04-18 5:25 UTC (permalink / raw)
To: 9fans
> I guess I'm a little slow; it's taken me a little while to get
> my head around this and understand it. Let me see if I've
> got the right picture. When I "login" I basically look up a
> previously saved session in much the same way that LISP
> systems would save a whole environment. Then when I
> "log off" my session is suspended and saved. Alternatively,
> I could always log into the same previously saved state.
The session would not be suspended; it would continue to operate as
your agent and identity and, typically, accept mail on your behalf,
perform "background" operations such as pay your accounts and in
general represent you to the web to the extent that security (or lack
thereof, for many unsophisticated users) permits. Nothing wrong with
me having a private search bot to look for particular pornography or
art or documentation while I'm asleep, the trick is to run it on
whatever platform(s) are suitable at the time.
Take my situation, for example. I am at the dial-up end of an ISDN
BRA connection (2 x 64kbps channels for all intents and purposes, one
of them reserved for voice calls) which costs me a nominal amount to
stay connected (when the powers that be allow it) from 19:00 to 07:00
each weekday and from Friday evening to Monday morning (and a fortune
during what the Telco calls "peak time"). The rest of the time, I
find it preferable to use GPRS (3G is not yet available) for on-demand
connections because I pay per volume and not for connect time.
Naturally, that makes my network a roaming one. Having my mail
exchanger et al. in Cape Town permanently on line at a client's
premises provides the visibility I need all the time, but it is not
something I will continue to be able to afford as my involvement with
that client will eventually stop. I'm not sure I can afford hosting
thereafter, but that is a separate issue.
The other organisation I am associated with has a hosted Linux server
I may use, or I may piggyback on their hosting contract, but I get too
little choice of platform on which to operate and even the hosting
structure may not suit me for a number of technical and political
reasons.
My dream is to be able to virtualise not so much the platform as the
"application" where application means whatever I feel like using at
the time. Including being able to access on a low speed line the
stuff that is, say, strictly text based. Or, as I often do, download
big volume items overnight, while I sleep. But most of all, I want to
walk away from the workstation and pick up where I left off anywhere
else, including accessing the "profile" using the local resources, not
necessarily the extraordinary features I may have built for myself in
my workshop (I wish!).
Most of all, it must be possible for me to enhance my "profile"
wherever I am, teach it new tricks whenever I discover them, make it
aware that they may only work in specific locations.
Do you get my drift?
The crucial bit is that it depends heavily on "being on the network"
insofar as having access to resources you cannot possibly be expected
to carry on your laptop (now you need Windows, just now you need
MacOS, say, or connection to your burglar alarm). In fact, my idea of
security is to deploy my mobile phone as the key; GPRS allows me a
very inexpensive, always on-line tool to provide, say, encryption keys
that have my identity firmly attached to them, practically anywhere in
South Africa and in most places in Africa, nevermind Europe
(connectivity was superb in Italy and Greece, last October) or the
USA. Given the access key, any "terminal" ought to be able to provide
at least part of the "experience" I'm likely to need.
In passing, a device that struck me as being extremely handy is the
3G USB dongle that is highly popular here; you may be more familiar
with it than I: it contains a simulated CD-ROM that it uses to install
its software. I thought that was particularly clever, especially if you
transform it into a Plan 9 or Inferno boot device.
I'm sorry if I'm throwing around too many ideas with too little flesh,
I must confess that I find this particular discussion very exciting, I
have never really had occasion to look at these ideas as carefully as
I am doing now.
I was going to address the issue of being disconnected and I note that
to some extent I have, because once you treat your mobile phone as a
factor, being disconnected becomes a non-issue. But if you do land in
a dead spot, for real, then, sure, you need much of your profile on
your portable. How much lives in your phone (no matter how, that has
to be connected to a computing device or _be_ a computing device) and
how much, say, on your laptop, is not important, as both have to be
with you, ideally they ought to be the same device and most likely
will be. In fact, in a Plan 9 paradigm, the phone is the
CPU/fileserver, the laptop is the terminal (now you got me thinking!).
Replication is another issue that needs careful thought, although once
again, it gets resolved by defining the phone as the arbiter, no
matter how many devices need conflict resolution.
++L
PS: Who would have thought that instead of drawterm, we'd want to
implement a Plan 9 CPU server on the iPhone?! Less so on an iPod :-)
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] Plan9 - the next 20 years
2009-04-18 4:16 ` erik quanstrom
@ 2009-04-18 5:51 ` J.R. Mauro
0 siblings, 0 replies; 94+ messages in thread
From: J.R. Mauro @ 2009-04-18 5:51 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs
On Sat, Apr 18, 2009 at 12:16 AM, erik quanstrom <quanstro@quanstro.net> wrote:
>> But I'll say that if anyone tries to solve these problems today, they
>> should not fall into the same trap, [...]
>
> yes. forward thinking was just the thing that made multics
> what it is today.
>
> it is equally a trap to try to prognosticate too far in advance.
> one increases the likelyhood of failure and the chances of being
> dead wrong.
>
> - erik
>
>
I don't think what I outlined is too far ahead, and the issues
presented are all doable as long as a small bit of extra consideration
is given.
Keeping your eye only on the here and now was "just the thing" that
gave Unix a bunch of tumorous growths like sockets and X11, and made
Windows the wonderful piece of hackery it is.
I'm not suggesting we consider how to solve the problems we'll face
when we're flying through space and time in the TARDIS and shrinking
ourselves and our bioships down to molecular sizes to cure someone's
brain cancer. I'm talking about making something scale across
distances and magnitudes that we will become accustomed to in the next
five decades.
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] Plan9 - the next 20 years
2009-04-17 20:14 ` Eric Van Hensbergen
2009-04-17 20:18 ` Benjamin Huntsman
2009-04-17 20:29 ` J.R. Mauro
@ 2009-04-18 12:52 ` Steve Simon
2 siblings, 0 replies; 94+ messages in thread
From: Steve Simon @ 2009-04-18 12:52 UTC (permalink / raw)
To: 9fans
I assumed cloud computing means you can log into any node
that you are authorised to use, and your data and code will migrate
to you as needed.
The idea being that the sam -r split is not only dynamic but on demand:
you may connect to the cloud from your phone just to read your email,
so the editor session stays attached to your home server, as it has not
requested many graphics events over the slow link.
<anecdote>
When I first read the London UKUUG Plan9 paper in the early nineties
I was managing an HPC workstation cluster.
I misunderstood the plan9 cpu(1) command and thought it, by default, connected
to the least loaded cpu server available. I was sufficiently inspired to write
a cpu(1) command for my HP/UX cluster which did exactly that.
Funny old world.
</anecdote>
-Steve
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] VMs, etc. (was: Re: security questions)
2009-04-17 20:21 ` blstuart
@ 2009-04-18 14:54 ` erik quanstrom
2009-04-18 16:06 ` Mechiel Lukkien
2009-04-19 20:52 ` blstuart
0 siblings, 2 replies; 94+ messages in thread
From: erik quanstrom @ 2009-04-18 14:54 UTC (permalink / raw)
To: 9fans
On Fri Apr 17 16:22:55 EDT 2009, blstuart@bellsouth.net wrote:
> >> I often tell my students that every cycle used by overhead
> >> (kernel, UI, etc) is a cycle taken away from doing the work
> >> of applications. I'd much rather have my DNA sequencing
> >> application finish in 25 days instead of 30 than to have
> >> the system look pretty during those 30 days.
> >
> > i didn't mean to imply we should not be frugal with cycles.
> > i meant to say that we simply don't have anything useful to
> > do with a vast majority of cycles, and that's just as wasteful
> > as doing bouncing icons. we need to work on that problem.
>
> I gotcha. I guess it depends on what you're doing. I remember
> years ago running a simulation on the 11/750 we had. It
> simulated a DSP chip running 2 seconds of real time. It
> ran for over a week. (While it was running, I took the time
> to write another, faster simulator that was able to run
> the simulation in about 2 hours.) For something like that,
> we can certainly use all the cycles we can get. On the other
> hand, I might look for a faster way to compile a kernel
> a while back, but now it compiles fast enough on most
> any machine that I'm not too concerned about where to
> use the cycles. (I'm speaking of a Plan 9 or Inferno kernel
> here; not a *BSD or Linux kernel.) But I suspect that
> virtualization and Dis-style VMs are a pretty good use of
> cycles we have to spare.
people's ideas about what's complicated or hard don't change
as quickly as computing power and storage have increased. i
think there's currently a failure of imagination, at least on
my part. there must be problems that aren't considered
because they were hard.
as an old example, i think that the lab's use of worm storage
for the main file server was incredibly insightful.
what could we do today, but don't quite dare?
- erik
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] VMs, etc. (was: Re: security questions)
2009-04-18 14:54 ` erik quanstrom
@ 2009-04-18 16:06 ` Mechiel Lukkien
2009-04-19 20:52 ` blstuart
1 sibling, 0 replies; 94+ messages in thread
From: Mechiel Lukkien @ 2009-04-18 16:06 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs
On Sat, Apr 18, 2009 at 10:54:34AM -0400, erik quanstrom wrote:
> as an old example, i think that the lab's use of worm storage
> for the main file server was incredibly insightful.
>
> what could we do today, but don't quite dare?
stop writing all programs in C, and start writing them in a
higher-level language that takes care of the tedious work that's
hard to get right. like memory/fd accounting and preventing bugs
from compromising system security. dare i say limbo?
just another old example though...
mjl
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] VMs, etc. (was: Re: security questions)
2009-04-18 5:25 ` lucio
@ 2009-04-19 20:19 ` blstuart
0 siblings, 0 replies; 94+ messages in thread
From: blstuart @ 2009-04-19 20:19 UTC (permalink / raw)
To: lucio, 9fans
> The session would not be suspended; it would continue to operate as
> your agent and identity and, typically, accept mail on your behalf,
> perform "background" operations such as pay your accounts and in
> general represent you to the web to the extent that security (or lack
> thereof, for many unsophisticated users) permits. Nothing wrong with
> me having a private search bot to look for particular pornography or
> art or documentation while I'm asleep, the trick is to run it on
> whatever platform(s) are suitable at the time.
Okay, I think I have the picture now. The idea of logging
in and out kind of goes out the window. It gets replaced
by connecting, disconnecting, and migrating your online
alter ego. So when I fire up a terminal, I'm don't announce
my presence, but instead pull the alter ego to whatever
machine I'm using at the moment. Shutting down the
terminal would amount to pushing it back out into the
cloud.
> The rest of the time, I
> find it preferable to use GPRS (3G is not yet available) for on-demand
> connections because I pay per volume and not for connect time.
> Naturally, that makes my network a roaming one.
And in cases like this, you basically remotely connect to
the "console" of your alter ego.
> Do you get my drift?
I think I've got a clearer picture than I had before.
> In passing, a device that struck me as being extremely handy is the
> 3G USB dongle that is highly popular here; you may be more familiar
> with it than I: it contains a simulated CD-ROM that it uses to install
> its software. I thought that was particularly clever, especially if you
> transform it into a Plan 9 or Inferno boot device.
That does sound cool. The only 3G interface I've worked
with was PCMCIA and basically just looked like a modem.
> I'm sorry if I'm throwing around too many ideas with too little flesh,
> I must confess that I find this particular discussion very exciting, I
> have never really had occasion to look at these ideas as carefully as
> I am doing now.
I'm still inclined to think Inferno would make a good
starting point. Suppose we have an Inferno configuration
without #U. Now we move the whole Inferno instance
around. Because apps don't have access to #U, state
information is pretty much confined to Styx connections
and resources managed by the Inferno kernel that gets
moved along with the running apps. I'm sure that as
with any other approach to migration, there be dragons
there. But as an initial line of thinking, it seems to be
a new model that's worth investigating.
BLS
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] VMs, etc. (was: Re: security questions)
2009-04-18 14:54 ` erik quanstrom
2009-04-18 16:06 ` Mechiel Lukkien
@ 2009-04-19 20:52 ` blstuart
2009-04-20 17:30 ` [9fans] VMs, etc maht
1 sibling, 1 reply; 94+ messages in thread
From: blstuart @ 2009-04-19 20:52 UTC (permalink / raw)
To: 9fans
> people's ideas about what's complicated or hard don't change
> as quickly as computing power and storage has increased. i
> think there's currently a failure of imagination, at least on
> my part. there must be problems that aren't considered
> because they were hard.
>
> as an old example, i think that the lab's use of worm storage
> for the main file server was incredibly insightful.
>
> what could we do today, but don't quite dare?
That's a very good question. I'm afraid my imagination
may be failing here as well. When I think about how
to use cycles, my mind tends to gravitate toward simulation
tasks that need cycles by virtue of scale: things like
weather forecasting, protein folding, etc. One other
thing that periodically gets my attention are some
ideas in the AI realm. But none of those are particularly
novel. Maybe that's why I don't find myself longing
for more performance than the current crop of machines.
Ultimately, I think you're right though. Someone
more clever than I will likely identify a use for the
cycles that is more original than my thoughts and
more intellectually interesting than eye candy.
BLS
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] VMs, etc.
2009-04-19 20:52 ` blstuart
@ 2009-04-20 17:30 ` maht
2009-04-20 17:44 ` erik quanstrom
0 siblings, 1 reply; 94+ messages in thread
From: maht @ 2009-04-20 17:30 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs
>> what could we do today, but don't quite dare?
>>
>
>
a Blu-ray writer does 50GB per disc (we're supposed to be getting one
soon, so maybe I can report back about this later)
ArcVault SCSI autoloading tape drives do from 9.6TB to 76TB
http://www.b2net.co.uk/overland/overland_arcvault_12_autoloader.htm
http://www.b2net.co.uk/overland/overland_arcvault_24_autoloader.htm
http://www.b2net.co.uk/overland/overland_arcvault_48_library.htm
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] VMs, etc.
2009-04-20 17:30 ` [9fans] VMs, etc maht
@ 2009-04-20 17:44 ` erik quanstrom
2009-04-20 17:47 ` Devon H. O'Dell
2009-04-20 17:49 ` maht
0 siblings, 2 replies; 94+ messages in thread
From: erik quanstrom @ 2009-04-20 17:44 UTC (permalink / raw)
To: 9fans
> >> what could we do today, but don't quite dare?
> >>
> >
> >
> a Blu-ray writer does 50GB per disc (we're supposed to be getting one
> soon, so maybe I can report back about this later)
>
> ArcVault SCSI autoloading tape drives do from 9.6TB to 76TB
>
> http://www.b2net.co.uk/overland/overland_arcvault_12_autoloader.htm
> http://www.b2net.co.uk/overland/overland_arcvault_24_autoloader.htm
> http://www.b2net.co.uk/overland/overland_arcvault_48_library.htm
i'm not following along. what would be the application?
- erik
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] VMs, etc.
2009-04-20 17:44 ` erik quanstrom
@ 2009-04-20 17:47 ` Devon H. O'Dell
2009-04-20 17:49 ` maht
1 sibling, 0 replies; 94+ messages in thread
From: Devon H. O'Dell @ 2009-04-20 17:47 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs
2009/4/20 erik quanstrom <quanstro@quanstro.net>:
>
> i'm not following along. what would be the application?
Jukebox, perhaps?
> - erik
>
>
^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [9fans] VMs, etc.
2009-04-20 17:44 ` erik quanstrom
2009-04-20 17:47 ` Devon H. O'Dell
@ 2009-04-20 17:49 ` maht
1 sibling, 0 replies; 94+ messages in thread
From: maht @ 2009-04-20 17:49 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs
erik quanstrom wrote:
>>>> what could we do today, but don't quite dare?
>>>>
>>>>
>>>
>>>
>> a Blu-ray writer does 50GB per disc (we're supposed to be getting one
>> soon, so maybe I can report back about this later)
>>
>> ArcVault SCSI autoloading tape drives do from 9.6TB to 76TB
>>
>> http://www.b2net.co.uk/overland/overland_arcvault_12_autoloader.htm
>> http://www.b2net.co.uk/overland/overland_arcvault_24_autoloader.htm
>> http://www.b2net.co.uk/overland/overland_arcvault_48_library.htm
>>
>
> i'm not following along. what would be the application?
>
> - erik
>
>
I realised after I posted that I had snipped the part about the Labs' worm drive
^ permalink raw reply [flat|nested] 94+ messages in thread
end of thread
Thread overview: 94+ messages
2009-04-16 17:47 [9fans] security questions Devon H. O'Dell
2009-04-16 18:30 ` erik quanstrom
2009-04-16 19:14 ` Venkatesh Srinivas
2009-04-16 20:10 ` Devon H. O'Dell
2009-04-16 20:19 ` Devon H. O'Dell
2009-04-17 4:48 ` lucio
2009-04-17 5:03 ` Eris Discordia
2009-04-17 9:47 ` lucio
2009-04-17 10:24 ` Eris Discordia
2009-04-17 11:55 ` lucio
2009-04-17 13:08 ` Eris Discordia
2009-04-17 14:15 ` gdiaz
2009-04-17 16:39 ` lucio
[not found] ` <6FD675BC714D323BF959A53B@192.168.1.2>
2009-04-17 16:15 ` Robert Raschke
2009-04-17 20:12 ` John Barham
2009-04-17 21:40 ` blstuart
2009-04-17 16:32 ` [9fans] VMs, etc. (was: Re: security questions) blstuart
2009-04-17 17:11 ` tlaronde
2009-04-17 17:29 ` erik quanstrom
2009-04-17 18:18 ` tlaronde
2009-04-17 19:00 ` erik quanstrom
2009-04-17 18:50 ` blstuart
2009-04-17 18:31 ` blstuart
2009-04-17 18:45 ` erik quanstrom
2009-04-17 18:59 ` blstuart
2009-04-17 19:05 ` erik quanstrom
2009-04-17 20:21 ` blstuart
2009-04-18 14:54 ` erik quanstrom
2009-04-18 16:06 ` Mechiel Lukkien
2009-04-19 20:52 ` blstuart
2009-04-20 17:30 ` [9fans] VMs, etc maht
2009-04-20 17:44 ` erik quanstrom
2009-04-20 17:47 ` Devon H. O'Dell
2009-04-20 17:49 ` maht
2009-04-17 19:39 ` [9fans] VMs, etc. (was: Re: security questions) tlaronde
2009-04-17 21:25 ` blstuart
2009-04-17 21:59 ` tlaronde
2009-04-17 23:41 ` Mechiel Lukkien
2009-04-17 18:59 ` Eris Discordia
2009-04-17 21:38 ` blstuart
[not found] ` <1322FA0842063D3D53C712DC@192.168.1.2>
2009-04-17 20:07 ` J.R. Mauro
2009-04-17 19:02 ` lucio
2009-04-17 21:01 ` blstuart
2009-04-18 5:25 ` lucio
2009-04-19 20:19 ` blstuart
2009-04-17 19:16 ` [9fans] Plan9 - the next 20 years Steve Simon
2009-04-17 19:39 ` J.R. Mauro
2009-04-17 19:43 ` tlaronde
2009-04-17 19:56 ` J.R. Mauro
2009-04-17 20:14 ` Eric Van Hensbergen
2009-04-17 20:18 ` Benjamin Huntsman
2009-04-18 4:26 ` erik quanstrom
2009-04-17 20:29 ` J.R. Mauro
2009-04-18 3:56 ` erik quanstrom
2009-04-18 4:12 ` J.R. Mauro
2009-04-18 4:16 ` erik quanstrom
2009-04-18 5:51 ` J.R. Mauro
2009-04-18 12:52 ` Steve Simon
2009-04-17 20:20 ` John Barham
2009-04-16 20:51 ` [9fans] security questions erik quanstrom
2009-04-16 21:49 ` Devon H. O'Dell
2009-04-16 22:19 ` erik quanstrom
2009-04-16 23:36 ` Devon H. O'Dell
2009-04-17 0:00 ` erik quanstrom
2009-04-17 1:25 ` Devon H. O'Dell
2009-04-17 1:54 ` erik quanstrom
2009-04-17 2:17 ` Devon H. O'Dell
2009-04-17 2:23 ` erik quanstrom
2009-04-17 2:33 ` Devon H. O'Dell
2009-04-17 2:43 ` J.R. Mauro
2009-04-17 5:48 ` john
2009-04-17 5:52 ` Bruce Ellis
2009-04-17 5:52 ` andrey mirtchovski
2009-04-17 5:57 ` Bruce Ellis
2009-04-17 9:26 ` Charles Forsyth
2009-04-17 10:29 ` Steve Simon
2009-04-17 11:04 ` Mechiel Lukkien
2009-04-17 11:36 ` lucio
2009-04-17 11:40 ` lucio
2009-04-17 11:51 ` erik quanstrom
2009-04-17 12:06 ` erik quanstrom
2009-04-17 13:52 ` Steve Simon
2009-04-17 1:59 ` Russ Cox
2009-04-17 12:07 ` maht
2009-04-17 2:07 ` Bakul Shah
2009-04-17 2:19 ` Devon H. O'Dell
2009-04-17 6:33 ` Bakul Shah
2009-04-17 9:51 ` lucio
2009-04-17 11:34 ` erik quanstrom
2009-04-17 12:14 ` Devon H. O'Dell
2009-04-17 18:29 ` Bakul Shah
2009-04-17 11:59 ` Devon H. O'Dell
2009-04-17 5:06 ` Eris Discordia
2009-04-17 8:36 ` Richard Miller