From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: 
To: 9fans@9fans.net
Date: Sun, 19 Apr 2009 19:14:51 -0700
From: Skip Tavakkolian <9nut@9netics.com>
In-Reply-To: <3e1162e60904190826w1ff0d7e5ua5456981be9719cc@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: 7bit
Subject: Re: [9fans] Plan9 - the next 20 years
Topicbox-Message-UUID: e9e8d4bc-ead4-11e9-9d60-3106f5b1d025

ericvh stated it better in the "FAWN" thread.  choosing the
abstraction that gives the resulting environments the required
attributes (reliability, consistency, ease of use, etc.) will be the
trick.  i believe that given the current state of the Internet --
e.g. its lack of speed and security -- service abstraction is the
right level of distributedness.  presenting services as a file
hierarchy makes sense; 9p is efficient, and so the plan9 approach
still feels like the right path to cloud computing.

> On Sun, Apr 19, 2009 at 12:12 AM, Skip Tavakkolian <9nut@9netics.com> wrote:
>
>> > Well, in the octopus you have a fixed part, the pc, but all other
>> > machines come and go. The feeling is very much that your stuff is in
>> > the cloud.
>>
>> i was going to mention this.  to me the current view of cloud
>> computing, as evidenced by papers like this[1], is basically hardware
>> infrastructure capable of running vm pools, each of which would do
>> exactly what a dedicated server would do.  the main benefits are low
>> administration cost and elasticity.  networking, authentication and
>> authorization remain as they are now.  they are still not addressing
>> what octopus and rangboom are trying to address: how to seamlessly and
>> automatically make resources accessible.  if you read what ken said,
>> it appears to be this view of cloud computing; he said "some framework
>> to allow many loosely-coupled Plan9 systems to emulate a single system
>> that would be larger and more reliable".  in all the virtualization
>> systems i've seen, the vm has to be smaller than the environment it
>> runs on.  if vmware or xen were ever to give you a vm that was larger
>> than any given real machine it ran on, they'd have to solve the same
>> problem.
>
> I'm not sure a single system image is any better in the long run than
> distributed shared memory.  Both have issues of locality, where the
> abstraction that gives you the view of a single machine hurts your
> ability to account for the lack of locality.
>
> In other words, I think applications should show a single system image,
> but maybe not programming models.  I'm not 100% sure what I mean by
> that, actually, but it's sort of an intuitive feeling.
>
>> [1] http://www.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-28.pdf
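
to make the service-as-files point concrete, here is a minimal sketch
of a 9p file server, assuming plan9's lib9p; the names fsread and
greetsrv are illustrative only.  it serves one read-only file,
greeting, which any client can then mount over 9p like any other
file service:

	#include <u.h>
	#include <libc.h>
	#include <fcall.h>
	#include <thread.h>
	#include <9p.h>

	/* answer Tread requests for the one file we serve */
	static void
	fsread(Req *r)
	{
		readstr(r, "hello from a 9p service\n");
		respond(r, nil);
	}

	static Srv fs = {
		.read = fsread,
	};

	void
	threadmain(int argc, char **argv)
	{
		/* build a one-file tree: /greeting, read-only */
		fs.tree = alloctree(nil, nil, DMDIR|0555, nil);
		createfile(fs.tree->root, "greeting", nil, 0444, nil);

		/* post the service as /srv/greetsrv */
		threadpostmountsrv(&fs, "greetsrv", nil, MREPL);
		threadexits(nil);
	}

once posted, a client attaches it the usual way:

	mount /srv/greetsrv /n/greet
	cat /n/greet/greeting

this is the sense in which 9p makes a remote service just another
mount point.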