From mboxrd@z Thu Jan 1 00:00:00 1970
From: tlaronde@polynum.com
Date: Fri, 17 Apr 2009 19:11:54 +0200
To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
Message-ID: <20090417171154.GA2029@polynum.com>
References: <21493d85b69b070fed0a4f864eb5325a@proxima.alt.za> <57928816a76e769327ca134d7e28bd06@bellsouth.net>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <57928816a76e769327ca134d7e28bd06@bellsouth.net>
User-Agent: Mutt/1.4.2.3i
Subject: Re: [9fans] VMs, etc. (was: Re: security questions)
Topicbox-Message-UUID: e25f99c4-ead4-11e9-9d60-3106f5b1d025

On Fri, Apr 17, 2009 at 11:32:33AM -0500, blstuart@bellsouth.net wrote:
> - First, the gap between the computational power at the
> terminal and the computational power in the machine room
> has shrunk to the point where it might no longer be significant.
> It may be worth rethinking the separation of CPU and terminal.
> For example, I'm typing this in acme running in a 9vx terminal
> booted using a combined fs/cpu/auth server for the
> file system. But I rarely use the cpu server capability of
> that machine.

I'm afraid I don't quite agree with you. The definition of a
terminal has changed.

In Unix, the graphical interface (X11) was a graphical variant of the
text terminal interface, i.e. the articulation (link, network) was put
in the wrong place: the graphical terminal (the X11 server) is a kind
of dumb terminal (a little above a frame buffer), leaving all the
processing, including the handling of the graphical interface itself
(generating the image, administering the UI, the menus), on the CPU
(Xlib and the toolkits run on the CPU, not on the X server).

But a terminal is not necessarily a device with no processing
capabilities (a dumb terminal): it can be a full terminal, that is,
one able to handle the interface, the representation of data and
commands (wandering through a menu should be terminal-side work; other
users should not be impacted by one user's wandering through the UI).
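To make the "articulation in the wrong place" concrete, here is a
minimal sketch (hypothetical code, not the real X11 protocol or Xlib
API) of the split described above: the application side ("CPU") computes
everything, down to the menu hover highlight, and ships only primitive
draw operations; the display side ("terminal") blindly executes them,
a little above a frame buffer.

```python
def application_side(menu_items, hovered_index):
    """Runs on the CPU: builds the whole menu image as draw primitives."""
    ops = []
    for i, label in enumerate(menu_items):
        y = i * 16
        if i == hovered_index:
            # Even a hover highlight is computed here and re-sent:
            # every wander through the menu crosses the network.
            ops.append(("fill_rect", 0, y, 120, 16))
        ops.append(("draw_text", 4, y + 12, label))
    return ops

def display_side(ops):
    """Runs on the terminal: executes primitives, no UI logic of its own."""
    framebuffer_log = []
    for op in ops:
        framebuffer_log.append(op)  # stand-in for actual rasterization
    return framebuffer_log

ops = application_side(["Open", "Save", "Quit"], hovered_index=1)
log = display_side(ops)
```

A "full terminal" in the sense above would instead run
application_side locally, so that wandering through the menu never
leaves the terminal.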
More and more, for administration, light terminals without local
software installations are the way to go (fewer resources, lower TCO):
"green" technology. Data-less terminals for security (one loses a
terminal, not the data), and data-less for safety (the data is
centralized and protected).

Secondly, one is accustomed to a physical user being several distinct
logical users (accounts), for managing different tasks or accessing
different kinds of data. But (to my surprise) the converse is also
true: a collection of individuals can be a single logical user, having
to handle the very same read-write data concurrently. Terminals are
then just distinct views of the same data (imagine a CAD program with
different windows showing different views of a file; this is the same,
except that the windows are on different terminals, with different
"instances" of the logical user in front of them). The processing is
then better kept on a single CPU, which handles the concurrency
(rather than the fileserver trying to accommodate it). The views are
multiplexed, but not the handling of the data.

Thirdly, you can have a slow or loose link between a CPU and a
terminal, since the commands are only a small fraction of the
processing done. You must have a fast, tight link between the CPU and
the fileserver.

In some sense, logically (but not efficiently: read the caveats in the
Plan 9 papers; a processor is nothing without tightly coupled memory,
so memory is not a remote, sharable pool---Mach!), even today an
average computer has this articulation: a CPU (perhaps with an FPU);
tightly or loosely connected storage (?ATA or SAN); graphical
capacities (a terminal): a GPU.

--
Thierry Laronde (Alceste)                     http://www.kergis.com/
Key fingerprint = 0FF7 E906 FBAF FE95 FD89 250D 52B1 AE95 6006 F40C