From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 17 Apr 2009 19:59:49 +0100
From: Eris Discordia
To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
Message-ID: <1322FA0842063D3D53C712DC@[192.168.1.2]>
In-Reply-To: <20090417171154.GA2029@polynum.com>
References: <21493d85b69b070fed0a4f864eb5325a@proxima.alt.za>
 <57928816a76e769327ca134d7e28bd06@bellsouth.net>
 <20090417171154.GA2029@polynum.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii; format=flowed
Content-Transfer-Encoding: 7bit
Content-Disposition: inline
Subject: Re: [9fans] VMs, etc. (was: Re: security questions)
Topicbox-Message-UUID: e368c6b0-ead4-11e9-9d60-3106f5b1d025

> even today on an average computer one has this articulation: a CPU (with
> a FPU perhaps) ; tightly or loosely connected storage (?ATA or SAN) ;
> graphical capacities (terminal) : GPU.

It so happens that a reversal of specialization really has taken place,
as Brian Stuart suggests. These "terminals" you speak of, GPUs, contain
such vast untapped general processing capabilities that new uses and a
new framework for using them are being defined: GPGPU and OpenCL. Right
now, the GPU on my low-end video card takes a huge burden off the CPU
when leveraged by the right H.264 decoder. Two high-definition AVC
streams would significantly slow down my computer before I began using a
CUDA-enabled decoder. Now I can easily play four in parallel. Similarly,
the GPUs in PS3 boxes are being integrated into one of the largest
loosely coupled clusters on the planet.

Today, even a mere cellphone may contain enough processing power to run
a low-traffic web server or a 3D video game. This processing power comes
cheap, so it is mostly wasted.

I'd like to add to Brian Stuart's comments the point that the previous
specialization of the various "boxes" is mostly disappearing. At some
point in the near future all boxes may contain identical or very similar
powerful hardware--probably even all integrated into one "black box."
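The decoder anecdote is easy to caricature in a few lines: hand each
independent stream to its own worker, and the main thread is left nearly
idle. This is only a sketch--host threads stand in for GPU execution
lanes, and the stream names and the fake decode workload are invented
for illustration:

```python
# Caricature of offloading: four independent "streams" are handed to a
# pool of workers, the way a CUDA-enabled decoder fans AVC streams out
# to the GPU. The workload below is fake; only the shape is the point.
from concurrent.futures import ThreadPoolExecutor

def decode(stream_id):
    # Stand-in for decoding one stream: a fixed amount of busywork.
    checksum = 0
    for frame in range(10_000):
        checksum = (checksum * 31 + frame) % 1_000_003
    return stream_id, checksum

streams = ["stream-%d" % i for i in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(decode, streams))

# Every stream completes; the main thread merely collected the results.
print(sorted(results))
```

With a real GPGPU framework (OpenCL, CUDA) the workers would be device
kernels rather than host threads, but the division of labor is the same:
the CPU dispatches and collects, the parallel hardware does the work.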
It is so cheap that it doesn't matter if one or another hardware
resource is wasted. To put such a computational environment to good use,
system software should stop incorporating a role-based model of the
various installations. All boxes, except the costliest, most special
ones, shall be peers.

--On Friday, April 17, 2009 7:11 PM +0200 tlaronde@polynum.com wrote:

> On Fri, Apr 17, 2009 at 11:32:33AM -0500, blstuart@bellsouth.net wrote:
>> - First, the gap between the computational power at the
>> terminal and the computational power in the machine room
>> has shrunk to the point where it might no longer be significant.
>> It may be worth rethinking the separation of CPU and terminal.
>> For example, I'm typing this in acme running in a 9vx terminal
>> booted using a combined fs/cpu/auth server for the
>> file system. But I rarely use the cpu server capability of
>> that machine.
>
> I'm afraid I don't quite agree with you.
>
> The definition of a terminal has changed. In Unix, the graphical
> interface (X11) was a graphical variant of the text terminal interface,
> i.e. the articulation (link, network) was put in the wrong place,
> the graphical terminal (X11 server) being a kind of dumb terminal (a
> little above a frame buffer), leaving all the processing, including the
> handling of the graphical interface (generating the image,
> administrating the UI, the menus), on the CPU (Xlib and toolkits run on
> the CPU, not the X server).
>
> A terminal is not a device with no processing capabilities (a dumb
> terminal): it can be a full terminal, that is, able to handle the
> interface, the representation of data and commands (wandering in a
> menu shall be terminal stuff; other users should not be impacted by a
> user's wandering through the UI).
>
> More and more, for administration, using light terminals, without
> software installations, is a way to go (less resources in TCO).
> "Green" technology.
> Dataless terminals for security (one loses a terminal, not the data),
> and dataless for safety (data is centralized and protected).
>
> Secondly, one is accustomed to a physical user being several distinct
> logical users (accounts), for managing different tasks, or accessing
> different kinds of data.
>
> But (to my surprise), the converse is true: a collection of individuals
> can be a single logical user, having to handle concurrently the very
> same rw data. Terminals are then just distinct views of the same data
> (imagine, in a CAD program, having different windows, different views
> of a file; this is the same, except that the windows are on different
> terminals, with different "instances" of the logical user in front of
> them).
>
> The processing is then better kept on a single CPU, handling the
> concurrency (and not the fileserver trying to accommodate). The views
> are multiplexed, but not the handling of the data.
>
> Thirdly, you can have a slow/loose link between a CPU and a terminal
> since the commands are only a small fraction of the processing done.
> You must have a fast or tight link between the CPU and the fileserver.
>
> In some sense, logically (but not efficiently: read the caveats in the
> Plan 9 papers; a processor is nothing without tightly coupled memory,
> so memory is not a remote pool sharable---Mach!), even today on an
> average computer one has this articulation: a CPU (with a FPU
> perhaps) ; tightly or loosely connected storage (?ATA or SAN) ;
> graphical capacities (terminal) : GPU.
>
> --
> Thierry Laronde (Alceste)
> http://www.kergis.com/
> Key fingerprint = 0FF7 E906 FBAF FE95 FD89 250D 52B1 AE95 6006 F40C
>
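The "several terminals, one logical user" arrangement quoted above can
be caricatured in code. In this toy sketch (all names invented; plain
threads stand in for terminals and a lock stands in for the single CPU
serializing edits--none of this is Plan 9 code), concurrency is resolved
at one point and every terminal sees the same view:

```python
# Toy model: three "terminals" act as one logical user editing the same
# document. A single lock -- the "CPU" -- serializes the edits; the
# fileserver never has to arbitrate, and every view() is identical.
import threading

class SharedDocument:
    def __init__(self):
        self._lines = []
        self._lock = threading.Lock()

    def append(self, terminal_id, text):
        # Every terminal's edit funnels through the same lock.
        with self._lock:
            self._lines.append((terminal_id, text))

    def view(self):
        # Each terminal gets a view of the same multiplexed state.
        with self._lock:
            return list(self._lines)

doc = SharedDocument()

def terminal(tid):
    for i in range(100):
        doc.append(tid, "edit-%d" % i)

threads = [threading.Thread(target=terminal, args=(t,))
           for t in ("term-A", "term-B", "term-C")]
for t in threads:
    t.start()
for t in threads:
    t.join()

# 300 edits, none lost; all terminals now share one consistent view.
print(len(doc.view()))
```

The point of the sketch is where the lock lives: contention is handled
where the data is processed (one CPU), not at the fileserver, and the
terminals only multiplex views.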