From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <3481e409129226c68958b4f16554718d@bellsouth.net>
To: 9fans@9fans.net
Date: Fri, 17 Apr 2009 13:50:56 -0500
From: blstuart@bellsouth.net
In-Reply-To:
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: 7bit
Subject: Re: [9fans] VMs, etc. (was: Re: security questions)
Topicbox-Message-UUID: e2992dc4-ead4-11e9-9d60-3106f5b1d025

> if you look closely enough, this kind of breaks down.  numa
> machines are pretty popular these days (opteron, intel qpi-based
> processors).  it's possible with a modest loss of performance to
> share memory across processors and not worry about it.

Way back in the dim times when hypercubes roamed the earth, I
played around a bit with parallel machines.  When I was writing
my master's thesis, I tried to find a way to dispel the idea that
shared memory vs. interconnection network was as bipolar as the
terms multiprocessor and multicomputer would suggest.  One of the
few things in that work that I think still makes sense is
characterizing the degree of coupling as a continuum based on the
ratio of bytes transferred between CPUs to bytes accessed in
local memory.  So C.mmp would have a very high degree of coupling
and SETI@home would have a very low degree of coupling.

The upshot is that if I have a fast enough network, my degree of
coupling is high enough that I don't really care how much of my
memory is local and how much is on the other side of the
building.  Of course, until recently, the rate at which a CPU
must fetch to keep its pipeline full has grown much faster than
network speeds.  So the idea of remote memory hasn't been all
that useful.  However, I wouldn't be surprised to see that change
over the next 10 to 20 years.  So maybe my local CPU will gain
access to most of its memory by importing /dev/memctl from a
memory server (1/2 :)).  Rough sketches of both the coupling
ratio and the import follow my signature.

BLS
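
P.S. To put a rough number on the coupling ratio, here's a
minimal sketch in C.  The counter names and the sample figures
are invented for illustration; nothing here is lifted from the
thesis:

	#include <stdio.h>

	/*
	 * Degree of coupling as the ratio of bytes transferred
	 * between CPUs to bytes accessed in local memory.  The
	 * counters would come from wherever your hardware or
	 * runtime exposes them; these are hypothetical.
	 */
	double
	coupling(double remotebytes, double localbytes)
	{
		if(localbytes == 0)
			return 0;	/* no local traffic measured */
		return remotebytes / localbytes;
	}

	int
	main(void)
	{
		/* C.mmp-style: references routinely cross the crossbar */
		printf("tightly coupled: %g\n", coupling(1e9, 1e9));
		/* SETI@home-style: ship a small work unit, then
		 * grind on it locally for hours */
		printf("loosely coupled: %g\n", coupling(1e6, 1e12));
		return 0;
	}

The interesting part is that the ratio is a continuum: there's no
magic threshold where a multicomputer turns into a multiprocessor.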
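
P.P.S. As for the half-joking memory server, Plan 9 already has
the mechanism: import(4) grafts part of a remote machine's
namespace into the local one.  Something like the following would
do it, where memserver is a made-up machine name and /dev/memctl
is the hypothetical device from the joke above:

	% import -a memserver /dev
	% ls -l /dev/memctl

The -a unions the remote /dev after the local one, so local
devices still win on name collisions.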