From: hiro <23hiro@googlemail.com>
To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
Date: Tue, 12 Jul 2011 02:10:50 +0200
Subject: Re: [9fans] GNU/Linux/Plan 9 distro

> it's true that network latency is a huge deal these days. and even a
> multicore machine looks like a bunch of fast cores connected to their
> l3 and each other by a slow network. but if you have a "large enough"
> packet of work, it can make sense to move it over to another cpu. i
> don't see why, given an appropriate defn of "large enough", this
> couldn't work across different orders of magnitude.

I agree there are a lot of cases where I still want exactly that. On
Plan 9, of course, I don't pull from sources over my mobile phone;
instead I pull and compile on the CPU server.

Some sane, non-transparent CPU offloading is worth it because of
1. slow networks and 2. compilers still running fastest on fast CPUs.

My main point is that there won't be any space left for CPU offloading
when it gets limited not only by good old network latency, but also by
the increasingly powerful GPGPU/APU/FPGA/CPLD/ASIC side. Correct me if
I'm wrong.
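
To put a rough number on the quoted "large enough" threshold, here is a
back-of-the-envelope sketch in C. All the names, numbers, and the Work
struct are made-up illustrations, not anyone's measured system: the only
claim is the break-even inequality itself, that offloading pays off when
local compute time exceeds round-trip latency plus transfer time plus
remote compute time.

    #include <stdio.h>

    /* One "packet of work"; every field is a hypothetical parameter. */
    typedef struct Work Work;
    struct Work {
        double bytes;      /* size of the packet to ship, bytes */
        double ops;        /* total ops the packet needs */
        double localops;   /* ops/sec on the local cpu */
        double remoteops;  /* ops/sec on the remote cpu */
        double latency;    /* one-way network latency, seconds */
        double bandwidth;  /* link bandwidth, bytes/sec */
    };

    /* offloading wins when local time > ship time + remote time */
    int
    shouldoffload(Work w)
    {
        double local = w.ops / w.localops;
        double remote = 2*w.latency + w.bytes/w.bandwidth
            + w.ops/w.remoteops;
        return local > remote;
    }

    int
    main(void)
    {
        /* e.g. a compile-sized job over a LAN: 1 MB shipped,
           10x faster remote cpu, 1 ms latency, 100 MB/s link */
        Work w = { 1e6, 1e9, 1e9, 1e10, 1e-3, 1e8 };
        printf("offload: %s\n", shouldoffload(w) ? "yes" : "no");
        return 0;
    }

With these invented constants the answer is "yes" (1 s locally versus
about 0.112 s shipped), but every term spans orders of magnitude in
practice, which is exactly why the same inequality can flip either way
between cores sharing an l3 and machines on a slow network.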