From mboxrd@z Thu Jan 1 00:00:00 1970
MIME-Version: 1.0
In-Reply-To: <20091016170316.GA3135@nipl.net>
References: <20091015105328.GA18947@nipl.net> <0d5ebfeb839e02bce3fb8511d6a32ce5@hamnavoe.com> <20091016170316.GA3135@nipl.net>
Date: Sat, 17 Oct 2009 05:42:10 -0700
Message-ID:
From: Roman Shaposhnik
To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable
Subject: Re: [9fans] Barrelfish
Topicbox-Message-UUID: 895b975a-ead5-11e9-9d60-3106f5b1d025

On Fri, Oct 16, 2009 at 10:03 AM, Sam Watkins wrote:
> On Thu, Oct 15, 2009 at 12:50:48PM +0100, Richard Miller wrote:
>> > It's easy to write good code that will take advantage of arbitrarily many
>> > processors to run faster / smoother, if you have a proper language for the
>> > task.
>>
>> ... and if you can find a way around Amdahl's law (qv).
>
> "The speedup of a program using multiple processors in parallel computing is
> limited by the time needed for the sequential fraction of the program."
>
> So it would only be a problem supposing that a significant part of the program
> is unparallelizable.  I can think of many many tasks where "Amdahl's law" is
> not going to be a problem at all, for a properly designed system.

Let's do a little math, shall we? Better yet, let's graph it:

    http://en.wikipedia.org/wiki/File:AmdahlsLaw.svg

Now, do you see what's on the right side of the X axis? That's right:
65536 cores. Pause and appreciate the measliness of the speedup...

Thanks,
Roman.
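[A worked illustration of the point above: the sketch below evaluates Amdahl's law, speedup = 1 / ((1 - p) + p/n), for a few assumed parallel fractions p at n = 65536 cores. The specific p values are illustrative, not taken from the thread.]

```python
def amdahl_speedup(p, n):
    """Amdahl's law: speedup on n processors when a fraction p
    of the program's serial runtime is perfectly parallelizable."""
    return 1.0 / ((1.0 - p) + p / n)

if __name__ == "__main__":
    n = 65536
    for p in (0.50, 0.75, 0.90, 0.95):
        # 1/(1-p) is the hard ceiling as n goes to infinity
        print("p=%.2f: %8.1fx on %d cores (limit %.0fx)"
              % (p, amdahl_speedup(p, n), n, 1.0 / (1.0 - p)))
```

Even with 95% of the program parallelizable, 65536 cores buy a speedup of barely 20x, which is the "measliness" visible on the right edge of the Wikipedia graph.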