From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <13426df10803021919l45b67b63uea5b8871bd2fb5a5@mail.gmail.com>
Date: Sun, 2 Mar 2008 19:19:13 -0800
From: "ron minnich"
To: "Fans of the OS Plan 9 from Bell Labs" <9fans@cse.psu.edu>
Subject: Re: [9fans] GCC/G++: some stress testing
In-Reply-To: <47CB236A.2020402@free.fr>
MIME-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
Content-Disposition: inline
References: <47CB236A.2020402@free.fr>
Topicbox-Message-UUID: 6d109f66-ead3-11e9-9d60-3106f5b1d025

On Sun, Mar 2, 2008 at 2:00 PM, Philippe Anel wrote:
> If the CSP system itself takes care of the memory hierarchy and uses no
> synchronisation (using an IPI to send a message to another core, for
> example), CSP scales very well.

Is this something you have measured, or is it conjecture?

> Of course the IPI mechanism requires a switch to kernel mode, which
> costs a lot. But this is necessary only if the destination thread is
> running on another core, and I don't think latency is very important in
> algorithms requiring a lot of cpus.

Same question.

For a look at an interesting library that scaled well, see the one Jim
Taft wrote for a 1024-node SMP at NASA Ames. Short form: use shared
memory for IPC, not data sharing. He's done very well this way; a rough
sketch of the idea follows below.

ron
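
A minimal sketch of the "shared memory for IPC, not data sharing" idea,
assuming POSIX mmap and C11 atomics. This is not Taft's library; the
one-slot "mailbox" layout and all names here are illustrative. Two
processes exchange message copies through a shared mapping instead of
dereferencing each other's live data structures:

#define _DEFAULT_SOURCE
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <stdatomic.h>
#include <sys/mman.h>
#include <sys/wait.h>

/* One-slot mailbox in a shared mapping: the only shared state is the
 * mailbox itself; messages are copied in and copied out, never shared. */
struct mailbox {
	atomic_int full;	/* 0 = empty, 1 = holds a message */
	char msg[128];
};

int
main(void)
{
	struct mailbox *mb = mmap(NULL, sizeof *mb, PROT_READ|PROT_WRITE,
	    MAP_SHARED|MAP_ANONYMOUS, -1, 0);
	if (mb == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	atomic_init(&mb->full, 0);

	if (fork() == 0) {
		/* receiver: spin until a message arrives; a real system
		 * would block or be woken, e.g. by the IPI mechanism
		 * Philippe describes */
		while (atomic_load(&mb->full) == 0)
			;
		char local[128];
		memcpy(local, mb->msg, sizeof local);	/* copy out */
		atomic_store(&mb->full, 0);
		printf("receiver got: %s\n", local);
		_exit(0);
	}

	/* sender: copy the message in, then mark the slot full; the
	 * seq_cst store orders the copy before the flag becomes visible */
	strcpy(mb->msg, "hello over shared memory");
	atomic_store(&mb->full, 1);
	wait(NULL);
	return 0;
}

The design point: the shared region is only a transport. Each side
works on its private copy, so no locks guard shared data structures,
and the cores contend only on the mailbox itself.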