9fans - fans of the OS Plan 9 from Bell Labs
 help / color / mirror / Atom feed
From: Philippe Anel <xigh@free.fr>
To: Fans of the OS Plan 9 from Bell Labs <9fans@cse.psu.edu>
Subject: Re: [9fans] GCC/G++: some stress testing
Date: Sun,  2 Mar 2008 23:00:10 +0100	[thread overview]
Message-ID: <47CB236A.2020402@free.fr> (raw)
In-Reply-To: <E329B8ED-63E6-42C7-BB0B-81F4FAF79962@telus.net>

Paul Lalonde wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> CSP doesn't scale very well to hundreds of simultaneously executing
> threads (my claim, not, as far as I've found yet, anyone else's).  It
> is very well suited to a small number of threads that need to
> communicate, and as a model of concurrency for tasks with few points
> of contact.  For performance, the channel locks become a bottleneck as
> the number of cores scale up.  As far as expressiveness, there are
> still issues with composability and correctness as the number of
> threads interacting increases.  Yes, you at least get local stacks,
> but the work seems to get exponentially harder as the number of
> systems in the simulation (um, game engine) increases.
Interesting.
I agree with you, taking care about memory hierarchy is becoming very
important. Especially if you think about the upcoming NUMAcc systems
(Opterons are already there though).
But the fact is doesn't scale well is not about CSP itself, but the way
it has been implemented.
If CSP system itself takes care about memory hierarchy and uses no
synchronisation (using IPI to send message to another core by example),
CSP scales very well.
Of course IPI mechanism requires a switch to kernel mode which costs a
lot. But this is necessary only if the destination thread is running on
another core, and I don't think latency is very important in algorigthms
requiring a lot of cpus.
What do you think ?

Phil;



  reply	other threads:[~2008-03-02 22:00 UTC|newest]

Thread overview: 78+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2008-03-01  4:55 lucio
2008-03-01  6:02 ` Eric Van Hensbergen
2008-03-01  6:25   ` lucio
2008-03-01  6:39   ` Lyndon Nerenberg
2008-03-01  6:52     ` lucio
2008-03-01  6:59       ` Lyndon Nerenberg
2008-03-01  7:42         ` lucio
2008-03-01  7:56           ` Lyndon Nerenberg
2008-03-01  7:11     ` ron minnich
2008-03-01  7:29       ` Lyndon Nerenberg
2008-03-01 16:40         ` ron minnich
2008-03-01 16:50           ` Bruce Ellis
2008-03-01  8:14       ` lucio
2008-03-01 11:13         ` hiro
2008-03-01 11:47           ` [9fans] intellect? Lyndon Nerenberg
2008-03-01 11:51             ` Lyndon Nerenberg
2008-03-01 11:53             ` Charles Forsyth
2008-03-01 12:11             ` hiro
2008-03-01 12:37               ` hiro
2008-03-01 16:22     ` [9fans] GCC/G++: some stress testing Eric Van Hensbergen
2008-03-01  7:12 ` ron minnich
2008-03-01  7:32   ` Lyndon Nerenberg
2008-03-01 11:49 ` Charles Forsyth
2008-03-01 11:58   ` lucio
2008-03-01 18:15   ` don bailey
2008-03-01 18:24     ` erik quanstrom
2008-03-01 20:19     ` lucio
2008-03-01 21:36       ` ron minnich
2008-03-02  1:07         ` Paul Lalonde
2008-03-02  4:28           ` ron minnich
2008-03-02  8:03             ` Bruce Ellis
2008-03-02 11:12           ` Charles Forsyth
2008-03-02 16:21             ` Paul Lalonde
2008-03-02 18:59               ` erik quanstrom
2008-03-02 20:34                 ` Paul Lalonde
2008-03-02 22:00                   ` Philippe Anel [this message]
2008-03-03  3:19                     ` ron minnich
2008-03-03  9:12                       ` Philippe Anel
2008-03-03  3:25                     ` Paul Lalonde
2008-03-03  9:12                       ` Philippe Anel
2008-03-04  2:31                         ` Paul Lalonde
2008-03-03  3:31                   ` Roman V. Shaposhnik
2008-03-04  0:49                     ` erik quanstrom
2008-03-04  2:17                       ` Paul Lalonde
2008-03-04  4:52                         ` erik quanstrom
2008-03-04 10:57                     ` Paweł Lasek
2008-03-04 14:57                       ` ron minnich
2008-03-04 15:55                         ` Philippe Anel
2008-03-04 18:27                           ` ron minnich
2008-03-04 18:00                       ` Roman V. Shaposhnik
2008-03-07  5:21   ` lucio
2008-03-07 10:21     ` Russ Cox
2008-03-07 11:15       ` Pietro Gagliardi
2008-03-07 18:49         ` Skip Tavakkolian
2008-03-07 18:55           ` lucio
2008-03-07 17:22       ` lucio
2008-03-01 14:45 ` lejatorn
2008-03-01 14:49   ` Pietro Gagliardi
2008-03-01 15:17     ` lucio
2008-03-01 16:35       ` Pietro Gagliardi
2008-03-01 16:41       ` ron minnich
2008-03-01 16:53         ` Bruce Ellis
2008-03-01 17:13           ` ron minnich
2008-03-01 20:29             ` geoff
2008-03-01 21:36               ` ron minnich
2008-03-01 22:29                 ` Bruce Ellis
2008-03-04  7:35         ` Lyndon Nerenberg
2008-03-01 15:02   ` lucio
2008-03-01 16:31     ` Eric Van Hensbergen
2008-03-01 17:52     ` lejatorn
2008-03-02 17:57     ` sqweek
2008-03-02 18:22       ` lucio
2008-03-02 20:56       ` cinap_lenrek
2008-03-09 17:53 Aharon Robbins
2008-03-09 17:57 ` Bruce Ellis
2008-03-09 18:28 ` Pietro Gagliardi
2008-03-09 19:17   ` erik quanstrom
2008-03-09 19:22   ` Skip Tavakkolian

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=47CB236A.2020402@free.fr \
    --to=xigh@free.fr \
    --cc=9fans@cse.psu.edu \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).