From: "Roman V. Shaposhnik" <rvs@sun.com>
To: erik quanstrom <quanstro@quanstro.net>
Cc: 9fans@9fans.net
Subject: Re: [9fans] Plan 9 and multicores/parallelism/concurrency?
Date: Mon, 14 Jul 2008 13:33:01 -0700 [thread overview]
Message-ID: <1216067581.14715.45.camel@goose.sun.com> (raw)
In-Reply-To: <f1209aefaab5eece7465c3d0df545ddd@quanstro.net>
On Mon, 2008-07-14 at 12:35 -0400, erik quanstrom wrote:
> > > Plan 9 makes it easy via 9p, its file system/resource sharing
> > > protocol. In plan 9, things like graphics and network drivers export a
> > > 9p interface (a filetree). Furthermore, 9p is network transparent
> > > which means accesses to remote resources look exactly like accesses to
> > > local resources, and this is the main trick - processes do not care
> > > whether the file they are interested in is being served by the kernel,
> > > a userspace process, or a machine half way across the world.
> >
> > All very true. And it sure does provide enormous benefits on distributed
> > memory architectures. But do you know of any part that would be
> > beneficial for highly-SMP systems?
>
> do you have some reason to believe that 9p (or just read and write)
> is not effective on such a machine?
I have some reason (not a whole lot, since I haven't looked at the
source code for a while) to believe that the current 9P implementation
doesn't exploit the opportunity when both ends happen to run
on the same shared memory. I would love to be proven wrong. The
higher-level issue I have with 9P on shared-memory architectures,
though, is that file and communication abstractions might not be the
best way to represent shared-memory resources to begin with. IOW,
mmap()-like things might be a closer match.
> since scheduling would be the main shared resource, do you think
> it would be the limiting factor?
Yes. And that's where the comment in my first email came from:
scheduling is a tricky thing on shared-memory, NUMA-like systems.
Solaris's scheduler is not shy when it comes to big iron (100+ CPU SMP
boxes), but even it had to be heavily tuned when a Batoka box first
came to the labs. When you have physical threads (CPUs), virtual
threads, and a non-trivial memory hierarchy, the decision of what
is the best place (hardware-wise) for a given thread to run becomes
a non-trivial one. Kernels that can track affinity properly rule
the day. I don't think the Plan 9 scheduler has had an
opportunity to be tuned for such an environment. The same goes for
virtual-memory page-related algorithms.
Here's a decent (albeit brief) overview of what a kernel has to
face these days in order to be reasonably savvy on shared-memory,
multicore architectures with a NUMA-like memory hierarchy:
http://www.redhat.com/promo/summit/2008/downloads/pdf/Wednesday_1015am_Rik_Van_Riel_Hot_Topics.pdf
Start from slide #13.
Thanks,
Roman.