9fans - fans of the OS Plan 9 from Bell Labs
From: "Lawrence E. Bakst" <ml@iridescent.org>
To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
Subject: Re: [9fans] interesting timing tests
Date: Mon, 21 Jun 2010 23:24:28 -0400	[thread overview]
Message-ID: <p0624081dc845d03ca048@[192.168.0.7]> (raw)
In-Reply-To: <ce0445185b53080d4414a74250d32458@coraid.com>

Do you have a way to turn off one of the sockets on "c" (2 x E5540) and get the numbers with HT (8 processors) and without HT (4 processors)? It would also be interesting to see "c" with both sockets enabled but HT turned off.

Certainly it seems to me that idlehands needs to be fixed; your bit array "active.schedwait" is one way to do it.
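
Here is roughly how I picture the bit-array approach working. This is only a sketch in portable C11 under my own assumptions, not the Plan 9 source; the field name schedwait is erik's, and halt() and sendipi() are stand-ins for whatever the real kernel would use.

/*
 * Sketch: each processor sets its bit before halting, and the
 * scheduler pokes only processors that are actually waiting.
 */
#include <stdatomic.h>

enum { MAXMACH = 64 };

extern void halt(void);		/* stand-in: wait for the next interrupt */
extern void sendipi(int);	/* stand-in: make cpu m re-run its scheduler */

static struct {
	_Atomic unsigned long long schedwait;	/* bit m set: cpu m is idle in halt */
} active;

/* idle loop on cpu m: advertise that we are waiting, then halt */
void
idlehands(int m)
{
	atomic_fetch_or(&active.schedwait, 1ULL << m);
	halt();
	atomic_fetch_and(&active.schedwait, ~(1ULL << m));
}

/* when a process becomes runnable, wake only the cpus that are halted */
void
wakeidle(void)
{
	unsigned long long w;
	int m;

	w = atomic_load(&active.schedwait);
	for(m = 0; w != 0 && m < MAXMACH; m++, w >>= 1)
		if(w & 1)
			sendipi(m);
}

The real thing also has to close the window between setting the bit and halting, which is where most of the subtlety lives.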

In my experience bringing up the Alliant FX/8 mini-supercomputer, which had 8 (mostly CPU) + 12 (mostly I/O) = 20 processors, there were a bunch of details that had to be addressed as we went from 1 to 20 processors. There were even some issues with the system timing itself (user, sys, real) being messed up, but I can't remember the details.

I do remember one customer with a billing system who complained that their own customers were complaining: in high-I/O environments they were being charged for interrupt time (included in sys at the time) that they didn't incur, which was true. I think we fixed that one by giving each CPU an idle process and charging each interrupt to that processor's idle process.

I mention it because we got a lot of mileage out of the decision to give every processor an idle process. Our scheduler was set up to run that process only if there were no other processes available for that processor. When the idle process did run, it did a few things and then called halt. There is more to the story; if anyone is interested, let me know and I'll either post a follow-up or respond in private.
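
To make the arrangement concrete, here is a rough sketch in C of the shape of it, from memory and under my own assumptions; the real FX/8 code is long gone and every name here is made up.

struct Proc;

struct Cpu {
	struct Proc *idleproc;	/* dedicated idle process, one per processor */
	struct Proc *curproc;	/* whatever is running on this processor now */
};

extern struct Proc *runqget(struct Cpu*);	/* made up: pop the run queue, 0 if empty */
extern void chargesys(struct Proc*);		/* made up: bill a tick of sys time */
extern void housekeeping(struct Cpu*);		/* the "few things" the idle process does */
extern void halt(void);				/* wait for the next interrupt */

/* the scheduler never comes up empty: it falls back to the idle process */
struct Proc*
pickproc(struct Cpu *c)
{
	struct Proc *p;

	p = runqget(c);
	if(p == 0)
		p = c->idleproc;
	return p;
}

/* body of each per-processor idle process */
void
idleloop(struct Cpu *c)
{
	for(;;){
		housekeeping(c);
		halt();
	}
}

/*
 * interrupt accounting: charge the tick to whoever is current on this
 * processor.  While the processor is idle that is the idle process, so
 * billed user processes don't pay for I/O interrupts they didn't cause.
 */
void
clocktick(struct Cpu *c)
{
	chargesys(c->curproc);
}

The point is that the idle process gives every interrupt and every idle tick an owner, so the accounting falls out of the ordinary mechanism instead of being special-cased.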

We used to have a saying at Alliant: "Data drives out speculation".

leb


At 7:26 PM -0400 6/18/10, erik quanstrom wrote:
>note the extreme system time on the 16 processor machine
>
>a	2 * Intel(R) Xeon(R) CPU            5120  @ 1.86GHz
>b	4 * Intel(R) Xeon(R) CPU           E5630  @ 2.53GHz
>c	16 * Intel(R) Xeon(R) CPU          E5540  @ 2.53GHz

--
leb@iridescent.org




Thread overview: 18+ messages
2010-06-18 23:26 erik quanstrom
2010-06-19 13:42 ` Richard Miller
2010-06-20  1:36   ` erik quanstrom
2010-06-20  7:44     ` Richard Miller
2010-06-20 12:45       ` erik quanstrom
2010-06-20 16:51         ` Richard Miller
2010-06-20 21:55           ` erik quanstrom
2010-06-21  1:41             ` erik quanstrom
2010-06-21  3:46               ` Venkatesh Srinivas
2010-06-21 14:40                 ` erik quanstrom
2010-06-21 16:42                   ` Venkatesh Srinivas
2010-06-21 16:43                   ` erik quanstrom
2010-06-21 21:11 ` Bakul Shah
2010-06-21 21:21   ` erik quanstrom
2010-06-21 21:47     ` Bakul Shah
2010-06-21 22:16       ` erik quanstrom
2010-06-22  3:24 ` Lawrence E. Bakst [this message]
2010-06-23  1:09   ` erik quanstrom
