The Unix Heritage Society mailing list
From: "Perry E. Metzger" <perry@piermont.com>
To: tfb@tfeb.org
Cc: tuhs@minnie.tuhs.org
Subject: Re: [TUHS] PDP-11 legacy, C, and modern architectures
Date: Fri, 29 Jun 2018 12:09:42 -0400
Message-ID: <20180629120942.6cd4b67d@jabberwock.cb.piermont.com>
In-Reply-To: <8881414B-FF5C-4BD9-B518-AD22366DE4BC@tfeb.org>

On Fri, 29 Jun 2018 16:32:59 +0100 tfb@tfeb.org wrote:
> On 28 Jun 2018, at 18:09, Larry McVoy <lm@mcvoy.com> wrote:
> > 
> > I'm not sure how people keep missing the original point.  Which
> > was: the market won't choose a bunch of wimpy cpus when it can
> > get faster ones.  It wasn't about the physics (which I'm not
> > arguing with), it was about a choice between lots of wimpy cpus
> > and a smaller number of fast cpus.  The market wants the latter,
> > as Ted said, Sun bet heavily on the former and is no more.
> 
> [I said I wouldn't reply more: I'm weak.]
> 
> I think we have been talking at cross-purposes, which is probably
> my fault.  I think you've been using 'wimpy' to mean 'intentionally
> slower than they could be' while I have been using it to mean 'of
> very tiny computational power compared to the power of the whole
> system'.  Your usage is probably more correct in terms of the way
> the term has been used historically.
>
> But I think my usage tells you something important: that the
> performance of individual cores will, inevitably, become
> increasingly tiny compared to the performance of the system they
> are in, and will almost certainly become asymptotically constant
> (ie however much money you spend on an individual core it will not
> be very much faster than the one you can buy off-the-shelf).

You've touched the point with a needle. This is exactly what I was
getting at. It's not a question of whether you want a better single
CPU or not; they don't exist, and one CPU is a tiny fraction of the
power in a modern distributed or parallel system.

> So, if you want to keep seeing performance improvements (especially
> if you want to keep seeing exponential improvements for any
> significant time), then you have no choice but to start thinking
> about parallelism.
>
> The place I work now is an example of this.  Our machines have the
> fastest cores we could get.  But we need nearly half a million of
> them to do the work we want to do (this is across three systems).

And that's been the case for a long time now. Supercomputers are all
giant arrays of CPUs; there hasn't been a world-beating single-core
supercomputer in decades.
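
A back-of-the-envelope Amdahl's law sketch makes the point (Python;
the parallel fractions here are made-up illustrative figures, not
measurements from any real code):

    # Amdahl's law: speedup on n cores when a fraction p of the
    # work parallelizes and the rest stays serial.
    def speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    for p in (0.95, 0.999):
        for n in (64, 4096, 500_000):
            print(f"p={p}, n={n}: {speedup(p, n):,.0f}x")
    # p=0.95 saturates near 20x no matter how many cores you add;
    # p=0.999 still gets ~1000x out of half a million cores.

Which is why supercomputer codes get engineered until the serial
fraction is nearly zero: once per-core speed stops improving, the
serial remainder is the only thing standing between you and the
aggregate performance of the whole array.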

> I certainly don't want to argue that choosing intentionally slower
> cores than you can get is a good idea in general (although there
> are cases where it may be, including, perhaps, some HPC workloads).

And anything where power usage is key. If you're doing something that
computationally requires top-end single-core performance, you run the
4GHz core. Otherwise, you save a boatload of power, and of _cooling_,
by going a touch slower. Dynamic power goes as C*V^2*f, and since a
higher clock also demands a higher supply voltage, power falls off
much faster than linearly with clock rate, so you get quite a lot out
of backing off just a bit.
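
To put rough numbers on it, here's a quick sketch (Python; it assumes
the usual first-order CMOS model, with supply voltage scaling roughly
linearly with clock):

    # First-order CMOS dynamic power: P ~ C * V^2 * f.
    # Assuming V must track f roughly linearly, P scales ~ f^3.
    def relative_power(f_ratio):
        v_ratio = f_ratio  # assumption: voltage scales with frequency
        return (v_ratio ** 2) * f_ratio

    for f in (1.0, 0.9, 0.8):
        print(f"{f:.0%} clock -> {relative_power(f):.0%} dynamic power")
    # 90% clock -> ~73% power; 80% clock -> ~51% power.

Back off the clock 20% and dynamic power roughly halves, which is
exactly the bargain the wimpy-core designs are offering.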

But it doesn't matter much even if you want to run flat out, because
no single processor can do what you want any more. There's a serious
limit to how much money you can usefully throw at the vendors for a
faster core, and it tops out (per core) at a lot less than you're
paying your top engineers per day.

> However let me add something about the Sun T-series machines, which
> were 'wimpy cores' in the 'intentionally slower' sense.  When these
> started appearing I worked in a canonical Sun customer at the time:
> a big retail bank.  And the reason we did not buy lots of them was
> nothing to do with how fast they were (which was more than fast
> enough), it was because Sun's software was inadequate.

Your memory of this is the same as mine. I was also consulting to
quite similar customers at the time (most of my career has been
consulting to large investment banks; I haven't done much on the
retail side, though that's changed in recent years).

> (As an addendum: what eventually happened / is happening, I think,
> is that applications are getting recertified on Linux/x86 sitting
> on top of ESX, *which can move VMs live between hosts*, thus
> solving the problem.)

At the investment banks, the appetite for new Sun hardware when cheap
Linux on Intel was available was just not there. As you noted, too
many eggs in one basket was one problem, but another was the
combination of the better control an open source OS gives you with
just plain cheaper (and by then sometimes even nicer) hardware.

Sun seemed to get fat in the .com bubble when people were throwing
money at their stuff like crazy, and never quite adjusted to the fact
that the world was changing and people wanted lots of cheap boxes more
than they wanted a few expensive ones.

Perry
-- 
Perry E. Metzger		perry@piermont.com
