The Unix Heritage Society mailing list
From: "Perry E. Metzger" <perry@piermont.com>
To: "Theodore Y. Ts'o" <tytso@mit.edu>
Cc: tuhs@minnie.tuhs.org
Subject: Re: [TUHS] PDP-11 legacy, C, and modern architectures
Date: Fri, 29 Jun 2018 14:41:00 -0400
Message-ID: <20180629144100.474eccf4@jabberwock.cb.piermont.com>
In-Reply-To: <20180629125848.GD1231@thunk.org>

On Fri, 29 Jun 2018 08:58:48 -0400 "Theodore Y. Ts'o" <tytso@mit.edu>
wrote:
> All of this is to point out that talking about 2998 threads really
> doesn't mean much.  We shouldn't be talking about threads; we should
> be talking about how many CPU cores can we usefully keep busy at the
> same time.  Most of the time, for desktops and laptops, except for
> brief moments when you are running "make -j32" (and that's only for
> us weird programmer-types; we aren't actually the common case),
> most of the time, the user-facing CPU is twiddling its fingers.

On the other hand, when people are doing machine learning stuff on
GPUs or TPUs, or are playing a video game on a GPU, or are doing
weather prediction or magnetohydrodynamics simulations, or are
calculating the bonding energies of molecules, they very much _are_
going to use each and every computation unit they can get their hands
on. Yah, a programmer doesn't use very much CPU unless they're
compiling, but the world's big iron isn't there to make programmers
happy these days.

Even if "all" you're doing is handling instant message traffic for a
couple hundred million people, you're going to be doing a lot of stuff
that isn't easily classified as concurrent vs. parallel
vs. distributed any more, and the edges all get funny.

> > 4. The reason most people prefer to use one very high perf.
> >    CPU rather than a bunch of "wimpy" processors is *because*
> >    most of our tooling uses only sequential languages with
> >    very little concurrency.
>
> The problem is that I've been hearing this excuse for two decades.
> And there have been people who have been working on this problem.
> And at this point, there's been a bit of "parallelism winter" that
> is much like the "AI winter" in the 80's.

The parallelism winter already happened, in the late 1980s. Larry may
think I'm a youngster, but over 30 years ago I worked on a machine
called "DADO", built at Columbia, with 1023 processors arranged in a
tree that could operate in MIMD or SIMD fashion depending on how it
was configured. Later I worked at Bellcore on something called the Y
machine, built for massive simulations.

All that parallel stuff died hard because general purpose CPUs kept
getting faster too quickly for it to be worthwhile. Now, however,
things are different.

> Lots of people have been promising wonderful results for a long time;
> Sun bet their company (and lost) on it; and there hasn't been much
> in the way of results.

I'm going to dispute that. Sun bet the company on the notion that
people would keep buying machines sold at ridiculously high margins
and low performance per dollar when they could go out and buy Intel
boxes running RHEL instead. As has been said elsewhere in the thread,
what failed wasn't Niagara so much as Sun's entire attempt to sell
people what were effectively mainframes, at a point where customers
no longer wanted giant boxes with low throughput per dollar.

Frankly, Sun jumped the shark long before. I remember some people (Hi
Larry!) valiantly adding minor variants to the SunOS release number
(what did it get to? SunOS 4.1.3_u1b or some such?) at a point where
the firm had already committed to Solaris. They stopped shipping
compilers so they could charge for them, stopped maintaining the
desktop, stopped updating the userland utilities until they got
ridiculously behind (Solaris still, when I last checked, had a
/bin/sh that wouldn't handle $( )), and put everything into selling
bigger and bigger iron, which paid off for a little while, until it
didn't any more.

> Sure, there are specialized cases where this has been useful ---
> making better nuclear bombs with which to kill ourselves, predicting
> the weather, etc.  But for the most part, there hasn't been much
> improvement for anything other than super-specialized use cases.

Your laptop's GPUs beat its CPUs pretty badly on power, and you're
using them every day. Ditto for the GPUs in your cellphone. You're
invoking big parallel back-ends at Amazon, Google, and Apple all the
time over the network, too, to do things like voice recognition and
finding the best route on a large map. They might be "specialized"
uses, but you're invoking them enough times an hour that maybe they're
not that specialized?

> > 6. The conventional wisdom is parallel languages are a failure
> >    and parallel programming is *hard*.  Hoare's CSP and
> >    Dijkstra's "elephants made out of mosquitos" papers are
> >    over 40 years old.
>
> It's a failure because there hasn't been *results*.  There are
> parallel languages that have been proposed by academics --- I just
> don't think they are any good, and they certainly haven't proven
> themselves to end-users.

Er, Erlang? Rust? Go has CSP and is in very wide deployment? There's
multicore stuff in a bunch of functional languages, too.
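To make the CSP point concrete, here's a toy of my own (not code from
any project named above) showing channel-based message passing, the
model Go builds in with goroutines and channels, written here with
Rust's standard `std::sync::mpsc` channels:

```rust
use std::sync::mpsc;
use std::thread;

// Compute the first n squares in a worker thread and return them over
// a channel: shared-nothing processes communicating only by messages,
// which is the heart of the CSP style.
fn squares_via_channel(n: i32) -> Vec<i32> {
    let (tx, rx) = mpsc::channel();
    let worker = thread::spawn(move || {
        for i in 0..n {
            tx.send(i * i).expect("receiver hung up");
        }
        // tx is dropped here, closing the channel, so rx.iter() ends.
    });
    // The receiving side just iterates until the channel closes.
    let out: Vec<i32> = rx.iter().collect();
    worker.join().unwrap();
    out
}

fn main() {
    println!("{:?}", squares_via_channel(5)); // prints [0, 1, 4, 9, 16]
}
```

With a single sender the channel preserves send order, so no locks or
shared mutable state appear anywhere in the sketch.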

If you're using Firefox right now, a large chunk of the code is
running multicore using Rust to assure that parallelism is safe. Rust
does that using type theory work (on linear types) from the PL
community. It's excellent research that's paying off in the real
world.
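To show what that type-theory work buys you, here's a small sketch of
my own (not Firefox code): the borrow checker lets two threads mutate
disjoint halves of one buffer with no locks, because it can prove the
halves never alias.

```rust
use std::thread;

// Double every element in parallel by splitting the buffer into two
// disjoint halves. split_at_mut hands out non-overlapping mutable
// slices, so the compiler knows no data race is possible.
fn parallel_double(data: &mut [i64]) {
    let mid = data.len() / 2;
    let (left, right) = data.split_at_mut(mid);
    thread::scope(|s| {
        s.spawn(|| left.iter_mut().for_each(|x| *x *= 2));
        s.spawn(|| right.iter_mut().for_each(|x| *x *= 2));
        // Handing `left` to both spawns instead would be rejected at
        // compile time: two simultaneous mutable borrows of one slice.
    });
}

fn main() {
    let mut v = vec![1, 2, 3, 4, 5];
    parallel_double(&mut v);
    println!("{:?}", v); // prints [2, 4, 6, 8, 10]
}
```

The point is that the race is a compile error rather than a
once-a-month crash, which is exactly the guarantee the parallel parts
of a browser need.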

(And on the original point, you're also using the GPU to render most
of what you're looking at, invoked from your browser, whether Firefox,
Chrome, or Safari. That's all parallel stuff and it's going on every
time you open a web page.)

Perry
-- 
Perry E. Metzger		perry@piermont.com

