Date: Fri, 29 Jun 2018 14:41:00 -0400
From: "Perry E. Metzger" <perry@piermont.com>
To: "Theodore Y. Ts'o"
Cc: tuhs@minnie.tuhs.org
Subject: Re: [TUHS] PDP-11 legacy, C, and modern architectures

On Fri, 29 Jun 2018 08:58:48 -0400 "Theodore Y. Ts'o" wrote:

> All of this is to point out that talking about 2998 threads really
> doesn't mean much. We shouldn't be talking about threads; we should
> be talking about how many CPU cores we can usefully keep busy at the
> same time. Most of the time, for desktops and laptops, except for
> brief moments when you are running "make -j32" (and that's only for
> us weird programmer-types; we aren't actually the common case), the
> user-facing CPU is twiddling its fingers.

On the other hand, when people are doing machine learning on GPUs or
TPUs, playing a video game on a GPU, running weather prediction or
magnetohydrodynamics simulations, or calculating the bonding energies
of molecules, they very much _are_ going to use each and every
computation unit they can get their hands on. Yah, a programmer
doesn't use very much CPU unless they're compiling, but the world's
big iron isn't there to make programmers happy these days. Even if
"all" you're doing is handling instant message traffic for a couple
hundred million people, you're going to be doing a lot of stuff that
isn't easily classified as concurrent vs. parallel vs. distributed
any more; the edges all get funny.
> > 4. The reason most people prefer to use one very high perf.
> >    CPU rather than a bunch of "wimpy" processors is *because*
> >    most of our tooling uses only sequential languages with
> >    very little concurrency.
>
> The problem is that I've been hearing this excuse for two decades.
> And there have been people who have been working on this problem.
> And at this point, there's been a bit of a "parallelism winter" that
> is much like the "AI winter" in the 80's.

The parallelism winter already happened, in the late 1980s. Larry may
think I'm a youngster, but over 30 years ago I worked on a machine
called "DADO" built at Columbia, with 1023 processors arranged in a
tree offering both MIMD and SIMD operation depending on how it was
configured. Later I worked at Bellcore on something called the Y
machine, built for massive simulations. All that parallel stuff died
hard because general-purpose CPUs kept getting faster too quickly for
it to be worthwhile. Now, however, things are different.

> Lots of people have been promising wonderful results for a long
> time; Sun bet their company (and lost) on it; and there hasn't been
> much in the way of results.

I'm going to dispute that. Sun bet the company on the notion that
people wanted machines it could sell at ridiculously high margins and
pretty low performance per dollar, when they could go out and buy
Intel boxes running RHEL instead. As has been said elsewhere in the
thread, what failed wasn't Niagara so much as Sun's entire attempt to
sell people what were effectively mainframes, at a point where they
no longer wanted giant boxes with low throughput per dollar.

Frankly, Sun jumped the shark long before. I remember some people (Hi
Larry!) valiantly adding minor variants to the SunOS release number
(what did it get to? SunOS 4.1.3_u1b or some such?) at a point where
the firm had committed to Solaris. They stopped shipping compilers so
they could charge you for them, stopped maintaining the desktop,
stopped updating the userland utilities so they got ridiculously
behind (Solaris still, when I last checked, had a /bin/sh that
wouldn't handle $( ) ), and put everything into selling bigger and
bigger iron, which paid off for a little while, until it didn't any
more.

> Sure, there are specialized cases where this has been useful ---
> making better nuclear bombs with which to kill ourselves, predicting
> the weather, etc. But for the most part, there hasn't been much
> improvement for anything other than super-specialized use cases.

Your laptop's GPUs handily beat its CPUs on performance per watt, and
you're using them every day. Ditto for the GPUs in your cellphone.
You're also invoking big parallel back-ends at Amazon, Google, and
Apple over the network all the time, to do things like voice
recognition and finding the best route on a large map. They might be
"specialized" uses, but you're invoking them enough times an hour
that maybe they're not that specialized?

> > 6. The conventional wisdom is parallel languages are a failure
> >    and parallel programming is *hard*. Hoare's CSP and
> >    Dijkstra's "elephants made out of mosquitos" papers are
> >    over 40 years old.
>
> It's a failure because there haven't been *results*. There are
> parallel languages that have been proposed by academics --- I just
> don't think they are any good, and they certainly haven't proven
> themselves to end-users.

Er, Erlang? Rust? Go has CSP and is in very wide deployment? There's
multicore stuff in a bunch of functional languages, too.
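To make the CSP point concrete, here's the shape of it: two threads
that communicate only by passing messages over a channel, never by
sharing memory. I'm sketching it in Rust's standard library (Go's
goroutines and channels express the same Hoare-style idea); the names
and the toy workload are mine, not from any real codebase:

    use std::sync::mpsc;
    use std::thread;

    fn main() {
        // The channel is the only line of communication between
        // the two threads -- the heart of Hoare's CSP model.
        let (tx, rx) = mpsc::channel();

        let worker = thread::spawn(move || {
            for n in 0..5u64 {
                tx.send(n * n).expect("receiver hung up");
            }
            // tx is dropped here, which closes the channel.
        });

        // The receiver simply iterates until the channel closes.
        for square in rx {
            println!("got {square}");
        }
        worker.join().unwrap();
    }

No locks, no shared state; reasoning about deadlocks and races
reduces to reasoning about the channels. That's why CSP has aged so
well.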
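Rust earns its mention for a different reason: the compiler rejects
data races at compile time. Two threads may mutate disjoint halves of
a buffer, but hand them overlapping halves and the program simply
won't build. A minimal sketch of that guarantee, using only the
standard library (again, the names and the toy workload are mine):

    use std::thread;

    fn main() {
        let mut data = vec![1u64; 1_000_000];
        let mid = data.len() / 2;
        // Two provably non-overlapping mutable borrows.
        let (left, right) = data.split_at_mut(mid);

        thread::scope(|s| {
            s.spawn(|| left.iter_mut().for_each(|x| *x *= 2));
            s.spawn(|| right.iter_mut().for_each(|x| *x += 1));
            // Handing the *same* &mut slice to both spawns would
            // be a compile-time error, not a 3am debugging session.
        });

        println!("{} {}", data[0], data[mid]);
    }

That's the linear-types research I mention below, doing real work.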
If you're using Firefox right now, a large chunk of the code is
running multicore, using Rust to ensure that the parallelism is safe.
Rust does that using type theory work (on linear types) from the PL
community. It's excellent research that's paying off in the real
world.

(And on the original point, you're also using the GPU to render most
of what you're looking at, invoked from your browser, whether
Firefox, Chrome, or Safari. That's all parallel stuff, and it's going
on every time you open a web page.)

Perry
--
Perry E. Metzger                perry@piermont.com