Date: Thu, 28 Jun 2018 17:03:17 -0400
From: "Perry E. Metzger" <perry@piermont.com>
To: Warner Losh
Cc: TUHS main list
Subject: Re: [TUHS] PDP-11 legacy, C, and modern architectures
Message-ID: <20180628170317.14d65067@jabberwock.cb.piermont.com>
References: <81277CC3-3C4A-49B8-8720-CFAD22BB28F8@bitblocks.com>
 <20180628141538.GB663@thunk.org>
 <20180628104329.754d2c19@jabberwock.cb.piermont.com>
 <20180628145609.GD21688@mcvoy.com>
 <20180628154246.3a1ce74a@jabberwock.cb.piermont.com>

On Thu, 28 Jun 2018 14:42:47 -0600 Warner Losh wrote:
> > > Got a source that backs up that claim? I was recently dancing
> > > with Netflix and they don't match your claim, nor do the other
> > > content delivery networks, they want every cycle they can get.
> >
> > Netflix has how many machines?
>
> We generally say we have tens of thousands of machines deployed
> worldwide in our CDN. We don't give out specific numbers though.

Tens of thousands of machines is a lot more than one. I think the
point stands. This is the age of distributed and parallel systems.

> > Taking the other way of looking at it, from what I understand,
> > CDN boxes are about I/O and not CPU, though I could be wrong. I
> > can ask some of the Netflix people; a former report of mine is
> > one of the people behind their front end cache boxes and we keep
> > in touch.
>
> I can tell you it's about both. We recently started encrypting all
> traffic, which requires a crapton of CPU. Plus, we're doing
> sophisticated network flow modeling to reduce congestion, which
> takes CPU. On our 100G boxes, which we get in the low 90's
> encrypted, we have some spare CPU, but almost no spare memory
> bandwidth, and our PCI lanes are full of either 100G network
> traffic or 4-6 NVMe drives delivering content at about 85-90Gbps.
>
> Most of our other boxes are the same, with the exception of the
> 'storage' tier boxes. Those are definitely hard disk I/O bound.

I believe all of this, but I think it is consistent with the point.
You're not trying to buy $100,000 CPUs that are faster than the
several-hundred-dollar-per-core parts you can actually get, because
no one sells them. You're building systems that scale out by adding
more CPUs and more boxes. You might even want very high-end CPUs,
but the high end isn't vastly better than the low end, and there's
a limit to what you can spend per CPU because there just aren't
better ones on the market.

So all of this means that, architecturally, we're no longer in an
age where things get designed to run on one processor. Systems have
to be built to be parallel and distributed. Our kernels no longer
target a single fast core; they have to handle multiprocessing and
all that it entails. Our software needs to run multicore if it's
going to take advantage of the expensive processors and
motherboards we've bought. Thread pools, locking, IPC, and all the
rest are now a way of life. We've got ways to avoid some of those
things by using share-nothing designs and message passing, but even
so, having to structure our software around parallelism is
unavoidable.
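
To make that concrete, here is a minimal sketch (mine, not anything
from Netflix's boxes) of the share-nothing, message-passing style in
Rust: each worker thread owns its data outright and hands results
back over a channel, and the ownership rules are what keep two
writers from ever touching the same memory.

    // Minimal sketch of share-nothing concurrency via message
    // passing. Workers own their data; results come back over a
    // channel; nothing is shared between threads.
    use std::sync::mpsc;
    use std::thread;

    fn main() {
        let (tx, rx) = mpsc::channel();

        for worker_id in 0..4 {
            let tx = tx.clone();
            thread::spawn(move || {
                // `move` hands ownership of this worker's state to
                // the thread; no other thread can touch it.
                let local_sum: u64 = (0..1_000_000u64).sum();
                tx.send((worker_id, local_sum)).expect("rx gone");
            });
        }
        drop(tx); // drop our sender so the receive loop terminates

        for (worker_id, sum) in rx {
            println!("worker {} computed {}", worker_id, sum);
        }
        // Two threads mutating the same buffer without a lock
        // would simply not compile; the borrow checker rejects it.
    }

Nothing in the sketch needs a lock, because nothing is shared; the
only coordination point is the channel.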

Why am I belaboring this? Because the original point, that language
support for building distributed and parallel systems does help,
isn't wrong. There are a lot of projects out there using things like
Erlang and managing nearly miraculous feats of uptime because of it.
There are people replacing C++ with Rust because they can't reason
about concurrency well enough without language support, and Rust's
linear types mean you can't write code that accidentally shares
memory between two writers. This stuff does matter.

Perry
--
Perry E. Metzger              perry@piermont.com