Date: Tue, 7 Mar 2023 11:42:14 -0500
From: "Theodore Ts'o" <tytso@mit.edu>
To: Larry McVoy
Cc: coff@tuhs.org
Subject: [TUHS] Re: Origins of the frame buffer device
Message-ID: <20230307164214.GC960946@mit.edu>
In-Reply-To: <20230306232429.GL5398@mcvoy.com>

(Moving to COFF)

On Mon, Mar 06, 2023 at 03:24:29PM -0800, Larry McVoy wrote:
> But even that seems suspect, I would think they could put some logic
> in there that just doesn't feed power to the GPU if you aren't using
> it but maybe that's harder than I think.
>
> If it's not about power then I don't get it, there are tons of transistors
> waiting to be used, they could easily plunk down a bunch of GPUs on the
> same die so why not?  Maybe the dev timelines are completely different
> (I suspect not, I'm just grabbing at straws).
Other potential reasons:

1) Moving functionality off-CPU also allows those devices to have their
own specialized video memory that might be faster (SDRAM) or dual-ported
(VRAM), without having to add that complexity to the more general system
DRAM and/or the CPU's Northbridge.

2) In some cases, an off-chip co-processor may not need any access to
system memory at all.  An example of this is the "bump in the wire"
in-line crypto engine (ICE), which sits between the Southbridge and the
eMMC/UFS flash storage device.  If you are using an Android device, it's
likely to have an ICE.  The big advantage is that it avoids needing a
bounce buffer on the write path, where the file system encryption layer
has to copy-and-encrypt data from the page cache into a bounce buffer,
and the encrypted block then gets DMA'ed to the storage device.  (A toy
sketch contrasting the two write paths is appended after the sig.)

3) From an architectural perspective, not all use cases need the various
co-processors, whether for doing cryptography, running some kind of
machine-learning model, or doing image manipulation to simulate bokeh,
create HDR images, etc.  While RISC-V does have the concept of
instruction set extensions, which can be developed without getting
permission from the "owners" of the core CPU ISA, for everyone else it's
a lot more convenient to simply put that extension outside the core ISA,
so there's no need to bend the knee to ARM, Inc. (or its new corporate
overlords) or to Intel.  (This, too, is sketched after the sig.)  (More
recently, there has been an interesting lawsuit about whether it's
"allowed" to put a 3rd party co-processor on the same SOC without paying
$$$$$ to the corporate overlord, which may make this point moot ---
although it might cause people to simply switch to another ISA that
doesn't have this kind of lawsuit-happy rent-seeking....)

In any case, if you don't need to play Quake at 240 frames per second,
there's no point putting the GPU into the core CPU architecture.  It may
turn out that the kind of co-processor which is optimized for running ML
models is different from a GPU anyway, and it is often easier to make
changes to the programming model of a GPU than to make changes to a
CPU's ISA.

					- Ted
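P.S.  Here's a rough user-space toy (all names are invented; this is
*not* the actual kernel write path) contrasting the two paths in (2):
software copy-and-encrypt into a bounce buffer versus handing the
cleartext page straight to a "bump in the wire" engine that encrypts on
its way to the flash.

/*
 * Toy model contrasting a software file-system-encryption write with an
 * inline crypto engine (ICE) write.  All names are made up for
 * illustration; "encryption" is a stand-in XOR, and "DMA" is a memcpy.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BLOCK_SIZE 4096

/* Stand-in for encryption; a real implementation would use AES-XTS. */
static void toy_encrypt(uint8_t *dst, const uint8_t *src, size_t len,
                        uint8_t key)
{
        for (size_t i = 0; i < len; i++)
                dst[i] = src[i] ^ key;
}

/* Stand-in for the DMA transfer the storage controller performs. */
static void toy_dma_to_device(uint8_t *device_media, const uint8_t *buf,
                              size_t len)
{
        memcpy(device_media, buf, len);
}

/*
 * Software path: copy-and-encrypt from the page cache into a bounce
 * buffer, then DMA the bounce buffer to the device.
 */
static void write_with_bounce_buffer(uint8_t *device_media,
                                     const uint8_t *page_cache, uint8_t key)
{
        uint8_t *bounce = malloc(BLOCK_SIZE);   /* the extra buffer */

        if (!bounce)
                return;
        toy_encrypt(bounce, page_cache, BLOCK_SIZE, key);  /* extra copy */
        toy_dma_to_device(device_media, bounce, BLOCK_SIZE);
        free(bounce);
}

/*
 * ICE path: the cleartext page is DMA'ed as-is; the "bump in the wire"
 * encrypts it on the way to the flash, so no bounce buffer is needed.
 */
static void write_with_ice(uint8_t *device_media, const uint8_t *page_cache,
                           uint8_t key)
{
        /* Model the in-line engine as encrypt-during-transfer. */
        toy_encrypt(device_media, page_cache, BLOCK_SIZE, key);
}

int main(void)
{
        uint8_t page[BLOCK_SIZE], media_a[BLOCK_SIZE], media_b[BLOCK_SIZE];

        memset(page, 0xab, sizeof(page));
        write_with_bounce_buffer(media_a, page, 0x5c);
        write_with_ice(media_b, page, 0x5c);
        printf("paths match: %s\n",
               memcmp(media_a, media_b, BLOCK_SIZE) == 0 ? "yes" : "no");
        return 0;
}

The only functional difference between the two helpers is the extra
allocation and copy in the bounce-buffer path --- which is exactly the
work the ICE saves.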
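P.P.S.  And a similarly rough sketch of point (3): an off-core
accelerator needs no change to the CPU's ISA at all, because the CPU
drives it with ordinary loads and stores to memory-mapped registers.
The register layout below is made up, and the "hardware" is simulated in
software so the example actually runs.

/*
 * Toy sketch of why an off-core accelerator needs no ISA changes: the
 * CPU talks to it purely through loads and stores to memory-mapped
 * registers.  A real device would be mapped in by the kernel, not
 * declared as a local struct.
 */
#include <stdint.h>
#include <stdio.h>

struct fake_accel_regs {
        volatile uint32_t ctrl;     /* write 1 to start the operation     */
        volatile uint32_t status;   /* bit 0 is set when the result is ready */
        volatile uint32_t operand;
        volatile uint32_t result;
};

/* Simulate the hardware so the example runs: it doubles the operand. */
static void fake_hw_step(struct fake_accel_regs *regs)
{
        if (regs->ctrl & 1) {
                regs->result = regs->operand * 2;
                regs->status |= 1;
                regs->ctrl &= ~1u;
        }
}

static uint32_t accel_compute(struct fake_accel_regs *regs, uint32_t x)
{
        regs->operand = x;
        regs->ctrl = 1;                 /* kick off the co-processor      */
        while (!(regs->status & 1))     /* poll until it signals done     */
                fake_hw_step(regs);     /* (stands in for real hardware)  */
        regs->status &= ~1u;
        return regs->result;
}

int main(void)
{
        struct fake_accel_regs regs = {0};

        printf("accel(21) = %u\n", (unsigned) accel_compute(&regs, 21));
        return 0;
}

Compare that with an ISA extension, where the same operation would be a
new instruction the CPU itself has to decode --- which is precisely the
part that needs the ISA owner's blessing.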