From: Larry McVoy <lm@mcvoy.com>
To: Noel Chiappa
CC: coff@tuhs.org
Date: Thu, 14 Dec 2023 15:54:43 -0800
Subject: [COFF] Re: Terminology query - 'system process'?
Message-ID: <20231214235443.GG6732@mcvoy.com>
In-Reply-To: <20231214232935.802BB18C08F@mercury.lcs.mit.edu>
List-Id: Computer Old Farts Forum

On Thu, Dec 14, 2023 at 06:29:35PM -0500, Noel Chiappa wrote:
> > From: Bakul Shah
>
> > Now I'd probably call them kernel threads as they don't have a separate
> > address space.
>
> Makes sense. One query about stacks, and blocking, there. Do kernel threads,
> in general, have per-thread stacks; so that they can block (and later resume
> exactly where they were when they blocked)?

Yep, threads have stacks; not sure how they could work without them.

Which reminds me of some Solaris insanity. Back when I was at Sun, they
were threading the VM and I/O system. The kernel was pretty bloated and a
stack was two 8K pages. The I/O people wanted to allocate a kernel thread
*per page* that was being sent to disk/network. I pointed out that this
meant that if all of memory wanted to head to disk, your dirty page cache
could be at most 1/3 of memory, because the other 2/3 would be thread
stacks. They ignored me, implemented it, and it was a miserable failure;
they had to start over. They just didn't believe in basic math.

> Use of a kernel process probably makes the BSD pageout daemon code fairly
> straightforward, too (well, as straightforward as anything done by Berzerkly
> was :-).

I have immense respect for the BSD pageout daemon code. When I did UFS
clustering to make I/O go at the platter speed (rather than 1/2 the
platter speed), it caused a big problem: UFS used up the page cache much
faster than the pageout daemon could free pages, so the daemon couldn't
keep up. I wrote somewhere around 13 different pageout daemons in an
attempt to do better than the BSD one. Each of them did at least a little
better in certain cases, but none of them did better in all cases.
That BSD code was subtly awesome; I was at the top of my game and I
couldn't beat it. I ended up implementing a "free behind" in UFS: I'd
watch the variable that controlled kicking the pageout code into running,
and I'd start freeing behind when memory was getting close to that
threshold (but far enough ahead that the pageout daemon wouldn't wake up).
It's a really gross hack, but it was the best I could come up with.

--lm
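
The block-and-resume behavior Noel asks about hinges on each thread owning
its own stack. The mechanism can be illustrated in user space with the
ucontext API (obsolescent, but still available on most Unix systems); this
is a minimal sketch of the idea, not Sun's or BSD's actual kernel code: a
"thread" gets a private stack, blocking saves its context and switches
away, and waking it resumes execution exactly after the block point.

    #include <stdio.h>
    #include <stdlib.h>
    #include <ucontext.h>

    #define STACK_SIZE (64 * 1024)

    static ucontext_t main_ctx, thr_ctx;

    /* The "kernel thread": runs on its own stack, blocks, then
     * resumes exactly where it blocked. */
    static void
    thread_body(void)
    {
        printf("thread: running, about to block\n");
        swapcontext(&thr_ctx, &main_ctx);  /* block: save stack+PC, switch away */
        printf("thread: resumed right after the block point\n");
    }

    int
    main(void)
    {
        char *stack = malloc(STACK_SIZE);  /* the per-thread stack */

        getcontext(&thr_ctx);
        thr_ctx.uc_stack.ss_sp = stack;
        thr_ctx.uc_stack.ss_size = STACK_SIZE;
        thr_ctx.uc_link = &main_ctx;       /* where to go when the thread returns */
        makecontext(&thr_ctx, thread_body, 0);

        swapcontext(&main_ctx, &thr_ctx);  /* run the thread until it blocks */
        printf("main: thread is blocked, doing other work\n");
        swapcontext(&main_ctx, &thr_ctx);  /* wake it; it picks up mid-function */

        free(stack);
        return 0;
    }

In a real kernel the switch happens in the scheduler and the stacks are
kernel-allocated, but the invariant is the same: the saved stack plus
register context is what lets a thread pick up mid-function after blocking.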
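
A back-of-the-envelope check of the thread-per-page arithmetic, assuming
the pages headed to disk were the same 8K size as the stack pages (the
email only states the stack size, so the page size here is an assumption):

    #include <stdio.h>

    int
    main(void)
    {
        double page_kb  = 8.0;      /* one dirty page headed to disk (assumed 8K) */
        double stack_kb = 2 * 8.0;  /* per-thread kernel stack: two 8K pages */

        /* One kernel thread per in-flight page means each dirty page
         * drags 16K of stack along with it, so at most
         * page/(page+stack) of that memory is actual data. */
        printf("max dirty fraction: %.3f\n", page_kb / (page_kb + stack_kb));
        return 0;
    }

This prints 0.333: with 16K of stack pinned per 8K page in flight, dirty
data can never be more than a third of the memory committed to pageout.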
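
And a toy model of the "free behind" hack. The names and thresholds are
illustrative stand-ins, not the real kernel variables (the email doesn't
name the one Larry watched; in SunOS terms it would be something like
freemem measured against lotsfree): a sequential reader consumes pages
and, once free memory falls within a slack margin of the pageout trigger,
frees the pages behind itself so the daemon never has to wake.

    #include <stdio.h>

    /* All names and numbers below are illustrative, not the real
     * SunOS variables. */
    #define TOTAL_PAGES 1000
    #define LOTSFREE    100   /* pageout daemon wakes below this */
    #define SLACK       50    /* free-behind kicks in this far ahead */

    int
    main(void)
    {
        int freemem = TOTAL_PAGES;
        int freed_behind = 0, daemon_wakeups = 0;

        for (int pg = 0; pg < TOTAL_PAGES; pg++) {
            freemem--;                 /* sequential reader takes a page */

            if (freemem < LOTSFREE) {
                daemon_wakeups++;      /* too late: the daemon runs */
            } else if (freemem < LOTSFREE + SLACK) {
                freemem++;             /* free the page behind us instead */
                freed_behind++;
            }
        }
        printf("freed behind: %d, daemon wakeups: %d\n",
            freed_behind, daemon_wakeups);
        return 0;
    }

With the slack in place the daemon never wakes ("freed behind: 150,
daemon wakeups: 0"); set SLACK to 0 and every page consumed past the
threshold becomes a daemon wakeup instead.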