From: jnc@mercury.lcs.mit.edu (Noel Chiappa)
To: coff@tuhs.org
Date: Thu, 14 Dec 2023 18:29:35 -0500 (EST)
Subject: [COFF] Re: Terminology query - 'system process'?
List-Id: Computer Old Farts Forum

    > From: Bakul Shah

    > Now I'd probably call them kernel threads as they don't have a separate
    > address space.

Makes sense. One query about stacks, and blocking, there. Do kernel threads, in general, have per-thread stacks, so that they can block (and later resume exactly where they were when they blocked)?

That was the thing that, I think, made kernel processes really attractive as a kernel structuring tool; you get code like this (from V6):

	swap(rp->p_addr, a, rp->p_size, B_READ);	/* read the process image in from swap */
	mfree(swapmap, (rp->p_size+7)/8, rp->p_addr);	/* then release its swap space */

The call to swap() blocks until the I/O operation is complete, whereupon that call returns, and away one goes. Very clean and simple code. (A small user-space sketch of this structuring idea appears at the end of this message.)

Use of a kernel process probably makes the BSD pageout daemon code fairly straightforward, too (well, as straightforward as anything done by Berzerkly was :-).

Interestingly, other early systems don't seem to have thought of this structuring technique. I assumed that Multics used a similar technique to write 'dirty' pages out, to maintain a free list. However, when I looked in the Multics Storage System Program Logic Manual:

    http://www.bitsavers.org/pdf/honeywell/large_systems/multics/AN61A_storageSysPLM_Sep78.pdf

Multics just writes dirty pages as part of the page fault code:

    "This starting of writes is performed by the subroutine claim_mod_core in
    page_fault. This subroutine is invoked at the end of every page fault."
    (pg. 8-36; pg. 166 of the PDF)

(Which also increases the real-time delay to complete dealing with a page fault.)

It makes sense to have a kernel process do this; having the page fault code do it just makes that code more complicated. (The code in V6 to swap processes in and out is beautifully simple.) But it's apparently only obvious in retrospect (like many brilliant ideas :-).

	Noel
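
To make the per-thread-stack point above concrete, here is a minimal user-space sketch using POSIX threads. It is not kernel code, V6 or otherwise; the names fake_swap(), swapper(), and interrupt() are invented for illustration. A condition variable stands in for the kernel's sleep/wakeup: the swapper thread blocks inside fake_swap() and, because it has its own stack, resumes exactly where it blocked once the "interrupt" signals completion.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  io_done = PTHREAD_COND_INITIALIZER;
    static int done;

    /* Stands in for V6 swap(): start the "I/O", then block until the
     * "interrupt handler" marks it complete.  The caller's whole call
     * stack is preserved across the block. */
    static void fake_swap(void)
    {
        pthread_mutex_lock(&lock);
        while (!done)
            pthread_cond_wait(&io_done, &lock);  /* blocks; stack kept intact */
        pthread_mutex_unlock(&lock);
    }

    /* Stands in for the disk interrupt: after a delay, mark the
     * transfer complete and wake the sleeping thread. */
    static void *interrupt(void *arg)
    {
        (void)arg;
        sleep(1);                       /* pretend the transfer takes time */
        pthread_mutex_lock(&lock);
        done = 1;
        pthread_cond_signal(&io_done);
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    /* Stands in for the swapper process: call fake_swap(), block in
     * the middle of it, and pick up exactly where it left off. */
    static void *swapper(void *arg)
    {
        (void)arg;
        printf("swapper: starting swap-in\n");
        fake_swap();                                    /* blocks here ...   */
        printf("swapper: done, freeing swap space\n");  /* ... resumes here  */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, swapper, NULL);
        pthread_create(&t2, NULL, interrupt, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }

The same property is what lets the V6 sched() excerpt above simply call swap() and carry straight on to mfree() afterwards: no state machine, no continuation, just a blocked stack waiting to be resumed.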