From: Andrew Warkentin
Date: Sat, 18 May 2024 20:53:39 -0600
To: The Unix Heritage Society mailing list
Subject: [TUHS] Re: If forking is bad, how about buffering?

On Sat, May 18, 2024 at 8:03 PM Larry McVoy wrote:
>
> On Sat, May 18, 2024 at 06:40:42PM -0700, Bakul Shah wrote:
> > On May 18, 2024, at 6:21 PM, Larry McVoy wrote:
> > >
> > > On Sat, May 18, 2024 at 06:04:23PM -0700, Bakul Shah via TUHS wrote:
> > >> [1] This brings up a separate point: in a microkernel even a simple
> > >> thing like "foo | bar" would require a third process - a "pipe
> > >> service", to buffer up the output of foo! You may have reduced
> > >> the overhead of individual syscalls but you will have more
> > >> cross-domain calls!
> > >
> > > Do any micro kernels do address space to address space bcopy()?
> >
> > mmapping the same page in two processes won't be hard but now
> > you have complicated cat (or some iolib)!
>
> I recall asking Linus if that could be done to save TLB entries, as in
> multiple processes map a portion of their address space (at the same
> virtual location) and then they all use the same TLB entries for that
> part of their address space. He said it couldn't be done because the
> process ID concept was hard wired into the TLB. I don't know if TLB
> tech has evolved such that a single process could have multiple "process"
> IDs associated with it in the TLB.
>
> I wanted it because if you could share part of your address space with
> another process, using the same TLB entries, then the motivation for
> threads could go away (I've never been a threads fan but I acknowledge
> why you might need them). I was channeling Rob's "If you think you need
> threads, your processes are too fat".
>
> The idea of using processes instead of threads falls down when you
> consider TLB usage. And TLB usage, when you care about performance, is
> an issue. I could craft you some realistic benchmarks, mirroring real
> world work loads, that would kill the idea of replacing threads with
> processes unless they shared TLB entries. Think of an N-way threaded
> application, lots of address space used; that application uses all of
> the TLB. Now do that with N processes and your TLB is N times less
> effective.
>
> This was a conversation decades ago so maybe TLB tech now has solved
> this. I doubt it; if this were a solved problem I think every OS would
> say screw threads, just use processes and mmap(). The nice part of that
> model is you can choose what parts of your address space you want to
> share. That cuts out a HUGE swath of potential problems where another
> thread can go poke in a part of your address space that you don't want
> poked.
>

I've never been a fan of the rfork()/clone() model. With the OS I'm
working on, rather than using processes that share state as threads, a
process will more or less just be a collection of threads that share a
command line and get replaced on exec(). All of the state usually
associated with a process (e.g. file descriptor space, filesystem
namespace, virtual address space, memory allocations) will instead be
stored in separate container objects that can be shared between threads.
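Roughly, the binding might look something like this; to be clear, this
is illustrative pseudo-C only, and none of these types or calls exist
yet (the names are made up for the example):

    /* Hypothetical sketch only; these types and functions are
     * invented for illustration and are not a real interface. */
    fd_space_t     *fds = fd_space_create();
    vm_space_t     *vm  = vm_space_create();
    fs_namespace_t *ns  = fs_namespace_current();

    /* A thread carries no implicit process state; it sees whatever
     * containers are bound to it, and two threads share exactly the
     * containers they have in common. */
    thread_t *t = thread_create(worker_entry, NULL);
    thread_bind(t, THREAD_FD_SPACE, fds);
    thread_bind(t, THREAD_VM_SPACE, vm);
    thread_bind(t, THREAD_FS_NAMESPACE, ns);
    thread_start(t);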
It will be possible to share any of these containers between processes,
or to use different combinations between threads within a process. This
will allow finer control over what gets shared between threads and
processes than rfork()/clone(), because the state containers will appear
in the filesystem and be explicitly bound to threads, rather than being
anonymous and only transferred on rfork()/clone(). Emulating
rfork()/clone() on top of this should be easy enough, though.
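For comparison, here is the kind of selective sharing that clone()
already expresses with flags fixed at creation time, and roughly what
the emulation layer would have to reproduce. This is a minimal,
Linux-specific sketch (untested):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>

    #define STACK_SIZE (1024 * 1024)

    static int counter;            /* visible to the child via CLONE_VM */

    static int child_fn(void *arg)
    {
        (void)arg;
        counter = 42;              /* this write lands in the parent's memory */
        return 0;
    }

    int main(void)
    {
        char *stack = malloc(STACK_SIZE);
        if (stack == NULL)
            return 1;

        /* Share the address space and the fd table, nothing else;
         * drop CLONE_VM and the child gets its own copy, as with fork(). */
        int pid = clone(child_fn, stack + STACK_SIZE,
                        CLONE_VM | CLONE_FILES | SIGCHLD, NULL);
        if (pid == -1)
            return 1;

        waitpid(pid, NULL, 0);
        printf("counter = %d\n", counter);   /* prints 42 */
        free(stack);
        return 0;
    }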