From: sirjofri
To: 9front@9front.org
Date: Fri, 4 Nov 2022 09:29:43 +0100 (GMT+01:00)
Subject: Re: [9front] Two file servers sharing an auth server

04.11.2022 08:11:26 william@thinktankworkspaces.com:

> Wow. I'm really interested in learning more about this. I kind of figured
> performance would be better local rather than served remotely. I might want
> to dig into this further and I'm curious about concurrency. Can multiple
> rc-httpd services be started to handle concurrency?
>
> Examples greatly welcomed
>
> What about non-script apps. I was really interested in some of the web
> features golang has and if it will work well on another node

Well, I'd say it depends on the application and what it does. Let's forget about user data for a moment; I'm sure you can figure that part out yourself.

Let's imagine a single-binary application. That binary starts once and runs forever, and it uses sed (as a separate application) for some task.

The binary could spawn one sed process and feed it data. In this case it doesn't make much sense to copy the binaries to a ramfs and start them from there: both binaries are loaded from the fileserver exactly once, the processes start, and everything stays in memory. It doesn't matter whether your application streams data to sed once per minute or 100 times per second; the process is already loaded and no additional disk reads happen.

Or, the application could spawn a new sed process every time it uses it. This means, however, that the sed binary has to be loaded each time, which can be a huge load for the disk. In this scenario it makes sense to copy the sed binary to some ramfs (or to local disk if you need the memory for something else, but sed is small); a small sketch of this is in the P.S. below.

It could be even worse if sed were dynamically linked, since the loader would also have to load all the libraries sed needs.

rc-httpd is one of the worst examples, though. It's totally fine to use at moderate scale, but rc-httpd forks many processes per connection.

Regarding your question about concurrency and rc-httpd: on Plan 9 you usually start one process per connection, so each connection from a client gets its own rc-httpd process (see /rc/bin/service/tcp80, or !tcp80 if it's disabled). To keep shared state across connections you need something persistent, like a filesystem or "real" files (whatever that means).
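To illustrate the per-connection model: rc-httpd(8) suggests a service file along these lines. listen(8) runs it once for every incoming connection, with the network connection on file descriptors 0 and 1, so "multiple rc-httpd services" is what happens anyway (the log path is just the conventional one):

	#!/bin/rc
	# /rc/bin/service/tcp80: run once per connection by listen(8);
	# stdin/stdout are the network connection, stderr goes to the log
	exec /rc/bin/rc-httpd/rc-httpd >>[2]/sys/log/www

Every client connection gets a fresh rc-httpd; the expensive part is not that, it's all the forking each instance then does on top.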
I think it would be possible to rewrite rc-httpd to spawn its processes at the beginning and stream data to them, but I guess that would make rc-httpd more complicated, and if you want to do that you might as well use httpd or tcp80 or anything else.

If I wanted to design a scalable, concurrent application with Plan 9 in mind, I'd probably create a filesystem that handles all the synchronization, provides data to the compute nodes and receives results from them. That filesystem can be imported on all the compute machines, and then we can start the compute processes there. The binary could theoretically even be part of the filesystem itself, which would allow us to distribute the task basically automatically.

Something similar to this:

- /mnt/concfs/
  - ctl (every process reads this and announces its presence and what it does)
  - data
  - program

Starting a compute process might be just importing the filesystem from the server, then running the "program" file (which handles everything else, like reading from ctl and doing the actual processing). You can see that it easily fits in a very small shell script; see the P.S. below.

data would be something like the endpoint. For rendering an image it might be some structure for pixels: the processes just read the scene data they need and write back the pixel results. The data file could also be expanded into a full filesystem hierarchy, which sometimes makes sense.

ctl would be the synchronization point. The server tells the client which pixel to compute, and the client announces when it's done (after writing the pixel value to the data file). But that protocol would be hidden inside the "program" file, so the client machine knows nothing about what it computes or how the protocol works. It only has to know which file to execute.

Note that it's still possible to write standard multithreaded programs for stuff like that, using libthread, fork or anything like that. It all depends on the use case and the amount of scale.

It would actually be possible to make "program" itself multithreaded, so each compute node would run not just one process but many.

Note that I've never built a system like that; it's just an idea that would need refinement.

sirjofri
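P.S.: some untested sketches of the above; all names are made up.

First, caching sed in memory so repeated spawns don't hit the fileserver. ramfs(4) serves a ram filesystem on /tmp by default, and the bind only affects the current namespace:

	ramfs			# serve a ram filesystem on /tmp
	cp /bin/sed /tmp/sed
	bind /tmp/sed /bin/sed	# from now on sed is loaded from memory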
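And "program" itself could be a shell script speaking whatever ctl protocol the filesystem defines; the words "announce" and "done" here are pure invention:

	#!/bin/rc
	# keep ctl open read/write on fd 0 for the whole session
	<>/mnt/concfs/ctl {
		echo announce $sysname >[1=0]
		while(){
			job=`{read}	# next work unit from ctl
			if(~ $#job 0)	# server closed ctl: we're done
				exit
			# read the scene data for $job from data, compute
			# the pixel, write the result back to data (omitted)
			echo done $job >[1=0]
		}
	}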