9front - general discussion about 9front
* [9front] Enabling a service
@ 2024-05-06 11:32 Rocky Hotas
  2024-05-06 11:58 ` Alex Musolino
  2024-05-06 12:43 ` ori
  0 siblings, 2 replies; 48+ messages in thread
From: Rocky Hotas @ 2024-05-06 11:32 UTC (permalink / raw)
  To: 9front

Hello!

I am running a default 9front installation and would like to enable
a test service listening on a port: the echo service on port 7.

This guide

 <https://doc.cat-v.org/plan_9/9.intro.pdf>

deals in Chapter 6.4 with services, but my system apparently differs
from the examples there.

I have in /rc/bin/service some files like the following ones:

!tcp110
!tcp143
!tcp7

with a leading `!' in the filename. These could correspond to services:
I tried to rename `!tcp7' to `tcp7', but this doesn't make the system
listen on port 7 for the `echo' service (telnet can't contact the
host on port 7).

How to enable such a service? I guess that my system is not a CPU server,
probably it's just a file server. Does this matter?

Bye,

Rocky

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [9front] Enabling a service
  2024-05-06 11:32 [9front] Enabling a service Rocky Hotas
@ 2024-05-06 11:58 ` Alex Musolino
  2024-05-06 12:43 ` ori
  1 sibling, 0 replies; 48+ messages in thread
From: Alex Musolino @ 2024-05-06 11:58 UTC (permalink / raw)
  To: 9front

The /rc/bin/service directory now largely serves as a set of examples.
It's only used if you have neither a /cfg/$sysname/service directory
nor a /cfg/default/service directory.  Have a look in /bin/cpurc and
you'll see the following:

if(test -d /cfg/$sysname/service)
	serviced=/cfg/$sysname/service
if not if(test -d /cfg/default/service)
	serviced=/cfg/default/service
if not
	serviced=/rc/bin/service
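
If you want your own enabled services to survive updates to the
examples, one option (just a sketch; paths follow the cpurc excerpt
above, and tcp7 is the echo example from this thread) is a
per-machine override directory:

```rc
# create a per-machine service directory; cpurc will prefer it from the next boot
mkdir -p /cfg/$sysname/service
# copy the echo example across, dropping the disabling `!' prefix
cp '/rc/bin/service/!tcp7' /cfg/$sysname/service/tcp7
```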

We probably ought to update section 7.5.1 of the FQA to mention this.

--
Cheers,
Alex Musolino



* Re: [9front] Enabling a service
  2024-05-06 11:32 [9front] Enabling a service Rocky Hotas
  2024-05-06 11:58 ` Alex Musolino
@ 2024-05-06 12:43 ` ori
  2024-05-06 15:16   ` Scott Flowers
  2024-05-06 22:18   ` Rocky Hotas
  1 sibling, 2 replies; 48+ messages in thread
From: ori @ 2024-05-06 12:43 UTC (permalink / raw)
  To: 9front

Quoth Rocky Hotas <rockyhotas@firemail.cc>:
> 
> How to enable such a service? I guess that my system is not a CPU server,
> probably it's just a file server. Does this matter?

yes, listen(1) is only started on cpu servers by default.
you can run it manually if you want.



* Re: [9front] Enabling a service
  2024-05-06 12:43 ` ori
@ 2024-05-06 15:16   ` Scott Flowers
  2024-05-06 15:37     ` sirjofri
  2024-05-06 16:32     ` Stanley Lieber
  2024-05-06 22:18   ` Rocky Hotas
  1 sibling, 2 replies; 48+ messages in thread
From: Scott Flowers @ 2024-05-06 15:16 UTC (permalink / raw)
  To: 9front

--- fqa7.ms
+++ fqa7.ms
@@ -1356,11 +1356,18 @@
 causes the shell script
 .CW /rc/bin/cpurc
 to be run at boot time, which in turn launches a listener that scans the
+.CW /cfg/$sysname/service
+directory for scripts corresponding to various network ports. If that directory doesn't exist, 
+.CW /cfg/default/service
+and  
 .CW /rc/bin/service
-directory for scripts corresponding to various network ports. Read:
+are checked in order. The files in 
+.CW /rc/bin/service 
+are primarily intended to serve as examples. Read:
 .ihtml a <a href="http://man.9front.org/8/listen">
 .CW listen(8) .
 .ihtml a
+
 The scripts
 .CW tcp17019
 and


Scott Flowers
flowerss@cranky.ca

On Mon, 6 May 2024, at 06:43, ori@eigenstate.org wrote:
> Quoth Rocky Hotas <rockyhotas@firemail.cc>:
> > 
> > How to enable such a service? I guess that my system is not a CPU server,
> > probably it's just a file server. Does this matter?
> 
> yes, listen(1) is only started on cpu servers by default.
> you can run it manually if you want.
> 
> 


* Re: [9front] Enabling a service
  2024-05-06 15:16   ` Scott Flowers
@ 2024-05-06 15:37     ` sirjofri
  2024-05-06 16:32     ` Stanley Lieber
  1 sibling, 0 replies; 48+ messages in thread
From: sirjofri @ 2024-05-06 15:37 UTC (permalink / raw)
  To: 9front

06.05.2024 17:18:30 Scott Flowers <flowerss@cranky.ca>:

> --- fqa7.ms
> +++ fqa7.ms
> @@ -1356,11 +1356,18 @@
> Read:
> .ihtml a <a href="http://man.9front.org/8/listen">
> .CW listen(8) .
> .ihtml a
> +
> The scripts
> .CW tcp17019
> and

Not sure what you guys think, but I often try to push people to read cpurc to understand that properly. In my opinion it's easy to understand with just basic rc knowledge, and people will learn to look at the sources. Other than that, the info should be added to a document that explains the theory of those programs more in-depth, which is a documentation stage we were discussing earlier (see also: upas theory @ wiki.9front.org).

TL;DR: not sure if it's ok to add a "Read also" link to the source, if not I'd prefer having a link to a more in-depth wiki page.

sirjofri



* Re: [9front] Enabling a service
  2024-05-06 15:16   ` Scott Flowers
  2024-05-06 15:37     ` sirjofri
@ 2024-05-06 16:32     ` Stanley Lieber
  1 sibling, 0 replies; 48+ messages in thread
From: Stanley Lieber @ 2024-05-06 16:32 UTC (permalink / raw)
  To: 9front

On May 6, 2024 11:16:01 AM EDT, Scott Flowers <flowerss@cranky.ca> wrote:
>--- fqa7.ms
>+++ fqa7.ms
>@@ -1356,11 +1356,18 @@
> causes the shell script
> .CW /rc/bin/cpurc
> to be run at boot time, which in turn launches a listener that scans the
>+.CW /cfg/$sysname/service
>+directory for scripts corresponding to various network ports. If that directory doesn't exist, 
>+.CW /cfg/default/service
>+and  
> .CW /rc/bin/service
>-directory for scripts corresponding to various network ports. Read:
>+are checked in order. The files in 
>+.CW /rc/bin/service 
>+are primarily intended to serve as examples. Read:
> .ihtml a <a href="http://man.9front.org/8/listen">
> .CW listen(8) .
> .ihtml a
>+
> The scripts
> .CW tcp17019
> and
>
>
>Scott Flowers
>flowerss@cranky.ca
>
>On Mon, 6 May 2024, at 06:43, ori@eigenstate.org wrote:
>> Quoth Rocky Hotas <rockyhotas@firemail.cc>:
>> > 
>> > How to enable such a service? I guess that my system is not a CPU server,
>> > probably it's just a file server. Does this matter?
>> 
>> yes, listen(1) is only started on cpu servers by default.
>> you can run it manually if you want.
>> 
>> 
>

thank you, i'll add this.

sl


* Re: Re: [9front] Enabling a service
  2024-05-06 12:43 ` ori
  2024-05-06 15:16   ` Scott Flowers
@ 2024-05-06 22:18   ` Rocky Hotas
  2024-05-06 22:59     ` ori
  2024-05-06 23:00     ` ori
  1 sibling, 2 replies; 48+ messages in thread
From: Rocky Hotas @ 2024-05-06 22:18 UTC (permalink / raw)
  To: 9front; +Cc: ori

On May 06  8:43, ori@eigenstate.org wrote:
> 
> yes, listen(1) is only started on cpu servers by default.
> you can run it manually if you want.

Ok, instead of creating a CPU server I would prefer (if possible) to
launch listen manually on my current machine.

But it is listen(8), not listen(1), isn't it?

Which is the correct syntax to create a tcp listener on port 7 which
is an echo service (basically, what performs /rc/bin/service/!tcp7)?

I couldn't find any example.

Bye,

Rocky


* Re: Re: [9front] Enabling a service
  2024-05-06 22:18   ` Rocky Hotas
@ 2024-05-06 22:59     ` ori
  2024-05-06 23:00     ` ori
  1 sibling, 0 replies; 48+ messages in thread
From: ori @ 2024-05-06 22:59 UTC (permalink / raw)
  To: 9front, rockyhotas; +Cc: ori

Quoth Rocky Hotas <rockyhotas@firemail.cc>:
> On May 06  8:43, ori@eigenstate.org wrote:
> > 
> > yes, listen(1) is only started on cpu servers by default.
> > you can run it manually if you want.
> 
> Ok, instead of creating a CPU server I would prefer (if possible) to
> launch listen manually on my current machine.
> 
> But it is listen(8), not listen(1), isn't it?
> 
> Which is the correct syntax to create a tcp listener on port 7 which
> is an echo service (basically, what performs /rc/bin/service/!tcp7)?
> 
> I couldn't find any example.
> 
> Bye,
> 
> Rocky


% g listen /bin/cpurc
/bin/cpurc:26: # usb listener
/bin/cpurc:108: 	aux/listen -q -t /rc/bin/service.auth -d $serviced tcp
/bin/cpurc:112: 	aux/listen -q -d $serviced tcp
/bin/cpurc:122: # other /proc files, such as note, so let listen be killed

the invocation on line 112 is what you're looking for.


* Re: Re: [9front] Enabling a service
  2024-05-06 22:18   ` Rocky Hotas
  2024-05-06 22:59     ` ori
@ 2024-05-06 23:00     ` ori
  2024-05-07  8:22       ` Rocky Hotas
  1 sibling, 1 reply; 48+ messages in thread
From: ori @ 2024-05-06 23:00 UTC (permalink / raw)
  To: 9front; +Cc: ori

Quoth Rocky Hotas <rockyhotas@firemail.cc>:
> On May 06  8:43, ori@eigenstate.org wrote:
> > 
> > yes, listen(1) is only started on cpu servers by default.
> > you can run it manually if you want.
> 
> Ok, instead of creating a CPU server I would prefer (if possible) to
> launch listen manually on my current machine.
> 
> But it is listen(8), not listen(1), isn't it?
> 
> Which is the correct syntax to create a tcp listener on port 7 which
> is an echo service (basically, what performs /rc/bin/service/!tcp7)?
> 
> I couldn't find any example.
> 
> Bye,
> 
> Rocky

% g listen /bin/cpurc
/bin/cpurc:26: # usb listener
/bin/cpurc:108: 	aux/listen -q -t /rc/bin/service.auth -d $serviced tcp
/bin/cpurc:112: 	aux/listen -q -d $serviced tcp
/bin/cpurc:122: # other /proc files, such as note, so let listen be killed

the invocation on line 112 is what you're looking for.




* Re: Re: Re: [9front] Enabling a service
  2024-05-06 23:00     ` ori
@ 2024-05-07  8:22       ` Rocky Hotas
  2024-05-07  8:29         ` Frank D. Engel, Jr.
  2024-05-07  9:14         ` sirjofri
  0 siblings, 2 replies; 48+ messages in thread
From: Rocky Hotas @ 2024-05-07  8:22 UTC (permalink / raw)
  To: 9front; +Cc: ori

On May 06 19:00, ori@eigenstate.org wrote:
> % g listen /bin/cpurc
> /bin/cpurc:26: # usb listener
> /bin/cpurc:108: 	aux/listen -q -t /rc/bin/service.auth -d $serviced tcp
> /bin/cpurc:112: 	aux/listen -q -d $serviced tcp
> /bin/cpurc:122: # other /proc files, such as note, so let listen be killed
> 
> the invocation on line 112 is what you're looking for.

Ok! Here's a recap, in case it's useful for anyone else with the same
issue.

As in this example

aux/listen -q -t /rc/bin/service.auth -d /rc/bin/service tcp

from

 <https://9p.io/wiki/plan9/Configuring_a_Standalone_CPU_Server/index.html>

`$serviced' is (in my default 9front installation) /rc/bin/service.

After some attempts, this worked for me:

term% cd /rc/bin/service
term% mv !tcp7 tcp7
term% aux/listen -q -d /rc/bin/service tcp

This way, all the services represented by files whose names are in
the form `tcp<port_number>' in /rc/bin/service are enabled.
Files whose names have a trailing `!' are excluded, instead.

The contents of file tcp7 determines how the service will behave when
someone connects to it: in this case,

term% cat tcp7
#!/bin/rc
/bin/cat

And the client will get an echo of anything that is typed, due to `cat'.
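
A quick way to test it from the same machine (I believe the dial
string looks like this, per dial(2)):

```rc
term% con tcp!127.0.0.1!7
hello
hello
```

Each line you type should come back, since the remote end is just `cat'.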

According to listen(8), the service directory (`/rc/bin/service' in my
example) is also periodically scanned for changes to the files. It
seems to pick up changes within seconds: so, after running `aux/listen -q
-d /rc/bin/service tcp', if the contents of a `tcp<port_number>' file
or the filename itself are modified, the system will behave accordingly
without any further action.

Thank you so much!

Rocky


* Re: [9front] Enabling a service
  2024-05-07  8:22       ` Rocky Hotas
@ 2024-05-07  8:29         ` Frank D. Engel, Jr.
  2024-05-07  9:03           ` Rocky Hotas
  2024-05-07  9:14         ` sirjofri
  1 sibling, 1 reply; 48+ messages in thread
From: Frank D. Engel, Jr. @ 2024-05-07  8:29 UTC (permalink / raw)
  To: 9front

s/trailing/leading/


On 5/7/24 04:22, Rocky Hotas wrote:
> On May 06 19:00, ori@eigenstate.org wrote:
>> % g listen /bin/cpurc
>> /bin/cpurc:26: # usb listener
>> /bin/cpurc:108: 	aux/listen -q -t /rc/bin/service.auth -d $serviced tcp
>> /bin/cpurc:112: 	aux/listen -q -d $serviced tcp
>> /bin/cpurc:122: # other /proc files, such as note, so let listen be killed
>>
>> the invocation on line 112 is what you're looking for.
> Ok! I make a recap if it can be useful for any other having the same
> issue.
>
> As in this example
>
> aux/listen -q -t /rc/bin/service.auth -d /rc/bin/service tcp
>
> from
>
>   <https://9p.io/wiki/plan9/Configuring_a_Standalone_CPU_Server/index.html>
>
> `$serviced' is (in my default 9front installation) /rc/bin/service.
>
> After some attempts, this worked for me:
>
> term% cd /rc/bin/service
> term% mv !tcp7 tcp7
> term% aux/listen -q -d /rc/bin/service tcp
>
> This way, all the services represented by files whose names are in
> the form `tcp<port_number>' in /rc/bin/service are enabled.
> Files whose names have a trailing `!' are excluded, instead.
>
> The contents of file tcp7 determines how the service will behave when
> someone connects to it: in this case,
>
> term% cat tcp7
> #!/bin/rc
> /bin/cat
>
> And the client will get an echo of anything is typed, due to `cat'.
>
> According to listen(8), the service directory (`/rc/bin/service' in my
> example) is also periodically scanned for changes in the files. It gets
> updated within seconds probably: so, after running `aux/listen -q
> -d /rc/bin/service tcp', if the contents of the file `tcp<port_number>'
> or the filename itself are modified, the system will behave accordingly
> without need for any other action.
>
> Thank you so much!
>
> Rocky
>



* Re: Re: [9front] Enabling a service
  2024-05-07  8:29         ` Frank D. Engel, Jr.
@ 2024-05-07  9:03           ` Rocky Hotas
  0 siblings, 0 replies; 48+ messages in thread
From: Rocky Hotas @ 2024-05-07  9:03 UTC (permalink / raw)
  To: 9front; +Cc: fde101

On May 07  4:29, Frank D. Engel, Jr. wrote:
> s/trailing/leading/

Oh, yes, definitely!
Thanks for correcting this!


* Re: [9front] Enabling a service
  2024-05-07  8:22       ` Rocky Hotas
  2024-05-07  8:29         ` Frank D. Engel, Jr.
@ 2024-05-07  9:14         ` sirjofri
  2024-05-07 21:11           ` Shawn Rutledge
  1 sibling, 1 reply; 48+ messages in thread
From: sirjofri @ 2024-05-07  9:14 UTC (permalink / raw)
  To: 9front

Hi,

Note that for single services that should run temporarily it's advised to use listen1 instead of listen.
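
For the echo experiment in this thread, that could look something like
(flags as I recall them from listen(8); treat this as a sketch):

```rc
# announce on port 7 and run cat for each caller;
# -t runs the service as the invoking user, -v logs connections
aux/listen1 -tv 'tcp!*!7' /bin/cat &
```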

In the other case, where you want your machine to always serve services, that machine is most probably a cpu by definition. Nowadays there's no reason to not run a cpu if you need services. In the beginning there were different kernels if I remember correctly, which meant a big difference, but nowadays it's only a different configuration.

Of course it all depends on your specific use case.

Listen is btw one of the reasons I like Plan 9. It's much easier to configure services than on any linux system.

sirjofri


* Re: [9front] Enabling a service
  2024-05-07  9:14         ` sirjofri
@ 2024-05-07 21:11           ` Shawn Rutledge
  2024-05-07 21:35             ` Kurt H Maier
                               ` (2 more replies)
  0 siblings, 3 replies; 48+ messages in thread
From: Shawn Rutledge @ 2024-05-07 21:11 UTC (permalink / raw)
  To: 9front

On May 7, 2024, at 2:14 AM, sirjofri <sirjofri+ml-9front@sirjofri.de> wrote:
> In the other case, where you want your machine to always serve services, that machine is most probably a cpu by definition. Nowadays there's no reason to not run a cpu if you need services. In the beginning there were different kernels if I remember correctly, which meant a big difference, but nowadays it's only a different configuration.

I just don’t see much point in keeping it as split as it still is.  For some reason a 9front install is a term by default, and not being set up to run services by default seems like a needless limitation.  But if you switch to cpu, it’s a little extra work to get it to be a useful machine to sit in front of (even though “text mode” is already graphical), which is also a needless limitation.  It took me a few hours to learn how to customize the startup process at first, and I didn’t memorize it, so next time I’d have to look at the machines on which I already did it to remember what exactly I did.

When a newbie starts with one machine and then wants to try starting up network services shortly afterwards, it should be more directly possible, IMO; and the directory-of-services approach seems like a good one to me, which should not be ruled out by having a term by default.

Unless you are actually in a lab environment with shared machines, nobody expects a diskless terminal anymore; and it’s not a great introduction to what Plan 9 can do, to have that be the default.

If the majority thinks there needs to continue to be a distinction though, then maybe the installer should ask about the machine’s intended role at installation time, and try to achieve a suitable out-of-the-box experience for all the choices.



* Re: [9front] Enabling a service
  2024-05-07 21:11           ` Shawn Rutledge
@ 2024-05-07 21:35             ` Kurt H Maier
  2024-05-07 21:45               ` sirjofri
  2024-05-07 21:54             ` sl
  2024-05-08  2:11             ` Thaddeus Woskowiak
  2 siblings, 1 reply; 48+ messages in thread
From: Kurt H Maier @ 2024-05-07 21:35 UTC (permalink / raw)
  To: 9front

On Tue, May 07, 2024 at 02:11:50PM -0700, Shawn Rutledge wrote:
> When a newbie starts with one machine and then wants to try starting 
> up network services shortly afterwards, it should be more directly 
> possible, IMO

Strong disagree.  The last thing we need is seventy thousand
poorly-configured 9front machines running half-assed network services.

> Unless you are actually in a lab environment with shared machines,
> nobody expects a diskless terminal anymore; 

Also not sure this is a valid assumption.  About half the plan 9 users I
know use diskless terminals.

Basically there's more to setting up a plan 9 network than switching a
machine to accept remote cpu connections.  If someone isn't willing to
learn how all the pieces work, I do not think they would benefit from
additional footguns.  If you don't want a distributed operating system,
and you don't want to learn how plan 9 works, there are a lot of other
choices for operating systems out there, and using one of those to set
up a dimly-understood imminent botnet drone would have far less annoying
consequences for this mailing list.

People can do whatever they want with their computers, but I don't think
there's a specific onus on 9front to make bad ideas easier to execute.

khm


* Re: [9front] Enabling a service
  2024-05-07 21:35             ` Kurt H Maier
@ 2024-05-07 21:45               ` sirjofri
  0 siblings, 0 replies; 48+ messages in thread
From: sirjofri @ 2024-05-07 21:45 UTC (permalink / raw)
  To: 9front

07.05.2024 23:41:17 Kurt H Maier <khm@sciops.net>:
> [...]

I was about to compose a response, but khm found the perfect words. I wouldn't be able to say it like that, at least not today anymore.

Thank you.

sirjofri


* Re: [9front] Enabling a service
  2024-05-07 21:11           ` Shawn Rutledge
  2024-05-07 21:35             ` Kurt H Maier
@ 2024-05-07 21:54             ` sl
  2024-05-07 21:58               ` sl
  2024-05-08  2:11             ` Thaddeus Woskowiak
  2 siblings, 1 reply; 48+ messages in thread
From: sl @ 2024-05-07 21:54 UTC (permalink / raw)
  To: 9front

plan 9's design is for the disk file server to live in a locked room,
with everyone else reyling upon it as the single source of truth.  the
idea was that the installation should marshal the resources of
whatever organization hosted it to move as much of the burden of
administration away from the end user as possible, placing it on the
shoulders of the "technical staff." rob's stated dream was for
computing to be a resource you extract from a wall socket, like
electricity.  the system is an octopus radiating out from the file
server.  it can afford to lose limbs, but not its brain.  this
confers outsize benefits on hapless users who just want their computer
to work.  you will read stories of people at the labs just carrying
around a floppy disk (or equivalent) with which to boot random
computers into plan 9.  start up the machine, and all your stuff is
there, ready to go.  meanwhile, the file server originally ran a
completely separate operating system, and mere users were not allowed
anywhere near its console.

not much has ever been done to address these original assumptions,
that the disk file server is locked up in a room, and that you're not
allowed to diddle with it, so stuff gets brittle when you violate the
system's fragile self-image.  for example, stand alone systems, with
their local disks, can muster little self-defense against crashing
programs, malicious intruders getting access to the local disk file
server via bugs in their crashing programs, etc.  conversely, being
hostowner on a diskless terminal is worth exactly whatever files that
user has access to on the remote file server, and whatever
capabilities are active within the scope of that compromise.

i realize it's factually true that "nobody" runs diskless terminals,
cpu servers, etc., but it makes plan 9 power users wince to see this
declared, over and over again, when a lot of the power of the system
comes from this very useful modularity that everyone just wants to
throw away.

the 9front installer used to set $service, but everyone wants
something different from their system, even while declaring their
choices are obvious and correct.  after a while we realized it was
useless trying to read the user's mind.  there are just too many ways
to configure and use the system.  in the end, setting these things up
is just not that onerous.  if you need to be service=cpu, you should
be able to figure out how to set it up.  if you can't, you may not be
ready to offer services to a network.

this isn't intended to be snarky.  the whole system is intentionally
simple and modular enough to make understanding how it works part of
the process of using it.

you have to start somewhere.

sl


* Re: [9front] Enabling a service
  2024-05-07 21:54             ` sl
@ 2024-05-07 21:58               ` sl
  2024-05-07 23:15                 ` Lennart Jablonka
  2024-05-07 23:16                 ` Shawn Rutledge
  0 siblings, 2 replies; 48+ messages in thread
From: sl @ 2024-05-07 21:58 UTC (permalink / raw)
  To: 9front

> i realize it's factually true that "nobody" runs diskless terminals,
> cpu servers, etc., but it makes plan 9 power users wince to see this
> declared, over and over again, when a lot of the power of the system
> comes from this very useful modularity that everyone just wants to
> throw away.

realprogrammatik:

9p vs latency is a losing battle. trying to run a diskless terminal
over the internet really, REALLY sucks. even drawterm is not great
in this context.

like i said, not much has ever been done to address those original
assumptions, one of which is that everybody lives on the same fast,
local network. it helped that when plan 9 was under heavy development,
file sizes of just about everything anybody was working with were
relatively small, and multimedia wasn't really happening on the scale
that users today expect as part of any reasonable environment.

(exceptions duly noted)

sl


* Re: [9front] Enabling a service
  2024-05-07 21:58               ` sl
@ 2024-05-07 23:15                 ` Lennart Jablonka
  2024-05-07 23:16                 ` Shawn Rutledge
  1 sibling, 0 replies; 48+ messages in thread
From: Lennart Jablonka @ 2024-05-07 23:15 UTC (permalink / raw)
  To: 9front

Quoth sl@stanleylieber.com:
>9p vs latency is a losing battle. trying to run a diskless terminal
>over the internet really, REALLY sucks. even drawterm is not great
>in this context.

How about importing some of Nemo’s stuff?  9Pix?  Op?

https://github.com/fjballest/docs/raw/master/9pix.pdf
https://github.com/fjballest/docs/raw/master/opiwp9.pdf


* Re: [9front] Enabling a service
  2024-05-07 21:58               ` sl
  2024-05-07 23:15                 ` Lennart Jablonka
@ 2024-05-07 23:16                 ` Shawn Rutledge
  2024-05-07 23:45                   ` Shawn Rutledge
                                     ` (3 more replies)
  1 sibling, 4 replies; 48+ messages in thread
From: Shawn Rutledge @ 2024-05-07 23:16 UTC (permalink / raw)
  To: 9front

And yet, the FQA recommends laptops.  Usually the assumption with a laptop is you can take it on the subway or to the coffee shop and keep working.  That implies that you want to have some relevant files with you.  (So it’s good the default install has a local filesystem.)  Then later you get back to the home/office and maybe want to use a machine with a bigger monitor and more files available, but some work in progress is on the laptop so maybe you want to rcpu to it for a while.  Eventually files get synced up again (manually or automatically).  Maybe at home there is a file server, sure it’s good to have the dumps.  That’s probably how I’d use it as soon as I get to the point of depending on Plan 9 for any particular task, not just trying things.  (It reminds me of learning how to use Linux, 30 years ago.  It was at a similar level of development back then.)

So obviously there’s a tradeoff between 9front being usable by a laptop user today vs. trying to preserve the labs experience, and having to answer the same questions over and over (how do I start a service, why do I have to reboot into a different mode to make that possible).  I won’t ask how to do that; but others will.

As for "half-assed network services", I assume that means security concerns; ok so not enough faith in how secure the services are by default (well that ought to be fixable eventually?), and not enough faith in users not to realize that they should try experiments on a local LAN before connecting the services to the Internet (which usually involves some router work anyway, assuming the machine is behind one)?  People who take excessive risks are mainly risking their own files; they should know better, but they probably aren’t going to have a lot of files on Plan 9 anyway.  What’s the worst risk besides data theft?  A mail server getting used as a spam relay or something like that?  I agree that setting up a mail server should be more effort.

> 9p vs latency is a losing battle. trying to run a diskless terminal
> over the internet really, REALLY sucks. even drawterm is not great
> in this context.

I left a 9front machine running on my LAN in Norway while I’m in Phoenix for a while, and set up a wireguard vpn on the openwrt router in Norway so that I can connect to home.  Latency in Phoenix is worse than normal (Verizon wireless internet: it's only meant to be temporary).  Despite that, the performance I see connecting with rcpu or drawterm halfway around the world is comparable to connecting to Linux over VNC: not bad for an experiment (as long as wifi on 9front is not in the path!  ;-)  I don’t expect to play video over that link.  But I also have qualms about letting the router connect 9p over the Internet to that Plan 9 machine, since I don’t know yet all the varieties of half-assedness to expect by default.  I think I have to try the Plan 9 network forwarding at some point though, see if the claim that it’s as good as a VPN really holds up.  But I have to learn enough about security before even trying, it seems.



* Re: [9front] Enabling a service
  2024-05-07 23:16                 ` Shawn Rutledge
@ 2024-05-07 23:45                   ` Shawn Rutledge
  2024-05-08  0:34                   ` Kurt H Maier
                                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 48+ messages in thread
From: Shawn Rutledge @ 2024-05-07 23:45 UTC (permalink / raw)
  To: 9front

And btw I hope I don’t offend anybody with the comment about level of development 30 years ago: that’s the fun part! Transparency and simplicity are hard-won and valuable, I agree.



* Re: [9front] Enabling a service
  2024-05-07 23:16                 ` Shawn Rutledge
  2024-05-07 23:45                   ` Shawn Rutledge
@ 2024-05-08  0:34                   ` Kurt H Maier
  2024-05-08  0:35                   ` sl
  2024-05-08 14:57                   ` Lucas Francesco
  3 siblings, 0 replies; 48+ messages in thread
From: Kurt H Maier @ 2024-05-08  0:34 UTC (permalink / raw)
  To: 9front

On Tue, May 07, 2024 at 04:16:05PM -0700, Shawn Rutledge wrote:
> And yet, the FQA recommends laptops.  

Laptops work great diskless.  We recommend them because they're readily
available and cheap, not because we're implying a fucking deployment
scenario involving mass transit.

> As for "half-assed network services", I assume that means security 
> concerns; ok so not enough faith in how secure the services are by 
> default (well that ought to be fixable eventually?), and not enough 
> faith in users not to realize that they should try experiments on a 
> local LAN before connecting the services to the Internet (which 
> usually involves some router work anyway, assuming the machine is 
> behind one)?  People who take excessive risks are mainly risking 
> their own files; they should know better, but they probably aren’t 
> going to have a lot of files on Plan 9 anyway.  What’s the worst 
> risk besides data theft?  A mail server getting used as a spam 
> relay or something like that?  I agree that setting up a mail server 
> should be more effort.

I have NO faith in users WHATSOEVER.  If users were worth having faith
in, my entire job category would not exist.  And a popped box anywhere
is a hazard everywhere.  It doesn't matter whether you set it up as a
mail service; it will become one despite your wishes shortly after
whoever breaks into it feels it worthwhile.  This "I'm only risking my
computer" shit went out the window in the 1980s.  Anyone who thinks that
hasn't been on the wrong end of a DDoS. 

And as for faith in the services:  there is no software that is so good
it is secure in the hands of an incompetent.

> But I have to learn enough about security before even trying, it seems.

Not just you.  I don't want people putting 9front systems online without
understanding them.  If that means you never get around to putting
9front systems online, it is no great loss, to you or anyone else.  The
barriers to what you want to do are extremely low, and instead of
learning to overcome them you are asking other people to change the
default installation of the entire distribution.  This is the kind of
shit that leads to "do not eat" warnings on silica packets.

Meanwhile, we start lowering the bar, and we get people in here mad that
their 9front doesn't act exactly like their ubuntu, or mad that someone
put a 9front on their hypervisor and it started bogarting cycles from
the other clients with the furious rate it was posting dick pill
advertisements on similarly low-barrier Wordpress installations someone
installed and forgot about.  

I just don't think "do it for me" is the clientele we should be pursuing
at this time.  That's Cloudron's target market.

khm

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [9front] Enabling a service
  2024-05-07 23:16                 ` Shawn Rutledge
  2024-05-07 23:45                   ` Shawn Rutledge
  2024-05-08  0:34                   ` Kurt H Maier
@ 2024-05-08  0:35                   ` sl
  2024-05-08  1:05                     ` Jacob Moody
  2024-05-08 14:57                   ` Lucas Francesco
  3 siblings, 1 reply; 48+ messages in thread
From: sl @ 2024-05-08  0:35 UTC (permalink / raw)
  To: 9front

> And yet, the FQA recommends laptops.

the fqa lists hardware that is known to work.  most users who start
out with hardware that doesn't work never get to the stage of
complaining about what the installer does and doesn't do.  i keep
track of the stuff i've used, and share that information, just in case
someone else stumbles across the same machines.

if you'd prefer to get a taste of the pre-fqa3 plan 9 experience, you
can buy a random laptop manufactured in 2024 and report back on its
status, re: booting and running 9front.  if you do that, i'll add the
information to the hardware inventory linked in fqa3.

if instead you'd prefer the pre-9front experience, you can keep this
information all to yourself and nobody will ever know what you
accomplished.  for the highest level of fidelity to the pre-9front
experience, you could opt to brag vaguely about all the cool stuff you
got working on your computer, but never, ever make details or source
code available to the general public.  bonus if you lord any of this
over the newbs who are even newer to plan 9 than you are, assuring
them that, yes, all these wonderful things you're claiming you did are
trivial to accomplish in plan 9.

laptops are useful beyond the obvious use case because they come
with built-in battery power.


> Usually the assumption with a
> laptop is you can take it on the subway or to the coffee shop and keep
> working.

you'd think so, right?  but if you read through the 9fans archives,
labs people basically blew off laptop problems as being irrelevant to
plan 9.  the system as designed never accounted for most of the things
people today want to do with laptops, and not only because the system
assumes fast access to the disk file server.

here are some other unsolved, laptop-specific problems you'll encounter:

- how do i sleep and resume?	(we don't do state)
- how do i lock my screen?	(this exists, but relies on the network, lol)


> That implies that you want to have some relevant files with
> you.  (So it’s good the default install has a local filesystem.) Then
> later you get back to the home/office and maybe want to use a machine
> with a bigger monitor and more files available, but some work in
> progress is on the laptop so maybe you want to rcpu to it for a while.
> Eventually files get synced up again (manually or automatically).
> Maybe at home there is a file server, sure it’s good to have the
> dumps.  That’s probably how I’d use it as soon as I get to the point
> of depending on Plan 9 for any particular task, not just trying
> things.  (It reminds me of learning how to use Linux, 30 years ago.
> It was at a similar level of development back then.)

it's funny because all of this started with rob's experiments with
graphical terminals.  the terminal itself ran a small, custom
operating system that would be downloaded from the server at the start
of a session.  the user would do operations on data in local memory,
periodically syncing with the remote file server over the network.
there is a remnant of this design in how the sam editor's gui operates
on files.

and of course, plan 9 itself.

for many years i ran diskless terminals at home over wired ethernet
and wifi.  it was great because my environment was always the same, my
files were always exactly where i left them, and when i was finished i
could just power off the machine without worrying about unclean
shutdowns.

some people say, we have drawterm, best of both worlds!  but drawterm
also sucks when there's latency.  moody may run doom in drawterm over
local ethernet, but that's not happening over the internet.

today, i'm almost always remote, in different places at different
times.  my network is a wifi access point provided by my mobile phone.
over many years i've tried lots of different ways of running plan 9
with this setup, from tls booting over the internet, to
drawterm<->9front running in a local qemu on openbsd, etc., etc.  what
i typically do now is boot my laptop from a local disk (so, local
binaries), but all my critical work files live on the remote disk file
server i physically operate at home.  sometimes this is painfully
slow, so i'll work on files on the local disk and then sync them up.
i never, ever, need to rcpu into my terminal.  i just do whatever
syncing is necessary before i turn it off.  if i need a modern web
browser, i connect to openbsd over ssh/vnc (which is also sometimes
painfully slow).  this is far from ideal but works better than the
other combinations i've tried so far.

pick this apart, but this is me telling you what worked and didn't
work for me, and i'm happy to answer questions in detail.


> trying to preserve the labs experience

personally, i don't care at all about preserving the labs experience.
we already dumped fossil, made tons of changes.  but before you start
proposing all sorts of changes to the system it's important to
understand exactly what you're changing.  plan 9 was not designed by
idiots, or by accident.  everything that went in is the way it is for
a reason.  you may disagree with the reason, but it's important to
understand that reason before blowing it off.  and yes, people
involved with 9front will challenge you on every little detail,
because the details are important, and that's why we're all here.

this isn't linux.


> I agree that setting up a mail server should be more effort.

turning on listeners by default is bad practice.  by now, all
operating systems err on the side of this principle.  obviously, there
is leeway here when it comes to services needed to actually use the system.


>> 9p vs latency is a losing battle.  trying to run a diskless terminal
>> over the internet really, REALLY sucks.  even drawterm is not great
>> in this context.
> 
> I left a 9front machine running on my LAN in Norway while I’m in
> Phoenix for a while, and set up a wireguard vpn on the openwrt router
> in Norway so that I can connect to home.  Latency in Phoenix is worse
> than normal (Verizon wireless internet: it's only meant to be
> temporary).  Despite that, the performance I see connecting with rcpu
> or drawterm halfway around the world is comparable to connecting to
> Linux over VNC: not bad for an experiment (as long as wifi on 9front
> is not in the path!  ;-) I don’t expect to play video over that link.
> But I also have qualms about letting the router connect 9p over the
> Internet to that Plan 9 machine, since I don’t know yet all the
> varieties of half-assedness to expect by default.  I think I have to
> try the Plan 9 network forwarding at some point though, see if the
> claim that it’s as good as a VPN really holds up.  But I have to learn
> enough about security before even trying, it seems.

for a while i was actually carrying around an ipad and connecting to
9front with vnc over ssh:

	ipad -> internet -> openbsd/ssh -> ethernet -> 9front/vncs

this probably worked better than any other remote internet setup i
ever tried.  of course, you get none of the local plan 9 benefits that
way.  and no sound.

sl

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [9front] Enabling a service
  2024-05-08  0:35                   ` sl
@ 2024-05-08  1:05                     ` Jacob Moody
  2024-05-08  1:24                       ` sl
  2024-05-08  3:41                       ` Ori Bernstein
  0 siblings, 2 replies; 48+ messages in thread
From: Jacob Moody @ 2024-05-08  1:05 UTC (permalink / raw)
  To: 9front

I want to say from the get-go that sl's mail here is quite well said.
I would just like to echo some sentiments made here.

On 5/7/24 19:35, sl@stanleylieber.com wrote:
>> And yet, the FQA recommends laptops.
> 
> the fqa lists hardware that is known to work.  most users who start
> out with hardware that doesn't work never get to the stage of
> complaining about what the installer does and doesn't do.  i keep
> track of the stuff i've used, and share that information, just in case
> someone else stumbles across the same machines.
> 
> if you'd prefer to get a taste of the pre-fqa3 plan 9 experience, you
> can buy a random laptop manufactured in 2024 and report back on its
> status, re: booting and running 9front.  if you do that, i'll add the
> information to the hardware inventory linked in fqa3.
> 
> if instead you'd prefer the pre-9front experience, you can keep this
> information all to yourself and nobody will ever know what you
> accomplished.  for the highest level of fidelity to the pre-9front
> experience, you could opt to brag vaguely about all the cool stuff you
> got working on your computer, but never, ever make details or source
> code available to the general public.  bonus if you lord any of this
> over the newbs who are even newer to plan 9 than you are, assuring
> them that, yes, all these wonderful things you're claiming you did are
> trivial to accomplish in plan 9.
> 
> laptops are useful beyond the obvious use case because they come
> with built-in battery power.
> 
> 
>> Usually the assumption with a
>> laptop is you can take it on the subway or to the coffee shop and keep
>> working.
> 
> you'd think so, right?  but if you read through the 9fans archives,
> labs people basically blew off laptop problems as being irrelevant to
> plan 9.  the system as designed never accounted for most of the things
> people today want to do with laptops, and not only because the system
> assumes fast access to the disk file server.
> 
> here are some other unsolved, laptop-specific problems you'll encounter:
> 
> - how do i sleep and resume?	(we don't do state)
> - how do i lock my screen?	(this exists, but relies on the network, lol)
> 
> 
>> That implies that you want to have some relevant files with
>> you.  (So it’s good the default install has a local filesystem.) Then
>> later you get back to the home/office and maybe want to use a machine
>> with a bigger monitor and more files available, but some work in
>> progress is on the laptop so maybe you want to rcpu to it for a while.
>> Eventually files get synced up again (manually or automatically).
>> Maybe at home there is a file server, sure it’s good to have the
>> dumps.  That’s probably how I’d use it as soon as I get to the point
>> of depending on Plan 9 for any particular task, not just trying
>> things.  (It reminds me of learning how to use Linux, 30 years ago.
>> It was at a similar level of development back then.)
> 
> it's funny because all of this started with rob's experiments with
> graphical terminals.  the terminal itself ran a small, custom
> operating system that would be downloaded from the server at the start
> of a session.  the user would do operations on data in local memory,
> periodically syncing with the remote file server over the network.
> there is a remnant of this design in how the sam editor's gui operates
> on files.
> 
> and of course, plan 9 itself.
> 
> for many years i ran diskless terminals at home over wired ethernet
> and wifi.  it was great because my environment was always the same, my
> files were always exactly where i left them, and when i was finished i
> could just power off the machine without worrying about unclean
> shutdowns.
> 
> some people say, we have drawterm, best of both worlds!  but drawterm
> also sucks when there's latency.  moody may run doom in drawterm over
> local ethernet, but that's not happening over the internet.

I am delighted that my bragging about playing doom 2 over drawterm
has become an inside joke but I must kindly petition that this be
upgraded to my playthrough of cavestory in nearly 1440p that I did
over drawterm.

> 
> today, i'm almost always remote, in different places at different
> times.  my network is a wifi access point provided by my mobile phone.
> over many years i've tried lots of different ways of running plan 9
> with this setup, from tls booting over the internet, to
> drawterm<->9front running in a local qemu on openbsd, etc., etc.  what
> i typically do now is boot my laptop from a local disk (so, local
> binaries), but all my critical work files live on the remote disk file
> server i physically operate at home.  sometimes this is painfully
> slow, so i'll work on files on the local disk and then sync them up.
> i never, ever, need to rcpu into my terminal.  i just do whatever
> syncing is necessary before i turn it off.  if i need a modern web
> browser, i connect to openbsd over ssh/vnc (which is also sometimes
> painfully slow).  this is far from ideal but works better than the
> other combinations i've tried so far.
> 
> pick this apart, but this is me telling you what worked and didn't
> work for me, and i'm happy to answer questions in detail.

I want to expand this with some of my experiences. I tend to run a
filesystem on the laptop and just sync files between the FS at home
and the laptop at regular intervals (typically before I travel) but
I am home more often than not so my experience is a bit different.
I tend to keep any code of interest checked in to shithub, that way
it's impossible to be caught without access to what I need. I sync the
.git and go about my day.

> 
> 
>> trying to preserve the labs experience
> 
> personally, i don't care at all about preserving the labs experience.
> we already dumped fossil, made tons of changes.  but before you start
> proposing all sorts of changes to the system it's important to
> understand exactly what you're changing.  plan 9 was not designed by
> idiots, or by accident.  everything that went in is the way it is for
> a reason.  you may disagree with the reason, but it's important to
> understand that reason before blowing it off.  and yes, people
> involved with 9front will challenge you on every little detail,
> because the details are important, and that's why we're all here.
> 
> this isn't linux.

Laptops are generally outside of the purview of what the labs intended,
computing has gone in interesting directions. However this is not much of
a hurdle really, people lug around operating systems that would really prefer
to think of themselves as giant computers running in a dedicated room
servicing a dozen teletypes. Linux has put a lot of makeup on this pig,
but you can not change its core design principles. Linux folks make do
with little hacks here and there to improve the experience, you can likewise
find whatever fits your fancy for how you'd like to reconcile the world of
modern computing on to a 9front system. We've gained a fair bit of modern
conveniences (git), the world is your oyster.

In general though sl here is right, before we ship something with the system
we will want to make sure things make sense. We don't shove things in just because
they can be or because someone has expectations that are not met.


Thanks,
moody

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [9front] Enabling a service
  2024-05-08  1:05                     ` Jacob Moody
@ 2024-05-08  1:24                       ` sl
  2024-05-08  7:22                         ` hiro
                                           ` (2 more replies)
  2024-05-08  3:41                       ` Ori Bernstein
  1 sibling, 3 replies; 48+ messages in thread
From: sl @ 2024-05-08  1:24 UTC (permalink / raw)
  To: 9front

> I am delighted that my bragging about playing doom 2 over drawterm
> has become an inside joke but I must kindly petition that this be
> upgraded to my playthrough of cavestory in nearly 1440p that I did
> over drawterm.

what about wipeout


> Laptops are generally outside of the purview of what the labs intended,
> computing has gone in interesting directions. However this is not much of
> a hurdle really, people lug around operating systems that would really prefer
> to think of themselves as giant computers running in a dedicated room
> servicing a dozen teletypes. Linux has put a lot of makeup on this pig,
> but you can not change its core design principles. Linux folks make do
> with little hacks here and there to improve the experience, you can likewise
> find whatever fits your fancy for how you'd like to reconcile the world of
> modern computing on to a 9front system. We've gained a fair bit of modern
> conveniences (git), the world is your oyster.

i'm very happy with how far we've come, but things are what they are,
and they're not what they aren't. we inherited a semi-broken plan 9 that
never accounted for use cases that were hardly even invented yet when
the real design work was undertaken. the laptop thing in particular is
always funny to me, because in practice it's very similar to that old
situation where bell labs people had slow connections to their terminals
at home. it's too bad ancient terminals weren't battery powered, or
maybe the plan 9 gods would have already put some thought into this.

i'm a bit removed from the software world in general, but it's always
strange, over the years, as people show up making strange demands
about what 9front should and shouldn't automatically provide, relative
to whatever is happening in other operating systems at the time. i realize
that as time goes by, young people will emerge who literally never knew
any different, but this post-social-media phenomenon where people seem
to expect that an open source project's highest goal is always to attract
users is beyond bizarre to me. yes, network effects are powerful, but
indiscriminately attracting attention is not always beneficial. see also: 
predators vs prey.


> In general though sl here is right, before we ship something with the system
> we will want to make sure things make sense. We don't shove things in just because
> they can be or because someone has expectations that are not met.

i value that plan 9 is small, easy to reason about, and that a
reasonably intelligent person can solve a lot of their own problems
just by thinking about them before they start mashing keys.  the proof
here is that someone like me can get quite a lot of real (read: not
strictly computer-related) work done using the system.  hell, i even
publish a book about it that people who are smarter than me use to
get their own work done.

and i don't know what the fuck i'm doing.

sl

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [9front] Enabling a service
  2024-05-07 21:11           ` Shawn Rutledge
  2024-05-07 21:35             ` Kurt H Maier
  2024-05-07 21:54             ` sl
@ 2024-05-08  2:11             ` Thaddeus Woskowiak
  2 siblings, 0 replies; 48+ messages in thread
From: Thaddeus Woskowiak @ 2024-05-08  2:11 UTC (permalink / raw)
  To: 9front

On Tue, May 7, 2024 at 6:22 PM Shawn Rutledge <lists@ecloud.org> wrote:
>
> On May 7, 2024, at 2:14 AM, sirjofri <sirjofri+ml-9front@sirjofri.de> wrote:
> > In the other case, where you want your machine to always serve services, that machine is most probably a cpu by definition. Nowadays there's no reason to not run a cpu if you need services. In the beginning there were different kernels if I remember correctly, which meant a big difference, but nowadays it's only a different configuration.
>
> I just don’t see much point in keeping it as split as it still is.  For some reason a 9front install is a term by default, and not being set up to run services by default seems like a needless limitation.

Why is learning to set up a cpu server needless?

> But if you switch to cpu, it’s a little extra work to get it to be a useful machine to sit in front of (even though “text mode” is already graphical), which is also a needless limitation.

termrc

> It took me a few hours to learn how to customize the startup process at first, and I didn’t memorize it, so next time I’d have to look at the machines on which I already did it to remember what exactly I did.

This track explains everything:
https://kmfdm.bandcamp.com/track/anarchy-god-and-the-state-mix

>  When a newbie starts with one machine and then wants to try starting up network services shortly afterwards, it should be more directly possible, IMO; and the directory-of-services approach seems like a good one to me, which should not be ruled out by having a term by default.

What stopped you from using listen or even listen1 in termrc? Do you
think that setting cpu cripples things? It's just an env variable the
bootrc script uses to see wtf it is supposed to do. Read through
boot(8).
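To give a rough idea of how small the mechanics are, here is a sketch of the kind of lines a terminal could run at startup (untested here; check listen(8) on your own system for the exact flags and paths):

```rc
# sketch only: lines a terminal might run from termrc (or a local
# startup script).  a single-connection echo listener on tcp port 7:
aux/listen1 -t 'tcp!*!7' /bin/cat &

# or serve a whole directory of tcp service scripts, as cpurc does:
aux/listen -q -t /rc/bin/service.auth -d /rc/bin/service tcp
```

The point being that listeners are just commands started from a script; nothing about a terminal install prevents running them.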

> Unless you are actually in a lab environment with shared machines, nobody expects a diskless terminal anymore; and it’s not a great introduction to what Plan 9 can do, to have that be the default.

All of my plan 9 machines are tcp or pxe booted diskless terminals
unless they are a laptop or my cpu/disk server. I guess my house is a
lab - Ozone Labs, est. 2018. It's dumb to want a building scattered
with disks holding duplicate data - one disk to rule them all, one
command to bind(1) them. If the terminal goes bad I throw it away and
plug a new one in, change the ether= in ndb and move on with life.
Testing a new machine? Just plug it in and set up ndb. Gigabit and 10gb
are really cheap and you won't notice you don't have a disk in the
machine.
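To make the ether= swap concrete, here is a made-up sketch of what such entries can look like in /lib/ndb/local (the sys name, MAC, and addresses are invented, and the attributes you need vary per network; see ndb(6) for the real format):

```
# hypothetical network database entries for a diskless terminal
ip=192.168.1.0 ipmask=255.255.255.0 ipgw=192.168.1.1
	fs=192.168.1.2 auth=192.168.1.2 dns=192.168.1.1

sys=term1 ether=00deadbeef01 ip=192.168.1.10
	bootf=/386/9pxe
```

Replacing a dead terminal then amounts to editing the ether= value to match the new machine's MAC.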

> If the majority thinks there needs to continue to be a distinction though, then maybe the installer should ask about the machine’s intended role at installation time, and try to achieve a suitable out-of-the-box experience for all the choices.

You can do anything with a terminal install, which by default is a
newbie-friendly experience, including turning it into a graphical
cpu/disk/terminal/router/networking thing. It's so damn simple to
achieve these things that I can't believe I am wasting my time writing
this. RTFM over and over, because you learn by listening. Later on we
will tell you how to set up a cpu server just as you would in any
colorful German operating system.

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [9front] Enabling a service
  2024-05-08  1:05                     ` Jacob Moody
  2024-05-08  1:24                       ` sl
@ 2024-05-08  3:41                       ` Ori Bernstein
  2024-05-08  4:09                         ` sl
  1 sibling, 1 reply; 48+ messages in thread
From: Ori Bernstein @ 2024-05-08  3:41 UTC (permalink / raw)
  To: 9front; +Cc: Jacob Moody

On Tue, 7 May 2024 20:05:21 -0500, Jacob Moody <moody@posixcafe.org> wrote:

> > pick this apart, but this is me telling you what worked and didn't
> > work for me, and i'm happy to answer questions in detail.
> 
> I want to expand this with some of my experiences. I tend to run a
> filesystem on the laptop and just sync files between the FS at home
> and the laptop at regular intervals (typically before I travel) but
> I am home more often than not so my experience is a bit different.
> I tend to keep any code of interest checked in to shithub, that way
> it's impossible to be caught without access to what I need. I sync the
> .git and go about my day.
> 

I tend to hack from coffee shops quite often, and don't
sync, but will instead pull things into my namespace,
pop up subrios on my home cpu server, etc.

Contrary to many people's experience, I actually find
that for the kind of latency I get over coffee shop wifi
to my home, remote rio sessions behave fairly well (though
displaying static images is quite painful).

> > 
> >> trying to preserve the labs experience
> > 
> > personally, i don't care at all about preserving the labs experience.
> > we already dumped fossil, made tons of changes.  but before you start
> > proposing all sorts of changes to the system it's important to
> > understand exactly what you're changing.  plan 9 was not designed by
> > idiots, or by accident.  everything that went in is the way it is for
> > a reason.  you may disagree with the reason, but it's important to
> > understand that reason before blowing it off.  and yes, people
> > involved with 9front will challenge you on every little detail,
> > because the details are important, and that's why we're all here.
> > 
> > this isn't linux.
> 
> Laptops are generally outside of the purview of what the labs intended,
> computing has gone in interesting directions. However this is not much of
> a hurdle really, people lug around operating systems that would really prefer
> to think of themselves as giant computers running in a dedicated room
> servicing a dozen teletypes. Linux has put a lot of makeup on this pig,
> but you can not change its core design principles. Linux folks make do
> with little hacks here and there to improve the experience, you can likewise
> find whatever fits your fancy for how you'd like to reconcile the world of
> modern computing on to a 9front system. We've gained a fair bit of modern
> conveniences (git), the world is your oyster.
> 
> In general though sl here is right, before we ship something with the system
> we will want to make sure things make sense. We don't shove things in just because
> they can be or because someone has expectations that are not met.

There is an open question about how mobile computers
should fit into a Plan 9; so far, I don't have a good
answer.

I don't think turning every plan 9 machine into an
island is the answer.

-- 
    Ori Bernstein

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [9front] Enabling a service
  2024-05-08  3:41                       ` Ori Bernstein
@ 2024-05-08  4:09                         ` sl
  2024-05-08  8:39                           ` Frank D. Engel, Jr.
  0 siblings, 1 reply; 48+ messages in thread
From: sl @ 2024-05-08  4:09 UTC (permalink / raw)
  To: 9front

> Contrary to many people's experience, I actually find 
> that for the kind of latency I get over coffee shop wifi 
> to my home, remote rio sessions behave fairly well (though 
> displaying static images is quite painful).

but this doesn't actually sound contrary.

text is generally fine, even in drawterm-over-the-internet.  it's when
you try to work with anything other than text that life gets hectic.
i never had a lot of complaints until plan 9 became more than a text
editor for me and i started trying to live in it full-time.

people nowadays are doing a lot more with their plan 9 than they were
even when we started 9front.  but most of that stuff is either too
painful to be practical or flat out impossible unless your constituent
distributed parts live close together on a fast network.

the end result is that many of the very cool features we come to plan
9 for in the first place are pretty much useless most of the time.
and this drives travelers to just running standalone systems.

this inherent tension between "plan 9 is a distributed environment"
and "nobody expected us to be quite *that* distributed" remains
unresolved, and largely unaddressed.  it's not really a fault of the
original concept, but it does expose weaknesses (or, perhaps more
accurately, period-appropriate shortsightedness nobody could blame
them for) in the specific implementations of various components.

i like the suggestions made so far about automatic caching schemes.  i
have played around with cfs partitions, but that mechanism at least
never seems to deliver the promised results.

sl

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [9front] Enabling a service
  2024-05-08  1:24                       ` sl
@ 2024-05-08  7:22                         ` hiro
  2024-05-08 14:04                           ` Stanley Lieber
  2024-05-08 12:08                         ` Stuart Morrow
  2024-05-08 14:25                         ` [9front] Enabling a service Jacob Moody
  2 siblings, 1 reply; 48+ messages in thread
From: hiro @ 2024-05-08  7:22 UTC (permalink / raw)
  To: 9front

> the laptop thing in particular is
> always funny to me, because in practice it's very similar to that old
> situation where bell labs people had slow connections to their terminals
> at home. it's too bad ancient terminals weren't battery powered, or
> maybe the plan 9 gods would have already put some thought into this.

I believe this must have been a party trick mostly. They were likely
still required to show up in the office.

The latency of these connections was very good, so that's how I
explain their lack of optimizations in rio for limiting the number of
roundtrips.
Or they used an older version that didn't have the very slow rio yet.
And obviously mostly everything was text back then, so no need to fix
the roundtrip-bottleneck for high bitrate media like images.

If they had our use-case, I don't know if they would have gone for the
kind of optimization that I have in mind, as this leads to diminishing
returns very fast. Fully meshed, non-centralized distributed systems
might be too juicy for them to ignore for solving this kind of
problem.

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [9front] Enabling a service
  2024-05-08  4:09                         ` sl
@ 2024-05-08  8:39                           ` Frank D. Engel, Jr.
  2024-05-08 14:17                             ` Jacob Moody
  0 siblings, 1 reply; 48+ messages in thread
From: Frank D. Engel, Jr. @ 2024-05-08  8:39 UTC (permalink / raw)
  To: 9front

Some of this latency problem might be helped, at least for reading, by 
keeping a local cache of the data in the remote file system.

Fossil/venti may actually suggest a way of doing this.

Think of having blocks of data cached locally in a modified venti 
structure based on (usage_time, venti-hash, data_block), such that 
instead of making the data permanent, data with a sufficiently old 
usage_time may simply be overwritten when more space is needed, and the 
usage_time is updated when the data is used again.

The remote filesystem would be fossil-like, holding the hashes for the
data blocks.  Writes from the remote (laptop) end would be
write-through: each block is written to the local cache under its
calculated hash (yes, that rhymes) and also sent across the
high-latency network to the disk server.  When data needs to be
retrieved, the laptop obtains the hash from the remote side, then
checks the local cache; if a copy is available it simply uses the
local copy, otherwise it requests the block from the other end.
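The (usage_time, hash, block) idea above can be sketched in a few lines of C. This is purely illustrative code of my own, not fossil or venti source: the hash function is a trivial stand-in for venti's SHA-1, the cache is absurdly small, and all the names are invented.

```c
#include <string.h>
#include <assert.h>

/* Toy sketch of the cache described above.  Blocks are keyed by a
 * content hash (a trivial stand-in here, where venti would use SHA-1),
 * each slot carries a usage_time, and when the cache is full the slot
 * with the oldest usage_time is simply overwritten, so nothing is
 * permanent. */

enum { NSLOT = 4, BLKSZ = 16 };

typedef struct {
	unsigned long hash;	/* content hash of the cached block */
	unsigned long usetime;	/* tick of the most recent access */
	char data[BLKSZ];
	int valid;
} Slot;

Slot cache[NSLOT];
unsigned long tick;

/* stand-in content hash; a real implementation would use SHA-1 */
unsigned long
blockhash(const char *data)
{
	unsigned long h = 5381;
	int i;

	for(i = 0; i < BLKSZ; i++)
		h = h*33 + (unsigned char)data[i];
	return h;
}

/* look up a block by hash, refreshing its usage_time on a hit */
Slot*
lookup(unsigned long h)
{
	int i;

	for(i = 0; i < NSLOT; i++)
		if(cache[i].valid && cache[i].hash == h){
			cache[i].usetime = ++tick;
			return &cache[i];
		}
	return 0;
}

/* insert a block, overwriting the least recently used slot if full */
void
insert(const char *data)
{
	unsigned long h;
	Slot *victim;
	int i;

	h = blockhash(data);
	if(lookup(h) != 0)
		return;		/* already cached; lookup refreshed it */
	victim = &cache[0];
	for(i = 0; i < NSLOT; i++){
		if(!cache[i].valid){
			victim = &cache[i];
			break;
		}
		if(cache[i].usetime < victim->usetime)
			victim = &cache[i];
	}
	victim->hash = h;
	victim->usetime = ++tick;
	memcpy(victim->data, data, BLKSZ);
	victim->valid = 1;
}
```

A recently used block survives eviction while the stalest one is overwritten, which is the whole point of keeping usage_time next to the hash.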


On 5/8/24 00:09, sl@stanleylieber.com wrote:
>> Contrary to many people's experience, I actually find
>> that for the kind of latency I get over coffee shop wifi
>> to my home, remote rio sessions behave fairly well (though
>> displaying static images is quite painful).
> but this doesn't actually sound contrary.
>
> text is generally fine, even in drawterm-over-the-internet.  it's when
> you try to work with anything other than text that life gets hectic.
> i never had a lot of complaints until plan 9 became more than a text
> editor for me and i started trying to live in it full-time.
>
> people nowadays are doing a lot more with their plan 9 than they were
> even when we started 9front.  but most of that stuff is either too
> painful to be practical or flat out impossible unless your constituent
> distributed parts live close together on a fast network.
>
> the end result is that many of the very cool features we come to plan
> 9 for in the first place are pretty much useless most of the time.
> and this drives travelers to just running standalone systems.
>
> this inherent tension between "plan 9 is a distributed environment"
> and "nobody expected us to be quite *that* distributed" remains
> unresolved, and largely unaddressed.  it's not really a fault of the
> original concept, but it does expose weaknesses (or, perhaps more
> accurately, period appropriate shortsightedness nobody could blame
> them for) in the specific implementations of various components.
>
> i like the suggestions made so far about automatic caching schemes.  i
> have played around with cfs partitions, but that mechanism at least
> never seems to deliver the promised results.
>
> sl
>


^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [9front] Enabling a service
  2024-05-08  1:24                       ` sl
  2024-05-08  7:22                         ` hiro
@ 2024-05-08 12:08                         ` Stuart Morrow
  2024-05-08 16:37                           ` Brian Stuart
  2024-05-08 14:25                         ` [9front] Enabling a service Jacob Moody
  2 siblings, 1 reply; 48+ messages in thread
From: Stuart Morrow @ 2024-05-08 12:08 UTC (permalink / raw)
  To: 9front

> the laptop thing ... it's too bad ancient terminals weren't battery powered,
> or maybe the plan 9 gods would have already put some thought into this.

The Security in Plan 9 paper mentions a desire for public-key auth, reiterated
later by some GSOC proposal (probably Sape, but I can't be arsed to check).

I don't know why they wanted this, *BUT* it *would* happen to be a good first
step towards a real distributed filesystem like Orifs.  Which in turn would
solve at least four things in Plan 9, including the obvious one (a shared
root, meaning a laptop doesn't have to be administered separately).

Anyway, shout-out to inferno-bls's lapfs.

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [9front] Enabling a service
  2024-05-08  7:22                         ` hiro
@ 2024-05-08 14:04                           ` Stanley Lieber
  0 siblings, 0 replies; 48+ messages in thread
From: Stanley Lieber @ 2024-05-08 14:04 UTC (permalink / raw)
  To: 9front

On May 8, 2024 3:22:25 AM EDT, hiro <23hiro@gmail.com> wrote:
>> the laptop thing in particular is
>> always funny to me, because in practice it's very similar to that old
>> situation where bell labs people had slow connections to their terminals
>> at home. it's too bad ancient terminals weren't battery powered, or
>> maybe the plan 9 gods would have already put some thought into this.
>
>I believe this must have been a party trick mostly. They were likely
>still required to show up in the office.
>
>The latency of these connections was very good, so that's how I
>explain their lack of optimizations in rio for limiting the amount of
>roundtrips.
>Or they used an older version that didn't have the very slow rio yet.
>And obviously mostly everything was text back then, so no need to fix
>the roundtrip-bottleneck for high bitrate media like images.
>
>If they had our use-case, I don't know if they would have gone for the
>kind of optimization that I have in mind, as this leads to diminishing
>returns very fast. Fully meshed, non-centralized distributed systems
>might be too juicy for them to ignore for solving this kind of
>problem.
>

yeah, i was thinking of the blit vs the first eight unix guis, jim/sam, etc.

sl

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [9front] Enabling a service
  2024-05-08  8:39                           ` Frank D. Engel, Jr.
@ 2024-05-08 14:17                             ` Jacob Moody
  2024-05-08 15:49                               ` Frank D. Engel, Jr.
  0 siblings, 1 reply; 48+ messages in thread
From: Jacob Moody @ 2024-05-08 14:17 UTC (permalink / raw)
  To: 9front

On 5/8/24 03:39, Frank D. Engel, Jr. wrote:
> Some of this latency problem might be helped, at least for reading, by 
> keeping a local cache of the data in the remote file system.
> 
> Fossil/venti may actually suggest a way of doing this.
> 
> Think of having blocks of data cached locally in a modified venti 
> structure based on (usage_time, venti-hash, data_block), such that 
> instead of making the data permanent, data with a sufficiently old 
> usage_time may simply be overwritten when more space is needed, and the 
> usage_time is updated when the data is used again.
> 
> The fossil remote filesystem would be fossil-like, holding the hash for 
> the data blocks, such that data written to it is write-through from the 
> remote (laptop) end of things, being written to the cache using the 
> calculated hash (yes, that rhymes) as well as sent to the disk server 
> across the (latent) network; when data needs to be retrieved, it obtains 
> the hash from the remote side, then checks the local cache to see if a 
> copy is available.  If so, it can simply use the local copy it has from 
> the cache, otherwise it requests it from the other end.

This would likely work fine for a single user but the merging of data
in use by a handful of people becomes quite the challenge.



^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [9front] Enabling a service
  2024-05-08  1:24                       ` sl
  2024-05-08  7:22                         ` hiro
  2024-05-08 12:08                         ` Stuart Morrow
@ 2024-05-08 14:25                         ` Jacob Moody
  2 siblings, 0 replies; 48+ messages in thread
From: Jacob Moody @ 2024-05-08 14:25 UTC (permalink / raw)
  To: 9front

On 5/7/24 20:24, sl@stanleylieber.com wrote:
>> I am delighted that my bragging about playing doom 2 over drawterm
>> has become an inside joke but I must kindly petition that this be
>> upgraded to my playthrough of cavestory in nearly 1440p that I did
>> over drawterm.
> 
> what about wipeout

wipeout worked, but not in as high a resolution.  I like to talk about
cavestory because it was the first game where I was able to map the actual
rendering into draw(2) calls, not just internally render to a framebuffer
and write that to the screen. This has some tradeoffs but it does make
playing over drawterm very nice.


^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [9front] Enabling a service
  2024-05-07 23:16                 ` Shawn Rutledge
                                     ` (2 preceding siblings ...)
  2024-05-08  0:35                   ` sl
@ 2024-05-08 14:57                   ` Lucas Francesco
  2024-05-08 15:10                     ` an2qzavok
  3 siblings, 1 reply; 48+ messages in thread
From: Lucas Francesco @ 2024-05-08 14:57 UTC (permalink / raw)
  To: 9front

> As for "half-assed network services", I assume that means security concerns; ok so not enough faith in how secure the services are by default (well that ought to be fixable eventually?), and not enough faith in users not to realize that they should try experiments on a local LAN before connecting the services to the Internet (which usually involves some router work anyway, assuming the machine is behind one)?  People who take excessive risks are mainly risking their own files; they should know better, but they probably aren’t going to have a lot of files on Plan 9 anyway.  What’s the worst risk besides data theft?  A mail server getting used as a spam relay or something like that?  I agree that setting up a mail server should be more effort.

Yes, we have NO faith in you or any other user whatsoever.  Making
those services even more trivial to set up without understanding them
would be harmful, since there are multiple risks involved; one of them
is being used as a node in a DDoS reflection attack, for example.

On Wed, 8 May 2024 at 02:22, Shawn Rutledge <lists@ecloud.org> wrote:
>
> And yet, the FQA recommends laptops.  Usually the assumption with a laptop is you can take it on the subway or to the coffee shop and keep working.  That implies that you want to have some relevant files with you.  (So it’s good the default install has a local filesystem.)  Then later you get back to the home/office and maybe want to use a machine with a bigger monitor and more files available, but some work in progress is on the laptop so maybe you want to rcpu to it for a while.  Eventually files get synced up again (manually or automatically).  Maybe at home there is a file server, sure it’s good to have the dumps.  That’s probably how I’d use it as soon as I get to the point of depending on Plan 9 for any particular task, not just trying things.  (It reminds me of learning how to use Linux, 30 years ago.  It was at a similar level of development back then.)
>
> So obviously there’s a tradeoff between 9front being usable by a laptop user today vs. trying to preserve the labs experience, and having to answer the same questions over and over (how do I start a service, why do I have to reboot into a different mode to make that possible).  I won’t ask how to do that; but others will.
>
> As for "half-assed network services", I assume that means security concerns; ok so not enough faith in how secure the services are by default (well that ought to be fixable eventually?), and not enough faith in users not to realize that they should try experiments on a local LAN before connecting the services to the Internet (which usually involves some router work anyway, assuming the machine is behind one)?  People who take excessive risks are mainly risking their own files; they should know better, but they probably aren’t going to have a lot of files on Plan 9 anyway.  What’s the worst risk besides data theft?  A mail server getting used as a spam relay or something like that?  I agree that setting up a mail server should be more effort.
>
> > 9p vs latency is a losing battle. trying to run a diskless terminal
> > over the internet really, REALLY sucks. even drawterm is not great
> > in this context.
>
> I left a 9front machine running on my LAN in Norway while I’m in Phoenix for a while, and set up a wireguard vpn on the openwrt router in Norway so that I can connect to home.  Latency in Phoenix is worse than normal (Verizon wireless internet: it's only meant to be temporary).  Despite that, the performance I see connecting with rcpu or drawterm halfway around the world is comparable to connecting to Linux over VNC: not bad for an experiment (as long as wifi on 9front is not in the path!  ;-)  I don’t expect to play video over that link.  But I also have qualms about letting the router connect 9p over the Internet to that Plan 9 machine, since I don’t know yet all the varieties of half-assedness to expect by default.  I think I have to try the Plan 9 network forwarding at some point though, see if the claim that it’s as good as a VPN really holds up.  But I have to learn enough about security before even trying, it seems.
>

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [9front] Enabling a service
  2024-05-08 14:57                   ` Lucas Francesco
@ 2024-05-08 15:10                     ` an2qzavok
  0 siblings, 0 replies; 48+ messages in thread
From: an2qzavok @ 2024-05-08 15:10 UTC (permalink / raw)
  To: 9front

>being a node for a DDoS reflection attack
I'm doing my part! (was running a socks proxy on my 9front VPS and
discovered ~100% network load after a month)

On Wed, 8 May 2024 at 18:02, Lucas Francesco <lucas.francesco93@gmail.com> wrote:
>
> > As for "half-assed network services", I assume that means security concerns; ok so not enough faith in how secure the services are by default (well that ought to be fixable eventually?), and not enough faith in users not to realize that they should try experiments on a local LAN before connecting the services to the Internet (which usually involves some router work anyway, assuming the machine is behind one)?  People who take excessive risks are mainly risking their own files; they should know better, but they probably aren’t going to have a lot of files on Plan 9 anyway.  What’s the worst risk besides data theft?  A mail server getting used as a spam relay or something like that?  I agree that setting up a mail server should be more effort.
>
> Yes, we have NO faith in you or any other user whatsoever, making
> those services even more trivial to set up without understanding would
> be harmful since there are multiple risks involved and one of them is
> being a node for a DDoS reflection attack for example.
>
> On Wed, 8 May 2024 at 02:22, Shawn Rutledge <lists@ecloud.org> wrote:
> >
> > And yet, the FQA recommends laptops.  Usually the assumption with a laptop is you can take it on the subway or to the coffee shop and keep working.  That implies that you want to have some relevant files with you.  (So it’s good the default install has a local filesystem.)  Then later you get back to the home/office and maybe want to use a machine with a bigger monitor and more files available, but some work in progress is on the laptop so maybe you want to rcpu to it for a while.  Eventually files get synced up again (manually or automatically).  Maybe at home there is a file server, sure it’s good to have the dumps.  That’s probably how I’d use it as soon as I get to the point of depending on Plan 9 for any particular task, not just trying things.  (It reminds me of learning how to use Linux, 30 years ago.  It was at a similar level of development back then.)
> >
> > So obviously there’s a tradeoff between 9front being usable by a laptop user today vs. trying to preserve the labs experience, and having to answer the same questions over and over (how do I start a service, why do I have to reboot into a different mode to make that possible).  I won’t ask how to do that; but others will.
> >
> > As for "half-assed network services", I assume that means security concerns; ok so not enough faith in how secure the services are by default (well that ought to be fixable eventually?), and not enough faith in users not to realize that they should try experiments on a local LAN before connecting the services to the Internet (which usually involves some router work anyway, assuming the machine is behind one)?  People who take excessive risks are mainly risking their own files; they should know better, but they probably aren’t going to have a lot of files on Plan 9 anyway.  What’s the worst risk besides data theft?  A mail server getting used as a spam relay or something like that?  I agree that setting up a mail server should be more effort.
> >
> > > 9p vs latency is a losing battle. trying to run a diskless terminal
> > > over the internet really, REALLY sucks. even drawterm is not great
> > > in this context.
> >
> > I left a 9front machine running on my LAN in Norway while I’m in Phoenix for a while, and set up a wireguard vpn on the openwrt router in Norway so that I can connect to home.  Latency in Phoenix is worse than normal (Verizon wireless internet: it's only meant to be temporary).  Despite that, the performance I see connecting with rcpu or drawterm halfway around the world is comparable to connecting to Linux over VNC: not bad for an experiment (as long as wifi on 9front is not in the path!  ;-)  I don’t expect to play video over that link.  But I also have qualms about letting the router connect 9p over the Internet to that Plan 9 machine, since I don’t know yet all the varieties of half-assedness to expect by default.  I think I have to try the Plan 9 network forwarding at some point though, see if the claim that it’s as good as a VPN really holds up.  But I have to learn enough about security before even trying, it seems.
> >

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [9front] Enabling a service
  2024-05-08 14:17                             ` Jacob Moody
@ 2024-05-08 15:49                               ` Frank D. Engel, Jr.
  2024-05-08 16:10                                 ` Jacob Moody
  0 siblings, 1 reply; 48+ messages in thread
From: Frank D. Engel, Jr. @ 2024-05-08 15:49 UTC (permalink / raw)
  To: 9front

How did it work for Venti when there were multiple users?

You still have a single source of truth on the file server as all of the 
data would still be written there with this approach and it would still 
manage the directory structure, so any data that would come from other 
users would ultimately just be pulled from the file server and loaded 
into their cache separately.

One challenge might seem to be simultaneous writes to different parts of
the same block.  In this case, though, the locally calculated hash for the
block that was written to cache would (hopefully) not match the one
calculated by the file server, which would reflect both updates.  When the
next read occurred, the file server would send a hash that was not in the
local cache, so the read would be sent across to the file server for the
updated block, with the incorrect local block eventually being aged out of
the cache as it filled up.


On 5/8/24 10:17, Jacob Moody wrote:
> On 5/8/24 03:39, Frank D. Engel, Jr. wrote:
>> Some of this latency problem might be helped, at least for reading, by
>> keeping a local cache of the data in the remote file system.
>>
>> Fossil/venti may actually suggest a way of doing this.
>>
>> Think of having blocks of data cached locally in a modified venti
>> structure based on (usage_time, venti-hash, data_block), such that
>> instead of making the data permanent, data with a sufficiently old
>> usage_time may simply be overwritten when more space is needed, and the
>> usage_time is updated when the data is used again.
>>
>> The fossil remote filesystem would be fossil-like, holding the hash for
>> the data blocks, such that data written to it is write-through from the
>> remote (laptop) end of things, being written to the cache using the
>> calculated hash (yes, that rhymes) as well as sent to the disk server
>> across the (latent) network; when data needs to be retrieved, it obtains
>> the hash from the remote side, then checks the local cache to see if a
>> copy is available.  If so, it can simply use the local copy it has from
>> the cache, otherwise it requests it from the other end.
> This would likely work fine for a single user but the merging of data
> in use by a handful of people becomes quite the challenge.
>
>
>


^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [9front] Enabling a service
  2024-05-08 15:49                               ` Frank D. Engel, Jr.
@ 2024-05-08 16:10                                 ` Jacob Moody
  2024-05-08 16:33                                   ` Frank D. Engel, Jr.
  2024-05-08 19:46                                   ` Roberto E. Vargas Caballero
  0 siblings, 2 replies; 48+ messages in thread
From: Jacob Moody @ 2024-05-08 16:10 UTC (permalink / raw)
  To: 9front

On 5/8/24 10:49, Frank D. Engel, Jr. wrote:
> How did it work for Venti when there were multiple users?

My understanding of venti is somewhat limited, so take my explanation with a grain of salt.
If you have divergent fossils using the same venti you will get divergent root scores; they
become two paths that have to be merged manually.

> 
> You still have a single source of truth on the file server as all of the 
> data would still be written there with this approach and it would still 
> manage the directory structure, so any data that would come from other 
> users would ultimately just be pulled from the file server and loaded 
> into their cache separately.

I got your proposal the wrong way around: you are talking about a local
venti and a remote fossil. At the point you decide that you want a single
remote source of truth (filesystem) that everyone must reconcile with, you
are still going to have issues with merging. Venti works as a backing for
multiple disjoint fossils because it has no single source of truth for the
filesystem; it just stores blocks.

No matter how you cut it, you are going to have to deal with multiple people
merging their caches into the single source of truth with potentially latent
updates. Either you serialize all mutations at the source of truth (and at
that point you are latency bound) or you have to be clever about merging. A
lot of ink has been spilled about this problem in the scope of web
programming (CRDTs, iirc); perhaps that may serve as some inspiration.
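For what it's worth, the simplest of those CRDTs is a last-writer-wins register. A toy sketch (hypothetical names, nothing Plan 9-specific) shows how replicas can reconcile concurrent updates without serializing through a central source of truth:

```python
class LWWRegister:
    """Last-writer-wins register, a minimal CRDT.

    Each write is tagged (timestamp, replica_id); merge keeps the value
    with the greatest tag, the replica id breaking timestamp ties.
    Merging is commutative, associative and idempotent, so replicas
    converge regardless of the order or number of merges.
    """
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.tag = (0, replica_id)
        self.value = None

    def write(self, value, timestamp):
        self.tag = (timestamp, self.replica_id)
        self.value = value

    def merge(self, other):
        # keep whichever side saw the later write
        if other.tag > self.tag:
            self.tag, self.value = other.tag, other.value
```

The price, of course, is that "last writer wins" silently discards the losing write — fine for some data, unacceptable for a general filesystem, which is why the richer CRDT literature exists.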

> 
> One challenge might seem to be simultaneous writes to different parts of 
> the same block, but in this case the locally calculated hash for the 
> block that was written to cache would (hopefully) not match the one 
> calculated by the file server which would reflect both updates, so when 
> the read would occur the file server would send a hash that would not be 
> in the local cache and the read would be sent across to the file server 
> for the updated block, with the incorrect local block eventually being 
> aged out of the cache as it started to fill up.
> 
> 

Reading this and rereading your previous email I still do not fully understand
how you plan to deal with collisions. If you have a local cache that you write
to first and then slowly drain to the remote system, you are still
going to have merge issues. The fileserver is going to say at some point "No,
this is based on stale information" and you'll have to figure out how to
retroactively reconcile this error with a system that has already forgotten
this request.


^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [9front] Enabling a service
  2024-05-08 16:10                                 ` Jacob Moody
@ 2024-05-08 16:33                                   ` Frank D. Engel, Jr.
  2024-05-08 17:27                                     ` Jacob Moody
  2024-05-08 19:46                                   ` Roberto E. Vargas Caballero
  1 sibling, 1 reply; 48+ messages in thread
From: Frank D. Engel, Jr. @ 2024-05-08 16:33 UTC (permalink / raw)
  To: 9front

When you perform a write, you send that write to the file server on the 
remote system and it updates the block, generating the appropriate 
hash.  If another client sends a write with a collision, the file server 
handles it the same way it already would - the two are ultimately 
processed in some order or another.

Each client system may have updated a locally cached copy of the block 
with the same written data, but if the data did not match what was 
generated on the file server because it was updated in the meantime, 
then the hash on the client system will not match what the remote server 
came up with.

Consequently, when it later tries to read that block again, the file 
server sends it the hash that it has, which does not exist in the local 
cache, so the client is forced to request the correct block from the 
file server instead of using the locally cached copy.

Similarly, the other client performed its write and has yet another 
different hash, since the server has the data from both clients.  When 
that client tries to perform another read, its hash won't match either, 
forcing it too to obtain the correct copy of the data from the file server.

Had only one of them been manipulating the data, then that one would 
have the correct copy with a matching hash, so it would be able to use 
its local copy.
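Concretely, that read-validation step could look something like this toy model. The class and method names are invented for illustration; the "network" is just a function call, and SHA-1 stands in for a venti score:

```python
import hashlib

def score(data):
    return hashlib.sha1(data).hexdigest()

class FileServer:
    """Single source of truth: serializes writes, serves (hash, block)."""
    def __init__(self, block=b""):
        self.block = block

    def write(self, data):
        self.block = data        # writes applied in arrival order

    def read_score(self):
        return score(self.block) # cheap: only the hash crosses the wire

    def read_block(self):
        return self.block        # expensive: full block over the latent link

class Client:
    def __init__(self, server):
        self.server = server
        self.cache = {}          # hash -> block

    def write(self, data):
        self.cache[score(data)] = data  # optimistic local copy
        self.server.write(data)         # write-through to the server

    def read(self):
        h = self.server.read_score()
        if h in self.cache:             # hash matches: local copy is current
            return self.cache[h]
        data = self.server.read_block() # stale or missing: refetch
        self.cache[h] = data
        return data
```

A client that raced another writer finds its cached hash no longer matches the server's, so it is forced to refetch; the lone writer gets a cache hit and skips the round trip.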


On 5/8/24 12:10, Jacob Moody wrote:
> On 5/8/24 10:49, Frank D. Engel, Jr. wrote:
>> How did it work for Venti when there were multiple users?
> My understanding of venti is somewhat limited to take my explanation with a grain of salt.
> If you have divergent fossils using the same venti you will get divergent root scores, they
> become two paths that have to be merged manually.
>
>> You still have a single source of truth on the file server as all of the
>> data would still be written there with this approach and it would still
>> manage the directory structure, so any data that would come from other
>> users would ultimately just be pulled from the file server and loaded
>> into their cache separately.
> I got your proposal the wrong way around, you are talking about a local
> venti and a remote fossil. At the point you decide that you want a single
> remote source of truth(filesystem) that everyone must reconcile with you are still going
> to have issues with merging. Venti works as a backing for multiple disjoint fossils
> because it has no single source of truth for the filesystem, it just stores blocks.
>
> No matter how you cut you are going to have to deal with multiple people merging
> their cache in to the single root of truth with potentially latent updates.
> Either you have to serialize all mutations at the source of truth (and at that point
> you are latent bound) or you have to be clever about merging. A lot of ink has been
> spilled about this problem in the scope of web programming (CRDs iirc), perhaps
> that may serve as some inspiration.
>
>> One challenge might seem to be simultaneous writes to different parts of
>> the same block, but in this case the locally calculated hash for the
>> block that was written to cache would (hopefully) not match the one
>> calculated by the file server which would reflect both updates, so when
>> the read would occur the file server would send a hash that would not be
>> in the local cache and the read would be sent across to the file server
>> for the updated block, with the incorrect local block eventually being
>> aged out of the cache as it started to fill up.
>>
>>
> Reading this and rereading your previous email I still do not fully understand
> how you plan to deal with collisions. If you have a local cache that you write
> to first and then you slowly drain that to the remote system you are still
> going to have merge issues. The fileserver is going to say at some point "No
> this is based on stale information" and you'll have to figure out out to retroactively
> reconcile this error with a system that has already forgotten this request.
>
>


^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [9front] Enabling a service
  2024-05-08 12:08                         ` Stuart Morrow
@ 2024-05-08 16:37                           ` Brian Stuart
  2024-05-08 20:16                             ` hiro
  2024-05-08 21:17                             ` Disconnection-tolerant / distributed filesystems (was Re: [9front] Enabling a service) Shawn Rutledge
  0 siblings, 2 replies; 48+ messages in thread
From: Brian Stuart @ 2024-05-08 16:37 UTC (permalink / raw)
  To: 9front

> Anyway, shout-out to inferno-bls's lapfs.

I had played with a few different variants on the idea, though all
quite a while back.  To give an idea of just how far back, I presented
it at the 4th IWP9 15 years ago:

https://www.cs.drexel.edu/~bls96/lapfs.pdf

Maybe it's about time to pull it out of mothballs and do fresh 9native
and *nix versions.

BLS

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [9front] Enabling a service
  2024-05-08 16:33                                   ` Frank D. Engel, Jr.
@ 2024-05-08 17:27                                     ` Jacob Moody
  2024-05-08 18:00                                       ` Steve Simon
  0 siblings, 1 reply; 48+ messages in thread
From: Jacob Moody @ 2024-05-08 17:27 UTC (permalink / raw)
  To: 9front

On 5/8/24 11:33, Frank D. Engel, Jr. wrote:
> When you perform a write, you send that write to the file server on the 
> remote system and it updates the block, generating the appropriate 
> hash.  If another client sends a write with a collision, the file server 
> handles it the same way it already would - the two are ultimately 
> processed in some order or another.
> 
> Each client system may have updated a locally cached copy of the block 
> with the same written data, but if the data did not match what was 
> generated on the file server because it was updated in the meantime, 
> then the hash on the client system will not match what the remote server 
> came up with.
> 
> Consequently, when it later tries to read that block again, the file 
> server sends it the hash that it has, which does not exist in the local 
> cache, so the client is forced to request the correct block from the 
> file server instead of using the locally cached copy.
> 
> Similarly, the other client performed its write and has yet another 
> different hash, since the server has the data from both clients.  When 
> that client tries to perform another read, its hash won't match either, 
> forcing it too to obtain the correct copy of the data from the file server.
> 
> Had only one of them been manipulating the data, then that one would 
> have the correct copy with a matching hash, so it would be able to use 
> its local copy.
> 
Thank you, I see the full picture now: using the hashes for checking against
the server. It sounds like it would work as well as any central filesystem
does for multiple users.



^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [9front] Enabling a service
  2024-05-08 17:27                                     ` Jacob Moody
@ 2024-05-08 18:00                                       ` Steve Simon
  2024-05-08 19:46                                         ` hiro
  0 siblings, 1 reply; 48+ messages in thread
From: Steve Simon @ 2024-05-08 18:00 UTC (permalink / raw)
  To: 9front


re: why plan9 was started

i may be wrong but as i remember, a significant driver was the amount of work required to manage people's sun workstations.

it was felt that moving to a central file store and using caching locally in terminals (cfs(8)) would be a better approach, and would mean there is a single filesystem to look after.

cfs works fairly well on low latency, low bandwidth links (dialup) but much less well on modern high bandwidth, higher latency links. it's not that modern latency is so high, more that modern experience leads us to expect more than it can offer.

a better caching filesystem/protocol would be a big win for remote plan9 access IMHO. i did wonder about something that used venti hashes to pass cache coherency data between remote systems, and then allowed them to resynchronise efficiently - but nothing came of it.

-Steve


^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [9front] Enabling a service
  2024-05-08 16:10                                 ` Jacob Moody
  2024-05-08 16:33                                   ` Frank D. Engel, Jr.
@ 2024-05-08 19:46                                   ` Roberto E. Vargas Caballero
  2024-05-08 20:34                                     ` tlaronde
  1 sibling, 1 reply; 48+ messages in thread
From: Roberto E. Vargas Caballero @ 2024-05-08 19:46 UTC (permalink / raw)
  To: 9front

Hi,

On Wed, May 08, 2024 at 11:10:00AM -0500, Jacob Moody wrote:
> On 5/8/24 10:49, Frank D. Engel, Jr. wrote:
> > How did it work for Venti when there were multiple users?
> 
> My understanding of venti is somewhat limited to take my explanation with a grain of salt.
> If you have divergent fossils using the same venti you will get divergent root scores, they
> become two paths that have to be merged manually.

Yes, that is what would happen. Fossil is not designed at all to be a
distributed shared file system. If you back two fossil fs to the same
venti you will have two different fs, no merge, no conflict, no anything.

> No matter how you cut you are going to have to deal with multiple people merging
> their cache in to the single root of truth with potentially latent updates.
> Either you have to serialize all mutations at the source of truth (and at that point
> you are latent bound) or you have to be clever about merging. A lot of ink has been
> spilled about this problem in the scope of web programming (CRDs iirc), perhaps
> that may serve as some inspiration.

I would suggest taking a look at AFS (the Andrew File System, which I think
also had an evolution, but I don't remember the name) and how it works, and
the implications. Sometimes you can merge the changes, and other times you
have to request human decisions.

Regards,

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [9front] Enabling a service
  2024-05-08 18:00                                       ` Steve Simon
@ 2024-05-08 19:46                                         ` hiro
  0 siblings, 0 replies; 48+ messages in thread
From: hiro @ 2024-05-08 19:46 UTC (permalink / raw)
  To: 9front

> it's not that modern latency is so high

it is though, at least in my experience. horrible DSL connections with
time interleaving error correction mechanisms, multiple hops of
wireless protocols with by design high collision rates, brute-force
randomized modulation with resulting packet loss, high interference,
cross-talk, multiple layers of unsynchronised retransmission
mechanisms to work around all those issues, optimizations for
high throughput, inflexible minimum packet sizes with outbound buffers
causing transmission delays (again for throughput optimization),
optimizations to reduce packet rates (packet coalescing),
buffer-bloat, horrible routing policies, lawful interception resulting
in everything being routed through a central point that seemed
convenient to the NSA.
it was not like this when i was still on dial-up.
i see more and more places that have perfect end-user experience to
consume netflix, youtube and advertisement, but never try to have a
simple voip call over that line, it just doesn't work. packet loss,
super high multi-seconds delays, huge jitter...
the internet was an error.

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [9front] Enabling a service
  2024-05-08 16:37                           ` Brian Stuart
@ 2024-05-08 20:16                             ` hiro
  2024-05-08 21:26                               ` Stuart Morrow
  2024-05-08 21:17                             ` Disconnection-tolerant / distributed filesystems (was Re: [9front] Enabling a service) Shawn Rutledge
  1 sibling, 1 reply; 48+ messages in thread
From: hiro @ 2024-05-08 20:16 UTC (permalink / raw)
  To: 9front

for a long time i had hoped something would evolve out of lapfs.
i still think of it often, but i've mostly stopped using laptops now,
so...

On Wed, May 8, 2024 at 6:40 PM Brian Stuart <blstuart@gmail.com> wrote:
>
> > Anyway, shout-out to inferno-bls's lapfs.
>
> I had played with a few different variants on the idea, though all
> quite a while back.  To give an idea of just how far back, I presented
> it at the 4th IWP9 15 years ago:
>
> https://www.cs.drexel.edu/~bls96/lapfs.pdf
>
> Maybe it's about time to pull it out of mothballs and do fresh 9native
> and *nix versions.
>
> BLS

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [9front] Enabling a service
  2024-05-08 19:46                                   ` Roberto E. Vargas Caballero
@ 2024-05-08 20:34                                     ` tlaronde
  0 siblings, 0 replies; 48+ messages in thread
From: tlaronde @ 2024-05-08 20:34 UTC (permalink / raw)
  To: 9front

On Wed, May 08, 2024 at 09:46:43PM +0200, Roberto E. Vargas Caballero wrote:
> On Wed, May 08, 2024 at 11:10:00AM -0500, Jacob Moody wrote:
> 
> > No matter how you cut it, you are going to have to deal with multiple people merging
> > their cache into the single root of truth with potentially latent updates.
> > Either you have to serialize all mutations at the source of truth (and at that point
> > you are latency bound) or you have to be clever about merging. A lot of ink has been
> > spilled about this problem in the scope of web programming (CRDTs, iirc), perhaps
> > that may serve as some inspiration.
> 
> I would suggest taking a look at AFS (the Andrew File System, which I think also
> had a successor, though I don't remember its name) and how it works, and the implications.
> Sometimes you can merge the changes, and other times you have to request

Long ago I bought a book about AFS. But it contained no explanation of
how things were done, only the overall organization and how to
administer it.

To be honest, the conclusion was that, for the big picture, it was
simply what I would have guessed at first. But there was no indication
of what supplementary features pop up when you think about a
filesystem built on a WORM, recording file content history by blocks,
but adding another layer for tentative writes (I call them "minutes"):
tentative files that have to be backed up differently, without
history, plus a special command to "commit" a file, that is, to
indicate that this version has to be saved.

All in all several combined levels:

1) A rock-solid, low-level WORM filesystem, with deduplication of
blocks and history, with a canonical name for each file and a way to
retrieve a specific version of a file;

2) A mechanism for scripting a policy that creates a namespace and
stacks storage layers (located anywhere: local or remote), with the
upper level being a local cache (backed up without history) of the
current version; committing is a special command saying that this
version is to be saved in a versioning WORM;

3) One server (which could simply be a special instance of 2)) to
contact to centralize requests for sharing: it knows where the files
are, in whatever version, with a scriptable policy about the delay
before the canonical version of a changed file must be collected and
saved in one or more backed-up WORM systems, the ability to
reconstruct information from local caches when a version of a file is
missing from the authoritative servers, and the possibility of
redirecting a request to another local cache (if administration allows
it) even when the canonical version has not yet been saved to an
authoritative server.
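Level 1) can be made concrete with a toy sketch (mine, not part of this design): a write-once, content-addressed block store with deduplication, in the spirit of venti. Blocks are keyed by their hash ("score"); writing the same content twice stores it once. This is an in-memory illustration, not a real on-disk format.

```python
import hashlib

class WormStore:
    """Toy write-once block store: score (content hash) -> block data."""

    def __init__(self):
        self.blocks = {}  # hex score -> bytes

    def put(self, data: bytes) -> str:
        score = hashlib.sha1(data).hexdigest()
        # Write-once: identical content maps to the same score and is
        # never stored twice, which is the deduplication property.
        self.blocks.setdefault(score, data)
        return score

    def get(self, score: str) -> bytes:
        return self.blocks[score]
```

File history then falls out naturally: each saved version is just a tree of scores, and unchanged blocks are shared between versions for free.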

All of this (except perhaps the "tentative" layer, which is not
exactly a "/tmp") more or less exists already.

The problem is making the whole thing reliable and efficient.

I also wonder whether aliases for a file should live in the low-level
filesystem (like hard links), or whether the filesystem should keep a
single canonical name, with user aliases for the same content (several
user names for the same content, or several user names for different
versions of the same content) as a separate feature, the filesystem
keeping only one pair (some unique id + some canonical UTF-8
user-level name---its birth name).

Plan 9 already has some chunks of all this. And AFS, if I read
correctly, was no panacea and, if I'm not mistaken, is declining or
even more or less orphaned.

The WORM and block deduplication were, for me, features I discovered
with Plan 9. I still think they are a great idea and a basis for a
VCS. And not many seem to have got this area right---from what I read,
block deduplication was supposed to exist in ZFS, but it is not widely
enabled in practice, and I seem to recall that even where it is
available its use is discouraged.
-- 
        Thierry Laronde <tlaronde +AT+ kergis +dot+ com>
                     http://www.kergis.com/
                    http://kertex.kergis.com/
Key fingerprint = 0FF7 E906 FBAF FE95 FD89  250D 52B1 AE95 6006 F40C

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Disconnection-tolerant / distributed filesystems (was Re: [9front] Enabling a service)
  2024-05-08 16:37                           ` Brian Stuart
  2024-05-08 20:16                             ` hiro
@ 2024-05-08 21:17                             ` Shawn Rutledge
  1 sibling, 0 replies; 48+ messages in thread
From: Shawn Rutledge @ 2024-05-08 21:17 UTC (permalink / raw)
  To: 9front



> On May 8, 2024, at 9:37 AM, Brian Stuart <blstuart@gmail.com> wrote:
> 
>> Anyway, shout-out to inferno-bls's lapfs.
> 
> I had played with a few different variants on the idea, though all
> quite a while back.  To give an idea of just how far back, I presented
> it at the 4th IWP9 15 years ago:
> 
> https://www.cs.drexel.edu/~bls96/lapfs.pdf
> 
> Maybe it's about time to pull it out of mothballs and do fresh 9native
> and *nix versions.

Cool, I read it just now.

For Limbo, not having 64-bit support seems to be an obstacle.  Am I missing something?  I haven’t succeeded in getting 9ferno working yet, but isn’t it working in theory?

I’ve been following (and trying to use) ipfs for a while.  It seems that using it as a mounted filesystem was not a priority (those guys seem to care about web applications first, and calling it a filesystem was a stretch at first; perhaps it still is, but I need to test the latest version).  But the core idea of using a storage block’s hash as its ID and building Merkle trees out of those blocks seems like a good way to build a filesystem to me.   A file is then identified by its cumulative hash (i.e. probably the hash of the block that is the head of the metadata pointing to the data blocks).  Then maybe you no longer care about comparing paths and timestamps: if both the laptop and the file server have hash tables to keep track of which files and blocks they’re storing, checking whether a particular file is cached should be very fast and not depend on walking so much.
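The "file identified by its cumulative hash" idea above can be sketched in a few lines (an illustration of mine, with a toy block size; not how ipfs or venti actually lay things out): hash each data block, then take the file's ID as the hash of the ordered list of block hashes -- a two-level Merkle tree. Two stores can then compare IDs instead of walking paths and timestamps.

```python
import hashlib

BLOCK = 4  # absurdly small block size, for illustration only

def block_hashes(data: bytes):
    """Hash of each fixed-size block of the file's data."""
    return [hashlib.sha256(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)]

def file_id(data: bytes) -> str:
    """The file's identity: the hash of its metadata block, i.e. the
    hash of the ordered list of its block hashes."""
    return hashlib.sha256(b"".join(block_hashes(data))).hexdigest()
```

Any block shared between two files hashes to the same score, so storage deduplicates it; and if two stores hold the same file ID, their copies are bit-identical without comparing the data.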

When planning for a laptop to be away from the file server for a while (and not assuming that network access is always available) I’d want to pre-sync some directories.  On filesystems that have extended attributes, I was thinking it would be nice to just set an attribute ahead of time (sharing peers or channel ID or something) and be able to verify the sync state later, before disconnecting from the file server.  But 9p doesn’t support those, so any automated sync arrangement has to be a separately-configured process.  Currently I use syncthing, but I suppose it does more work than it would need to if the filesystem were designed to make syncing more efficient.  Fanotify is a big win for syncing with less walking on Linux though.  (I used it myself in a syncing experiment.)

But then I was thinking one downside of the ipfs approach is just the difficulty of maintaining a Merkle tree, depending on how you like to do typical file modifications.  Blocks could form an intrusive linked list, using their neighbors’ hashes as pointers: but they’d better not point directly, because it would de-optimize updates everywhere but on one end.  (Imagine what rrdtool would do to it.  If your philosophy is that storage is growing fast enough, why delete anything ever - then a simple log is better.  But rrdtool is very efficient if storage is limited and random writes are cheap.  It might even turn out that some blocks in the database file are stable over some time periods.)  So the index of blocks that make up a file probably should be completely separate.  Still, the index itself takes some blocks; and if the index is a Merkle tree, then every write could potentially invalidate most of it.  (Although a suitable design could make appending cheap, or some other selective optimization.)  The filesystem should be venti-compatible, so minimizing garbage would be best.

Anyway, even if only the actual data blocks are hashed, you have easy venti compatibility and at least somewhat easier checking of what is synced and what is not.  Migrating any amount of data between any number and size of storage devices ought to be easier.
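To make the "easier checking of what is synced" point concrete (my illustration, with hashes standing in for real block scores): if both the laptop and the file server keep the set of block hashes they store, deciding what to transfer before disconnecting is a set difference rather than a recursive tree walk.

```python
import hashlib

def stored_hashes(blobs):
    # Each side keeps a table of the hashes of the blocks it stores.
    return {hashlib.sha256(b).hexdigest() for b in blobs}

laptop = stored_hashes([b"alpha", b"beta", b"gamma"])
server = stored_hashes([b"alpha", b"beta"])

# Blocks that must be pushed to the server before disconnecting:
missing = laptop - server
```

The walk only happens when building the hash tables in the first place, and something like fanotify (or a filesystem that maintains the tables itself) removes even that.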

Aren’t filesystems like zfs built on hashing too?  But zfs includes the block layer rather than building on top of a generic block layer.  Explicit layering should yield some benefits, I imagine.


^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [9front] Enabling a service
  2024-05-08 20:16                             ` hiro
@ 2024-05-08 21:26                               ` Stuart Morrow
  0 siblings, 0 replies; 48+ messages in thread
From: Stuart Morrow @ 2024-05-08 21:26 UTC (permalink / raw)
  To: 9front

On Wed, 8 May 2024 at 21:17, hiro <23hiro@gmail.com> wrote:
> for a long time i had hoped something would evolve out of lapfs.
> i still think of it often, but i've mostly stopped using laptops now,
> so...

But even within a single box, Something Like Orifs could replicate
across heterogeneous disks (as a side effect of being able to
replicate across different boxes).  It might be faster than devfs
because a write's correctness doesn't depend on every one of the disks
being touched straight away (disconnected operation is the premise of
Orifs).

Online resizing of a sort would fall out for free, too, not by
resizing the partition but by making a new partition and doing as
above (except with a partition instead of a disk). This would be good
to have on platforms installed by just writing 9front.*.img to the
disk.

The Orifs paper (old enough to be referenced by the newest Modern
Operating Systems by Tanenbaum) doesn't think of either of those.

With multiple machines:

Replication subsumes backups, at least the first level of them
(copies, but in the same building).  Plan 9 has a founding assumption
that computers are capital goods, and there's someone whose job it is
to make backups.  Today every 9front user has to do this themselves.
I understand the point of an operating system to be "to make computers
easier to use", not "& but only up to a certain point and then stop".

The Something Like Orifs <> Something Like Orifs protocol wouldn't be
9P.  So it's faster over the internet.

(Orifs itself is C++ abandonware.)

^ permalink raw reply	[flat|nested] 48+ messages in thread

end of thread, other threads:[~2024-05-08 21:29 UTC | newest]

Thread overview: 48+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-05-06 11:32 [9front] Enabling a service Rocky Hotas
2024-05-06 11:58 ` Alex Musolino
2024-05-06 12:43 ` ori
2024-05-06 15:16   ` Scott Flowers
2024-05-06 15:37     ` sirjofri
2024-05-06 16:32     ` Stanley Lieber
2024-05-06 22:18   ` Rocky Hotas
2024-05-06 22:59     ` ori
2024-05-06 23:00     ` ori
2024-05-07  8:22       ` Rocky Hotas
2024-05-07  8:29         ` Frank D. Engel, Jr.
2024-05-07  9:03           ` Rocky Hotas
2024-05-07  9:14         ` sirjofri
2024-05-07 21:11           ` Shawn Rutledge
2024-05-07 21:35             ` Kurt H Maier
2024-05-07 21:45               ` sirjofri
2024-05-07 21:54             ` sl
2024-05-07 21:58               ` sl
2024-05-07 23:15                 ` Lennart Jablonka
2024-05-07 23:16                 ` Shawn Rutledge
2024-05-07 23:45                   ` Shawn Rutledge
2024-05-08  0:34                   ` Kurt H Maier
2024-05-08  0:35                   ` sl
2024-05-08  1:05                     ` Jacob Moody
2024-05-08  1:24                       ` sl
2024-05-08  7:22                         ` hiro
2024-05-08 14:04                           ` Stanley Lieber
2024-05-08 12:08                         ` Stuart Morrow
2024-05-08 16:37                           ` Brian Stuart
2024-05-08 20:16                             ` hiro
2024-05-08 21:26                               ` Stuart Morrow
2024-05-08 21:17                             ` Disconnection-tolerant / distributed filesystems (was Re: [9front] Enabling a service) Shawn Rutledge
2024-05-08 14:25                         ` [9front] Enabling a service Jacob Moody
2024-05-08  3:41                       ` Ori Bernstein
2024-05-08  4:09                         ` sl
2024-05-08  8:39                           ` Frank D. Engel, Jr.
2024-05-08 14:17                             ` Jacob Moody
2024-05-08 15:49                               ` Frank D. Engel, Jr.
2024-05-08 16:10                                 ` Jacob Moody
2024-05-08 16:33                                   ` Frank D. Engel, Jr.
2024-05-08 17:27                                     ` Jacob Moody
2024-05-08 18:00                                       ` Steve Simon
2024-05-08 19:46                                         ` hiro
2024-05-08 19:46                                   ` Roberto E. Vargas Caballero
2024-05-08 20:34                                     ` tlaronde
2024-05-08 14:57                   ` Lucas Francesco
2024-05-08 15:10                     ` an2qzavok
2024-05-08  2:11             ` Thaddeus Woskowiak
