9front - general discussion about 9front
* [9front] AUTHENTICATE failed can't get challenge (imap4d)
@ 2024-05-03 11:12 sirjofri
  2024-05-03 11:52 ` Alex Musolino
  2024-05-03 21:09 ` Roberto E. Vargas Caballero
  0 siblings, 2 replies; 19+ messages in thread
From: sirjofri @ 2024-05-03 11:12 UTC
  To: 9front

Hey all,

For a few days now, my mobile mail client (FairEmail) has been receiving the message "AUTHENTICATE failed can't get challenge", together with log messages about failed SSL handshakes and connection resets by the server.

The server is a 9front server. I've never had any issues like that, and I didn't change a thing recently. The only thing I did was run acmed to update the certificate for https (imap4d uses another cert iirc, which I manually checked in my mail client: it isn't signed by a CA).

Surprisingly, my mail client still works as expected. It's just the notification popping up every now and then, and it can't connect at that point, but a few minutes later it works fine. I can also receive and send mail (like this one).

Does someone know what could cause this?

I haven't investigated further yet, as I didn't have the time, but if someone has a hint I'd be happy.

Have a nice day

sirjofri


* Re: [9front] AUTHENTICATE failed can't get challenge (imap4d)
  2024-05-03 11:12 [9front] AUTHENTICATE failed can't get challenge (imap4d) sirjofri
@ 2024-05-03 11:52 ` Alex Musolino
  2024-05-03 18:23   ` sirjofri
  2024-05-03 21:09 ` Roberto E. Vargas Caballero
  1 sibling, 1 reply; 19+ messages in thread
From: Alex Musolino @ 2024-05-03 11:52 UTC
  To: 9front

> Does someone know what could cause this?

Is your server hosted with vultr?  Some problems have started recently
with this service provider, and some kernel fixes have been made that,
so far, seem to have at least improved things.

--
Cheers,
Alex Musolino



* Re: [9front] AUTHENTICATE failed can't get challenge (imap4d)
  2024-05-03 11:52 ` Alex Musolino
@ 2024-05-03 18:23   ` sirjofri
  2024-05-03 19:51     ` Jacob Moody
  0 siblings, 1 reply; 19+ messages in thread
From: sirjofri @ 2024-05-03 18:23 UTC
  To: 9front

03.05.2024 14:12:18 Alex Musolino <alex@musolino.id.au>:

>> Does someone know what could cause this?
>
> Is your server hosted with vultr?

Nope, netcup. It's the cheapest plan there, but I've never had issues so far.

sirjofri


* Re: [9front] AUTHENTICATE failed can't get challenge (imap4d)
  2024-05-03 18:23   ` sirjofri
@ 2024-05-03 19:51     ` Jacob Moody
  0 siblings, 0 replies; 19+ messages in thread
From: Jacob Moody @ 2024-05-03 19:51 UTC
  To: 9front

On 5/3/24 13:23, sirjofri wrote:
> 03.05.2024 14:12:18 Alex Musolino <alex@musolino.id.au>:
> 
>>> Does someone know what could cause this?
>>
>> Is your server hosted with vultr?
> 
> Nope, netcup. It's the cheapest plan there, but I've never had issues so far.
> 
> sirjofri

We've seen it happen primarily to folks on vultr, but I don't think there is currently
any reason to believe vultr specifically is a target. Getting a list of every machine
with ports open on the ipv4 net these days is really not much of a task, so I wouldn't
be surprised if they were just spamming any address they know of listening on web ports.

Either way, I'd update your kernel and userspace and see if that fixes the problem.


Thanks,
moody



* Re: [9front] AUTHENTICATE failed can't get challenge (imap4d)
  2024-05-03 11:12 [9front] AUTHENTICATE failed can't get challenge (imap4d) sirjofri
  2024-05-03 11:52 ` Alex Musolino
@ 2024-05-03 21:09 ` Roberto E. Vargas Caballero
  2024-05-06 11:12   ` sirjofri
  1 sibling, 1 reply; 19+ messages in thread
From: Roberto E. Vargas Caballero @ 2024-05-03 21:09 UTC
  To: 9front

Hi,

On Fri, May 03, 2024 at 01:12:48PM +0200, sirjofri wrote:
> Hey all,
> 
> For a few days now, my mobile mail client (FairEmail) has been receiving the message "AUTHENTICATE failed can't get challenge", together with log messages about failed SSL handshakes and connection resets by the server.

I had a similar problem fetching hotmail with imap.  I haven't tried
since the last changes; I'll try again in the coming days.

Regards,


* Re: [9front] AUTHENTICATE failed can't get challenge (imap4d)
  2024-05-03 21:09 ` Roberto E. Vargas Caballero
@ 2024-05-06 11:12   ` sirjofri
  2024-05-06 12:59     ` qwx
                       ` (2 more replies)
  0 siblings, 3 replies; 19+ messages in thread
From: sirjofri @ 2024-05-06 11:12 UTC
  To: 9front

Hello all,

a simple reboot fixed it for me for some reason. In the end it was so borked that I couldn't even connect in any way, so I used the VPS panel to reboot it.

According to the panel, that machine was up for about 860 days, though I have no idea about the accuracy of those numbers, and I couldn't confirm using uptime.

Btw, the VPS panel reports a disk usage of 19GB/20GB, which wouldn't be good. I really hope that's just due to formatting. What's the best way to ask cwfs for accurate numbers? (I run a cwfs without worm).

sirjofri


* Re: [9front] AUTHENTICATE failed can't get challenge (imap4d)
  2024-05-06 11:12   ` sirjofri
@ 2024-05-06 12:59     ` qwx
  2024-05-06 14:34       ` Stanley Lieber
  2024-05-06 13:56     ` [9front] AUTHENTICATE failed can't get challenge (imap4d) Jacob Moody
  2024-05-06 14:32     ` Stanley Lieber
  2 siblings, 1 reply; 19+ messages in thread
From: qwx @ 2024-05-06 12:59 UTC
  To: 9front

On Mon May  6 13:12:51 +0200 2024, sirjofri+ml-9front@sirjofri.de wrote:
> Hello all,
> 
> a simple reboot fixed it for me for some reason. In the end it was so borked that I couldn't even connect in any way, so I used the VPS panel to reboot it.
> 
> According to the panel, that machine was up for about 860 days, though I have no idea about the accuracy of those numbers, and I couldn't confirm using uptime.
> 
> Btw, the VPS panel reports a disk usage of 19GB/20GB, which wouldn't be good. I really hope that's just due to formatting. What's the best way to ask cwfs for accurate numbers? (I run a cwfs without worm).
> 
> sirjofri

It's a bad idea to run cwfs without a worm, especially in a vm.  The
system being "borked" isn't encouraging.  Who knows what state it's
in, but you can try the check commands on the cwfs console.  I don't
think you can get numbers on how full cache or other partitions are
without a worm, but you could probably modify cwfs to find out.
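
For the record, a minimal sketch of poking at it, assuming the stock
/srv/cwfs.cmd srv file and the check options documented in fs(8):

	con -C /srv/cwfs.cmd	# attach to the file server console
	check tag		# at the fs prompt: verify block tags
	check free		# verify the free list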

What VPS panel is this that would give uptime of the system or cwfs's
disk usage?  I imagine that the 20gb are the raw disk and that the
plan9 partition is using the entire disk?  As for uptime, is this the
current number after you rebooted?  If so it's probably how long the
vm has been active.  860 days would make the system (or kernel anyway)
out of date since at least late December 2021; if you updated userland
in the meantime, perhaps the kernel is out of sync.  Have you updated,
installed a new kernel, and rebooted the machine at any point during
the past several years?

A lot of unknowns here.  Perhaps rule out a borked cwfs first, then
update.  Make sure you rebuild your compilers first before anything
else, or pull the binaries from the latest iso.
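
A rough sketch of that sequence on amd64, per the FQA's update procedure
(assuming the stock source layout; the kernel directory differs on other
architectures):

	sysupdate				# pull the latest sources
	cd /sys/src && mk install		# rebuild userspace
						# (do /sys/src/cmd/cc and friends first if the compilers changed)
	cd /sys/src/9/pc64 && mk install	# rebuild and install the kernel
	fshalt -r				# reboot into the new kernel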

Hope this helps,
qwx


* Re: [9front] AUTHENTICATE failed can't get challenge (imap4d)
  2024-05-06 11:12   ` sirjofri
  2024-05-06 12:59     ` qwx
@ 2024-05-06 13:56     ` Jacob Moody
  2024-05-06 14:32     ` Stanley Lieber
  2 siblings, 0 replies; 19+ messages in thread
From: Jacob Moody @ 2024-05-06 13:56 UTC
  To: 9front

On 5/6/24 06:12, sirjofri wrote:
> Hello all,
> 
> a simple reboot fixed it for me for some reason. In the end it was so borked that I couldn't even connect in any way, so I used the VPS panel to reboot it.

Yes, it will with this issue, but if it is what I think it is, then it will just come back.
Why not just update the machine?

> 
> According to the panel, that machine was up for about 860 days, though I have no idea about the accuracy of those numbers, and I couldn't confirm using uptime.
> 
> Btw, the VPS panel reports a disk usage of 19GB/20GB, which wouldn't be good. I really hope that's just due to formatting. What's the best way to ask cwfs for accurate numbers? (I run a cwfs without worm).

Without a worm you cannot get this information.
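
With a worm, the fs console can report it; a minimal sketch, assuming the
stock /srv/cwfs.cmd and the statw command described in fs(8):

	con -C /srv/cwfs.cmd	# attach to the file server console
	statw			# print the worm usage summary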






* Re: [9front] AUTHENTICATE failed can't get challenge (imap4d)
  2024-05-06 11:12   ` sirjofri
  2024-05-06 12:59     ` qwx
  2024-05-06 13:56     ` [9front] AUTHENTICATE failed can't get challenge (imap4d) Jacob Moody
@ 2024-05-06 14:32     ` Stanley Lieber
  2 siblings, 0 replies; 19+ messages in thread
From: Stanley Lieber @ 2024-05-06 14:32 UTC
  To: 9front

On May 6, 2024 7:12:09 AM EDT, sirjofri <sirjofri+ml-9front@sirjofri.de> wrote:
>[...]
>Btw, the VPS panel reports a disk usage of 19GB/20GB, which wouldn't be good. I really hope that's just due to formatting. What's the best way to ask cwfs for accurate numbers? (I run a cwfs without worm).

du -h /root | tail
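
(du(1) prints a running total for every directory it visits; the final
line, which tail keeps, is the grand total for /root.)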

sl


* Re: [9front] AUTHENTICATE failed can't get challenge (imap4d)
  2024-05-06 12:59     ` qwx
@ 2024-05-06 14:34       ` Stanley Lieber
  2024-05-06 14:50         ` qwx
  0 siblings, 1 reply; 19+ messages in thread
From: Stanley Lieber @ 2024-05-06 14:34 UTC
  To: 9front

> It's a bad idea to run cwfs without a worm, especially in a vm.

why?

sl


* Re: [9front] AUTHENTICATE failed can't get challenge (imap4d)
  2024-05-06 14:34       ` Stanley Lieber
@ 2024-05-06 14:50         ` qwx
  2024-05-06 15:30           ` sirjofri
  2024-05-06 15:40           ` Stanley Lieber
  0 siblings, 2 replies; 19+ messages in thread
From: qwx @ 2024-05-06 14:50 UTC
  To: 9front

On Mon May  6 16:34:44 +0200 2024, sl@stanleylieber.com wrote:
> > It's a bad idea to run cwfs without a worm, especially in a vm.
> 
> why?
> 
> sl

In my experience cwfs is very sensitive to unclean shutdowns, much
more than hjfs.  VPS providers may sometimes reboot instances, or VMs
may go down, or 9front may panic and freeze, etc.  I also remember
many instances of corruption (iirc with hjfs) under qemu but this
might have been fixed.  Either way I think it's too risky to not have
any recovery method; if the check commands fail it's over.  Of course
please correct me if I'm wrong, I'm not very familiar with the code.

qwx


* Re: [9front] AUTHENTICATE failed can't get challenge (imap4d)
  2024-05-06 14:50         ` qwx
@ 2024-05-06 15:30           ` sirjofri
  2024-05-06 15:40           ` Stanley Lieber
  1 sibling, 0 replies; 19+ messages in thread
From: sirjofri @ 2024-05-06 15:30 UTC
  To: 9front

Reports:

I updated the machine to current sources.

I'm pretty sure I also updated it after 2021. I don't update it regularly, but every now and then. My assumption is that the VPS panel tracks the uptime of the instance, not counting reboots (fshalt -r), only full shutdowns.

The filesystem is fine, as far as I can tell. I remember having more issues with a worm than without on that VPS provider, especially because of dumps and the small disk. I'm still waiting for gefs, or until I have enough time to set up some backup solution (venti or such).

I also assume that the "disk usage" in the VPS panel is only valid when using "common" formats like fat or extX. That provider allows you to change the root password on Linux instances in that panel, so it would make sense that it can "read" the disk to some extent.

The provider is netcup.

As a side note, to better monitor the server, I pushed a stripped-down version of stats(8) that just outputs data to stdout (no UI at all!). I'm pretty sure that code could easily be integrated into the original program, too. And I fixed my finger server to not allow any .. filesystem traversals (yeah, that was an issue). Code for both is on shithub.

sirjofri


* Re: [9front] AUTHENTICATE failed can't get challenge (imap4d)
  2024-05-06 14:50         ` qwx
  2024-05-06 15:30           ` sirjofri
@ 2024-05-06 15:40           ` Stanley Lieber
  2024-05-06 16:03             ` qwx
                               ` (2 more replies)
  1 sibling, 3 replies; 19+ messages in thread
From: Stanley Lieber @ 2024-05-06 15:40 UTC
  To: 9front

On May 6, 2024 10:50:17 AM EDT, qwx@sciops.net wrote:
>In my experience cwfs is very sensitive to unclean shutdowns, much
>more than hjfs.  VPS providers may sometimes reboot instances, or VMs
>may go down, or 9front may panic and freeze, etc.  I also remember
>many instances of corruption (iirc with hjfs) under qemu but this
>might have been fixed.  Either way I think it's too risky to not have
>any recovery method; if the check commands fail it's over.

for over ten years, i have been running 9front associated mailing lists, websites, etc., in virtual machines hosted at a series of different places. in that time, i have tried all kinds of different setups, including:

- hjfs, dump only triggered manually: 1k blocks, all on one partition, corruption can be surgically addressed if you're a disk doctor, gets very slow when a large partition passes half full

- cwfs, dump mandatory and automatic: 4k blocks, multiple partitions, very easy to fill cache or worm, corruption relatively easy to address

- cwfs, no dump: 4k blocks, all on one partition, corruption harder to address

in the context of managing the sites, corruption has never been a significant source of pain, even with lots of unclean shutdowns. my biggest problem (by far) with disks has been babysitting the cache/worm. moving lots of files onto the system means shoveling bits carefully into the cache until it's not quite full and then doing a dump to clean it out before proceeding. when corruption happens, i move the corrupted file out of the way and move on. when the worm fills up, you're fucked, full stop.
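
for reference, the babysitting looks roughly like this at the fs console (a sketch, assuming the usual /srv/cwfs.cmd and the statw/dump commands from fs(8)):

	con -C /srv/cwfs.cmd	# attach to the console
	statw			# watch how full the cache is getting
	dump			# force a dump, freeing cache blocks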

all our disk file servers are vulnerable to unclean shutdowns. hjfs is good for constrained disk space because 1k blocks and everything being on one partition means no babysitting the cache/worm. cwfs is good for actually being a file system because it's fast. but, depending on your setup, dealing with the cache is a huge pain. on a system with a huge number of files, many of which are changing all the time, and all of which need to be available and readily accessible, the way cwfs' cache works and is proportioned is a nightmare.

in retrospect it's obvious the most advantageous setup would be to retain the worm for system files, and create a separate, non-worm partition for the huge number of files, many of which are changing all the time, and all of which need to be available and readily accessible.

i will now travel back in time and give myself this advice. 

sl



* Re: [9front] AUTHENTICATE failed can't get challenge (imap4d)
  2024-05-06 15:40           ` Stanley Lieber
@ 2024-05-06 16:03             ` qwx
  2024-05-06 16:24               ` Stanley Lieber
  2024-05-06 16:03             ` Stanley Lieber
  2024-05-06 16:08             ` [9front] cwfs no-worm + venti (was: AUTHENTICATE failed can't get challenge (imap4d)) sirjofri
  2 siblings, 1 reply; 19+ messages in thread
From: qwx @ 2024-05-06 16:03 UTC
  To: 9front

On Mon May  6 17:42:29 +0200 2024, sl@stanleylieber.com wrote:
> for over ten years, i have been running 9front associated mailing lists, websites, etc., in virtual machines hosted at a series of different places. in that time, i have tried all kinds of different setups [...]
> 
> in retrospect it's obvious the most advantageous setup would be to retain the worm for system files, and create a separate, non-worm partition for the huge number of files, many of which are changing all the time, and all of which need to be available and readily accessible.

Thanks for taking the time to describe your experience in detail.
There's no ideal solution (yet!).  I've personally had more trouble
with corruption and gave up on the no-dump configuration because of
it.  Accidentally filling up the cache and the like are not easy to
avoid, and worm management can also be really frustrating, as you said.

imho your mail should be added to the fqa, even as is, since there's
little public information on how to choose a filesystem and especially
what the trade-offs or maintenance burdens are in the longer term,
where exactly performance suffers, etc.

Cheers,
qwx


* Re: [9front] AUTHENTICATE failed can't get challenge (imap4d)
  2024-05-06 15:40           ` Stanley Lieber
  2024-05-06 16:03             ` qwx
@ 2024-05-06 16:03             ` Stanley Lieber
  2024-05-06 16:08             ` [9front] cwfs no-worm + venti (was: AUTHENTICATE failed can't get challenge (imap4d)) sirjofri
  2 siblings, 0 replies; 19+ messages in thread
From: Stanley Lieber @ 2024-05-06 16:03 UTC
  To: 9front

On May 6, 2024 11:40:04 AM EDT, Stanley Lieber <sl@stanleylieber.com> wrote:
>[...]
>- hjfs, dump only triggered manually: 1k blocks, all on one partition, corruption can be surgically addressed if you're a disk doctor, gets very slow when a large partition passes half full
>
>- cwfs, dump mandatory and automatic: 4k blocks, multiple partitions, very easy to fill cache or worm, corruption relatively easy to address
>
>- cwfs, no dump: 4k blocks, all on one partition, corruption harder to address
>[...]

of course i meant to say: hjfs 4k blocks, cwfs 16k blocks.

sl


* Re: [9front] cwfs no-worm + venti (was: AUTHENTICATE failed can't get challenge (imap4d))
  2024-05-06 15:40           ` Stanley Lieber
  2024-05-06 16:03             ` qwx
  2024-05-06 16:03             ` Stanley Lieber
@ 2024-05-06 16:08             ` sirjofri
  2024-05-06 16:21               ` [9front] " Stanley Lieber
  2 siblings, 1 reply; 19+ messages in thread
From: sirjofri @ 2024-05-06 16:08 UTC
  To: 9front

Hello,

Did anyone ever try to run a cwfs without a worm, plus a venti?

Here's my thought:

- Publicly accessible VPS with just cwfs, no worm
- Local, on-site venti server for backups/dumps: not publicly accessible, not always available

Those dumps could happen manually. The advantage is that a small local server is cheap and doesn't have to run all the time, making it one of the cheapest worms possible. You'd get the benefit of a worm, plus the convenience of a non-worm partition on the server.

It could also be extended by running the cwfs+worm combination on the local server, and the publicly accessible VPS would only need a cfs.

That's only theory: I've never actually used venti in any way, and my fossil days are a long time ago. Let me know what you think.
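
To make that concrete, a rough sketch with vac(1)/vacfs(4), which would be plain venti archival rather than a real cwfs dump, assuming a venti server reachable at tcp!backup!venti (a hypothetical address):

	venti=tcp!backup!venti vac -f latest.vac /root	# push a snapshot of the tree, score goes to latest.vac
	venti=tcp!backup!venti vacfs -m /n/vac latest.vac	# later: browse the snapshot read-only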

sirjofri


* [9front] Re: [9front] cwfs no-worm + venti (was: AUTHENTICATE failed can't get challenge (imap4d))
  2024-05-06 16:08             ` [9front] cwfs no-worm + venti (was: AUTHENTICATE failed can't get challenge (imap4d)) sirjofri
@ 2024-05-06 16:21               ` Stanley Lieber
  2024-05-06 16:37                 ` [9front] Re: [9front] cwfs no-worm + venti Jacob Moody
  0 siblings, 1 reply; 19+ messages in thread
From: Stanley Lieber @ 2024-05-06 16:21 UTC
  To: 9front

On May 6, 2024 12:08:03 PM EDT, sirjofri <sirjofri+ml-9front@sirjofri.de> wrote:
>Did anyone ever try to run a cwfs without a worm, plus a venti?
>[...]

i'd be willing to try it out.

one problem with the advice i promised to give my younger self: the default cwfs already creates an "other" partition for storing un-dumped files. but with constrained disk space (vm prices increase steeply for huge ssd storage), we never have enough room to make the worm large enough AND make other large enough to do our business. it's easy to fill a "small" worm even with just normal system churn. and other is useless if it's small.

right now, 9front.org and cat-v.org live on separate servers, each one a 60gb cwfs with no dump. 9front.org has plenty of room, but all the cat-v.org stuff has a lot more files and hovers at over 50gb usage. if i'd set this up with a worm i'd already be fucked, but the way things are now i could conceivably just move some sites around and delete files off the existing installation. fortunately, the cat-v.org stuff is relatively static, so it's not a real emergency. 

sl


* Re: [9front] AUTHENTICATE failed can't get challenge (imap4d)
  2024-05-06 16:03             ` qwx
@ 2024-05-06 16:24               ` Stanley Lieber
  0 siblings, 0 replies; 19+ messages in thread
From: Stanley Lieber @ 2024-05-06 16:24 UTC
  To: 9front

On May 6, 2024 12:03:36 PM EDT, qwx@sciops.net wrote:
>[...]
>imho your mail should be added to the fqa, even as is, since there's
>little public information on how to choose a filesystem and especially
>what the trade-offs or maintenance burdens are in the longer term,
>where exactly performance suffers, etc.

fqa already pretty much says this, but yeah, no reason not to include more testimonials.

sl


* Re: [9front] Re: [9front] cwfs no-worm + venti
  2024-05-06 16:21               ` [9front] " Stanley Lieber
@ 2024-05-06 16:37                 ` Jacob Moody
  0 siblings, 0 replies; 19+ messages in thread
From: Jacob Moody @ 2024-05-06 16:37 UTC
  To: 9front

On 5/6/24 11:21, Stanley Lieber wrote:
> On May 6, 2024 12:08:03 PM EDT, sirjofri <sirjofri+ml-9front@sirjofri.de> wrote:
>> Did anyone ever try to run a cwfs without a worm, plus a venti?
>> [...]
> 
> i'd be willing to try it out.
> [...]


Another dimension I want to add here is bls's devsnap. I talked about this on jitsi but never more publicly.
Last year's IWP9 had bls present his devsnap kernel device. I have had thoughts of trying it out underneath
a no-dump cwfs. That would give manual-only snapshots at more of the block layer, which I think would make it
a bit easier to migrate. The issue is that cwfs is not aware of this, so to do this "properly" you would need to
make some modifications to cwfs, at which point we're just reinventing the worm.

The benefit here would just be the manual snapshot timing, combined with there being only "one pool", so to say.
Gefs also addresses these issues, depending on how brave someone is feeling. I've got the latest code from bls
and it does compile on 9front without any modifications, but it likely needs some effort put into integrating
it more properly.


Thanks,
moody

