Date: Mon, 06 May 2024 11:40:04 -0400
From: Stanley Lieber
To: 9front@9front.org
Subject: Re: [9front] AUTHENTICATE failed can't get challenge (imap4d)

On May 6, 2024 10:50:17 AM EDT, qwx@sciops.net wrote:
>On Mon May 6 16:34:44 +0200 2024, sl@stanleylieber.com wrote:
>> > It's a bad idea to run cwfs without a worm, especially in a vm.
>>
>> why?
>>
>> sl
>
>In my experience cwfs is very sensitive to unclean shutdowns, much
>more than hjfs. VPS providers may sometimes reboot instances, or VMs
>may go down, or 9front may panic and freeze, etc. I also remember
>many instances of corruption (iirc with hjfs) under qemu but this
>might have been fixed. Either way I think it's too risky to not have
>any recovery method; if the check commands fail it's over. Of course
>please correct me if I'm wrong, I'm not very familiar with the code.
>
>qwx

for over ten years, i have been running 9front-associated mailing lists, websites, etc., in virtual machines hosted at a series of different places. in that time, i have tried all kinds of different setups, including:

- hjfs, dump only triggered manually: 1k blocks, all on one partition, corruption can be surgically addressed if you're a disk doctor, gets very slow when a large partition passes half full

- cwfs, dump mandatory and automatic: 4k blocks, multiple partitions, very easy to fill cache or worm, corruption relatively easy to address

- cwfs, no dump: 4k blocks, all on one partition, corruption harder to address

in the context of managing the sites, corruption has never been a significant source of pain, even with lots of unclean shutdowns. my biggest problem (by far) with disks has been babysitting the cache/worm. moving lots of files onto the system means shoveling bits carefully into the cache until it's not quite full and then doing a dump to clean it out before proceeding. when corruption happens, i move the corrupted file out of the way and move on. when the worm fills up, you're fucked, full stop.
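(for the curious, the babysitting is just the cwfs console dance. a rough sketch from memory, assuming a stock cwfs64x that posts its console at /srv/cwfs.cmd; see fs(8) for the actual command list:

    term% con -C /srv/cwfs.cmd
    statw
    dump

statw prints the worm statistics so you can get a sense of how full things are, and dump archives the cache out to the worm and frees it up. then go back to copying, and repeat until the transfer is done.)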
all our disk file servers are vulnerable to unclean shutdowns. hjfs is good for constrained disk space because 1k blocks and everything being on one partition means no babysitting the cache/worm. cwfs is good for actually being a file system because it's fast. but, depending on your setup, dealing with the cache is a huge pain. on a system with a huge number of files, many of which are changing all the time, and all of which need to be available and readily accessible, the way cwfs' cache works and is proportioned is a nightmare.

in retrospect it's obvious the most advantageous setup would be to retain the worm for system files, and create a separate, non-worm partition for the huge number of files, many of which are changing all the time, and all of which need to be available and readily accessible. i will now travel back in time and give myself this advice.

sl
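p.s. in cwfs config terms, the setup i'm describing would look roughly like a main cache/worm filesystem for the system plus a plain, non-worm "other" filesystem for the churning site files. a sketch with made-up device names (see fs(8) and the fqa for the real config syntax):

    service cwfs
    filsys main c(/dev/sdE0/fscache)(/dev/sdE0/fsworm)
    filsys dump o
    filsys other (/dev/sdE0/other)

clients would reach the non-worm part by its attach name, something like mount /srv/cwfs /n/other other. the point is that the worm only ever grows with the system, and the firehose of site files can't fill it up.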