From: "Timothy Robert Bednarzyk"
To: <9front@9front.org>
Date: Thu, 15 Aug 2024 05:04:22 -0400
Subject: [9front] Questions/concerns on gefs
Reply-To: 9front@9front.org
List-ID: <9front.9front.org>
X-Glyph: ➈
X-Bullshit: lossless table reduce/map interface

Before I get into my questions, let me briefly describe my setup. I currently have a 9front (CPU+AUTH+File) server running on real hardware that I plan to use primarily for keeping my (mostly media) files in one place. I ended up performing the install with gefs, and I haven't had any problems with it in the three weeks since.
Anyway, I also have four 12 TB SATA HDDs in a pseudo-RAID 10 setup, having mostly followed the example in fs(3) except, notably, with gefs instead of hjfs. In terms of stability, I have had no problems copying around 12 TB to those drives or deleting about a TB from them. I also have a 4 TB NVMe SSD that I use for storing some high-bitrate videos; this drive also has a gefs filesystem. I use socat, tlsclient, and 9pfs (which I had to modify slightly) for viewing files on my Linux devices, and I primarily use drawterm for everything else related to the server (although I have also connected it straight to a monitor and keyboard). I doubt that anything else about my particular setup is relevant to my questions, but if any more information is desired, just ask.

Having read https://orib.dev/gefs.html, I know that ori intends to one day implement RAID features as part of gefs itself. I realize that the following questions can't truly be answered until that time comes, but I just want to gauge what my future may hold. When RAID-like features are eventually added to gefs, is there any intent to provide some way to migrate from an fs(3) setup to a gefs-native setup? If not, might I be able to set up two of my four drives with gefs-native RAID 1, copy files over from my existing gefs filesystem (after altering my fs(3) setup to interleave only between the other two drives), and then afterwards modify the new gefs filesystem into a RAID 10 setup across all four drives (potentially after manually mirroring the two drives on the new filesystem to the two on the old)? Again, I fully realize that it's not _really_ possible to answer these hypothetical questions before the work to add RAID features to gefs is done, and I would be perfectly content with "I don't know" as an answer.

While I have not had any _issues_ using gefs so far, I do have some concerns about performance.
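For reference, the fs(3) arrangement I described is along these lines. The disk names here are placeholders rather than my actual devices, and I'm writing this from memory, so see fs(3) and gefs(4) for the exact syntax:

	# bind the fs(3) device
	bind -b '#k' /dev

	# mirror each pair of 12 TB drives (the RAID 1 halves;
	# writes go to both devices, reads come from the first)
	echo mirror m0 /dev/sdE0/fs /dev/sdE1/fs > /dev/fs/ctl
	echo mirror m1 /dev/sdE2/fs /dev/sdE3/fs > /dev/fs/ctl

	# interleave across the two mirrors (the striped half of RAID 10)
	echo inter fs0 /dev/fs/m0 /dev/fs/m1 > /dev/fs/ctl

	# gefs then runs on the interleaved device instead of hjfs
	gefs -f /dev/fs/fs0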
When I was copying files to my pseudo-RAID 10 fs, the write speeds seemed reasonable. (I was copying from several USB HDDs plugged directly into the server and from the aforementioned NVMe SSD, so that I could replace its ext4 filesystem with gefs; while I didn't really profile the actual speeds, I estimate they were around 100 MiB/s based on the total size and the time it took to copy from each drive.) However, copying files from the pseudo-RAID 10 fs back to the NVMe SSD took somewhere between 3 and 4 times longer than the other direction had (when the NVMe SSD was still using ext4). Additionally, it took about the same amount of time to delete those same files from the pseudo-RAID 10 fs later. Lastly, I tried moving (rather, cp followed by rm) a ~1 GB file from one directory on the pseudo-RAID 10 fs to another, and that also ended up taking a while. Shouldn't cp between two locations on the same drive be near-instant, since gefs is copy-on-write? And shouldn't rm also be near-instant, since all it needs to do is update the filesystem metadata?

-- 
trb