From: Steve Simon
Date: Thu, 15 Aug 2024 12:21:01 +0300
To: 9front@9front.org
Subject: Re: [9front] Questions/concerns on gefs

i apologise if this is out of date, however in my (old) plan9 system cp is
single threaded, whilst fcp is multithreaded. if you want to do copy
performance tests fcp is a better tool to try (there is a rough timing
sketch at the end of this mail).

this does not explain why rm appears slow, though perhaps having rm block
until the filesystem is stable is quite a good idea - rm being rare and
filesystem integrity being important.

-Steve

> On 15 Aug 2024, at 12:04 pm, Timothy Robert Bednarzyk wrote:
> 
> Before I get into the questions I have, let me briefly describe my
> setup. I currently have a 9front (CPU+AUTH+File) server running on real
> hardware that I plan to primarily use for keeping my (mostly media)
> files in one place. I ended up performing the install with gefs, and I
> haven't had any problems with that in the three weeks since. Anyways, I
> also have four 12 TB SATA HDDs that are being used in a pseudo-RAID 10
> setup, having mostly followed the example in fs(3) except, notably, with
> gefs instead of hjfs. In terms of stability, I have not had any problems
> copying around 12 TB to those drives or with deleting about a TB from
> them. I also have a 4 TB NVMe SSD that I use for storing some videos
> with high bitrates; this drive also has a gefs filesystem. I use socat,
> tlsclient, and 9pfs (which I had to slightly modify) for viewing files
> on my Linux devices and primarily use drawterm for everything else
> related to the server (although I have just connected it straight to a
> monitor and keyboard). I doubt that anything else regarding my
> particular setup is relevant to my questions, but if any more
> information is desired, just ask.
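
(as an aside, a stripe of mirrors along the lines of the fs(3) example
looks roughly like the following - the sd names and fs partitions below
are just placeholders for whatever the drives are really called:

	bind -a '#k' /dev
	# mirror the four drives in pairs, then interleave the two mirrors
	echo mirror m0 /dev/sdE0/fs /dev/sdE1/fs >/dev/fs/ctl
	echo mirror m1 /dev/sdE2/fs /dev/sdE3/fs >/dev/fs/ctl
	echo inter fs /dev/fs/m0 /dev/fs/m1 >/dev/fs/ctl

the file server - gefs here, hjfs in the man page example - is then run
on the combined /dev/fs/fs device.)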
> 
> Having read https://orib.dev/gefs.html, I know that ori's intent is to
> one day implement RAID features as a part of gefs itself. I realize that
> the following questions can't truly be answered until the time comes,
> but I just want to gauge what my future may hold. When RAID-like
> features are eventually added to gefs, is there any intent to provide
> some way to migrate from an fs(3) setup to a gefs-native setup? If not,
> is it possible that I might be able to set up two of my four drives with
> gefs-native RAID 1, copy files from my existing gefs filesystem (after
> altering my fs(3) setup to only interleave between my other two drives),
> and then afterwards modify my new gefs filesystem to have a RAID 10
> setup across all four drives (potentially after manually mirroring the
> two drives on the new filesystem to the two drives on the old)? Again, I
> fully realize that it's not _really_ possible to answer these
> hypothetical questions before the work to add RAID features to gefs is
> done, and I would be perfectly content with "I don't know" as an answer.
> 
> While I have not had any _issues_ using gefs so far, I have some
> concerns regarding performance. When I was copying files to my pseudo-
> RAID 10 fs, the write speeds seemed reasonable (I was copying from
> several USB HDDs plugged directly into the server and from the
> aforementioned NVMe SSD so that I could replace its ext4 filesystem with
> gefs, and while I didn't really profile the actual speeds, I estimate
> them to have been around 100 MiB/s based on the total size and the time
> it took to copy from each drive). However, when I was copying files from
> the pseudo-RAID 10 fs to the NVMe SSD, I noticed that it took somewhere
> between 3 and 4 times longer than the other way around (when the NVMe
> SSD was still using ext4). Additionally, it took about the same amount
> of time to delete those same files from the pseudo-RAID 10 fs later.
> Lastly, I tried moving (rather, cp followed by rm) a ~1 GB file from one
> directory on the pseudo-RAID 10 fs to another, and that ended up taking
> a while. Shouldn't cp to different locations on the same drive be near-
> instant since gefs is COW? And shouldn't rm also be near-instant since
> all it needs to do is update the filesystem itself?
> 
> --
> trb
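
ps: a rough way to put numbers on the copy speeds discussed above is to
wrap the copy in time(1); the paths here are only placeholders:

	time fcp /n/raid/bigfile /n/ssd/bigfile

time prints user, system and real seconds once the command finishes;
dividing the file size by the real time gives an approximate MiB/s
figure. since fcp keeps several reads and writes in flight, it should
give a fairer picture of the filesystem than single threaded cp.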