From: ori@eigenstate.org
To: 9front@9front.org
Date: Thu, 15 Aug 2024 10:47:20 -0400
Message-ID: <3B8860B022BD029A9D7228F1AD2020D2@eigenstate.org>
In-Reply-To: <4078E999B9FAF877E6CC515C0243F5B8@eigenstate.org>
List-ID: <9front.9front.org>
Subject: Re: [9front] Questions/concerns on gefs

Quoth ori@eigenstate.org:
> Quoth Timothy Robert Bednarzyk :
> >
> > Again, I fully realize that it's not _really_ possible to answer these
> > hypothetical questions before the work to add RAID features to gefs is
> > done, and I would be perfectly content with "I don't know" as an answer.
>
> I don't know.
>
> > While I have not had any _issues_ using gefs so far, I have some
> > concerns regarding performance. When I was copying files to my pseudo-
> > RAID 10 fs, the write speeds seemed about reasonable (I was copying from
> > several USB HDDs plugged directly into the server and from the
> > aforementioned NVMe SSD so that I could replace its ext4 filesystem with
> > gefs, and while I didn't really profile the actual speeds, I estimate
> > them to have been around 100 MiB/s based on the total size and the time
> > it took to copy from each drive). However, when I was copying files from
> > the pseudo-RAID 10 fs to the NVMe SSD, I noticed that it took somewhere
> > between 3-4 times longer than the other way around (when the NVMe SSD
> > was still using ext4). Additionally, it took about the same amount of
> > time to delete those same files from the pseudo-RAID 10 fs later.
> > Lastly, I tried moving (rather, cp followed by rm) a ~1 GB file from one
> > directory on the pseudo-RAID 10 fs to another and that ended up taking a
> > while. Shouldn't cp to different locations on the same drive be near-
> > instant since gefs is COW? And shouldn't rm also be near-instant since
> > all it needs to do is update the filesystem itself?
>
> There's a lot of low hanging performance fruit -- especially for
> writing and deletion.
> Writing currently copies blocks a lot more
> than it theoretically needs to, and deletion is O(n) in file size,
> when it could theoretically be O(1) with some complicated code.
>
> There's nothing in 9p that allows copy to be smart about COW,
> so it isn't.
>
> https://orib.dev/gefs-future.html

also -- current priority is getting enough miles on it to be sure
there aren't any serious bugs, space leaks, or other big issues;
optimization comes once I'm happy enough with it to show as an option
in the installer