From: ori@eigenstate.org
To: 9front@9front.org
Date: Thu, 15 Aug 2024 10:22:43 -0400
Message-ID: <4078E999B9FAF877E6CC515C0243F5B8@eigenstate.org>
Subject: Re: [9front] Questions/concerns on gefs

Quoth Timothy Robert Bednarzyk:
>
> Again, I fully realize that it's not _really_ possible to answer these
> hypothetical questions before the work to add RAID features to gefs is
> done, and I would be perfectly content with "I don't know" as an answer.

I don't know.

> While I have not had any _issues_ using gefs so far, I have some
> concerns regarding performance. When I was copying files to my
> pseudo-RAID 10 fs, the write speeds seemed about reasonable (I was
> copying from several USB HDDs plugged directly into the server and
> from the aforementioned NVMe SSD so that I could replace its ext4
> filesystem with gefs, and while I didn't really profile the actual
> speeds, I estimate them to have been around 100 MiB/s based on the
> total size and the time it took to copy from each drive). However,
> when I was copying files from the pseudo-RAID 10 fs to the NVMe SSD,
> I noticed that it took somewhere between 3-4 times longer than the
> other way around (when the NVMe SSD was still using ext4).
> Additionally, it took about the same amount of time to delete those
> same files from the pseudo-RAID 10 fs later. Lastly, I tried moving
> (rather, cp followed by rm) a ~1 GB file from one directory on the
> pseudo-RAID 10 fs to another and that ended up taking a while.
> Shouldn't cp to different locations on the same drive be
> near-instant since gefs is COW? And shouldn't rm also be
> near-instant since all it needs to do is update the filesystem
> itself?

There's a lot of low-hanging performance fruit -- especially for
writing and deletion. Writing currently copies blocks a lot more than
it theoretically needs to, and deletion is O(n) in file size, when it
could theoretically be O(1) with some complicated code.
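To make the deletion point concrete, here is a toy model in C -- none
of these names come from gefs, it just shows where the O(n) vs O(1)
difference lives. Today's cost comes from visiting every block at rm
time to return it to the allocator; the O(1) variant would park the
dead blocks on a reclaim list in one step and let a background pass do
the walk later:

```c
#include <assert.h>
#include <stdlib.h>

/* Toy model: a file's data blocks as a linked chain.
 * Illustrative only; not gefs internals. */
typedef struct Blk Blk;
struct Blk {
	Blk *next;
};

typedef struct Dead Dead;
struct Dead {
	Blk *chain;	/* a dead file's blocks, awaiting reclaim */
	Dead *next;
};

typedef struct {
	Dead *head;
} Reaplist;

/* Eager rm: walk and free every block -- O(n) in file size. */
int
eagerfree(Blk *b)
{
	Blk *next;
	int n;

	n = 0;
	while(b != NULL){
		next = b->next;
		free(b);
		n++;
		b = next;
	}
	return n;
}

/* Deferred rm: a single list push, independent of file size --
 * O(1) on the caller's path.  The real work happens later. */
void
lazyfree(Reaplist *r, Blk *chain)
{
	Dead *d;

	d = malloc(sizeof *d);
	d->chain = chain;
	d->next = r->head;
	r->head = d;
}

/* Background reaper: does the O(n) walk when nobody is waiting. */
int
reap(Reaplist *r)
{
	Dead *d;
	int n;

	n = 0;
	while((d = r->head) != NULL){
		r->head = d->next;
		n += eagerfree(d->chain);
		free(d);
	}
	return n;
}

/* Helper: build an n-block chain. */
Blk*
mkchain(int n)
{
	Blk *b, *head;
	int i;

	head = NULL;
	for(i = 0; i < n; i++){
		b = malloc(sizeof *b);
		b->next = head;
		head = b;
	}
	return head;
}
```

The "complicated code" in the real thing is keeping this safe across
snapshots and crashes, which the toy model ignores entirely.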
There's nothing in 9p that allows copy to be smart about COW, so it
isn't.

https://orib.dev/gefs-future.html
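The constraint is visible if you model what cp amounts to over 9p: the
client issues reads against the source and writes against the
destination, so every byte passes through the client, and the server
never sees a request it could satisfy by sharing COW blocks between
the two files. A sketch (names and sizes are illustrative, not the
real cp(1) or 9p code):

```c
#include <assert.h>
#include <string.h>

/* Toy model of cp over 9p.  The protocol only has per-file read
 * and write messages; there is no "copy this file" message, so
 * the server cannot short-circuit the transfer by sharing blocks. */
enum { Iounit = 8192 };	/* stand-in for the negotiated I/O size */

typedef struct {
	unsigned char data[32*1024];
	long len;
} File;

/* Returns the number of bytes shuttled through the client --
 * always the full file size, COW or not. */
long
cp9p(File *src, File *dst)
{
	long off, n, moved;

	moved = 0;
	dst->len = 0;
	for(off = 0; off < src->len; off += n){
		n = src->len - off;
		if(n > Iounit)
			n = Iounit;	/* each transfer is capped */
		memmove(dst->data + off, src->data + off, n);
		dst->len += n;
		moved += n;
	}
	return moved;
}
```

A server-side clone would need a new operation in the protocol (or a
control-file convention) before cp could ever become near-instant.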