9front - general discussion about 9front
From: Steve Simon <steve@quintile.net>
To: 9front@9front.org
Subject: Re: [9front] Questions/concerns on gefs
Date: Thu, 15 Aug 2024 12:21:01 +0300
Message-ID: <76549BE2-98F8-4781-A381-9A854C03798B@quintile.net>
In-Reply-To: <D3GD55L8A4BL.U0P76UH03ZQE@trbsw.dev>

Apologies if this is out of date; however, in my (old) Plan 9 system cp is single-threaded, whilst fcp is multithreaded. If you want to run copy performance tests, fcp is the better tool to try.
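
For example, something like the following would compare the two (the paths here are only placeholders):

    time cp /n/remote/bigfile /tmp/bigfile
    time fcp /n/remote/bigfile /tmp/bigfile

fcp keeps several reads and writes in flight at once, so it hides much of the 9p round-trip latency that a plain cp pays on every block.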

This does not explain why rm appears slow, though perhaps having rm block until the filesystem is stable is quite a good idea - rm being rare and filesystem integrity being important.

-Steve

> On 15 Aug 2024, at 12:04 pm, Timothy Robert Bednarzyk <mail@trbsw.dev> wrote:
> 
> Before I get into the questions I have, let me briefly describe my
> setup. I currently have a 9front (CPU+AUTH+File) server running on real
> hardware that I plan to primarily use for keeping my (mostly media)
> files in one place. I ended up performing the install with gefs, and I
> haven't had any problems with that in the three weeks since. Anyways, I
> also have four 12 TB SATA HDDs that are being used in a pseudo-RAID 10
> setup having mostly followed the example in fs(3) except, notably, with
> gefs instead of hjfs. In terms of stability, I have not had any problems
> copying around 12 TB to those drives and with deleting about a TB from
> them. I also have a 4 TB NVMe SSD that I use for storing some videos
> with high bitrates; this drive also has a gefs filesystem. I use socat,
> tlsclient, and 9pfs (which I had to slightly modify) for viewing files
> on my Linux devices and primarily use drawterm for everything else
> related to the server (although I have just connected it straight to a
> monitor and keyboard). I doubt that anything else regarding my
> particular setup is relevant to my questions, but if any more
> information is desired, just ask.
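> 
> For reference, the fs(3) side of a setup like this looks roughly like the
> sketch below (the device names here are illustrative, not my exact
> configuration): two mirror pairs interleaved together, with gefs then
> served off the resulting /dev/fs/fs device.
> 
>   echo mirror m0 /dev/sdE0/data /dev/sdE1/data >/dev/fs/ctl
>   echo mirror m1 /dev/sdE2/data /dev/sdE3/data >/dev/fs/ctl
>   echo inter fs /dev/fs/m0 /dev/fs/m1 >/dev/fs/ctl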
> 
> Having read https://orib.dev/gefs.html, I know that ori's intent is to
> one day implement RAID features as a part of gefs itself. I realize that
> the following questions can't truly be answered until the time comes,
> but I just want to gauge what my future may hold. When RAID-like
> features are eventually added to gefs, is there any intent to provide
> some way to migrate from an fs(3) setup to a gefs-native setup? If not,
> is it possible that I might be able to set up two of my four drives with
> gefs-native RAID 1, copy files from my existing gefs filesystem (after
> altering my fs(3) setup to only interleave between my other two drives),
> and then afterwards modify my new gefs filesystem to have a RAID 10
> setup across all four drives (potentially after manually mirroring the
> two drives on the new filesystem to the two drives on the old)? Again, I
> fully realize that it's not _really_ possible to answer these
> hypothetical questions before the work to add RAID features to gefs is
> done, and I would be perfectly content with "I don't know" as an answer.
> 
> While I have not had any _issues_ using gefs so far, I have some
> concerns regarding performance. When I was copying files to my pseudo-
> RAID 10 fs, the write speeds seemed reasonable (I was copying from
> several USB HDDs plugged directly into the server and from the
> aforementioned NVMe SSD so that I could replace its ext4 filesystem with
> gefs, and while I didn't really profile the actual speeds, I estimate
> them to have been around 100 MiB/s based on the total size and the time
> it took to copy from each drive). However, when I was copying files from
> the pseudo-RAID 10 fs to the NVMe SSD, I noticed that it took somewhere
> between 3 and 4 times longer than the other way around (when the NVMe SSD
> was still using ext4). Additionally, it took about the same amount of
> time to delete those same files from the pseudo-RAID 10 fs later.
> Lastly, I tried moving (rather, cp followed by rm) a ~1 GB file from one
> directory on the pseudo-RAID 10 fs to another and that ended up taking a
> while. Shouldn't cp to different locations on the same drive be near-
> instant since gefs is COW? And shouldn't rm also be near-instant since
> all it needs to do is update the filesystem metadata?
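> 
> For harder numbers than the estimates above, one rough approach would be
> to wrap a copy in time(1), e.g.
> 
>   time cp /n/raid/video.mkv /n/ssd/video.mkv
> 
> (the paths are just placeholders), and divide the file size by the real
> time it reports.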
> 
> --
> trb


Thread overview: 10+ messages
2024-08-15  9:04 Timothy Robert Bednarzyk
2024-08-15  9:21 ` Steve Simon [this message]
2024-08-15  9:42 ` Stuart Morrow
2024-08-15 14:22 ` ori
2024-08-15 14:47   ` ori
2024-08-26  4:45   ` James Cook
2024-08-26  7:43     ` Steve Simon
2024-08-26  8:10       ` Frank D. Engel, Jr.
2024-08-26  8:26         ` Alex Musolino
2024-08-26 14:36         ` ron minnich
