9fans - fans of the OS Plan 9 from Bell Labs
* [9fans] 9Front / cwfs64x and hjfs storage
@ 2020-12-28 23:55 joey
  2020-12-29  2:27 ` Ethan Gardener
  2020-12-29 13:06 ` Alex Musolino
  0 siblings, 2 replies; 17+ messages in thread
From: joey @ 2020-12-28 23:55 UTC (permalink / raw)
  To: 9fans


Hello,

While it is not yet a concern, I am trying to figure something out that does not seem to be well documented in the man pages or the fqa about the file systems.

I am currently running a plan9front instance with cwfs64x (the whole "hjfs is experimental, you could lose your files" warning seemed a bit dangerous when I started everything) and I understand that it is a WORM file system.  My question is about the end game.  If the storage gets full with all of the diffs, is there a way for the oldest ones to roll off, or do you need to expand the storage, export them, or something else?  I come from the Linux world, where this is not a feature file-system-wise; worst case, I would have LVMs that I could just grow, or with repos I could cull the older diffs if needed.

If there are additional features for this in hjfs, that would be nice to know too.  I am really just trying to understand the limits of the technology and what expectations to have.  Otherwise, I love the Plan 9 environment, and knowing what options I have for when I inevitably get to that point would put me more at ease in trusting more operations to Plan 9 systems.

Best and thank you!
~Joey

* Re: [9fans] 9Front / cwfs64x and hjfs storage
  2020-12-28 23:55 [9fans] 9Front / cwfs64x and hjfs storage joey
@ 2020-12-29  2:27 ` Ethan Gardener
  2020-12-29  8:53   ` sirjofri
  2021-02-08  1:35   ` Ryan
  2020-12-29 13:06 ` Alex Musolino
  1 sibling, 2 replies; 17+ messages in thread
From: Ethan Gardener @ 2020-12-29  2:27 UTC (permalink / raw)
  To: 9fans

You can add disks. The CWFS config allows multiple devices/partitions to form the WORM; it's like a simple form of LVM. I forget the exact syntax, and I don't think there's a man page documenting cwfs's particular variant, but I think it's something like (/dev/sdE0/worm /dev/sdF0/worm) in place of just /dev/sdE0/worm.
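
To illustrate, a complete config might look roughly like this; the layout follows the usual 9front single-disk config, but the concatenation spelling here is my guess only, so check fsconfig(8) and a real config before trusting it:

    filsys main c(/dev/sdE0/fscache)(/dev/sdE0/fsworm /dev/sdF0/fsworm)
    filsys dump o
    filsys other (/dev/sdE0/other)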


* Re: [9fans] 9Front / cwfs64x and hjfs storage
  2020-12-29  2:27 ` Ethan Gardener
@ 2020-12-29  8:53   ` sirjofri
  2020-12-29  9:15     ` Kurt H Maier
  2021-01-07 21:28     ` Stuart Morrow
  2021-02-08  1:35   ` Ryan
  1 sibling, 2 replies; 17+ messages in thread
From: sirjofri @ 2020-12-29  8:53 UTC (permalink / raw)
  To: 9fans

Hello,

29.12.2020 03:27:19 Ethan Gardener <eekee57@fastmail.fm>:
> You can add disks. The CWFS config allows multiple devices/partitions
> to form the WORM; it's like a simple form of LVM. I forget the exact
> syntax, and I don't think there's a man page documenting cwfs's
> particular variant, but I think it's something like (/dev/sdE0/worm
> /dev/sdF0/worm) in place of just /dev/sdE0/worm.

Is it then also possible to remove older disks at some point 
(physically)? Something like this:

- WORM1 (full)
- WORM2 (full)
- WORM3 (not full)
- cache

Then removing WORM1, storing it as a backup or reformatting it as a new WORM4:

- WORM2 (full)
- WORM3 (not full)
- WORM4 (new, empty)
- cache

Is something like that possible? If not, it could still be an inspiration
for ori's new filesystem, maybe?

sirjofri


* Re: [9fans] 9Front / cwfs64x and hjfs storage
  2020-12-29  8:53   ` sirjofri
@ 2020-12-29  9:15     ` Kurt H Maier
  2020-12-29  9:21       ` sirjofri
  2021-01-07 21:28     ` Stuart Morrow
  1 sibling, 1 reply; 17+ messages in thread
From: Kurt H Maier @ 2020-12-29  9:15 UTC (permalink / raw)
  To: 9fans

On Tue, Dec 29, 2020 at 08:53:55AM +0000, sirjofri wrote:
> 
> Then removing WORM1, storing it as a backup or reformatting it as a new WORM4:
> 
...
> Is something like that possible? If not, it could still be an
> inspiration for ori's new filesystem, maybe?

If he implements this and the resulting filesystem is not called
Oriborous I will be extraordinarily, possibly fatally, disappointed.

khm


* Re: [9fans] 9Front / cwfs64x and hjfs storage
  2020-12-29  9:15     ` Kurt H Maier
@ 2020-12-29  9:21       ` sirjofri
  0 siblings, 0 replies; 17+ messages in thread
From: sirjofri @ 2020-12-29  9:21 UTC (permalink / raw)
  To: 9fans


29.12.2020 10:15:29 Kurt H Maier <khm@sciops.net>:
> On Tue, Dec 29, 2020 at 08:53:55AM +0000, sirjofri wrote:
>> for ori's new filesystem, maybe?
>
> If he implements this and the resulting filesystem is not called
> Oriborous I will be extraordinarily, possibly fatally, disappointed.

😂😂😂

Absolutely 😂

sirjofri


* Re: [9fans] 9Front / cwfs64x and hjfs storage
  2020-12-28 23:55 [9fans] 9Front / cwfs64x and hjfs storage joey
  2020-12-29  2:27 ` Ethan Gardener
@ 2020-12-29 13:06 ` Alex Musolino
  2020-12-29 16:14   ` Ethan Gardener
  1 sibling, 1 reply; 17+ messages in thread
From: Alex Musolino @ 2020-12-29 13:06 UTC (permalink / raw)
  To: 9fans

> While it is not yet a concern, I am trying to figure something out
> that does not seem to be well documented in the man pages or the fqa
> about the file systems.

Parts of fs(4), fs(8), and fsconfig(8) can be applied to cwfs.  The
syntax that Ethan talked about for concatenating WORM devices is
described in fsconfig(8).
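
If memory serves (verify against cwfs(4) and the FQA before relying on it), you reach the config prompt by passing -c to the file server, e.g. appended to bootargs in plan9.ini:

    bootargs=local!/dev/sdE0/fscache -c

and then you type fsconfig-style commands at the resulting config prompt.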

> I am currently running a plan9front instance with cwfs64x (the whole
> "hjfs is experimental, you could lose your files" warning seemed a
> bit dangerous when I started everything) and I understand that it is
> a WORM file system.  My question is about the end game.  If the
> storage gets full with all of the diffs, is there a way for the
> oldest ones to roll off, or do you need to expand the storage, export
> them, or something else?  I come from the Linux world, where this is
> not a feature file-system-wise; worst case, I would have LVMs that I
> could just grow, or with repos I could cull the older diffs if
> needed.

Cwfs doesn't know anything about diffs as such; it just keeps track of
dirty blocks and writes them out to the WORM partition when a dump is
requested.  The Plan 9 approach to storage is to just keep adding
capacity, since the price of storage falls faster than you can use it
up.
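
For instance, a dump can be requested by hand from the file server's
console; /srv/cwfs.cmd is the console file on a stock 9front cwfs
install, and a dump also fires automatically once a day:

    con -C /srv/cwfs.cmd
    dump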

I recently upgraded my home file server from an 80GB HDD to a 240GB
SSD and documented the process [1].  The WORM partition contained 25GB
and dates back to 2016-04-12.  Now, maybe you'll generate much more
data than me over less time, but in this day and age of cheap
multi-terabyte HDDs and hundred-gigabyte SSDs I think it's still
perfectly reasonable to just keep adding capacity as you need it.

Another thing to consider is how much data you really need to be
sending to the WORM in the first place.  Multimedia, for example,
might be better stored on a more conventional file system, since the
lack of content-based deduping in cwfs might result in these files
being dumped multiple times as they are moved around or have their
metadata edited.  Even venti won't dedup in the latter case, as it
doesn't do content-defined chunking.

Plan 9 is really good at combining multiple filesystems from multiple
machines (running different operating systems!) together into a single
namespace.  My music collection lives on an ext4 filesystem mirrored
across 2 drives (and backed up elsewhere) but can be easily accessed
from 9front using sshfs(4).  I just run `9fs music` and the entire
collection appears under /n/music.
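
For the curious: 9fs is just an rc script that dispatches on its first
argument, so a hypothetical music case could look something like this
(the host name is made up and the sshfs flags are from memory; check
sshfs(4) and your local 9fs script before copying):

    switch($1){
    case music
        # mount the remote music share on /n/music over ssh
        sshfs -m /n/music user@nas
    }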

[1] http://docs.a-b.xyz/migrating-cwfs.html


* Re: [9fans] 9Front / cwfs64x and hjfs storage
  2020-12-29 13:06 ` Alex Musolino
@ 2020-12-29 16:14   ` Ethan Gardener
  2020-12-30  8:08     ` joey
  0 siblings, 1 reply; 17+ messages in thread
From: Ethan Gardener @ 2020-12-29 16:14 UTC (permalink / raw)
  To: 9fans

On Tue, Dec 29, 2020, at 1:06 PM, Alex Musolino wrote:
> > While it is not yet a concern, I am trying to figure something out
> > that does not seem to be well documented in the man pages or the fqa
> > about the file systems.
> 
> Parts of fs(4), fs(8), and fsconfig(8) can be applied to cwfs.  The
> syntax that Ethan talked about for concatenating WORM devices is
> described in fsconfig(8).

Only approximately. The fsconfig(8) page spends several paragraphs describing syntax specific to Ken Thompson's standalone file server, which we no longer have. CWFS is a port of it, so the syntax is similar, but the device specifications are different. Looking at a real config, along with some guesswork, will help make the transition. There was something about adding option letters to the beginning of paths...

I'm *really* not happy with the state of cwfs documentation and I'm not qualified to fix it.

> > I am currently running a plan9front instance with cwfs64x (the
> > whole "hjfs is experimental, you could lose your files" warning
> > seemed a bit dangerous when I started everything) and I understand
> > that it is a WORM file system.  My question is about the end game.
> > If the storage gets full with all of the diffs, is there a way for
> > the oldest ones to roll off, or do you need to expand the storage,
> > export them, or something else?  I come from the Linux world, where
> > this is not a feature file-system-wise; worst case, I would have
> > LVMs that I could just grow, or with repos I could cull the older
> > diffs if needed.
> 
> Cwfs doesn't know anything about diffs as such; it just keeps track
> of dirty blocks and writes them out to the WORM partition when a dump
> is requested.  The Plan 9 approach to storage is to just keep adding
> capacity, since the price of storage falls faster than you can use it
> up.
> 
> I recently upgraded my home file server from an 80GB HDD to a 240GB
> SSD and documented the process [1].  The WORM partition contained
> 25GB and dates back to 2016-04-12.  Now, maybe you'll generate much
> more data than me over less time, but in this day and age of cheap
> multi-terabyte HDDs and hundred-gigabyte SSDs I think it's still
> perfectly reasonable to just keep adding capacity as you need it.
> [1] http://docs.a-b.xyz/migrating-cwfs.html

Good link.

Incidentally, I bought two 720GB drives in January 2007 and partitioned them as 500GB plus the remainder. (I always keep spare partitions.) I never even filled one of those 500GB partitions in 10 years. ;) In my case, it would be well worth migrating rather than extending the space.

> Another thing to consider is how much data you really need to be
> sending to the WORM in the first place.  Multimedia, for example,
> might be better stored on a more conventional file system, since the
> lack of content-based deduping in cwfs might result in these files
> being dumped multiple times as they are moved around or have their
> metadata edited.  Even venti won't dedup in the latter case, as it
> doesn't do content-defined chunking.

Yes. 9p lacks any sort of move command, and moving is done by copying, so you'll end up with multiple copies of these large, unchanging files in the dump for no good reason. I'm not so sure about metadata, because the dump is block-based, not file-based, but if the metadata is variable-length, changing it will pollute any block-based WORM, dedup'd or not.

It might seem more reasonable to keep such large, unchanging files on cwfs "other", except that "other" reacts badly to being accidentally filled. ("Other" is really just the cwfs cache partition code, which also can't handle being filled, due to some cache-specific practical issue.) It's better to use another filesystem entirely. I haven't heard of many problems with hjfs, but filesystems in general are the worst for unpleasant surprises. There are also dossrv (very heavily used and tested), extsrv (possibly not so much?), and kfs (if we still have it). Kfs is a bit lightweight, but I had no problems using it for root in a 9vx setup years ago. It may have smaller limits. Dossrv supports FAT32, so the file size limit is 2 or 4GB.
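
As an illustration, serving a FAT partition looks roughly like this
(the partition name is made up; see dossrv(4) for the real procedure):

    dossrv
    mount -c /srv/dos /n/dos /dev/sdE0/dos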

> Plan 9 is really good at combining multiple filesystems from multiple
> machines (running different operating systems!) together into a single
> namespace.  My music collection lives on an ext4 filesystem mirrored
> across 2 drives (and backed up elsewhere) but can be easily accessed
> from 9front using sshfs(4).  I just run `9fs music` and the entire
> collection appears under /n/music.

Yes indeed! I haven't used sshfs myself; I used u9fs. (The needed updates to ssh only happened in recent years.) We have so many options for networked filesystems. :) Anyway, it's recommended to use u9fs over ssh, so there's probably no point to it, but I dug up the links anyway for nostalgic reasons.

Source: http://9p.io/sources/plan9/sys/src/cmd/unix/u9fs/
Man page: http://9p.io/magic/man2html/4/u9fs


* Re: [9fans] 9Front / cwfs64x and hjfs storage
  2020-12-29 16:14   ` Ethan Gardener
@ 2020-12-30  8:08     ` joey
  2021-02-04 23:10       ` rt9f.3141
  0 siblings, 1 reply; 17+ messages in thread
From: joey @ 2020-12-30  8:08 UTC (permalink / raw)
  To: 9fans


Thank you everyone for all of your knowledge!

I have a much better understanding of the WORM file systems for Plan 9 now, and I had never thought of using external storage as a solution, or of tiering the storage based on what is stored.

Thanks again,
~Joey

* Re: [9fans] 9Front / cwfs64x and hjfs storage
  2020-12-29  8:53   ` sirjofri
  2020-12-29  9:15     ` Kurt H Maier
@ 2021-01-07 21:28     ` Stuart Morrow
  2021-01-07 23:25       ` ori
  1 sibling, 1 reply; 17+ messages in thread
From: Stuart Morrow @ 2021-01-07 21:28 UTC (permalink / raw)
  To: 9fans

On 29/12/2020, sirjofri <sirjofri+ml-9fans@sirjofri.de> wrote:
> ori's new filesystem

What's this, and why is it needed? Hjfs already fixes the worst thing
about cwfs (needing to copy files from one partition to another on the
same disk).

Though speaking of new file servers, the Irssi /upgrade trick would be
good to have: save needed state to disk, exec the new version in the
current process, and tell it to pick up where we left off. The trick is
that we need not close any network connections.


* Re: [9fans] 9Front / cwfs64x and hjfs storage
  2021-01-07 21:28     ` Stuart Morrow
@ 2021-01-07 23:25       ` ori
  0 siblings, 0 replies; 17+ messages in thread
From: ori @ 2021-01-07 23:25 UTC (permalink / raw)
  To: 9fans

Quoth Stuart Morrow <morrow.stuart@gmail.com>:
> On 29/12/2020, sirjofri <sirjofri+ml-9fans@sirjofri.de> wrote:
> > ori's new filesystem
> 
> What's this, and why is it needed? Hjfs already fixes the worst thing
> about cwfs (needing to copy files from one partition to another on
> the same disk).

crash safety, corruption detection, performance,
and ultra-fast deletable/forkable snapshots.

also, it's fun to write.

it's currently vaporware. don't hold your breath.


* Re: [9fans] 9Front / cwfs64x and hjfs storage
  2020-12-30  8:08     ` joey
@ 2021-02-04 23:10       ` rt9f.3141
  2021-02-05  2:51         ` rt9f.3141
  2021-02-05  3:06         ` Alex Musolino
  0 siblings, 2 replies; 17+ messages in thread
From: rt9f.3141 @ 2021-02-04 23:10 UTC (permalink / raw)
  To: 9fans


As the filesystem becomes full, what's the Plan 9/9front way to report disk utilization, i.e. the equivalent of 'df'?
Thanks,
-Shiro

* Re: [9fans] 9Front / cwfs64x and hjfs storage
  2021-02-04 23:10       ` rt9f.3141
@ 2021-02-05  2:51         ` rt9f.3141
  2021-02-05  3:06         ` Alex Musolino
  1 sibling, 0 replies; 17+ messages in thread
From: rt9f.3141 @ 2021-02-05  2:51 UTC (permalink / raw)
  To: 9fans


On Thursday, February 04, 2021, at 3:10 PM, rt9f.3141 wrote:
> 9front
> As as filesystem becomes full,
s/as/the/

* Re: [9fans] 9Front / cwfs64x and hjfs storage
  2021-02-04 23:10       ` rt9f.3141
  2021-02-05  2:51         ` rt9f.3141
@ 2021-02-05  3:06         ` Alex Musolino
  2021-02-07  6:37           ` rt9f.3141
  1 sibling, 1 reply; 17+ messages in thread
From: Alex Musolino @ 2021-02-05  3:06 UTC (permalink / raw)
  To: 9fans

> As the filesystem becomes full, what's the Plan 9/9front way to
> report disk utilization, i.e. the equivalent of 'df'?

Use the "statw" command for cwfs(4) and the "df" command for hjfs(4).



* Re: [9fans] 9Front / cwfs64x and hjfs storage
  2021-02-05  3:06         ` Alex Musolino
@ 2021-02-07  6:37           ` rt9f.3141
  2021-02-07 22:54             ` rt9f.3141
  0 siblings, 1 reply; 17+ messages in thread
From: rt9f.3141 @ 2021-02-07  6:37 UTC (permalink / raw)
  To: 9fans


I'm running 9front on amd64, a version I recently compiled from hg tip (changeset 8311).  I originally installed from the October release (9front-8013.d9e940a768d1.amd64.iso), following the installation guide at http://fqa.9front.org/fqa4.html but replacing 'mountcwfs' with http://a-b.xyz/23/666a.

Using 'con -C /srv/cwfs.cmd', I get "cmd_exec: unknown command: statw".  What am I missing?

* Re: [9fans] 9Front / cwfs64x and hjfs storage
  2021-02-07  6:37           ` rt9f.3141
@ 2021-02-07 22:54             ` rt9f.3141
  0 siblings, 0 replies; 17+ messages in thread
From: rt9f.3141 @ 2021-02-07 22:54 UTC (permalink / raw)
  To: 9fans


OK, after a bit more digging: statw works with the cwfs installed by the stock image and scripts (not using 666a).  Any pointers on deciphering the info it provides would be much appreciated.

Sorry for hijacking this thread.

* Re: [9fans] 9Front / cwfs64x and hjfs storage
  2020-12-29  2:27 ` Ethan Gardener
  2020-12-29  8:53   ` sirjofri
@ 2021-02-08  1:35   ` Ryan
  2021-02-08  6:14     ` Ethan Gardener
  1 sibling, 1 reply; 17+ messages in thread
From: Ryan @ 2021-02-08  1:35 UTC (permalink / raw)
  To: 9fans

I'm not seeing how to unsubscribe from this on
https://9fans.topicbox.com/groups/9fans/subscription


* Re: [9fans] 9Front / cwfs64x and hjfs storage
  2021-02-08  1:35   ` Ryan
@ 2021-02-08  6:14     ` Ethan Gardener
  0 siblings, 0 replies; 17+ messages in thread
From: Ethan Gardener @ 2021-02-08  6:14 UTC (permalink / raw)
  To: 9fans

On Mon, Feb 8, 2021, at 1:35 AM, Ryan wrote:
> I'm not seeing how to unsubscribe from this on
> https://9fans.topicbox.com/groups/9fans/subscription

Me neither. I can say the group admins don't respond to the sort of mail you'd send to an automated mailing list server. :p (I was a bit confused that day.) Personally, I'm no longer sure I want to unsubscribe.


end of thread, other threads:[~2021-02-08  6:15 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-12-28 23:55 [9fans] 9Front / cwfs64x and hjfs storage joey
2020-12-29  2:27 ` Ethan Gardener
2020-12-29  8:53   ` sirjofri
2020-12-29  9:15     ` Kurt H Maier
2020-12-29  9:21       ` sirjofri
2021-01-07 21:28     ` Stuart Morrow
2021-01-07 23:25       ` ori
2021-02-08  1:35   ` Ryan
2021-02-08  6:14     ` Ethan Gardener
2020-12-29 13:06 ` Alex Musolino
2020-12-29 16:14   ` Ethan Gardener
2020-12-30  8:08     ` joey
2021-02-04 23:10       ` rt9f.3141
2021-02-05  2:51         ` rt9f.3141
2021-02-05  3:06         ` Alex Musolino
2021-02-07  6:37           ` rt9f.3141
2021-02-07 22:54             ` rt9f.3141
