From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <3f7562ec89169ece4254f4e77fc3aeca@collyer.net>
Date: Fri, 17 Sep 2004 00:09:29 -0700
From: geoff@collyer.net
To: 9fans@cse.psu.edu
Subject: Re: [9fans] OT: ZFS
In-Reply-To: 
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: 7bit
Topicbox-Message-UUID: e34c1724-eacd-11e9-9e20-41e7f4b1d025

adaptive endian-ness is pretty easy; I nearly put it into Ken's file
server a few weeks ago and am now motivated to do it, to deny Sun the
ability to claim that only ZFS does it.

I don't know how Sun does it, but you can do it pretty simply: at a
known offset in the first superblock of a file system, mkfs or
equivalent writes a long or vlong containing a pattern like 0x01020304
(if the superblock already contains a suitable magic number, that can
be used instead).  When mounting the file system, if that long reads
as 0x04030201 instead, you push the existing "x" byte-swapping
pseudo-file-system, which knows the format of all metadata and
byte-swaps blocks as they are read and written.  I believe that "x"
was added to let x86 file servers serve from disks written by MIPS
file servers.

2^128 bytes really is vast.  Even 2^64 bytes is an awful lot of data.
I'm sure that a few people can find a use for such large files, and a
small fraction of those people probably even have legitimate uses for
them.  I think that most people will find 64-bit file servers to be
adequate for quite a while (though obviously Microsoft will continue
to push hard here).  Unless you're making clever use of sparse files,
you'll have to wait until you can afford 8 exabytes (it's actually
63-bit file sizes if off_t or equivalent is signed) of storage (that's
9,223,372,036,854,775,808 bytes - 8 binary exabytes - of disk, not 8
video tapes) before you can make use of file sizes that need more
than 63 bits.
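The magic-number test described above can be sketched in a few lines of
C.  This is just an illustration of the idea, not the actual file-server
code; the constant names, the magic offset, and the needswab() helper
are all made up here:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical magic pattern written by mkfs at a known superblock offset. */
enum {
	MAGIC     = 0x01020304u,	/* as seen on a same-endian host */
	MAGICSWAB = 0x04030201u,	/* as seen on an opposite-endian host */
};

/* Reverse the byte order of a 32-bit word. */
static uint32_t
swab32(uint32_t v)
{
	return (v >> 24) | ((v >> 8) & 0xff00) |
	       ((v << 8) & 0xff0000) | (v << 24);
}

/*
 * Decide at mount time whether metadata needs byte-swapping:
 * returns 0 for native order, 1 if the "x" swapping layer is needed,
 * -1 if the magic is missing (not a valid file system).
 */
static int
needswab(const unsigned char *super, size_t magicoff)
{
	uint32_t v;

	memcpy(&v, super + magicoff, sizeof v);	/* read in host byte order */
	if (v == MAGIC)
		return 0;
	if (v == MAGICSWAB)
		return 1;
	return -1;
}
```

If needswab() returns 1, the mount path would interpose the byte-swapping
layer so every metadata block is run through something like swab32() on
the way in and out; data blocks are opaque bytes and pass through
untouched.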