From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <9f3897940704131439t56e781a2s7b2fb3497299c31e@mail.gmail.com>
Date: Fri, 13 Apr 2007 23:39:45 +0200
From: "Paweł Lasek"
To: "Fans of the OS Plan 9 from Bell Labs" <9fans@cse.psu.edu>
Subject: Re: [9fans] Re: [sources] 20070410: % cat
In-Reply-To: <20070413110748.B32425@orthanc.ca>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Disposition: inline
References: <20070413110748.B32425@orthanc.ca>
Topicbox-Message-UUID: 471cd3d4-ead2-11e9-9d60-3106f5b1d025

On 4/13/07, Lyndon Nerenberg wrote:
> > for 250k messages, either solution is poor. deleting the
> > first of 250k messages requires rewriting the whole mbox.
> > dealing with 250k directory entries can be painful, too,
> > as many fs keep directories as arrays.
> >
> > if you have that many messages, you might want an index. ;-)
>
> These days I don't find large directories to be a problem. A few months
> ago I did some performance tests on large directory operations (create,
> unlink, readdir) on Linux and FreeBSD. For both OSes, directory
> enumeration times were negligible for the tests I ran (in the vicinity of
> 200K entries). And with everyone caching directory entries these days,
> the only real difference between directory I/O and file I/O is the
> locking overhead for directory modifications. (This all assumes local
> disk. Throw in NFS and everything implodes.)

That's also thanks to improved filesystems. Modern ones (at least on
Linux; I don't know about the *BSDs, as I haven't used them that
extensively) use fairly elaborate trees (HTree in ext3, B+ trees in
XFS) to index directory data, and not only that. HFS+ may well use
them for several other things too: at least one of the developers
working on it came from Be, where he worked on BeFS, which was
basically a database with a hierarchical interface...

I don't know about other filesystems, but I have never had a problem
with directory entries on XFS. Or with anything else: I never managed
to break it, even though it was designed for UPS-backed servers and
caches very aggressively (especially writes), while ReiserFS gave up
very quickly :-)

BTW, wouldn't XFS be a nice addition to Plan 9? Plan 9-specific
metadata could easily go into extended attributes; just make sure the
inodes are created larger than the default size, so that small
attributes stay inline in the inode and xattr access is blazing
fast... (rough sketch below)

> --lyndon

--
Paul Lasek
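
P.S. A rough sketch of the xattr idea, assuming Linux's setxattr(2)
and getxattr(2) interface; the "user.plan9.muid" attribute name is
made up for illustration. Creating the filesystem with bigger inodes
(e.g. mkfs.xfs -i size=512) lets small attributes like this live
inline in the inode's attribute fork, which is what keeps the lookups
cheap:

	/* Store and read back a hypothetical Plan 9 "muid" (last
	 * modifier) as an extended attribute on a file. */
	#include <stdio.h>
	#include <string.h>
	#include <sys/types.h>
	#include <sys/xattr.h>

	int main(void)
	{
		const char *path = "somefile";
		const char *muid = "glenda";
		char buf[64];
		ssize_t n;

		/* attach the attribute to the file */
		if (setxattr(path, "user.plan9.muid", muid, strlen(muid), 0) < 0) {
			perror("setxattr");
			return 1;
		}

		/* read it back */
		n = getxattr(path, "user.plan9.muid", buf, sizeof buf - 1);
		if (n < 0) {
			perror("getxattr");
			return 1;
		}
		buf[n] = '\0';
		printf("muid = %s\n", buf);
		return 0;
	}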