From: Jim Pick
Newsgroups: gmane.emacs.gnus.general
Subject: Re: [PATCH] Mail fetching on memory-poor machines
Date: 14 Jul 1999 20:23:36 -0700
Sender: owner-ding@hpc.uh.edu
Message-ID: <87vhbmzjd3.fsf@pepper.jimpick.com>
References: <87hfn7stjr.fsf@pepper.jimpick.com> <87673mapfs.fsf@pepper.jimpick.com>
Original-To: Stainless Steel Rat
Cc: "(ding)"
In-Reply-To: Stainless Steel Rat's message of "14 Jul 1999 21:21:00 -0400"
X-Url: http://www.jimpick.com/
User-Agent: Gnus/5.070095 (Pterodactyl Gnus v0.95) Emacs/20.3

Stainless Steel Rat writes:

> * Jim Pick on Wed, 14 Jul 1999
> | Why not fix the problem?  The results of the problem are so bad that
> | it's basically a bug.  Fixing it seems like a more logical thing to
> | do, rather than force everybody to use the program in a certain way.
>
> Because, if I understand what your code does (which might not be the
> case), it only works for people who use Gnus "your" way while slowing
> it down unnecessarily for those who are not on memory-starved systems,
> have large nnfolders, and do use Gnus to split mail sources.

I know.  I stated that the patch wasn't for everybody when I posted it!

It wouldn't be too hard to put in a customize option to enable/disable
it - but I haven't gotten any feedback on whether this is wanted.

The patch was ugly enough that I'm not sure it should be integrated as
is.  I can think of several different ways of achieving a similar
effect.  For example, it might make sense for Gnus to track the size of
the buffers currently in use, and only save the least-recently-used
ones to disk to free up some space.  That's overkill for my case,
though.

I didn't put a lot of effort into it because Lars might want to do it
some other way.  The patch was just posted to illuminate the problem
(hopefully).  I'll live if the problem isn't fixed upstream - but I
fixed it here, so I thought I'd share.

> Personally, I am all for lean and mean, which is one of the reasons
> why I don't use nnfolder.  I tried it, briefly, back when I was on a
> 12MB system.  That was a mistake :).

I still like nnfolder.  I don't find it very objectionable at all
(with my patch).

> For what it's worth, inode starvation is not as likely as you might think.
> The average mail message is ~3-4kB; the average inode size on a modern
> filesystem is 4kB.  One message, one inode.  On an ~1GB filesystem you
> will have around 256k inodes (more, actually).  In practical terms,
> you will run out of space before you run out of inodes, unless your
> filesystem is tuned with unusually large inodes or you are suffering
> under a woefully restrictive inode quota.

My primary objection to nnml is that it is painful to do anything that
scans the disk.  I've got a very large mail archive.

As an aside - I did manage to run out of inodes once when I was using
nnml before.  I compressed my mail files, and was using jka-compr.
When the files are compressed, they average out to be smaller than the
default 4kB per inode, and it suddenly becomes possible to run out of
inodes.  In that case, I just backed everything up and reformatted the
partition with more inodes.  No big deal.

Cheers,

 - Jim
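[Editor's note: the "save the least-recently-used buffers to disk" idea
floated earlier in the thread can be sketched as follows.  Gnus itself
is Emacs Lisp; this is a minimal, language-neutral illustration in
Python of the eviction strategy only.  The class name, the memory
budget, and the `flush` callback are all hypothetical, not part of any
Gnus API.]

```python
from collections import OrderedDict

class BufferCache:
    """Sketch of the LRU idea discussed above: keep a running total of
    buffer sizes, and once a memory budget is exceeded, flush the
    least-recently-used buffers to disk (here, via a caller-supplied
    callback).  All names are hypothetical."""

    def __init__(self, budget_bytes, flush):
        self.budget = budget_bytes
        self.flush = flush            # called as flush(name, data)
        self.buffers = OrderedDict()  # name -> data, oldest first
        self.total = 0

    def touch(self, name, data):
        # Re-inserting moves the buffer to the most-recently-used end.
        if name in self.buffers:
            self.total -= len(self.buffers.pop(name))
        self.buffers[name] = data
        self.total += len(data)
        # Evict oldest buffers until we are back under budget,
        # always keeping at least the buffer just touched.
        while self.total > self.budget and len(self.buffers) > 1:
            victim, victim_data = self.buffers.popitem(last=False)
            self.total -= len(victim_data)
            self.flush(victim, victim_data)

# Example: a 10-byte budget forces the older of two 6-byte buffers out.
flushed = []
cache = BufferCache(10, lambda name, data: flushed.append(name))
cache.touch("nnfolder:mail.misc", b"xxxxxx")
cache.touch("nnfolder:ding", b"xxxxxx")
print(flushed)      # ['nnfolder:mail.misc']
print(cache.total)  # 6
```

The `OrderedDict` keeps insertion order, so popping from the front
(`last=False`) always evicts the least-recently-touched buffer.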
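[Editor's note: the inode arithmetic in the quoted text checks out, and
a quick calculation also shows why compressing the messages flips the
result.  The 1GB size and the 4kB bytes-per-inode ratio come from the
quoted message; the ~1.2kB compressed-message size is a hypothetical
figure for illustration.]

```python
# One inode is allocated per 4 kB of disk, as in the quoted message.
FS_BYTES = 1 * 1024**3       # ~1 GB filesystem
BYTES_PER_INODE = 4 * 1024   # common default bytes-per-inode ratio

inodes = FS_BYTES // BYTES_PER_INODE

def limit(avg_msg_bytes):
    """Which resource runs out first for one-message-per-file storage
    (nnml-style) at this average message size?"""
    msgs_by_space = FS_BYTES // avg_msg_bytes
    return "space" if msgs_by_space <= inodes else "inodes"

print(inodes)       # 262144 -- i.e. "around 256k inodes"
print(limit(4500))  # space  -- messages at/above the 4kB ratio
print(limit(1200))  # inodes -- compressed messages (hypothetical size)
```

Messages averaging at least the bytes-per-inode ratio exhaust disk
space first, as the quoted message argues; once jka-compr shrinks them
well below 4kB, the inode table is exhausted first, matching the
anecdote above.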