Gnus development mailing list
* Multiple article buffers
@ 1995-12-17 21:32 Lars Magne Ingebrigtsen
  1995-12-18 14:11 ` Brian Edmonds
  0 siblings, 1 reply; 2+ messages in thread
From: Lars Magne Ingebrigtsen @ 1995-12-17 21:32 UTC (permalink / raw)


I've now added multiple article buffers, and it seems to work ok.
It's off by default; `gnus-single-article-buffer' is the variable to
use.  Seems kinda neat, actually, having two summary buffers and two
article buffers open at the same time...  Even useful, perhaps.  :-)

I thought I'd finally write that "multiple directory nnml" thing,
where nnml would look into sub-directories called "2101-2200", and
look at the overview file there and stuff.  But it seems to me that
this'd be a totally new backend since not much of the code could be
shared between the two different methods.  Or perhaps...  (This just
occurred to me...)  We could have an "nnmulti" backend that combined
several groups (of random formats) into a single group.  A bit like
nnvirtual, but not at all, I think.  This would be useful for, say,
combining several digests into one biiig group...  Hm.  
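The core of such an nnmulti backend would be a mapping from combined article numbers back to (component group, local article number) pairs. A minimal sketch, purely illustrative: the group names and the `build_mapping` helper are invented here and are not part of any existing Gnus backend.

```python
# Illustrative sketch only: the group names and build_mapping helper are
# made up; this is not code from any existing Gnus backend.

def build_mapping(groups):
    """Assign sequential numbers to articles drawn from several component
    groups, remembering which (group, local-number) each one came from."""
    mapping = {}
    combined = 0
    for group, articles in groups:
        for art in sorted(articles):
            combined += 1
            mapping[combined] = (group, art)
    return mapping

# Two digests combined into one big group:
groups = [("digest-a", [1, 2, 3]), ("digest-b", [10, 11])]
mapping = build_mapping(groups)
print(mapping[4])  # article 4 of the combined group is ('digest-b', 10)
```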

-- 
(domestic pets only, the antidote for overdose, milk.)
  larsi@ifi.uio.no * Lars Ingebrigtsen



* Re: Multiple article buffers
  1995-12-17 21:32 Multiple article buffers Lars Magne Ingebrigtsen
@ 1995-12-18 14:11 ` Brian Edmonds
  0 siblings, 0 replies; 2+ messages in thread
From: Brian Edmonds @ 1995-12-18 14:11 UTC (permalink / raw)
  Cc: Lars Magne Ingebrigtsen

>>>>> "Lars" == Lars Magne Ingebrigtsen <larsi@ifi.uio.no> writes:

Lars> I thought I'd finally write that "multiple directory nnml" thing,
Lars> where nnml would look into sub-directories called "2101-2200", and
Lars> look at the overview file there and stuff.

I was thinking of this the other day, and it occurred to me that with
latency and such, it typically only takes a little longer to transfer
20k than it does 2k via ftp.  This led me to conclude that a scheme
(which I've dubbed nnbucket) much like nnml, but bucketing n messages
sequentially together (a la nnfolder?) would be more efficient for the
archives.  This way you'll have files called 1, n+1, 2n+1, 3n+1, etc.

As a .overview optimization, you can then split the .overview every m
buckets, so you'll have .overview.1, .overview.mn+1, .overview.2mn+1,
etc.  You could of course just number sequentially in both these cases,
but the computer should have no trouble numbering in jumps like this, and
it makes it more accessible to humans browsing the raw directory
structure.
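The same arithmetic extends to the split .overview files; again a hedged sketch, with the function name and parameters invented for illustration:

```python
def overview_file(article, n, m):
    """Overview file covering a given article when buckets hold n messages
    each and the overview is split every m buckets: names run .overview.1,
    .overview.mn+1, .overview.2mn+1, ..."""
    span = m * n
    return ".overview.%d" % (((article - 1) // span) * span + 1)

# With n = 100 and m = 10, each overview spans 1000 articles:
print(overview_file(1500, 100, 10))  # prints: .overview.1001
```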

This, I think, should make ftp access to archives much smoother, as the
buckets can be cached, so you get something like the asynch lookahead
for free, and the .overview files should stay reasonably bounded in
size.  Lastly, you don't get such an ugly proliferation of directories.

Now all it needs is a lisp god to implement it.   A perl script to
convert from nnml to nnbucket would also be a good idea.

Brian.


