From: "Simon Josefsson" <jas@extundo.com>
Cc: ding@gnus.org
Subject: Re: two agent nits
Date: Mon, 1 Dec 2003 20:45:26 +0100 (CET)
Message-ID: <39991.217.208.175.28.1070307926.squirrel@yxa.extundo.com>
In-Reply-To: <uzneca1jd.fsf@xpediantsolutions.com>
> Simon Josefsson <jas@extundo.com> writes:
>
>> Kevin Greiner <kgreiner@xpediantsolutions.com> writes:
>>
>>> Simon Josefsson <jas@extundo.com> writes:
>>>
>>>> Kevin Greiner <kgreiner@xpediantsolutions.com> writes:
>>>>
>>>>> Simon Josefsson <jas@extundo.com> writes:
>>>>>
>>>>>> * I can't seem to get the agent to fetch read articles.
>>>>>> gnus-agent-consider-all-articles's value is t
>>>>>> The default agent category predicate is 'true'.
>>>>>> `J s' and `J u' just download unread or ticked articles.
>>>>>
>>>>> Have you tried setting gnus-agent-consider-all-articles to nil? That
>>>>> seems to work for me.
>>>>
>>>> I tried, but still the same. Pressing `J u' on groups just says it is
>>>> finished, but only the unread/ticked articles are in my local cache.
>>>>
>>>> Hm. The manual and the docstring for the variable don't seem to be
>>>> in sync. I thought the variable did what the docstring said, but the
>>>> manual just discusses missing headers. Which one is correct?
>>>
>>> The variable gnus-agent-consider-all-articles appears in
>>> gnus-agent-fetch-headers but NOT gnus-agent-fetch-articles. However,
>>> gnus-agent-fetch-group-1 calls gnus-agent-fetch-headers to get the
>>> list of new articles, so gnus-agent-consider-all-articles may, by
>>> modifying the return value of gnus-agent-fetch-headers, affect the
>>> list of articles being fetched by gnus-agent-fetch-articles.
>>
>> Thanks for the pointers, I think I isolated the problem, from g-a-f-a:
>>
>> (let* ((fetch-all (and gnus-agent-consider-all-articles
>>                        ;; Do not fetch all headers if the predicate
>>                        ;; implies that we only consider unread articles.
>>                        (not (gnus-predicate-implies-unread
>>                              (gnus-agent-find-parameter
>>                               group 'agent-predicate)))))
>>
>> Fetch-all evaluates to nil for me, causing the function to only return
>> a list of unread articles. Since g-a-c-a-a is t, the reason fetch-all
>> is nil is the second form.
>>
>> (gnus-predicate-implies-unread 'true)
>> => ignore
>>
>> (not (gnus-predicate-implies-unread 'true))
>> => nil
>>
>> Reading the docstring:
>>
>> (gnus-predicate-implies-unread PREDICATE)
>> Say whether PREDICATE implies unread articles only.
>> It is okay to miss some cases, but there must be no false positives.
>> That is, if this function returns true, then indeed the predicate must
>> return only unread articles.
>>
>> The 'ignore return value is not documented, and because it is non-nil
>> it is treated as true by the caller in this case.
>>
>> If I apply this change, everything works, but I can't tell if it is
>> the right thing.
>
> Thanks for isolating the problem. I'm also not certain that this is
> the right thing to do. It is certainly the correct answer if the
> function is gnus-agent-true, but what if the function is a compiled
> function returned by gnus-category-make-function?
Returning nil may be a false negative, in that case. But the docstring
says that's fine, just as long as there aren't any false positives. I
guess the only bad thing that happens is that the agent downloads all
headers?
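
To make the nil/non-nil point concrete, here is a tiny illustration
(plain Emacs Lisp, nothing Gnus-specific; the sketch is mine, not the
actual patch):

    ;; `not' only distinguishes nil from non-nil, so the 'ignore
    ;; ("don't know") answer behaves exactly like t in the caller,
    ;; and fetch-all collapses to nil:
    (not 'ignore)            ;=> nil
    ;; Insisting on an explicit t keeps the "don't know" case from
    ;; suppressing the fetch:
    (not (eq 'ignore t))     ;=> t

So one shape the fix could take is to compare the return value against
t, either in gnus-agent-fetch-headers or inside
gnus-predicate-implies-unread itself, but I can't tell which place is
more appropriate.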
>> Now, continuing, I found that most of my groups had .agentviews
>> looking like:
>>
>> ((731544 66) (731540 64 . 65) ...
>> 2
>>
>> I.e., they have large integers in them, which look like corruption.
>
> That isn't corruption. The large numbers are the timestamps when the
> articles were fetched. The original format looked like
> ((66 . 731544) (64 . 731540) (65 . 731540)) but it was slow to load
> for large groups as it repeated the large numbers endlessly. The
> gnus-agent-load-alist function now loads either format and converts
> internally to the alist form.
Ah, right, thanks. Weird that I had to remove the .agentview file to
trigger re-downloading of articles then. Too late to debug now. Gnus has
filled my disk with 300MB worth of data, and counting, so I think it
works now...
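
For the archives, this is how I read the compressed layout from your
example: each entry is (DAY ARTICLE) for a single article, or
(DAY LOW . HIGH) for a range of articles fetched on that day. A quick
throwaway function (the name is mine, and this is not the code
gnus-agent-load-alist actually uses) that expands it back into the old
per-article alist:

    ;; Expand ((DAY ARTICLE) ...) / ((DAY LOW . HIGH) ...) entries into
    ;; the old ((ARTICLE . DAY) ...) alist.  Only my reading of the
    ;; on-disk layout, not what gnus-agent-load-alist really does.
    (defun my-expand-agentview (compressed)
      (let (alist)
        (dolist (entry compressed (nreverse alist))
          (let* ((day (car entry))
                 (low (cadr entry))
                 ;; (DAY N) is a single article; (DAY LOW . HIGH) a range.
                 (high (or (cddr entry) low)))
            (while (<= low high)
              (push (cons low day) alist)
              (setq low (1+ low)))))))

    (my-expand-agentview '((731544 66) (731540 64 . 65)))
    ;; => ((66 . 731544) (64 . 731540) (65 . 731540))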