Gnus development mailing list
* 0.3 and async pre-fetch
@ 1996-08-01 23:41 Sudish Joseph
  1996-08-02  1:31 ` 0.3 and async pre-fetch (and YA bug report) Raja R Harinath
  1996-08-02 17:35 ` 0.3 and async pre-fetch Lars Magne Ingebrigtsen
  0 siblings, 2 replies; 11+ messages in thread
From: Sudish Joseph @ 1996-08-01 23:41 UTC (permalink / raw)


I'm curious, why isn't anyone else reporting this?  Is it that the
bug is only visible over a slow connect?  Has anyone else seen this?

0.3 still has the problem that I sent in a patch for earlier.  Here's
what I see.

I've snipped the beginning of this article (the article I was
actually reading); everything below the snip is raw NNTP output from
the prefetch that ended up in the article buffer.

--------------------------------------------------
Eh, the UNIX biggots won't flame you for porting a _client_.  If you
really want to piss them off, though, port the server next.  ;-)

Regards,
Andy

-- 
_____________________________________________________________________________
Andy Gough                      |      "Knowledge is power."
Internet: agough@primenet.com   |                -- Francis Bacon
ICBM: 33^18'29" N  111^48'39" W | PGP Key: finger agough@primenet.com
-----------------------------------------------------------------------------

.
220 56134 <bluddoDvG08y.5wu@netcom.com> article
Newsgroups: rec.games.netrek

[ snipped the rest of the nntp results from 6 articles ]

netrek ftp site: ftp.best.com /pub/doosh/netrek
netrek home page: http://www.isn.net/~farnorth/netrek.html
.
200 news.mindspring.com InterNetNews NNRP server INN 1.4 22-Dec-93 ready (posting ok).
--------------------------------------------------

-Sudish



* Re: 0.3 and async pre-fetch (and YA bug report)
  1996-08-01 23:41 0.3 and async pre-fetch Sudish Joseph
@ 1996-08-02  1:31 ` Raja R Harinath
  1996-08-02 17:35 ` 0.3 and async pre-fetch Lars Magne Ingebrigtsen
  1 sibling, 0 replies; 11+ messages in thread
From: Raja R Harinath @ 1996-08-02  1:31 UTC (permalink / raw)


Sudish Joseph <sudish@mindspring.com> writes:
> I'm curious, why isn't anyone else reporting this?  Is it that the
> bug is only visible over a slow connect?  Has anyone else seen this?

[snipped description/example.  Parts of subsequent articles are being
 shown in current article buffer]

Yes, I've seen it, for one.  It doesn't look like a slow-connection
problem, though a slow connection may make it more apparent.  I saw it
on a large group (`soc.culture.indian', in fact: ~200 new articles,
and I was at about the 75th, after generous use of `n' and `T o T k').

BTW, I got the following backtrace when I hit `^' on an article whose
parent wasn't in the summary buffer.

Signaling: (error "format specifier doesn't match argument type")
  format("%s-%d" "gnu.misc.discuss" "<91c2r1ur.fsf@mihalis.demon.co.uk>")
  (intern (format "%s-%d" group article))
  (assq (intern (format "%s-%d" group article)) gnus-async-article-alist)
  gnus-async-prefetched-article-entry("gnu.misc.discuss" "<91c2r1ur.fsf@mihalis.demon.co.uk>")
  (let ((entry ...)) (when entry (save-excursion ... ... ... ... ... ... ... t)))
  gnus-async-request-fetched-article("gnu.misc.discuss" "<91c2r1ur.fsf@mihalis.demon.co.uk>" #<buffer *Article*>)
  gnus-request-article-this-buffer(-1 "gnu.misc.discuss")
  gnus-article-prepare(-1 nil)
  gnus-summary-display-article(-1 nil)
  gnus-summary-goto-article(-1 nil [-1 "Re: BOYCOTT commercial software! (was: NOTSCAPE!)" "" "" "<91c2r1ur.fsf@mihalis.demon.co.uk>" "<4sm43v$jt8@solutions.solon.com>" 0 0 ""])
  gnus-summary-refer-article("<91c2r1ur.fsf@mihalis.demon.co.uk>")
  gnus-summary-refer-parent-article(1)
  call-interactively(gnus-summary-refer-parent-article)
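
The immediate cause is visible in the backtrace: the key for
gnus-async-article-alist is built with a `%d' specifier, but `^'
refers to the parent by Message-ID, so ARTICLE is a string rather than
an article number.  A minimal illustration (the guard at the end is
just one possible shape for a fix, not anything that's in Gnus):

(format "%s-%d" "gnu.misc.discuss" 56134)
;; => "gnu.misc.discuss-56134"
(format "%s-%d" "gnu.misc.discuss" "<91c2r1ur.fsf@mihalis.demon.co.uk>")
;; => error: "format specifier doesn't match argument type"

;; Hypothetical guard: only build a prefetch key when ARTICLE really is
;; a number; Message-ID lookups then simply miss the prefetch cache.
(defun my-async-article-key (group article)
  (and (numberp article)
       (intern (format "%s-%d" group article))))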

- Hari

-- 
Raja R Harinath ------------------------------ harinath@cs.umn.edu
"When all else fails, read the instructions."      -- Cahn's Axiom
"Our policy is, when in doubt, do the right thing."   -- Roy L Ash



* Re: 0.3 and async pre-fetch
  1996-08-01 23:41 0.3 and async pre-fetch Sudish Joseph
  1996-08-02  1:31 ` 0.3 and async pre-fetch (and YA bug report) Raja R Harinath
@ 1996-08-02 17:35 ` Lars Magne Ingebrigtsen
  1996-08-03  0:35   ` Sudish Joseph
  1 sibling, 1 reply; 11+ messages in thread
From: Lars Magne Ingebrigtsen @ 1996-08-02 17:35 UTC (permalink / raw)


Sudish Joseph <sudish@mindspring.com> writes:

> I'm curious, why isn't anyone else reporting this?  Is it that the
> bug is only visible over a slow connect?  Has anyone else seen this?

There was a missing `(goto-char (point-max))' somewhere.  The prefetch
seems to work ok in 0.4.

-- 
(domestic pets only, the antidote for overdose, milk.)
  larsi@ifi.uio.no * Lars Ingebrigtsen



* Re: 0.3 and async pre-fetch
  1996-08-02 17:35 ` 0.3 and async pre-fetch Lars Magne Ingebrigtsen
@ 1996-08-03  0:35   ` Sudish Joseph
  1996-08-04 22:09     ` [ patch ] async stuff fix(?) (was Re: 0.3 and async pre-fetch) Sudish Joseph
  1996-08-05 17:00     ` 0.3 and async pre-fetch Lars Magne Ingebrigtsen
  0 siblings, 2 replies; 11+ messages in thread
From: Sudish Joseph @ 1996-08-03  0:35 UTC (permalink / raw)


In article <w8s91bx3czp.fsf@hrym.ifi.uio.no>,
Lars Magne Ingebrigtsen <larsi@ifi.uio.no> writes:
> Sudish Joseph <sudish@mindspring.com> writes:
>> I'm curious, why isn't anyone else reporting this?  Is it that the
>> bug is only visible over a slow connect?  Has anyone else seen this?

> There was a missing `(goto-char (point-max))' somewhere.  The prefetch
> seems to work ok in 0.4.

Nope, I'm still seeing this in 0.4.  The problem remains the one my
patch tried to address (not very well; the real problem lies in
nntp-process-filter, and fixing the generated callback is only a
workaround).

I believe I've finally understood why this is happening: it has
nothing to do with output arriving too quickly, as I first suggested
(rather stupidly, considering my PPP link :-).

Here's the essence: 
nntp-process-filter should *never* use re-search-backward to identify
the end of a command's output.

And here's my usual verbiage:
Emacs makes no guarantees about how output is split across the strings
passed to a process filter.  By the time the filter runs, the stack
may have handed back a chunk containing the results of two separate
commands -- delivery is driven by TCP timers and buffer sizes.

The reason I see the problem more often than any of you is that my
link from home is slower -- there's a large amount of data waiting in
TCP buffers on the NNTP server.  My link runs at full capacity (a
whopping 31.2Kbps), so the chunks passed to nntp-process-filter will
quite often contain the results of multiple commands.

Conversely, those sitting on 10BaseT or better will rarely send out
commands faster than the network can respond -- the stack's timeouts
will hand those buffers back before the next command goes out, or
close enough to it to make no difference.

Basically, the process filter should remember, across invocations,
where the last output passed back to gnus-async ended, and use
re-search-forward from that marker.  (The current code is neat in that
it avoids saving that state, but I don't think that can work.)  Sort
of like what I was doing in my patch, only in the wrong place.

I'm quite certain this will fix it.  I'll code this tomorrow, if I get
the time.
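
Here's roughly the shape I have in mind -- a sketch only, with made-up
names rather than the real nntp.el ones, and simplified to look only
for the multi-line terminator (a line containing just "."):

(defvar my-nntp-response-start nil
  "Marker: start of the first response not yet handed to the callback.
Would be made buffer-local in the process buffer.")

(defun my-nntp-filter (proc string)
  (save-excursion
    (set-buffer (process-buffer proc))
    (unless my-nntp-response-start
      (setq my-nntp-response-start (copy-marker (point-min))))
    ;; Append the new output at the process mark, as before.
    (goto-char (process-mark proc))
    (insert string)
    (set-marker (process-mark proc) (point))
    ;; Scan *forward* from the start of unhandled data; never search
    ;; backward from the end, since the buffer may already hold the
    ;; beginning of the next response.
    (goto-char my-nntp-response-start)
    (while (re-search-forward "^\\.\r?\n" nil t)
      ;; Hypothetical handler; gets the exact bounds of one response.
      (my-nntp-handle-response my-nntp-response-start (point))
      (set-marker my-nntp-response-start (point)))))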

-Sudish



* [ patch ] async stuff fix(?) (was Re: 0.3 and async pre-fetch)
  1996-08-03  0:35   ` Sudish Joseph
@ 1996-08-04 22:09     ` Sudish Joseph
  1996-08-05  0:39       ` Sudish Joseph
                         ` (2 more replies)
  1996-08-05 17:00     ` 0.3 and async pre-fetch Lars Magne Ingebrigtsen
  1 sibling, 3 replies; 11+ messages in thread
From: Sudish Joseph @ 1996-08-04 22:09 UTC (permalink / raw)


In article <m2lofxi9tl.fsf@atreides.erehwon.org>,
Sudish Joseph <sudish@mindspring.com> writes:
> nntp-process-filter should *never* use re-search-backward to identify
> the end of a command's output.

Here's a patch that uses a forward search over the string received by
the filter.  My limited testing (3 groups of 50-100 articles) shows
things working OK, but I think this could do with a rewrite; more on
that below.

Fixing this exposed another bug: gnus-async-prefetch-article could
get confused about which part of the async buffer was the actual
article.  This was really painful, because the failure wasn't stable
(one munged article meant that a lot of the following articles would
be munged too).  After reading about 100 articles of a 1,500-article
news.groups summary, it completely lost track of which article was
which.

The problem here was that two different functions were trying to
decide "this is the article" independently of each other -- the filter
and gnus-async-prefetch-article.  I've rewritten the callback so that
the filter passes it the required markers, and it no longer has to
assume anything about which part of the async buffer is the article.
No problems so far...and even if one shows up, it should be a stable
failure, since subsequent articles won't get munged.

The rewrite I mentioned: 

1) A cleaner interface between gnus-async and nntp would have
gnus-async sending requests to nntp and nntp passing back each
response as a string; i.e., each would manipulate only its own buffers
and markers, with *no* exceptions...the current interface has them
doing each other's laundry.  What I've done is a weak compromise.

The downside to using strings is that we'll get -large- strings that
hog memory until garbage collection.  Is there any way to truncate the
actual string to get around this?  (It has to be there; I'm too lazy
to dig right now.)  Even better, is there any way to totally
obliterate the in-memory string, i.e., gc a single string?  I don't
think the number of strings generated will increase gc costs; if it
does, we might keep a constant pool of strings that we grow and
truncate as needed.  Hmm, do buffers cost more or less than strings?
We might pass buffers as the filter's result if that's better.
Buffers aren't gc'ed, I think, but we can maintain those manually
quite easily.

2) Eliminate the need to generate callbacks dynamically in
gnus-async-prefetch-article by having that function keep a queue of
sent commands, to be used by a static callback (a defun'ed function,
not a backquote-generated one) to match group/article combinations to
received responses (assuming responses arrive in order, which seems
reasonable).  Keep that queue buffer-local to allow multiple summary
prefetches to work at once.  (Rough sketch below.)

This would a) make debugging this stuff stop being a major pita,
b) make it easier to understand, c) reduce gc (there's a lambda
discarded for every article prefetched currently), and d) make it
easier to guarantee that responses are tagged with the correct
group/article id.

a + b are good enough for me, though c + d are the reasons that might
influence you. :)

There'll definitely be more wrinkles in this stuff; rewriting it to
be simpler to debug and understand is worth doing right now.

The above doesn't involve much coding, though my usual bombasticity
might make it seem so.

-Sudish

PS:  If anyone would care to explain: can buffers be gc'ed?  are
lambda's gc'ed (if not, gnus currently leaks)?  can an arbitrary
object be completely destroyed/gc'ed all by itself?

Sun Aug  4 17:31:53 1996  Sudish Joseph  <sudish@mindspring.com>

	* nntp.el (nntp-tmp-first): Unused variable, deleted.
	(nntp-process-filter): Determine end-of-response using a forward
	search over STRING.  Don't assume that this is the only response in
	the process buffer.  Generate markers for the beginning and end
	of the response inserted into the async prefetch buffer.

	* gnus-async.el (gnus-async-prefetch-article): The generated 
	callback now assumes that the process filter will pass it the
	required markers as (optional) arguments.


*** rgnus-0.05/lisp/gnus-async.el	Fri Aug  2 14:04:55 1996
--- lisp/gnus-async.el	Sun Aug  4 17:14:42 1996
***************
*** 94,112 ****
        ;; We want to fetch some more articles.
        (save-excursion
  	(set-buffer summary)
! 	(let ((next (caadr (gnus-data-find-list article)))
! 	      mark)
  	  (nnheader-set-temp-buffer gnus-async-prefetch-article-buffer t)
- 	  (goto-char (point-max))
- 	  (setq mark (point-marker))
  	  (let ((nnheader-callback-function
! 		 `(lambda (arg)
  		    (save-excursion
  		      (nnheader-set-temp-buffer
  		       gnus-async-prefetch-article-buffer t)
  		      (push (list ',(intern (format "%s-%d" group article))
! 				  ,mark (set-marker (make-marker) (point-max))
! 				  ,group ,article)
  			    gnus-async-article-alist)
  		      (when (gnus-buffer-live-p ,summary)
  			,(when next
--- 94,108 ----
        ;; We want to fetch some more articles.
        (save-excursion
  	(set-buffer summary)
! 	(let ((next (caadr (gnus-data-find-list article))))
  	  (nnheader-set-temp-buffer gnus-async-prefetch-article-buffer t)
  	  (let ((nnheader-callback-function
! 		 `(lambda (arg &optional begin-marker end-marker)
  		    (save-excursion
  		      (nnheader-set-temp-buffer
  		       gnus-async-prefetch-article-buffer t)
  		      (push (list ',(intern (format "%s-%d" group article))
! 				  begin-marker end-marker ,group ,article)
  			    gnus-async-article-alist)
  		      (when (gnus-buffer-live-p ,summary)
  			,(when next
*** rgnus-0.05/lisp/nntp.el	Sat Aug  3 18:50:09 1996
--- lisp/nntp.el	Sun Aug  4 17:14:42 1996
***************
*** 476,482 ****
  	    (eval (cadr entry))
  	  (funcall (cadr entry)))))))
  
- (defvar nntp-tmp-first)
  (defvar nntp-tmp-wait-for)
  (defvar nntp-tmp-callback)
  (defvar nntp-tmp-buffer)
--- 476,481 ----
***************
*** 492,500 ****
    "Process filter used for waiting a calling back."
    (let ((old-buffer (current-buffer)))
      (unwind-protect
! 	(let (point)
  	  (set-buffer (process-buffer proc))
! 	  ;; Insert the text, moving the process-marker.
  	  (setq point (goto-char (process-mark proc)))
  	  (insert string)
  	  (set-marker (process-mark proc) (point))
--- 491,499 ----
    "Process filter used for waiting a calling back."
    (let ((old-buffer (current-buffer)))
      (unwind-protect
! 	(let (point eoresponse b e)
  	  (set-buffer (process-buffer proc))
! 	  ;; Insert string, moving the process-marker.
  	  (setq point (goto-char (process-mark proc)))
  	  (insert string)
  	  (set-marker (process-mark proc) (point))
***************
*** 504,520 ****
  		(nntp-snarf-error-message)
  		(set-process-filter proc nil)
  		(funcall nntp-tmp-callback nil))
! 	    (setq nntp-tmp-first nil)
! 	    (if (re-search-backward nntp-tmp-wait-for nil t)
  		(progn
  		  (if (buffer-name (get-buffer nntp-tmp-buffer))
  		      (save-excursion
  			(set-buffer (get-buffer nntp-tmp-buffer))
  			(goto-char (point-max))
! 			(insert-buffer-substring (process-buffer proc))))
! 		  (set-process-filter proc nil)
! 		  (erase-buffer)
! 		  (funcall nntp-tmp-callback t)))))
        (set-buffer old-buffer))))
  
  (defun nntp-retrieve-data (command address port buffer
--- 503,541 ----
  		(nntp-snarf-error-message)
  		(set-process-filter proc nil)
  		(funcall nntp-tmp-callback nil))
! 	    ;; Check if the inserted string contained the end of a response.
! 	    (goto-char point)
! 	    (if (re-search-forward nntp-tmp-wait-for nil t)
! 		;; we have a complete response, copy it into the specified
! 		;; buffer
  		(progn
+ 		  (setq eoresponse (point))
  		  (if (buffer-name (get-buffer nntp-tmp-buffer))
  		      (save-excursion
  			(set-buffer (get-buffer nntp-tmp-buffer))
+ 			;; we insert at the end; this is safe coz we're
+ 			;; the only one inserting into this buffer
  			(goto-char (point-max))
! 			;; We have what we know to be a full
! 			;; response. Since we're the ones inserting
! 			;; it, we know exactly where it began and
! 			;; ended.  We pass this information to the
! 			;; callback, eliminating the need for
! 			;; the callback to guess this for itself and
! 			;; eliminating unnecessary complexity there.
! 			;; This also has the advantage of making prefetch
! 			;; self-stabilizing in the face of errors -- i.e.,
! 			;; if an error occurs, it'll only affect one article
! 			;; and subsequent articles won't be munged. -- sj
! 			(setq b (point-marker))
! 			(insert-buffer-substring (process-buffer proc)
! 						 1 eoresponse)
! 			(setq e (point-marker))))
! 		  ;; delete the processed response
! 		  (delete-region 1 eoresponse)
! 		  ;; notify the async system that it has a full response to
! 		  ;; process, passing it the beginning and end of that resp.
! 		  (funcall nntp-tmp-callback t b e)))))
        (set-buffer old-buffer))))
  
  (defun nntp-retrieve-data (command address port buffer



* Re: [ patch ] async stuff fix(?) (was Re: 0.3 and async pre-fetch)
  1996-08-04 22:09     ` [ patch ] async stuff fix(?) (was Re: 0.3 and async pre-fetch) Sudish Joseph
@ 1996-08-05  0:39       ` Sudish Joseph
  1996-08-05 17:11       ` Lars Magne Ingebrigtsen
  1996-08-05 18:20       ` Ken Raeburn
  2 siblings, 0 replies; 11+ messages in thread
From: Sudish Joseph @ 1996-08-05  0:39 UTC (permalink / raw)


More fixes.  These need to be applied over the last patch.

The first one is for a weird condition whose cause I'm not sure of.
The only explanation I can come up with is that XEmacs is calling the
filter again while the previous invocation is still active.  I
couldn't find anything in the Lispref about this.  Does anyone know
how to protect a few calls against it?  We don't need to stop it
altogether; the critical section in the filter is very small.

The first patch below should make this safe -- unless the reentrant
call occurs before the `insert' is executed.  Making it safe in the
face of that takes some more thinking.  (Don't define a filter at all
and use an after-change hook, like Lars suggested?  Or keep a variable
that tracks how many instances of the filter are active, and
accumulate the output from the second and later calls in a string
that the first instance handles before it exits?  Rough sketch of the
latter below.)
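
Something like this, with made-up names -- and with the caveat that I
haven't verified that re-entry can actually happen:

(defvar my-nntp-filter-depth 0
  "Number of filter invocations currently active.")
(defvar my-nntp-pending-output nil
  "Output that arrived while an earlier invocation was still running.")

(defun my-nntp-guarded-filter (proc string)
  (if (> my-nntp-filter-depth 0)
      ;; Re-entered: just stash the output for the outer invocation.
      (setq my-nntp-pending-output
            (concat my-nntp-pending-output string))
    (let ((my-nntp-filter-depth (1+ my-nntp-filter-depth)))
      (my-nntp-handle-output proc string)     ; hypothetical worker
      ;; Drain whatever arrived while we were busy.
      (while my-nntp-pending-output
        (let ((more my-nntp-pending-output))
          (setq my-nntp-pending-output nil)
          (my-nntp-handle-output proc more))))))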

The second one half-handles the case where the callback's ARG is nil
(indicating that the buffer doesn't contain a valid article).  I'm not
sure what to do here.  Note that the error-code checking in the filter
isn't very robust in the face of filter race conditions either.

-Sudish

Sun Aug  4 20:33:36 1996  Sudish Joseph  <sudish@mindspring.com>

	* gnus-async.el (gnus-async-prefetch-article): Check ARG before
 	assuming that the buffer contains an article.

	* nntp.el (nntp-process-filter): Insert all o/p at eob.


--- nntp.el~	Sun Aug  4 20:25:41 1996
+++ nntp.el	Sun Aug  4 20:25:41 1996
@@ -494,9 +494,9 @@
 	(let (point eoresponse b e)
 	  (set-buffer (process-buffer proc))
 	  ;; Insert string, moving the process-marker.
-	  (setq point (goto-char (process-mark proc)))
+	  (setq point (goto-char (set-marker (process-mark proc) (point-max))))
 	  (insert string)
-	  (set-marker (process-mark proc) (point))
+	  (set-marker (process-mark proc) (point-max))
 	  (if (and (= point (point-min))
 		   (string-match "^45" string))
 	      (progn

--- gnus-async.el~	Sun Aug  4 20:35:40 1996
+++ gnus-async.el	Sun Aug  4 20:35:40 1996
@@ -101,9 +101,10 @@
 		    (save-excursion
 		      (nnheader-set-temp-buffer
 		       gnus-async-prefetch-article-buffer t)
-		      (push (list ',(intern (format "%s-%d" group article))
-				  begin-marker end-marker ,group ,article)
-			    gnus-async-article-alist)
+		      (when (and arg begin-marker end-marker)
+			(push (list ',(intern (format "%s-%d" group article))
+				    begin-marker end-marker ,group ,article)
+			      gnus-async-article-alist))
 		      (when (gnus-buffer-live-p ,summary)
 			,(when next
 			   `(gnus-async-prefetch-article 



* Re: 0.3 and async pre-fetch
  1996-08-03  0:35   ` Sudish Joseph
  1996-08-04 22:09     ` [ patch ] async stuff fix(?) (was Re: 0.3 and async pre-fetch) Sudish Joseph
@ 1996-08-05 17:00     ` Lars Magne Ingebrigtsen
  1 sibling, 0 replies; 11+ messages in thread
From: Lars Magne Ingebrigtsen @ 1996-08-05 17:00 UTC (permalink / raw)



Sudish Joseph <sudish@mindspring.com> writes:

> And here's my usual verbiage:
> Emacs cannot make any guarantees about the string passed to any
> process filter.  When the filter runs, the stack might have passed
> back a buffer containing the results of two separate commands -- the
> stack is driven by TCP timers and buffers.

But -- Gnus sends out the request for the next article only *after*
the previous fetch has completed, so there's no way (or should be no
way) for data from more than one article to be present in the buffer
at once...

-- 
  "Yes.  The journey through the human heart 
     would have to wait until some other time."



* Re: [ patch ] async stuff fix(?) (was Re: 0.3 and async pre-fetch)
  1996-08-04 22:09     ` [ patch ] async stuff fix(?) (was Re: 0.3 and async pre-fetch) Sudish Joseph
  1996-08-05  0:39       ` Sudish Joseph
@ 1996-08-05 17:11       ` Lars Magne Ingebrigtsen
  1996-08-06  6:08         ` Sudish Joseph
  1996-08-05 18:20       ` Ken Raeburn
  2 siblings, 1 reply; 11+ messages in thread
From: Lars Magne Ingebrigtsen @ 1996-08-05 17:11 UTC (permalink / raw)



Sudish Joseph <sudish@mindspring.com> writes:

> The problem here was that there were two different functions trying to
> define "this is the article" independent of the other -- the filter
> and gnus-async-prefetch-article. 

No, the filter has no idea that it's fetching an article at all.  The
filter (and only the filter) fuzzes around in its process buffer
(which is called " *server <server> <port> *<associated-buffer>**"),
and when an article (or whatever) has been received, it copies its
contents over to <associated-buffer> (which is the asynch article
buffer in this case) and calls the callback function.

> The downside to using strings is that we'll get -large- strings that
> will hog memory until garbage collection.  Is there any way to
> truncate the actual string to get around this? 

Nope.

> Hmm, do buffers cost more/less than strings? 

"Less, but the same."  The memory occupied by buffers is returned to
the system when they are killed.

> We might pass buffers as our result from the filter if that's
> better.  Buffers aren't gc'ed I think, but we can manually maintain
> those quite easily.

Well, killed buffers take no space.

> 2) Eliminate the need to generate callbacks dynamically in
> gnus-async-prefetch-article by having that function keep a queue of
> sent commands that'll be used by a static callback (a defun'ed,
> non-backquote generated, function) to match group/article combinations
> to received responses (assuming that responses are received in the
> correct order...seems reasonable).  Keep that queue buffer-local to
> allow for multiple summary prefetches to work at once.
> 
> This would a) make debugging this stuff stop being a major pita, 
> 
> b) make it easier to understand, c) reduce gc (there's a lambda
> discarded for every article prefetched currently), d) make it easier
> to guarantee that responses are tagged with the correct group-article
> id.

It'd make it harder to guarantee anything, because things would then
rely on variables that might have changed in the meantime.  Generating
a fresh function with the proper values already there is much nicer, I
think.

> PS:  If anyone would care to explain: can buffers be gc'ed?

Yes.

> are lambda's gc'ed (if not, gnus currently leaks)?

A `(whatever) form is just a list, so, yes.

> can an arbitrary object be completely destroyed/gc'ed all by itself?

No.

-- 
  "Yes.  The journey through the human heart 
     would have to wait until some other time."



* Re: [ patch ] async stuff fix(?) (was Re: 0.3 and async pre-fetch)
  1996-08-04 22:09     ` [ patch ] async stuff fix(?) (was Re: 0.3 and async pre-fetch) Sudish Joseph
  1996-08-05  0:39       ` Sudish Joseph
  1996-08-05 17:11       ` Lars Magne Ingebrigtsen
@ 1996-08-05 18:20       ` Ken Raeburn
  2 siblings, 0 replies; 11+ messages in thread
From: Ken Raeburn @ 1996-08-05 18:20 UTC (permalink / raw)
  Cc: ding


Sudish Joseph <sudish@mindspring.com> writes:

> The downside to using strings is that we'll get -large- strings that
> will hog memory until garbage collection.

I think that'll be bad for performance.  Remember, Emacs does a gc
roughly after every N bytes allocated.  So if you get a large string
per article, say, you'll be doing gcs very often as you go through one
newsgroup.  (Just building a summary buffer for 1000 articles causes
me to run gc several times.  That's part of the reason for the
simplified-subject-caching speedups I sent mail about a while back.
At some point, interning a string in a symbol table once is cheaper
than regenerating multiple copies of the same string over and over.)
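
For reference, the "N bytes" is `gc-cons-threshold'.  A rough,
illustrative snippet -- it assumes the `gcs-done' counter, which
current Emacsen have:

(let ((before gcs-done))
  (dotimes (_ 1000)
    (make-string 4000 ?x))            ; ~4MB of throwaway strings
  (- gcs-done before))
;; => how many collections that ~4MB of garbage cost this session;
;;    raising `gc-cons-threshold' makes the number drop.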

Also, the cost of doing a gc is, I think, dependent on the currently
allocated storage and the max-so-far allocated storage.  (Has anyone
looked at it more closely?  I think only buffers are allocated from
storage that can get freed, but I could be wrong.)  So allocating big
strings that can't be gc'ed right away means the price of future gc
passes probably goes up.

Can you return articles or other big stuff as buffers or buffer
portions?  (Carefully arranging to always have them cleaned up, of
course.)

>	  Is there any way to
> truncate the actual string to get around this?  (It has to be there,
> I'm too lazy to dig right now.)  Even better, any way to totally
> obliterate the in-memory string; i.e., gc a single string?

None I know of.

>	  I don't
> think the number of actual strings generated will increase gc costs.
> If so, we might keep a constant number of strings that we grow and
> truncate as needed.  Hmm, do buffers cost more/less than strings?  We
> might pass buffers as our result from the filter if that's better.
> Buffers aren't gc'ed I think, but we can manually maintain those quite
> easily.

A buffer should be more expensive than a string, but passing around
lots of markers into a single buffer may be cheaper than manipulating
lots of substrings.  The buffer contents don't go through the alloc/gc
interface that strings and other objects use.

> PS:  If anyone would care to explain: can buffers be gc'ed?  are
> lambda's gc'ed (if not, gnus currently leaks)?  can an arbitrary
> object be completely destroyed/gc'ed all by itself?

I think you have to explicitly kill buffers first; yes; no.



* Re: [ patch ] async stuff fix(?) (was Re: 0.3 and async pre-fetch)
  1996-08-05 17:11       ` Lars Magne Ingebrigtsen
@ 1996-08-06  6:08         ` Sudish Joseph
  1996-08-06 20:51           ` Lars Magne Ingebrigtsen
  0 siblings, 1 reply; 11+ messages in thread
From: Sudish Joseph @ 1996-08-06  6:08 UTC (permalink / raw)


In article <x6wwzdwyau.fsf@eyesore.no>,
Lars Magne Ingebrigtsen <larsi@ifi.uio.no> writes:
>> discarded for every article prefetched currently), d) make it easier
>> to guarantee that responses are tagged with the correct group-article
>> id.

> It'd make it harder to guarantee anything, because things would then
> rely on variables that might have changed in the meantime.  

There's only one shared variable: the queue tracking group/article
pairs.  This variable has nice properties that make it easy to share.

a) It's a strict producer-consumer buffer with the very useful
property of being unbounded (we look at a finite subset of the buffer
at any given time, but that subset moves monotonically along the
unbounded buffer).  This means that the thornier of the two issues
requiring a lock in P-C problems is eliminated: we never have to check
for overflow.  This leaves underflow.

b) The producer (gnus-async-prefetch-article) always stays a fixed
number of items ahead of the consumer (*).  More importantly, the
producer stops producing at a point when gnus-use-article-prefetch
items are still left in the buffer.  So, underflow is trivially
testable in the consumer without needing a lock.

(*) Well, it's guaranteed to stay at least one full item ahead of the
consumer until it stops producing for good.  (I think... :-)  This is
because the producer kicks off each consumer.

Note that the current async prefetch buffer stuff makes the same
assumptions as above and works for the same reasons.
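
Concretely, the consumer-side test is nothing more than "is the queue
non-empty?" -- reusing the made-up names from my earlier sketch:

(defun my-async-consume-response (arg &optional beg end)
  (if (null my-async-request-queue)
      ;; Underflow: a response with no recorded request.  Given (b)
      ;; above this shouldn't happen, but the check is free.
      (message "gnus-async: dropping response with no queued request")
    (my-async-static-callback arg beg end)))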

I knew all those OS courses would be useful someday. :-)

> Generating a fresh function with the proper values already there is
> much nicer, I think.

Well, it doesn't have any advantages over a static callback as far as
concurrency management is concerned: the markers in the buffer have
the same problems as any shared variables.  (One way of eliminating
this was what I wrote about the interface: have the filter return a
string, or generate a new buffer for each response, and let the
producer be the only one using the async buffer.)

Either way is good, of course.  I prefer a static callback because
it's conceptually simpler.  Your generated callbacks are kewler,
though.

-Sudish



* Re: [ patch ] async stuff fix(?) (was Re: 0.3 and async pre-fetch)
  1996-08-06  6:08         ` Sudish Joseph
@ 1996-08-06 20:51           ` Lars Magne Ingebrigtsen
  0 siblings, 0 replies; 11+ messages in thread
From: Lars Magne Ingebrigtsen @ 1996-08-06 20:51 UTC (permalink / raw)



Sudish Joseph <sudish@mindspring.com> writes:

> There's only one shared variable: the queue tracking group/article
> pairs.  This variable has nice properties that make it easy to share.
> 
> a) It's a strict producer-consumer buffer with the very useful
> property of being unbounded (we look at a finite subset of the buffer
> at any given time, but that subset moves monotonically along the
> unbounded buffer).  This means that the thornier of the two issues
> requiring a lock in P-C problems is eliminated: we never have to check
> for overflow.  This leaves underflow.
> 
> b) The producer (gnus-async-prefetch-article) always stays a fixed
> number of items ahead of the consumer (*).  More importantly, the
> producer stops producing at a point when gnus-use-article-prefetch
> items are still left in the buffer.  So, underflow is trivially
> testable in the consumer w/o needing a lock.

Well, but all this isn't really an issue -- we don't have a general
producer/consumer situation.  We instead have an, uhm, link of actions
-- fetch, wait, fetch, wait, etc.  Sort of.  So you seem to be adding
complexity to a situation I thought was rather simple...

-- 
  "Yes.  The journey through the human heart 
     would have to wait until some other time."



