9fans - fans of the OS Plan 9 from Bell Labs
* [9fans] 9pserv leaking?
From: erik quanstrom @ 2005-10-01 19:24 UTC (permalink / raw)
  To: 9fans

; ps auxwww | grep 9pserv
quanstro  4432  0.0  0.4 137036  1140 tty1     Sl   Sep17   0:42 9pserve -u unix!/tmp/ns.quanstro.:0/plumb
quanstro 19386  0.2  0.2  84180   760 pts/32   Sl   14:09   0:00 9pserve -u unix!/tmp/ns.quanstro.:0/acme
quanstro 19453  0.0  0.1   1524   500 pts/4    S+   14:10   0:00 grep 9pserv


i looked into this the other day, but didn't get very far.
the only thing i did notice is that in sendq() the Qel
allocated at the top of the function is never freed on the
q->hungup error path:

int
sendq(Queue *q, void *p)
{
	Qel *e;

	e = emalloc(sizeof(Qel));
	qlock(&q->lk);
	if(q->hungup){
		werrstr("hungup queue");
		qunlock(&q->lk);
		return -1;
	}
[...]
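
for reference, a stand-alone sketch of sendq() with the missing free
added (hypothetical minimal Queue/Qel types and a local emalloc()
stand-in; not the actual 9pserve.c code):

#include <u.h>
#include <libc.h>

typedef struct Qel Qel;
typedef struct Queue Queue;

struct Qel
{
	Qel	*next;
	void	*p;
};

struct Queue
{
	QLock	lk;
	int	hungup;
	Qel	*head;
	Qel	*tail;
};

/* local stand-in for 9pserve's emalloc */
static void*
emalloc(ulong n)
{
	void *v;

	v = malloc(n);
	if(v == nil)
		sysfatal("out of memory");
	memset(v, 0, n);
	return v;
}

int
sendq(Queue *q, void *p)
{
	Qel *e;

	e = emalloc(sizeof(Qel));
	qlock(&q->lk);
	if(q->hungup){
		werrstr("hungup queue");
		qunlock(&q->lk);
		free(e);	/* the free that is missing on this path */
		return -1;
	}
	e->p = p;
	e->next = nil;
	if(q->tail)
		q->tail->next = e;
	else
		q->head = e;
	q->tail = e;
	qunlock(&q->lk);
	return 0;
}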

any ideas?

erik



* Re: [9fans] 9pserv leaking?
From: Russ Cox @ 2005-10-02 14:52 UTC (permalink / raw)
  To: erik quanstrom, Fans of the OS Plan 9 from Bell Labs

> ; ps auxwww | grep 9pserv
> quanstro  4432  0.0  0.4 137036  1140 tty1     Sl   Sep17   0:42 9pserve -u unix!/tmp/ns.quanstro.:0/plumb
> quanstro 19386  0.2  0.2  84180   760 pts/32   Sl   14:09   0:00 9pserve -u unix!/tmp/ns.quanstro.:0/acme
> quanstro 19453  0.0  0.1   1524   500 pts/4    S+   14:10   0:00 grep 9pserv

This isn't convincing evidence of a memory leak.
Maybe you've just had more simultaneous messages
outstanding than acme connections.  I, on the other
hand, run a lot of wins all the time, so my acme 9pserve
is bigger.

rsc  4410 09:32  0:00   732K Sleep  9pserve -u unix!/tmp/ns.x40/factotum
rsc  4506 09:32  0:00   772K Sleep  9pserve -u unix!/tmp/ns.x40/plumb
rsc  4560 09:32  0:15   904K Sleep  9pserve -u unix!/tmp/ns.x40/acme

I tried to reproduce this using plumber but couldn't find anything:

; NAMESPACE=/tmp/xxx
; mkdir $NAMESPACE
; @{rfork s; plumber >/dev/null >[2=1]}
; fn z {psu -a|grep 9pserve |grep /tmp/xxx}
; fn 1000 { for(i in `{seq 1000}) $* }
; z
rsc 20413  10:24  0:00   624K Sleep  9pserve -u unix!/tmp/xxx/plumb
; 1000 9p read plumb/rules >/dev/null
; z
rsc 20413  10:24  0:39   732K Sleep  9pserve -u unix!/tmp/xxx/plumb
; 1000 9p read plumb/rules >/dev/null
; z
rsc 20413  10:24  1:18   740K Sleep  9pserve -u unix!/tmp/xxx/plumb
; 1000 9p read plumb/rules >/dev/null
; z
rsc 20413  10:24  1:58   740K Sleep  9pserve -u unix!/tmp/xxx/plumb
; 1000 9p readfd plumb/rules >/dev/null
; z
rsc 20413  10:24  2:06    748K Sleep  9pserve -u unix!/tmp/xxx/plumb
; 1000 9p readfd plumb/rules >/dev/null
; z
rsc 20413  10:24  2:14    748K Sleep  9pserve -u unix!/tmp/xxx/plumb
; 1000 9p stat plumb/rules >/dev/null
; z
rsc 20413  10:24  2:52    748K Sleep  9pserve -u unix!/tmp/xxx/plumb
; 1000 9p ls plumb >/dev/null
; z
rsc 20413  10:24  3:35    748K Sleep  9pserve -u unix!/tmp/xxx/plumb
; 9p read plumb/rules >/tmp/a
; for(i in `{seq 1000}) 9p write plumb/rules </tmp/a
9p: fsopen rules: file already open
; z
rsc 20413  10:24  4:13    748K Sleep  9pserve -u unix!/tmp/xxx/plumb
; for(i in `{seq 1000}) 9p writefd plumb/rules </tmp/a
9p: write error: Bad file descriptor
9p: write error: Bad file descriptor
9p: write error: Bad file descriptor
; z
rsc 20413  10:24  4:42    748K Sleep  9pserve -u unix!/tmp/xxx/plumb
; for(i in `{seq 1000}) 9p writefd plumb/rules </tmp/a
; z
rsc 20413  10:24  5:09    748K Sleep  9pserve -u unix!/tmp/xxx/plumb
;

Those errors are odd, of course, but not a memory leak.

Russ

> i looked into this the other day, but didn't get very far.
> the only thing i did notice is that in sendq() the Qel
> allocated at the top of the function is never freed on the
> q->hungup error path:

That's not it.  q->hungup is always zero.  (Good thing, too,
since no one looks at the return value of sendq.)  I should
clean that up.
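
Reusing the toy Queue/Qel types from the sketch earlier in the
thread, one shape that cleanup might take (a sketch only, not a
patch from this thread) is to drop the never-true check and the
ignored return value:

void
sendq(Queue *q, void *p)
{
	Qel *e;

	/* emalloc aborts on failure, so there is nothing left to report */
	e = emalloc(sizeof(Qel));
	e->p = p;
	e->next = nil;
	qlock(&q->lk);
	if(q->tail)
		q->tail->next = e;
	else
		q->head = e;
	q->tail = e;
	qunlock(&q->lk);
}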



* Re: [9fans] 9pserv leaking?
From: erik quanstrom @ 2005-10-05 12:17 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs, Russ Cox

russ, 

sorry about the delay in answering, but i've been pretty busy and i thought
i'd suffered a brain fart: i /thought/ i had seen the size of 9pserve as
137M, not the 137k the email seems to indicate. (ps seems to be confused.)

however, i found the core today and it's nearly 190M. it grew 45M in
the two hours i was away from my computer before i sent gdb after it.

what should i be looking at in this core?

erik
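
For a core that has grown to ~190M, a few generic gdb starting
points (nothing 9pserve-specific; this assumes the binary on disk
still matches the core and was built with symbols, and that $PLAN9
is the usual plan9port root):

; gdb $PLAN9/bin/9pserve core
(gdb) info threads
(gdb) thread apply all bt

The per-thread stacks show where each proc was parked when the
core was written; if most of them are sitting in queue code such
as sendq(), counting the Qels still reachable from those queues
would be a natural next step.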

Russ Cox <rsc@swtch.com> writes:

| 
| > ; ps auxwww | grep 9pserv
| > quanstro  4432  0.0  0.4 137036  1140 tty1     Sl   Sep17   0:42 9pserve -u unix!/tmp/ns.quanstro.:0/plumb
| > quanstro 19386  0.2  0.2  84180   760 pts/32   Sl   14:09   0:00 9pserve -u unix!/tmp/ns.quanstro.:0/acme
| > quanstro 19453  0.0  0.1   1524   500 pts/4    S+   14:10   0:00 grep 9pserv
| 
| This isn't convincing evidence of a memory leak.
| Maybe you've just had more simultaneous messages
| outstanding than acme connections.  I, on the other
| hand, run a lot of wins all the time, so my acme 9pserve
| is bigger.
| 
| rsc  4410 09:32  0:00   732K Sleep  9pserve -u unix!/tmp/ns.x40/factotum
| rsc  4506 09:32  0:00   772K Sleep  9pserve -u unix!/tmp/ns.x40/plumb
| rsc  4560 09:32  0:15   904K Sleep  9pserve -u unix!/tmp/ns.x40/acme
| 
| I tried to reproduce this using plumber but couldn't find anything:
| 


