9fans - fans of the OS Plan 9 from Bell Labs
From: Francisco J Ballesteros <nemo@lsub.org>
To: Fans of the OS Plan 9 from Bell Labs <9fans@cse.psu.edu>
Subject: Re: [9fans] clunk clunk
Date: Thu,  5 Jan 2006 16:26:41 +0100	[thread overview]
Message-ID: <8ccc8ba40601050726p42d08511mf72e6b299c51ef5d@mail.gmail.com> (raw)
In-Reply-To: <775b8d190601050400y4a59f1a9je6dac56d1726e83c@mail.gmail.com>

In Plan B (the version with the modified kernel) I had a
kernel process doing cclose for channels whose volumes were
gone. Perhaps that was the same thing. I also noticed that
the list of "closing chans" could grow quite long.

But isn't the problem really the failure to declare a
connection broken even when it is? [E.g., when its server
process is dead or otherwise broken.]

For example, while we were using IL, connections were
nicely collected and declared broken; after we switched to
TCP, however, some endpoints stayed around for a very long
time.

In short, wouldn't it be better to officially "break" an
already broken connection than to use a pool of closing
processes? [NB: I used just one process, not a pool, but I
think the problem and the workaround remain the same.]

Note that if a connection is broken, there is no need to
send clunks for chans going through it.
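
To make that concrete, here is a tiny sketch of what I mean;
the "broken" flag and all the names are hypothetical, not
the real kernel structures:

#include <u.h>
#include <libc.h>

typedef struct Conn Conn;
typedef struct Fid Fid;

struct Conn {
	int	broken;		/* set once, when the transport or server dies */
};

struct Fid {
	Conn	*conn;
	int	fid;
};

/* clunk a fid: if the connection is already known to be broken
 * there is nobody left to talk to, so just drop the local state */
void
clunkfid(Fid *f)
{
	if(f->conn->broken){
		free(f);	/* no Tclunk goes out on the wire */
		return;
	}
	/* ...otherwise send Tclunk, wait for Rclunk, then free(f)... */
}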

What do you think?
Thanks

On 1/5/06, Bruce Ellis <bruce.ellis@gmail.com> wrote:
> thanks.  very precise.
>
> On 1/5/06, Russ Cox <rsc@swtch.com> wrote:
> > > Could we all see that code as well?
> >
> > The code looks at c->dev in cclose to see if the
> > chan is from devmnt.  If so, cclose places the chan on
> > a queue rather than calling devtab[c->dev]->close()
> > and chanfree() directly.  A pool of worker processes
> > tend the queue, like in exportfs, calling close() and
> > chanfree() themselves.
> >
> > Russ
>
>
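
For what it is worth, the dispatch Russ describes above
might look roughly like this; Xchan, workq and xclose are
stand-ins of mine, not the actual patch, and a real version
would need a lock around the queue:

#include <u.h>
#include <libc.h>

typedef struct Xchan Xchan;
struct Xchan {
	int	ismnt;		/* stands in for "this chan came from devmnt" */
	int	fd;
	Xchan	*next;
};

Xchan	*workq;		/* tended by the pool of worker processes */

/* sketch of the dispatch: mount chans are queued for the workers
 * to close and free later; everything else is closed inline */
void
xclose(Xchan *c)
{
	if(c->ismnt){
		c->next = workq;
		workq = c;
		return;
	}
	close(c->fd);	/* the kernel calls the device close here... */
	free(c);	/* ...and then chanfree */
}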

Thread overview: 24+ messages
2006-01-03 19:28 Bruce Ellis
2006-01-03 19:38 ` Sascha Retzki
2006-01-03 19:46 ` Russ Cox
2006-01-03 20:07   ` Bruce Ellis
2006-01-03 20:24     ` Russ Cox
2006-01-03 21:33       ` Bruce Ellis
2006-01-03 21:47         ` jmk
2006-01-03 22:09           ` Bruce Ellis
2006-01-03 22:14             ` jmk
2006-01-03 22:16               ` Bruce Ellis
2006-01-03 22:36             ` Russ Cox
2006-01-03 22:45               ` Bruce Ellis
2006-01-04 12:15                 ` Russ Cox
2006-01-04 12:25                   ` Bruce Ellis
2006-01-04 15:36                     ` Ronald G Minnich
2006-01-04 15:41                       ` Bruce Ellis
2006-01-05  9:36                     ` Francisco J Ballesteros
2006-01-05  9:39                       ` Russ Cox
2006-01-05 12:00                         ` Bruce Ellis
2006-01-05 12:36                           ` Charles Forsyth
2006-01-05 15:26                           ` Francisco J Ballesteros [this message]
2006-01-06 21:34                         ` Dave Eckhardt
2006-01-06 21:57                           ` Bruce Ellis
2006-01-06 22:00                           ` Francisco J Ballesteros
