* [9fans] Plan9 panicking
From: Roman V. Shaposhnick @ 2002-08-20 17:39 UTC
To: 9fans
Trying to find out how devmnt.c handles timeouts, I attached to u9fs
with a debugger and stopped it in the middle of a read transaction. What
happened next, however, reminded me strongly of NFS. As you might imagine,
the Plan 9 client was stuck waiting for the reply, so I tried killing it
by pressing "Del" several times. That changed nothing, except that every
"Del" pressed generated one "flush" message. Of course, since u9fs was
still stopped, the client got no reply, so it kept generating a "flush"
each time I hit "Del".
Since there are only 65535 different "tags", I was curious what would happen
if I generated enough "flush" messages to use every possible tag. While going
to the neighboring window to write a small script, I accidentally hit
"Del" once again, and the Plan 9 kernel lost its grip on reality. It kept
printing "no procs" on the console, and even resuming u9fs was of no help.
The only thing left to do was reboot.
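(Tags, for anyone counting along, are 16-bit, with NOTAG (~0) reserved,
hence the 65535. A toy allocator -- mine, not devmnt.c's -- shows why every
unanswered flush pins a slot:)

	#include <u.h>
	#include <libc.h>

	enum { NOTAG = 0xFFFF };	/* reserved, leaving 65535 usable tags */

	static ulong inuse[65536/32];	/* bitmap: tags with unanswered messages */

	/* Toy allocator: a tag is busy from the moment a T-message goes
	 * out until its R-message comes back -- which, with the server
	 * stopped, is never. */
	int
	alloctag(void)
	{
		int t;

		for(t = 0; t < NOTAG; t++)
			if((inuse[t/32] & (1UL<<(t%32))) == 0){
				inuse[t/32] |= 1UL<<(t%32);
				return t;
			}
		return -1;	/* tag space exhausted */
	}

	void
	freetag(int t)	/* called when the matching R-message arrives */
	{
		inuse[t/32] &= ~(1UL<<(t%32));
	}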
After going through the source code I still can't understand why it was
printing "no procs". Any ideas? And as a more general question: can
anybody tell me what the intended design is for handling failing servers?
Thanks,
Roman.
* Re: [9fans] Plan9 panicking
From: Roman V. Shaposhnick @ 2002-08-25 23:02 UTC
To: 9fans
Though I hate to reply to my own emails, I'm doing so here because the issue
seems rather important. First of all, it was not too hard to identify
the source of the "no procs" messages: it is this part of rio (wind.c):
	switch(r){
	case 0x7F:		/* send interrupt */
		.....
		proccreate(interruptproc, notefd, 4096);
		return;
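The elided lines only scroll the window and set up notefd; interruptproc
itself, paraphrased from memory so take it as a sketch, is essentially:

	void
	interruptproc(void *v)
	{
		int *notefd;

		notefd = v;
		write(*notefd, "interrupt", 9);	/* post the note */
		free(notefd);
	}

So every Del costs one proc whose whole life is that single write. My guess
is that with the target wedged uninterruptibly in the mount rpc, these writes
never complete, the procs pile up one per keystroke, and eventually the
kernel has no process slots left to hand out -- which is exactly the
"no procs" complaint on the console.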
And since the process was wedged in the kernel, each Del was generating
yet another interruptproc. Now the question: why does rio need to spawn
a new interruptproc if it can see that we already have one (or plenty,
as in my case)? It seems to be something like a superfork, only without
writing any code.
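One obvious fix would be for rio to remember that an interrupt is already
in flight for the window, something like this -- note that intrpending is a
field I just invented, it is not in rio's Window struct:

	case 0x7F:		/* send interrupt */
		if(w->intrpending)	/* hypothetical Window field */
			return;		/* one pending interrupt is enough */
		w->intrpending = 1;
		.....
		proccreate(interruptproc, notefd, 4096);
		return;

Clearing the flag is the fiddly part: interruptproc would have to reset it
once its write completes, and with the target wedged it never does. But at
least you would be left with one stuck proc per window instead of one per
keystroke.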
Thanks,
Roman.