9fans - fans of the OS Plan 9 from Bell Labs
From: Geoff Collyer <geoff@collyer.net>
To: 9fans@cse.psu.edu
Subject: Re: [9fans] IDE FS failure
Date: Wed, 25 Sep 2002 15:40:52 -0700	[thread overview]
Message-ID: <d91b6a8f029466243ad47ab8479f4839@collyer.net> (raw)


It looks like you're right; my apologies for the bad advice.  I
remembered that all the startsb block numbers had been corrected (to
"2") in 4e but had forgotten about firstsb (and I'd initialised it in
my file servers to "2").  firstsb (if non-zero) is primarily an
optimisation, and its correct value depends on the specific jukebox
and changes over time, so in either /sys/src/fs/emelie/9pcfs.c or
/sys/src/fs/sony/9sonyfs.c,

	conf.firstsb = 13219302;

is presumably wrong (I'd bet on 9sonyfs.c being wrong) and could be
deleted or changed to

	conf.firstsb = 0;

Also, /sys/src/fs/words should probably be amended to advise zeroing
firstsb in the spun-off subtree's 9*fs.c.
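The fix amounts to zeroing firstsb in the spun-off tree's localconfinit.
A minimal stand-alone sketch of what that looks like (the Conf struct
here is a simplified stand-in for the real one in the fs source, not
the actual declaration):

```c
/* Simplified stand-in for the file server's Conf structure. */
typedef struct Conf Conf;
struct Conf {
	unsigned long firstsb;	/* first super block to consult on recovery */
};

Conf conf;

/*
 * Sketch of the corrected localconfinit: leave firstsb zero so
 * recovery falls back to the startsb table instead of a stale,
 * jukebox-specific block address.
 */
void
localconfinit(void)
{
	conf.firstsb = 0;	/* was 13219302; stale, so zero it */
}
```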

I noticed just now that conf.wcpsize is not used in 4e; it was used in
3e in /sys/src/fs/port/worm.c in wcpinit(), but worm.c is now gone.
So all those

	conf.wcpsize = 10;

lines in /sys/src/fs/*/9*fs.c and

	port/main.c:	conf.wcpsize = 1024*1024;

can be deleted.


From: David Swasey <swasey@cs.cmu.edu>
To: 9fans@cse.psu.edu
Subject: Re: [9fans] IDE FS failure
Date: Wed, 25 Sep 2002 09:15:11 -0400
Message-ID: <a6e7e85c0618745ab14cb05bbc3c8f91@cs.cmu.edu>

If you can compile a new fs kernel using a stand-alone machine, then
you can probably get past this particular panic.  I believe this
happens because of the line

	conf.firstsb = 13219302;

in your file server's localconfinit.  (This function is part of the
file-server-specific .c file; it may be called 9pcfs.c.)

If firstsb is non-zero, then it is the address of the first super
block to consult when performing a recovery; otherwise, the value in
the startsb array (near the top of 9pcfs.c) is used.
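That selection can be sketched as follows; the variable names mirror
the ones David mentions, but the function name and array size are
illustrative guesses, not the actual fs source:

```c
enum { NSTARTSB = 3 };	/* size is illustrative */

/* Illustrative stand-ins for the declarations near the top of 9pcfs.c. */
unsigned long firstsb = 0;
unsigned long startsb[NSTARTSB] = { 2, 2, 2 };	/* 4e corrected these to 2 */

/*
 * Sketch of the choice made during recovery: a non-zero firstsb
 * overrides the startsb table; zero means fall back to startsb.
 */
unsigned long
recoveryaddr(int i)
{
	if(firstsb != 0)
		return firstsb;
	return startsb[i];
}
```

With firstsb stuck at a stale address like 13219302, recovery consults
a block outside the worm's bounds, which is consistent with the
"fworm: rbounds 13219302" panic quoted below.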

I suggest setting firstsb to 0 and recompiling the file server kernel.
With the new kernel, "recover main" should actually start the recovery
process.

-dave

> On Tue, Sep 24, 2002 at 04:54:03PM -0700, Geoff Collyer wrote:
>>
>> 	recover main
>> 	end
>>
> I get a different error.  I wonder if I didn't configure "archive"
> badly and somehow overlapped it onto main or the cache.  The error is
> now
>
> 	panic: fworm: rbounds 13219302

Thread overview: 13+ messages
2002-09-25 22:40 Geoff Collyer [this message]
2002-09-27 12:32 ` Lucio De Re
  -- strict thread matches above, loose matches on Subject: below --
2002-09-30  9:48 Fco.J.Ballesteros
2002-09-30 10:21 ` Lucio De Re
2002-09-27 14:18 Russ Cox
2002-09-27 13:50 Russ Cox
2002-09-27 14:09 ` Lucio De Re
2002-09-27 14:17 ` Axel Belinfante
2002-09-25 13:15 David Swasey
2002-09-25 13:36 ` Lucio De Re
2002-09-24 23:54 Geoff Collyer
2002-09-25  4:18 ` Lucio De Re
2002-09-24 10:54 Lucio De Re
