9fans - fans of the OS Plan 9 from Bell Labs
* [9fans] fossil question
@ 2004-05-17 12:31 Fco.J.Ballesteros
  0 siblings, 0 replies; 30+ messages in thread
From: Fco.J.Ballesteros @ 2004-05-17 12:31 UTC (permalink / raw)
  To: 9fans

We have this df for our main fossil file system:
main: 16,451,764,224 used + 7,533,838,336 free = 23,985,602,560 (68% used)

Which says that it's using 16G.

Yet a du -a on / reports only 1127112 Kbytes (about 1.1G).

We keep just two temporary snapshots.

I think what has happened is that (somehow) reading from archived
snapshots has accumulated enough blocks to get up to 16G.  Is this right?

I'd like to lower the %used, but I don't know exactly how to proceed.
Any clue?

thanks




* Re: [9fans] fossil question
  2004-06-07 13:36                   ` Russ Cox
@ 2004-06-07 13:38                     ` Fco. J. Ballesteros
  0 siblings, 0 replies; 30+ messages in thread
From: Fco. J. Ballesteros @ 2004-06-07 13:38 UTC (permalink / raw)
  To: 9fans

[-- Attachment #1: Type: text/plain, Size: 47 bytes --]

Cool. It's great to see you found it so fast.

[-- Attachment #2: Type: message/rfc822, Size: 2764 bytes --]

From: Russ Cox <russcox@gmail.com>
To: Fans of the OS Plan 9 from Bell Labs <9fans@cse.psu.edu>
Subject: Re: [9fans] fossil question
Date: Mon, 7 Jun 2004 08:36:46 -0500
Message-ID: <ee9e417a040607063631f45f3b@mail.gmail.com>

If you want to see it leak, create a large (say, 10MB) file,
take a snapshot, and then delete the file.  All the blocks
at the bottom of the tree will leak.

I've fixed fossil (I think!) but I want to run the code here
for a few days before I put it on sources.  I also put in a
"check" console command and tossed flchk.

Russ


* Re: [9fans] fossil question
  2004-06-07  6:47                 ` Kenji Okamoto
@ 2004-06-07 13:36                   ` Russ Cox
  2004-06-07 13:38                     ` Fco. J. Ballesteros
  0 siblings, 1 reply; 30+ messages in thread
From: Russ Cox @ 2004-06-07 13:36 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

If you want to see it leak, create a large (say, 10MB) file,
take a snapshot, and then delete the file.  All the blocks
at the bottom of the tree will leak.

I've fixed fossil (I think!) but I want to run the code here
for a few days before I put it on sources.  I also put in a
"check" console command and tossed flchk.

Russ
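
For anyone wanting to try this reproduction, the steps might look
roughly as follows (a hypothetical session, not from the original mail;
the file name, path, and fsys name main are assumptions):

```
% dd -if /dev/zero -of /tmp/big -bs 1024 -count 10240   # create a ~10MB file
# at the fossil console, snapshot while the file exists:
main: snap
% rm /tmp/big                                           # now delete it
# the blocks at the bottom of big's tree are now leaked;
# flchk should report them as bfrees
% fossil/flchk /dev/sdC0/fossil
```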



* Re: [9fans] fossil question
  2004-05-25  2:46               ` Kenji Okamoto
@ 2004-06-07  6:47                 ` Kenji Okamoto
  2004-06-07 13:36                   ` Russ Cox
  0 siblings, 1 reply; 30+ messages in thread
From: Kenji Okamoto @ 2004-06-07  6:47 UTC (permalink / raw)
  To: 9fans

Hi, 

>does fossil leak?

Now I'm running a networked fossil+Venti file server, and haven't seen
this leak.  My configuration is
snaptime -a 0500 -s 60 -t 1440
and the fossil partition is 3.5GB.
df on my file server stays at a few percent used, rarely going up to
fifteen percent or so.

Did your leak occur suddenly or gradually?

Kenji




* Re: [9fans] fossil question
  2004-05-26 10:04         ` Bruce Ellis
@ 2004-05-26 15:19           ` ron minnich
  0 siblings, 0 replies; 30+ messages in thread
From: ron minnich @ 2004-05-26 15:19 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs; +Cc: lucio

On Wed, 26 May 2004, Bruce Ellis wrote:

> "inconvenience" isn't the right word when your
> machine doesn't have a CD drive.  and the bfrees
> were disastrous.

I never had much luck with them. Once your fossil is hosed I think the
only useful step is flfmt.

ron




* Re: [9fans] fossil question
  2004-05-26  5:05       ` lucio
@ 2004-05-26 10:04         ` Bruce Ellis
  2004-05-26 15:19           ` ron minnich
  0 siblings, 1 reply; 30+ messages in thread
From: Bruce Ellis @ 2004-05-26 10:04 UTC (permalink / raw)
  To: lucio, Fans of the OS Plan 9 from Bell Labs

"inconvenience" isn't the right word when your
machine doesn't have a CD drive.  and the bfrees
were disastrous.

brucee

lucio@proxima.alt.za wrote:

>>>I normally do this as a first resort, rather than running thousands
>>>of bfrees.  Then I know I'm starting from a clean state.  Is there
>>>any disadvantage in doing it this way?
>>
>>nope.  just the inconvenience of booting an install cd.
>
>
> Is it out of the question to include, or at least document, a procedure
> to do this as painlessly and risk-free as possible?  It sounds terribly
> error-prone, otherwise.
>
> ++L




* Re: [9fans] fossil question
  2004-05-25 21:28     ` Russ Cox
  2004-05-26  5:05       ` lucio
@ 2004-05-26  8:05       ` Richard Miller
  1 sibling, 0 replies; 30+ messages in thread
From: Richard Miller @ 2004-05-26  8:05 UTC (permalink / raw)
  To: 9fans

> just the inconvenience of booting an install cd.

Booting from a remote fileserver is quicker (if there's one handy).




* Re: [9fans] fossil question
  2004-05-25 21:28     ` Russ Cox
@ 2004-05-26  5:05       ` lucio
  2004-05-26 10:04         ` Bruce Ellis
  2004-05-26  8:05       ` Richard Miller
  1 sibling, 1 reply; 30+ messages in thread
From: lucio @ 2004-05-26  5:05 UTC (permalink / raw)
  To: 9fans

>> I normally do this as a first resort, rather than running thousands
>> of bfrees.  Then I know I'm starting from a clean state.  Is there
>> any disadvantage in doing it this way?
>
> nope.  just the inconvenience of booting an install cd.

Is it out of the question to include, or at least document, a procedure
to do this as painlessly and risk-free as possible?  It sounds terribly
error-prone, otherwise.

++L




* Re: [9fans] fossil question
  2004-05-25  8:31   ` Richard Miller
@ 2004-05-25 21:28     ` Russ Cox
  2004-05-26  5:05       ` lucio
  2004-05-26  8:05       ` Richard Miller
  0 siblings, 2 replies; 30+ messages in thread
From: Russ Cox @ 2004-05-25 21:28 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

> I normally do this as a first resort, rather than running thousands
> of bfrees.  Then I know I'm starting from a clean state.  Is there
> any disadvantage in doing it this way?

nope.  just the inconvenience of booting an install cd.



* Re: [9fans] fossil question
  2004-05-25  2:08       ` Bruce Ellis
  2004-05-25  2:08         ` Kenji Okamoto
@ 2004-05-25 14:22         ` ron minnich
  1 sibling, 0 replies; 30+ messages in thread
From: ron minnich @ 2004-05-25 14:22 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

On Tue, 25 May 2004, Bruce Ellis wrote:

> warning.  don't try this at home.
>
> Russ Cox wrote:
>
> > well, to run flfmt you'd have to boot from an install cd.
> > to run the bfrees, there's no problem with doing it
> > while fossil is running.

you mean this? It's how I recovered my dead fossil, actually.

ron




* Re: [9fans] fossil question
  2004-05-24 15:56 ` Russ Cox
  2004-05-24 22:13   ` Bruce Ellis
@ 2004-05-25  8:31   ` Richard Miller
  2004-05-25 21:28     ` Russ Cox
  1 sibling, 1 reply; 30+ messages in thread
From: Richard Miller @ 2004-05-25  8:31 UTC (permalink / raw)
  To: 9fans

> If the worst happens (very unlikely) you can always
> flfmt with the snap -a score.

I normally do this as a first resort, rather than running thousands
of bfrees.  Then I know I'm starting from a clean state.  Is there
any disadvantage in doing it this way?

-- Richard




* Re: [9fans] fossil question
  2004-05-25  2:33             ` Kenji Okamoto
@ 2004-05-25  2:50               ` Bruce Ellis
  0 siblings, 0 replies; 30+ messages in thread
From: Bruce Ellis @ 2004-05-25  2:50 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

i'll monitor it closely after the reinstall.
maybe it's a bug that's been fixed.  maybe it's
a bug i can help find.

brucee

Kenji Okamoto wrote:

>>the machine had been up for 76 days when i
>>rebooted yesterday.  other than that it was
>>only rebooted a few times during installation.
>>
>>now it's time to reinstall ...
>
>
> Then I may have the same problem here with our
> Fossil+Venti server, which started running about a week ago...
>
> Kenji




* Re: [9fans] fossil question
  2004-05-25  2:36             ` Kenji Okamoto
@ 2004-05-25  2:46               ` Kenji Okamoto
  2004-06-07  6:47                 ` Kenji Okamoto
  0 siblings, 1 reply; 30+ messages in thread
From: Kenji Okamoto @ 2004-05-25  2:46 UTC (permalink / raw)
  To: 9fans

> I'm not using it as my mail server.   It's just for my home work,

I meant this machine is powered on for only an hour or two each day,
and not even every day.☺

Kenji




* Re: [9fans] fossil question
  2004-05-25  2:12           ` boyd, rounin
@ 2004-05-25  2:36             ` Kenji Okamoto
  2004-05-25  2:46               ` Kenji Okamoto
  0 siblings, 1 reply; 30+ messages in thread
From: Kenji Okamoto @ 2004-05-25  2:36 UTC (permalink / raw)
  To: 9fans

> 100?  i had a uVAX II that was up for 8 months  -- NO reboots.

My home server doesn't need to run all day, because I'm not using it
as my mail server.   It's just for working at home when I have to,
though I'd rather not.

Kenji




* Re: [9fans] fossil question
  2004-05-25  2:21           ` Bruce Ellis
@ 2004-05-25  2:33             ` Kenji Okamoto
  2004-05-25  2:50               ` Bruce Ellis
  0 siblings, 1 reply; 30+ messages in thread
From: Kenji Okamoto @ 2004-05-25  2:33 UTC (permalink / raw)
  To: 9fans

> the machine had been up for 76 days when i
> rebooted yesterday.  other than that it was
> only rebooted a few times during installation.
>
> now it's time to reinstall ...

Then I may have the same problem here with our
Fossil+Venti server, which started running about a week ago...

Kenji




* Re: [9fans] fossil question
  2004-05-25  2:08         ` Kenji Okamoto
  2004-05-25  2:12           ` boyd, rounin
@ 2004-05-25  2:21           ` Bruce Ellis
  2004-05-25  2:33             ` Kenji Okamoto
  1 sibling, 1 reply; 30+ messages in thread
From: Bruce Ellis @ 2004-05-25  2:21 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

the machine had been up for 76 days when i
rebooted yesterday.  other than that it was
only rebooted a few times during installation.

now it's time to reinstall ...

brucee

Kenji Okamoto wrote:

>>warning.  don't try this at home.
>
>
> By the way, how many times do you think you've rebooted
> fossil?   I'm using fossil+Venti at home, but I don't see
> the leak yet.   Maybe I've rebooted about 100 times...
>
> Kenji
>
>
>



* Re: [9fans] fossil question
  2004-05-25  2:08         ` Kenji Okamoto
@ 2004-05-25  2:12           ` boyd, rounin
  2004-05-25  2:36             ` Kenji Okamoto
  2004-05-25  2:21           ` Bruce Ellis
  1 sibling, 1 reply; 30+ messages in thread
From: boyd, rounin @ 2004-05-25  2:12 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

> Maybe I rebooted about 100 times...

100?  i had a uVAX II that was up for 8 months processing 2000 SMTP messages a month -- NO reboots.




* Re: [9fans] fossil question
  2004-05-24 22:25     ` Russ Cox
  2004-05-24 23:07       ` Bruce Ellis
@ 2004-05-25  2:08       ` Bruce Ellis
  2004-05-25  2:08         ` Kenji Okamoto
  2004-05-25 14:22         ` ron minnich
  1 sibling, 2 replies; 30+ messages in thread
From: Bruce Ellis @ 2004-05-25  2:08 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

warning.  don't try this at home.

Russ Cox wrote:

>>how can this work if fossil is main and it's full
>>(as it is now, though i've done no work since last mail)?
>>aren't i overwriting an active filesystem that contains
>>the pages of everything executing?  i hope i don't have
>>to do this every second month.  the "slow" leak is a bit
>>too fast for me.
>
>
> well, to run flfmt you'd have to boot from an install cd.
> to run the bfrees, there's no problem with doing it
> while fossil is running.
>
> i'm surprised that your setup tickles the leak so well.
> that's good though.  maybe we'll be able to figure
> out what it is.
>
> russ




* Re: [9fans] fossil question
  2004-05-25  2:08       ` Bruce Ellis
@ 2004-05-25  2:08         ` Kenji Okamoto
  2004-05-25  2:12           ` boyd, rounin
  2004-05-25  2:21           ` Bruce Ellis
  2004-05-25 14:22         ` ron minnich
  1 sibling, 2 replies; 30+ messages in thread
From: Kenji Okamoto @ 2004-05-25  2:08 UTC (permalink / raw)
  To: 9fans

> warning.  don't try this at home.

By the way, how many times do you think you've rebooted
fossil?   I'm using fossil+Venti at home, but I don't see
the leak yet.   Maybe I've rebooted about 100 times...

Kenji




* Re: [9fans] fossil question
  2004-05-24 23:07       ` Bruce Ellis
@ 2004-05-24 23:32         ` Russ Cox
  0 siblings, 0 replies; 30+ messages in thread
From: Russ Cox @ 2004-05-24 23:32 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

> at the moment i'm doing a rdarena onto external media
> to save my stuff (after a venti/sync just in case).
> i did get some disturbing stuff from checkarenas ...

that's okay.  it just means that the clump count
in the arena tail is out-of-date.  it only gets updated every
512 blocks.  it's okay for it to be out-of-date.  what matters
is that the blocks are on the disk.  everything else is
brought up-to-date by syncindex or when venti starts.

venti/sync just makes sure that all the data is on disk,
which is the bare minimum necessary to be sure that
you will be able to retrieve the blocks should venti crash
and restart immediately after the sync returns.



* Re: [9fans] fossil question
  2004-05-24 22:25     ` Russ Cox
@ 2004-05-24 23:07       ` Bruce Ellis
  2004-05-24 23:32         ` Russ Cox
  2004-05-25  2:08       ` Bruce Ellis
  1 sibling, 1 reply; 30+ messages in thread
From: Bruce Ellis @ 2004-05-24 23:07 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

at the moment i'm doing a rdarena onto external media
to save my stuff (after a venti/sync just in case).
i did get some disturbing stuff from checkarenas ...

% venti/checkarenas -v /dev/sdC0/arenas0
arena='arenas00' [335872,524623872)
	version=4 created=1078054912 modified=1085395310
	clumps=63,516 compressed clumps=52,981 data=322,342,330 compressed
data=151,780,198 disk storage=155,781,706
unwritten clump info for clump=63516
score=0df06cf42ba8bfbc0642fcaee0f8b7e9bd2492e5 type=13
unwritten clump info for clump=63517
score=48ef6d1ef683cc0079fae947e0d84ce860f68420 type=2
unwritten clump info for clump=63518
score=584a352e9f229353e1287b78fb6cf21fa9198941 type=13
...
unwritten clump info for clump=63707
score=9802dcf5c497267dfbace0c0cb1ca083d1344d44 type=1
.
incorrect arena header fields

then when the coffee kicks in i'll try some bfrees.  the only
thing i can think of that is "strange" about my setup is the
busy way that i use it when writing stuff (regression tests,
etc.) but i'm sure you busy-beaver it too.  maybe some problem
with closing a block when it belongs to something executing?

thanks for the help.

brucee

Russ Cox wrote:

>>how can this work if fossil is main and it's full
>>(as it is now, though i've done no work since last mail)?
>>aren't i overwriting an active filesystem that contains
>>the pages of everything executing?  i hope i don't have
>>to do this every second month.  the "slow" leak is a bit
>>too fast for me.
>
>
> well, to run flfmt you'd have to boot from an install cd.
> to run the bfrees, there's no problem with doing it
> while fossil is running.
>
> i'm surprised that your setup tickles the leak so well.
> that's good though.  maybe we'll be able to figure
> out what it is.
>
> russ




* Re: [9fans] fossil question
  2004-05-24 22:13   ` Bruce Ellis
@ 2004-05-24 22:25     ` Russ Cox
  2004-05-24 23:07       ` Bruce Ellis
  2004-05-25  2:08       ` Bruce Ellis
  0 siblings, 2 replies; 30+ messages in thread
From: Russ Cox @ 2004-05-24 22:25 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

> how can this work if fossil is main and it's full
> (as it is now, though i've done no work since last mail)?
> aren't i overwriting an active filesystem that contains
> the pages of everything executing?  i hope i don't have
> to do this every second month.  the "slow" leak is a bit
> too fast for me.

well, to run flfmt you'd have to boot from an install cd.
to run the bfrees, there's no problem with doing it
while fossil is running.

i'm surprised that your setup tickles the leak so well.
that's good though.  maybe we'll be able to figure
out what it is.

russ



* Re: [9fans] fossil question
  2004-05-24 15:56 ` Russ Cox
@ 2004-05-24 22:13   ` Bruce Ellis
  2004-05-24 22:25     ` Russ Cox
  2004-05-25  8:31   ` Richard Miller
  1 sibling, 1 reply; 30+ messages in thread
From: Bruce Ellis @ 2004-05-24 22:13 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

how can this work if fossil is main and it's full
(as it is now, though i've done no work since last mail)?
aren't i overwriting an active filesystem that contains
the pages of everything executing?  i hope i don't have
to do this every second month.  the "slow" leak is a bit
too fast for me.

brucee

Russ Cox wrote:

> The best thing about fossil is snap -a.  If it gets to the point
> where you've leaked away your entire disk (which seems to
> happen very slowly), record the snap, use vacfs to double-check
> that your data and dumps are there, and then run the bfree
> commands.  The disk is just a write buffer, so as long as
> everything shows up in vacfs, you can't destroy anything
> important.  If the worst happens (very unlikely) you can always
> flfmt with the snap -a score.
>
> Russ




* Re: [9fans] fossil question
       [not found] <e1912b6dcca61d19ae435ec37a36817d@hamnavoe.com>
@ 2004-05-24 15:56 ` Russ Cox
  2004-05-24 22:13   ` Bruce Ellis
  2004-05-25  8:31   ` Richard Miller
  0 siblings, 2 replies; 30+ messages in thread
From: Russ Cox @ 2004-05-24 15:56 UTC (permalink / raw)
  To: 9fans

A couple people have sent me flchk outputs showing the leaks.
They all look approximately like the ones I saw when I was
working on fossil a year ago.  I'm not sure where the leaked
blocks come from.  I can't seem to build a test script that
produces leaked blocks reliably, and I've tried for quite a while.
It could be that the leaks are from the disk not being completely
synced when the machine is shut down -- since blocks are written
out before the pointers to those blocks, if you shut down in between,
you're going to leak some blocks.

The best thing about fossil is snap -a.  If it gets to the point
where you've leaked away your entire disk (which seems to
happen very slowly), record the snap, use vacfs to double-check
that your data and dumps are there, and then run the bfree
commands.  The disk is just a write buffer, so as long as
everything shows up in vacfs, you can't destroy anything
important.  If the worst happens (very unlikely) you can always
flfmt with the snap -a score.

Russ
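
As a sketch, the sequence Russ describes might look like this
(hypothetical; the score and device paths are placeholders, not real
values):

```
# at the fossil console: force an archival snapshot and note the score
main: snap -a
# suppose it archives as vac:0123...abcd (placeholder score)

# in a shell: mount the archive read-only and double-check your data
% vacfs vac:0123...abcd
% ls /n/vac

# then run the bfree commands flchk suggested; only if the worst
# happens, reformat the write buffer from the archived score:
% fossil/flfmt -v 0123...abcd /dev/sdC0/fossil
```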



* Re: [9fans] fossil question
  2004-05-24  4:59       ` Bruce Ellis
@ 2004-05-24  5:19         ` Russ Cox
  0 siblings, 0 replies; 30+ messages in thread
From: Russ Cox @ 2004-05-24  5:19 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

can you send me the flchk output off-list?



* Re: [9fans] fossil question
  2004-05-24  4:47     ` Russ Cox
@ 2004-05-24  4:59       ` Bruce Ellis
  2004-05-24  5:19         ` Russ Cox
  0 siblings, 1 reply; 30+ messages in thread
From: Bruce Ellis @ 2004-05-24  4:59 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

sorry, i read 'reasonable close' as 'reasonably close'.
they are all ~0.

Russ Cox wrote:

> the idea is that each block has a start and end epoch.
> start is the epoch it was created in, and end is the epoch
> that it was unlinked from the tree.  once the system has tossed
> or archived the snapshots up through the close epoch, the block
> can be reused.  precise accounting isn't necessary as long as
> the close epoch is correct.  when a block isn't yet closed it has
> a close epoch of ~0.  that's why i asked.
>
> what does the epoch command print?  maybe the epoch isn't
> moving forward as it should.  this can happen if a scan of the
> file system turns up trees from earlier epochs.  the epoch command
> prints them out along with suggested clri messages that would
> remove them.
>
> if the epoch command prints clris of /archive, it means the
> archiver hasn't finished yet -- just wait.  if it prints clris of old
> /snapshot trees, run them and try epoch again.
>
> one more thing.  fossil doesn't keep a precise accounting of
> blocks that might have been made free by moving the epoch forward.
> the df command rescans the disk to update the count if necessary.
> that's why df is so slow.  the original df didn't do this, but it's been
> this way for more than a year.  if you do not have a call to
> cacheCountUsed inside fsysDf in 9fsys.c, then df is printing a
> (perhaps gross) underestimate.
>
> russ
>
>



* Re: [9fans] fossil question
  2004-05-24  4:28   ` Bruce Ellis
@ 2004-05-24  4:47     ` Russ Cox
  2004-05-24  4:59       ` Bruce Ellis
  0 siblings, 1 reply; 30+ messages in thread
From: Russ Cox @ 2004-05-24  4:47 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

> no, the epochs seem random (777, 1083, etc) - the current
> epoch is 3442.  the first N blocks are all unreachable
> and then it starts thinning out.

they should be (somewhat) random.  as long as they're not ~0.
if the close number is less than the epoch, they should be automatically
found and freed.

the idea is that each block has a start and end epoch.
start is the epoch it was created in, and end is the epoch
that it was unlinked from the tree.  once the system has tossed
or archived the snapshots up through the close epoch, the block
can be reused.  precise accounting isn't necessary as long as
the close epoch is correct.  when a block isn't yet closed it has
a close epoch of ~0.  that's why i asked.

what does the epoch command print?  maybe the epoch isn't
moving forward as it should.  this can happen if a scan of the
file system turns up trees from earlier epochs.  the epoch command
prints them out along with suggested clri messages that would
remove them.

if the epoch command prints clris of /archive, it means the
archiver hasn't finished yet -- just wait.  if it prints clris of old
/snapshot trees, run them and try epoch again.

one more thing.  fossil doesn't keep a precise accounting of
blocks that might have been made free by moving the epoch forward.
the df command rescans the disk to update the count if necessary.
that's why df is so slow.  the original df didn't do this, but it's been
this way for more than a year.  if you do not have a call to
cacheCountUsed inside fsysDf in 9fsys.c, then df is printing a
(perhaps gross) underestimate.

russ
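
A session following this advice might look like the following
(hypothetical console output; the snapshot path is made up for
illustration):

```
main: epoch
# if epoch lists old /snapshot trees with suggested clri commands,
# run them, e.g.:
main: clri /snapshot/2004/0520/0030
main: epoch        # the low epoch should now be able to advance
# if it lists clris of /archive instead, the archiver is still
# running -- just wait and try epoch again later
```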



* Re: [9fans] fossil question
  2004-05-24  2:22 ` Russ Cox
@ 2004-05-24  4:28   ` Bruce Ellis
  2004-05-24  4:47     ` Russ Cox
  0 siblings, 1 reply; 30+ messages in thread
From: Bruce Ellis @ 2004-05-24  4:28 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

no, the epochs seem random (777, 1083, etc) - the current
epoch is 3442.  the first N blocks are all unreachable
and then it starts thinning out.

at 98% now, and clri of an epoch frees nothing.

Russ Cox wrote:

>>if i run "fossil/flchk -f" i get thousands of
>>bfrees which all belong to long gone epochs.
>>i'm very reluctant to do an update as fossil
>>is the root filesystem and if i screw it up
>>i'm screwed - i'm also reluctant to do a few
>>thousand bfrees.
>
>
> do they have reasonable close numbers?




* Re: [9fans] fossil question
  2004-05-24  0:14 Bruce Ellis
@ 2004-05-24  2:22 ` Russ Cox
  2004-05-24  4:28   ` Bruce Ellis
  0 siblings, 1 reply; 30+ messages in thread
From: Russ Cox @ 2004-05-24  2:22 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

> does fossil leak?

i think it leaks superblocks but not many and not often.

> if i run "fossil/flchk -f" i get thousands of
> bfrees which all belong to long gone epochs.
> i'm very reluctant to do an update as fossil
> is the root filesystem and if i screw it up
> i'm screwed - i'm also reluctant to do a few
> thousand bfrees.

do they have reasonable close numbers?

> PS "epoch" seems to always show more than a day's
> snaps, sometimes a few days.  what's that all about?

the cleanup happens once a day, so you might see
two days worth; you shouldn't see three.

russ



* [9fans] fossil question
@ 2004-05-24  0:14 Bruce Ellis
  2004-05-24  2:22 ` Russ Cox
  0 siblings, 1 reply; 30+ messages in thread
From: Bruce Ellis @ 2004-05-24  0:14 UTC (permalink / raw)
  To: 9fans

does fossil leak?

*caveat* i am not running the current version
so this may have been fixed.

my fossil has passed 90% full even though it
shouldn't have much more data in it than when
it was 20% full.

i have ...

	snaptime -a 0030 -s 30 -t 1440

which i believe means "snap -a at 0030,
snap every 30 minutes, and discard them
after a day".
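
For reference, that reading matches the flag meanings, sketched here
as console comments (the fsys name main is an assumption):

```
main: snaptime -a 0030 -s 30 -t 1440
# -a 0030: take the daily archival snapshot (snap -a) at 00:30
# -s 30:   take a temporary snapshot every 30 minutes
# -t 1440: discard temporary snapshots after 1440 minutes (one day)
```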

if i run "fossil/flchk -f" i get thousands of
bfrees which all belong to long gone epochs.
i'm very reluctant to do an update as fossil
is the root filesystem and if i screw it up
i'm screwed - i'm also reluctant to do a few
thousand bfrees.

any help welcome.

brucee

PS "epoch" seems to always show more than a day's
snaps, sometimes a few days.  what's that all about?



end of thread, other threads:[~2004-06-07 13:38 UTC | newest]

Thread overview: 30+ messages
2004-05-17 12:31 [9fans] fossil question Fco.J.Ballesteros
2004-05-24  0:14 Bruce Ellis
2004-05-24  2:22 ` Russ Cox
2004-05-24  4:28   ` Bruce Ellis
2004-05-24  4:47     ` Russ Cox
2004-05-24  4:59       ` Bruce Ellis
2004-05-24  5:19         ` Russ Cox
     [not found] <e1912b6dcca61d19ae435ec37a36817d@hamnavoe.com>
2004-05-24 15:56 ` Russ Cox
2004-05-24 22:13   ` Bruce Ellis
2004-05-24 22:25     ` Russ Cox
2004-05-24 23:07       ` Bruce Ellis
2004-05-24 23:32         ` Russ Cox
2004-05-25  2:08       ` Bruce Ellis
2004-05-25  2:08         ` Kenji Okamoto
2004-05-25  2:12           ` boyd, rounin
2004-05-25  2:36             ` Kenji Okamoto
2004-05-25  2:46               ` Kenji Okamoto
2004-06-07  6:47                 ` Kenji Okamoto
2004-06-07 13:36                   ` Russ Cox
2004-06-07 13:38                     ` Fco. J. Ballesteros
2004-05-25  2:21           ` Bruce Ellis
2004-05-25  2:33             ` Kenji Okamoto
2004-05-25  2:50               ` Bruce Ellis
2004-05-25 14:22         ` ron minnich
2004-05-25  8:31   ` Richard Miller
2004-05-25 21:28     ` Russ Cox
2004-05-26  5:05       ` lucio
2004-05-26 10:04         ` Bruce Ellis
2004-05-26 15:19           ` ron minnich
2004-05-26  8:05       ` Richard Miller
