zsh-users
 help / color / mirror / code / Atom feed
* rolling over high-traffic logfiles?
@ 1998-07-16  3:49 Sweth Chandramouli
  1998-07-16  5:08 ` Bart Schaefer
  1998-07-16 13:45 ` Duncan Sinclair
  0 siblings, 2 replies; 10+ messages in thread
From: Sweth Chandramouli @ 1998-07-16  3:49 UTC (permalink / raw)
  To: ZSH Users

	is there some easy way to roll over high-traffic logfiles
in zsh, without losing any possible incoming data?  i have a machine,
for example, that receives syslog reports from many other machines on
a network, which syslog.conf puts into the file /var/adm/extlog.  for
similar such files, i have a crontab entry that simply copies the log
file to the file "log.`date`", and then copies /dev/null over the current
logfile.  this syslog file, however, is almost constantly being updated.
in the ideal world, i'd like to not have to worry about one or two
entries getting added to /var/adm/extlog (and thus lost) in the time
between when /var/adm/extlog.`date` is created and /dev/null gets
copied over /var/adm/extlog.  can zsh do some sort of file locking here?
if not, does anyone have any other ideas on how to not lose any log entries?

	tia,
	sweth.

-- 
Sweth Chandramouli
IS Coordinator, The George Washington University
<sweth@gwu.edu> / (202) 994 - 8521 (V) / (202) 994 - 0458 (F)
<a href="http://astaroth.nit.gwu.edu/~sweth/disc.html">*</a>


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: rolling over high-traffic logfiles?
  1998-07-16  3:49 rolling over high-traffic logfiles? Sweth Chandramouli
@ 1998-07-16  5:08 ` Bart Schaefer
  1998-07-16  8:36   ` Zefram
  1998-07-16 12:46   ` Brian Harvell
  1998-07-16 13:45 ` Duncan Sinclair
  1 sibling, 2 replies; 10+ messages in thread
From: Bart Schaefer @ 1998-07-16  5:08 UTC (permalink / raw)
  To: Sweth Chandramouli, ZSH Users

On Jul 15, 11:49pm, Sweth Chandramouli wrote:
} Subject: rolling over high-traffic logfiles?
}
} 	is there some easy way to roll over high-traffic logfiles
} in zsh, without losing any possible incoming data?

There isn't any magic for this that is specific to zsh.

} can zsh do some sort of file locking here?

It wouldn't help.  The processes that are writing to the log file would
also have to use the equivalent locking.  (There is such a thing in unix
as "mandatory" file locking, but using that would likely just cause the
logging processes to get errors.  All other locking is "advisory," which
means all processes involved must have been written so as to use the same
cooperative calls.)
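The advisory/mandatory distinction is easy to see from the shell.  A minimal
sketch, assuming the util-linux flock(1) tool is available; the paths are
throwaway examples:

```shell
# Sketch: an advisory lock constrains only processes that ask for it.
# Assumes util-linux flock(1); all paths are throwaway examples.
log=$(mktemp)
flock "$log" sleep 2 &     # background process holds an advisory lock
sleep 0.2                  # give it time to acquire the lock
echo "entry" >> "$log"     # a plain append never consults the lock...
grep -q entry "$log" && echo "write succeeded despite the lock"
wait
```

The append succeeds immediately, which is exactly why locking the log file
from a rollover script would not protect it against ordinary writers.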

} does anyone have any other ideas on how to not lose any log entries?

Use "ln" and "mv" to replace the file, rather than "cp" over it.

	cp /dev/null extlog.new &&
	ln extlog extlog.`date` &&
	mv -f extlog.new extlog ||
	rm -f extlog.new

There's still a race condition where some process could attempt to write
to extlog during the execution of "mv", that is, between unlinking the
old extlog and renaming extlog.new to extlog.  However, the window for
failure is much smaller, and could be made smaller still by using the
"files" module with zsh 3.1.4 so that "ln" and "mv" are shell builtins.
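The sequence can be dry-run on throwaway files; a sketch, with the temp
directory and date format purely illustrative:

```shell
# Dry run of the ln+mv rotation; paths and date format are examples only.
cd "$(mktemp -d)"
echo "old entry" > extlog            # stand-in for the live log
stamp=$(date +%Y%m%d%H%M%S)
: > extlog.new &&                    # create the empty replacement
ln extlog "extlog.$stamp" &&         # archive = second hard link to old data
mv -f extlog.new extlog ||           # swap the empty file into place
rm -f extlog.new                     # clean up if any step failed
grep -q "old entry" "extlog.$stamp" && echo "archive kept the data"
[ ! -s extlog ] && echo "live file is empty again"
```

Because the archive name is a hard link, no data is copied; only directory
entries change hands.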

-- 
Bart Schaefer                                 Brass Lantern Enterprises
http://www.well.com/user/barts              http://www.brasslantern.com


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: rolling over high-traffic logfiles?
  1998-07-16  5:08 ` Bart Schaefer
@ 1998-07-16  8:36   ` Zefram
  1998-07-16 12:46   ` Brian Harvell
  1 sibling, 0 replies; 10+ messages in thread
From: Zefram @ 1998-07-16  8:36 UTC (permalink / raw)
  To: Bart Schaefer; +Cc: sweth, zsh-users

Bart Schaefer wrote:
>There's still a race condition where some process could attempt to write
>to extlog during the execution of "mv", that is, between unlinking the
>old extlog and renaming extlog.new to extlog.

mv should use rename() to atomically replace the log file, in which case
there is no race condition.

-zefram


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: rolling over high-traffic logfiles?
  1998-07-16  5:08 ` Bart Schaefer
  1998-07-16  8:36   ` Zefram
@ 1998-07-16 12:46   ` Brian Harvell
  1998-07-16 15:26     ` Bart Schaefer
  1 sibling, 1 reply; 10+ messages in thread
From: Brian Harvell @ 1998-07-16 12:46 UTC (permalink / raw)
  To: Bart Schaefer; +Cc: Sweth Chandramouli, ZSH Users

> 
> } does anyone have any other ideas on how to not lose any log entries?
> 
> Use "ln" and "mv" to replace the file, rather than "cp" over it.
> 
> 	cp /dev/null extlog.new &&
> 	ln extlog extlog.`date` &&
> 	mv -f extlog.new extlog ||
> 	rm -f extlog.new
> 
> There's still a race condition where some process could attempt to write
> to extlog during the execution of "mv", that is, between unlinking the
> old extlog and renaming extlog.new to extlog.  However, the window for
> failure is much smaller, and could be made smaller still by using the
> "files" module with zsh 3.1.4 so that "ln" and "mv" are shell builtins.
> 

That won't work. You are going to have to send syslogd a SIGHUP before it
will release its current file handle and reopen a new one.

All you need to do is:
1. mv the logfile to the new name (syslogd will continue to write to it
   even after the mv has completed).
2. Send syslogd a SIGHUP (it will then open a new log file).

There is still a small chance that you will lose entries, but that's the nature
of syslog anyway. syslogd can easily get overloaded and drop entries on the
floor.

Brian

Brian Harvell         harvell@aol.net         http://boondoggle.web.aol.com/
echo '[q]sa[ln0=aln256%Pln256/snlbx]sb3135071790101768542287578439snlbxq'|dc




^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: rolling over high-traffic logfiles?
  1998-07-16  3:49 rolling over high-traffic logfiles? Sweth Chandramouli
  1998-07-16  5:08 ` Bart Schaefer
@ 1998-07-16 13:45 ` Duncan Sinclair
  1 sibling, 0 replies; 10+ messages in thread
From: Duncan Sinclair @ 1998-07-16 13:45 UTC (permalink / raw)
  To: zsh-users


>	is there some easy way to roll over high-traffic logfiles
>in zsh, without losing any possible incoming data?

As it is a syslog log file there is a standard way to do this:

  mv /var/adm/extlog /var/adm/extlog.`date`
  cp /dev/null /var/adm/extlog
  kill -HUP `cat /etc/syslog.pid`

syslogd will continue writing to the old log file until the
kill signal is sent to it.
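The reason it keeps writing is that an open file descriptor follows the
inode through a rename, not the name.  A minimal sketch with throwaway
paths:

```shell
# Sketch: an open descriptor follows the renamed file, which is why
# syslogd keeps logging to the old file until told to reopen.
cd "$(mktemp -d)"
echo "first" > extlog
exec 3>> extlog            # stand-in for syslogd holding the file open
mv extlog extlog.archived  # rename(); the descriptor still sees the data
echo "second" >&3          # the "logger" keeps writing after the rename
exec 3>&-                  # close the descriptor (the "HUP" moment)
grep -q second extlog.archived && echo "writes followed the rename"
```

Both lines end up in extlog.archived; nothing is written to the name
"extlog" until a process opens it afresh.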

Nothing is ever lost - as long as syslog is behaving itself!

Hope this helps.



Duncan Sinclair.



^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: rolling over high-traffic logfiles?
  1998-07-16 12:46   ` Brian Harvell
@ 1998-07-16 15:26     ` Bart Schaefer
  1998-07-16 16:14       ` Zefram
  1998-07-24 23:42       ` Todd Graham Lewis
  0 siblings, 2 replies; 10+ messages in thread
From: Bart Schaefer @ 1998-07-16 15:26 UTC (permalink / raw)
  To: zsh-users; +Cc: sweth

On Jul 16,  9:36am, Zefram wrote:
} Subject: Re: rolling over high-traffic logfiles?
}
} mv should use rename() to atomically replace the log file, in which case
} there is no race condition.

Fair enough.  There are two or more filesystem operations necessary even
if it is only one system call, but I suppose that's only a possible problem
on a multiprocessor system.

On Jul 16,  8:46am, Brian Harvell wrote:
} Subject: Re: rolling over high-traffic logfiles?
}
} > Use "ln" and "mv" to replace the file, rather than "cp" over it.
} 
} That won't work. You are going to have to send syslog a sighup before it
} will release it's current file handle and reopen to a new one.

I assumed that this was some other kind of log, not syslog output, and
that therefore the file had to exist in order to be correctly opened and
written by the logger.  Now that I think about it, Brian's answer is more
likely the correct one, and I should have mentioned it, too.

-- 
Bart Schaefer                                 Brass Lantern Enterprises
http://www.well.com/user/barts              http://www.brasslantern.com


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: rolling over high-traffic logfiles?
  1998-07-16 15:26     ` Bart Schaefer
@ 1998-07-16 16:14       ` Zefram
  1998-07-16 16:41         ` Nitin P Garg
  1998-07-24 23:42       ` Todd Graham Lewis
  1 sibling, 1 reply; 10+ messages in thread
From: Zefram @ 1998-07-16 16:14 UTC (permalink / raw)
  To: Bart Schaefer; +Cc: zsh-users, sweth

Bart Schaefer wrote:
>} mv should use rename() to atomically replace the log file, in which case
>} there is no race condition.
>
>Fair enough.  There are two or more filesystem operations necessary even
>if only only system call, but I suppose that's only a possible problem
>on a multiprocessor system.

POSIX.1 defines rename() to be atomic, as is historical practice.
On multiprocessor systems the kernel just has to take a little extra care.
POSIX.2 defines mv to behave as if it were calling rename().
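The practical consequence is that mv over an existing name never leaves the
name dangling; a sketch on throwaway files:

```shell
# Sketch: mv over an existing name is a single rename(); "extlog"
# resolves to a file at every instant.  Paths are throwaway examples.
cd "$(mktemp -d)"
echo "new" > extlog.new
echo "old" > extlog
mv -f extlog.new extlog     # one rename() call replaces the target
grep -q "new" extlog && echo "name now points at the replacement"
[ ! -e extlog.new ] && echo "source name is gone"
```

There is no window in which a writer opening "extlog" could fail to find it.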

-zefram


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: rolling over high-traffic logfiles?
  1998-07-16 16:14       ` Zefram
@ 1998-07-16 16:41         ` Nitin P Garg
  1998-07-17  7:36           ` Jos Backus
  0 siblings, 1 reply; 10+ messages in thread
From: Nitin P Garg @ 1998-07-16 16:41 UTC (permalink / raw)
  To: zsh-users, sweth


[snipped...lots of suggestions]

hmm...maybe there is a simple solution:

how about making the file a named pipe?  then a process could simply read
from it and write to a somefile.date, and the reading process could change
the .date part as frequently as one programs it to.

any problems with this?
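As a rough sketch of the idea (all paths and the date format hypothetical),
a FIFO plus a background reader looks like:

```shell
# Sketch of the named-pipe idea; paths and date format are examples only.
dir=$(mktemp -d)
mkfifo "$dir/extlog"                                  # the "log file" is a FIFO
cat "$dir/extlog" >> "$dir/extlog.$(date +%Y%m%d)" &  # reader drains to a dated file
echo "a log entry" > "$dir/extlog"                    # stand-in for the logger
wait                                                  # reader exits on writer close
grep -q "a log entry" "$dir"/extlog.2* && echo "entry delivered"
```

One caveat for syslog specifically: if no reader has the FIFO open, writers
block (or the daemon may drop entries), so the reader must be kept running.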

++garg


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: rolling over high-traffic logfiles?
  1998-07-16 16:41         ` Nitin P Garg
@ 1998-07-17  7:36           ` Jos Backus
  0 siblings, 0 replies; 10+ messages in thread
From: Jos Backus @ 1998-07-17  7:36 UTC (permalink / raw)
  To: zsh-users

On Thu, Jul 16, 1998 at 10:11:17PM +0530, Nitin P Garg wrote:
> how about making the file a named pipe ? 
> then a process could simply read from it and write to a somefile.date and the
> reading process could simply keep changing .date part as frequent as one
> programs it to.

This is what fifo and cyclog (part of Dan Bernstein's daemontools package, see
http://www.pobox.com/~djb/daemontools.html) do.

-- 
Jos Backus                          _/  _/_/_/    "Reliability means never
                                   _/  _/   _/     having to say you're sorry."
                                  _/  _/_/_/               -- D. J. Bernstein
                             _/  _/  _/    _/
Jos.Backus@nl.origin-it.com  _/_/   _/_/_/        use Std::Disclaimer;


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: rolling over high-traffic logfiles?
  1998-07-16 15:26     ` Bart Schaefer
  1998-07-16 16:14       ` Zefram
@ 1998-07-24 23:42       ` Todd Graham Lewis
  1 sibling, 0 replies; 10+ messages in thread
From: Todd Graham Lewis @ 1998-07-24 23:42 UTC (permalink / raw)
  To: Bart Schaefer; +Cc: zsh-users

On Thu, 16 Jul 1998, Bart Schaefer wrote:

> On Jul 16,  9:36am, Zefram wrote:
> } Subject: Re: rolling over high-traffic logfiles?
> }
> } mv should use rename() to atomically replace the log file, in which case
> } there is no race condition.
> 
> Fair enough.  There are two or more filesystem operations necessary even
> if it is only one system call, but I suppose that's only a possible problem
> on a multiprocessor system.

This is why kernel locking is such a big issue in SMP.  The semantics don't
change; rename() is always atomic under Unix.

--
Todd Graham Lewis                                     (800) 719-4664, x2804
******Linux******         MindSpring Enterprises      tlewis@mindspring.net


^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~1998-07-24 23:54 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
1998-07-16  3:49 rolling over high-traffic logfiles? Sweth Chandramouli
1998-07-16  5:08 ` Bart Schaefer
1998-07-16  8:36   ` Zefram
1998-07-16 12:46   ` Brian Harvell
1998-07-16 15:26     ` Bart Schaefer
1998-07-16 16:14       ` Zefram
1998-07-16 16:41         ` Nitin P Garg
1998-07-17  7:36           ` Jos Backus
1998-07-24 23:42       ` Todd Graham Lewis
1998-07-16 13:45 ` Duncan Sinclair

Code repositories for project(s) associated with this public inbox

	https://git.vuxu.org/mirror/zsh/

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).