9fans - fans of the OS Plan 9 from Bell Labs
From: Eric Van Hensbergen <ericvh@gmail.com>
To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
Subject: Re: [9fans] venti conf
Date: Tue, 10 Mar 2009 09:34:05 -0500	[thread overview]
Message-ID: <DA186DBD-3438-4CE3-91F4-ADCF59B9E2B4@gmail.com> (raw)
In-Reply-To: <138575260903100719m1a3001bdld235cf0ae26f059d@mail.gmail.com>

You want to look at vbackup - that's what I'm using for my systems now.
Here's the script I use on my Linux system. At some point I may write
up something a bit more comprehensive about how to set this up, but
you can figure most of it out from the man pages and code if necessary.
I use lvm to get a consistent snapshot to back up; for my 70G home
directory a run takes about 30 minutes. The "config" needed to run
vnfs is in home.vac.log - that will give you a dump-like view of the
backups. At some point I'm going to write a write-logger for lvm so
that I only have to scan the blocks that changed since the last run;
that should reduce backup time and allow tighter granularity.

#!/bin/bash
# Nightly venti backup of /home via an lvm snapshot and vbackup.
export PLAN9=/usr/plan9
export PATH=$PATH:$PLAN9/bin:/sbin
export venti=tcp\!9.3.61.250\!venti

# Score of the previous backup, taken from the last line of home.vac.log
# and passed to vbackup so already-archived blocks aren't written again.
# A lone space expands to no argument at all on the first run.
lastscore=' '
if [ -f /etc/venti/home.vac.log ]; then
	lastscore=`tail -1 /etc/venti/home.vac.log | cut -d ' ' -f 5`
fi

# Drop any stale snapshot and take a fresh one of /home.
/sbin/lvremove -f /dev/lvm/homesnap
/sbin/lvcreate -s -n homesnap -L 20g /dev/lvm/home

echo Starting Venti Snapshot of /home/ericvh `date` $lastscore >> /etc/venti/home-time.log
/usr/plan9/bin/vbackup -f -w 4 /dev/lvm/homesnap $lastscore >> /etc/venti/home.vac.log
echo Finished Venti Snapshot of /home/ericvh `date` >> /etc/venti/home-time.log

# Clean up the snapshot and keep a copy of the logs in my home directory.
/sbin/lvremove -f /dev/lvm/homesnap
cp -rf /etc/venti/* /home/ericvh/etc/venti
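
If you want to browse the dump-like view, the accumulated home.vac.log
doubles as the vnfs config. Roughly something like this - I'm sketching
from memory, so check the vbackup/vnfs man pages for the exact
invocation and flags:

# serve every backup recorded in home.vac.log as a dump-style tree over NFS
venti=tcp\!9.3.61.250\!venti /usr/plan9/bin/vnfs /etc/venti/home.vac.log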

On Mar 10, 2009, at 9:19 AM, hugo rivera wrote:

> Hello,
> I am a little confused about setting up venti (on Linux).
> I followed the instructions found on the wiki, and venti is up. But
> now I am lost; as far as I understood from the man pages, I have to
> run vac every time I want to back something up and then unvac it
> every time I want to recover it, right? I have heard many times, here
> and elsewhere, that you can configure venti to back up the whole
> system at, say, 3:00 am. Am I supposed to create some kind of rc
> script to do this (using vac, of course)? And where does yesterday
> fit into this? I feel that I am missing a big part here. I want to be
> able to back up my home directory every day at 3:00 am.
> Sorry if the question has an obvious answer, but I cannot see the
> whole venti picture yet.
> --
> Hugo
>
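
For the "every day at 3:00 am" part: venti itself doesn't schedule
anything, you just run a script like the one above from cron. A minimal
sketch (the install path is made up - put the script wherever you like):

# run the snapshot/backup script daily at 03:00
0 3 * * * /usr/local/sbin/venti-home-backup >> /var/log/venti-backup.log 2>&1

And with vbackup plus vnfs you don't need to unvac anything to restore:
you mount the served tree over NFS and copy files back out.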




Thread overview: 10+ messages
2009-03-10 14:19 hugo rivera
2009-03-10 14:34 ` Eric Van Hensbergen [this message]
2009-03-10 14:52   ` hugo rivera
2009-03-11  8:52   ` hugo rivera
2009-03-11 12:28     ` Eric Van Hensbergen
2009-03-11 12:56       ` hugo rivera
2009-03-10 14:35 ` Robert Raschke
2009-03-10 14:43   ` hugo rivera
2009-03-10 16:08     ` Anthony Sorace
2009-03-10 15:32 ` Latchesar Ionkov
