From: cam@9.SQUiSH.org
To: vu3rdd@gmail.com, 9fans@9fans.net
Subject: Re: [9fans] file server speed
Date: Thu, 17 Jul 2014 14:44:42 -0700
Message-ID: <e325ede5c5cfacc68b8b3d6533bc8f09@9.SQUiSH.org>
In-Reply-To: <CAA6Yd9U4u42UtN1bLvyBeoTZ104_PwRUY2jmo3EffOaPLPN_FA@mail.gmail.com>

>> That was in an office environment. At home I use
>> fossil+(plan9port)venti running on linux-based NAS. This ends up
>> working very well for me since I have resources to spare on that
>> machine. This also lets me backup my arenas via CrashPlan. I use a
> 
> I am very interested to use such a setup. Could you please add more
> about the setup? What hardware do you use for the NAS? Any scripts
> etc?

(disclaimer: i am fairly new to the plan9 universe, so
my apologies for anything i missed.  most of this is 
from memory.)

my setup is slightly different, but similar in concept.  
my only machine is a mac, so i run plan9 under qemu.  i
have a p9p venti running on the mac with a few qemu vm's 
using it.

qemu also kind of solves the problem of the availability 
of compatible hardware for me.
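
(the qemu invocation itself is nothing special; roughly the
line below, though the disk image name, nic model and
networking mode are placeholders rather than what i actually
run, and with static addresses you would want tap/bridged
networking instead of user mode.)

qemu-system-i386 -m 1024 -hda plan9.raw -net nic,model=rtl8139 -net user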

when i first installed plan9, i installed using the 
fossil+venti option.  when i wanted to expand beyond a
simple terminal, i decided to get venti out of the vm
and run it off the mac using p9p.  i also wanted to keep
the data from the original venti.

the old venti was ~8G, but used the default arena size of
512M.  only 4 of the previous arenas were sealed, with the
fifth still active.  i also added another 10G worth of
arenas at 1G each, so this is roughly what i did.
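
(an aside on sizes: the dd counts below are in 1K blocks, and
each image is just the raw arena space plus the same ~772K of
slack, which i assume covers the arena-partition header:

	5 * 512 * 1024   = 2621440   + 772 = 2622212   old 512M arenas
	10 * 1024 * 1024 = 10485760  + 772 = 10486532  new 1G arenas
)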

on plan9:

# dump the five existing arenas (arenas00 - arenas04) to files
for (i in 0 1 2 3 4)
	rdarena /dev/sdC0/arenas arenas0$i > /tmp/arenas0$i
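
(the dumps are roughly arena-sized, so allow about 2.5G free
wherever /tmp lives on the plan9 side for the five 512M arenas.)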

on the mac:

# zero-filled files to hold the two arena partitions, the
# index section, and the bloom filter
dd if=/dev/zero of=arenas1.img bs=1024 count=2622212
dd if=/dev/zero of=arenas2.img bs=1024 count=10486532
dd if=/dev/zero of=isect.img bs=1024k count=1024
dd if=/dev/zero of=bloom.img bs=1024k count=128
# arenas1 keeps the default 512M arena size to match the old
# ones; arenas2 uses 1G arenas
fmtarenas arenas1 arenas1.img
fmtarenas -a 1024m arenas2 arenas2.img
fmtisect isect isect.img
fmtbloom -N 24 bloom.img
# fmtindex/buildindex read venti.conf (below) to set up the index
fmtindex venti.conf
buildindex -b venti.conf
$PLAN9/bin/venti/venti -c venti.conf
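
before replaying anything into it, i would make sure the new
venti is actually up; something like this on the mac
(hypothetical checks, not part of my original transcript):

nc -z localhost 17034 && echo venti is listening
curl -s http://localhost:8000/index | head	# venti's http status pages, per venti.conf below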

back to plan9:

# replay each dumped arena into the new venti over the network
for (i in /tmp/arenas*)
	venti/wrarena -h tcp!$ventiaddr!17034 $i
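
(nothing gets copied by hand here: wrarena replays the blocks
over the network, and since venti is content-addressed the
scores come out identical on the new server, so the existing
fossil keeps working once it is pointed at the new address.
the dumps in /tmp can be deleted afterwards.)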

----------
i keep venti's files on an apple raid mirror.  this is
my venti.conf:

index main
arenas /Volumes/venti/arenas1.img
arenas /Volumes/venti/arenas2.img
isect /Volumes/venti/isect.img
bloom /Volumes/venti/bloom.img
mem 48m
bcmem 96m
icmem 128m
addr tcp!*!17034
httpaddr tcp!*!8000
webroot /Volumes/venti/webroot
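
(mem, bcmem and icmem are venti's lump, disk-block and index
caches; together with the bloom filter, which is held in
memory, they account for a good chunk of the ~650M footprint
mentioned below.)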

----------
the end result is that i have a 12.5G venti with a 1G isect
and all of my previous data.  the venti memory footprint
is approximately 650M.  it works quite well for me.

in all instances below, i have adjusted my configuration
files to what one would use with separate machines.  we
have static ip addressing here, so my config reflects
that.  substitute the ip address of your p9p venti server
for $ventiaddr, your router for $gateway, your netmask for
$ipmask, and the address of the machine(s) using the p9p
venti for $clientip.

my auth+fs vm (the main p9p venti client) has the usual
9fat, nvram, swap, and fossil partitions with plan9.ini
as follows:

bootfile=sdC0!9fat!9pccpu
console=0 b115200 l8 pn s1
*noe820scan=1
bootargs=local!#S/sdC0/fossil
bootdisk=local!#S/sdC0/fossil
nobootprompt=local!#S/sdC0/fossil
venti=tcp!$ventiaddr!17034 -g $gateway ether /net/ether0 $clientip $ipmask
sysname=p9auth
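
(the network configuration rides along on the venti= line
because the kernel has to reach venti at boot, before anything
in cpurc has run; at least, that is my understanding of why it
is spelled that way.)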

----------
the other cpu vm's pxe boot off the auth+fs vm using pxe
config files like this:

bootfile=ether0!/386/9pccpu
console=0 b115200 l8 pn s1
readparts=
nvram=/dev/sdC0/nvram
*noe820scan=1
nobootprompt=tcp
sysname=whatever
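
(if memory serves, these live on the auth+fs vm under
/cfg/pxe/, named after the booting vm's mac address in
lower-case hex, e.g. /cfg/pxe/525400123456, which is a
made-up address.)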

----------
since two of my cpu vm's have local fossils that i would like
to keep archived, i added the following to /bin/cpurc.local:

venti=tcp!$ventiaddr!17034

and then i start the fossils from /cfg/$sysname/cpurc.
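
for completeness, the start line in /cfg/$sysname/cpurc would
be something like this (a sketch, not copied from my actual
file):

fossil/fossil -f /dev/sdC0/fossil

fossil picks up the venti address from the $venti environment
variable set above.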

hope this is helpful in some way.




