To: vu3rdd@gmail.com, 9fans@9fans.net
Date: Thu, 17 Jul 2014 14:44:42 -0700
From: cam@9.SQUiSH.org
Subject: Re: [9fans] file server speed

>> That was in an office environment. At home I use
>> fossil+(plan9port)venti running on linux-based NAS. This ends up
>> working very well for me since I have resources to spare on that
>> machine. This also lets me backup my arenas via CrashPlan. I use a
>
> I am very interested to use such a setup. Could you please add more
> about the setup? What hardware do you use for the NAS? Any scripts
> etc?

(disclaimer: i am fairly new to the plan9 universe,
so my apologies for anything i missed. most of this is
from memory.)

my setup is slightly different, but similar in concept.

my only machine is a mac, so i run plan9 under qemu.
i have a p9p venti running on the mac with a few qemu vm's
using it. qemu also mostly solves the problem of finding
compatible hardware for me.

when i first installed plan9, i installed using the
fossil+venti option. when i wanted to expand beyond
a simple terminal, i decided to get venti out of the vm
and run it off the mac using p9p. i also wanted to keep
the data from the original venti.

the old venti was ~8G, but used the default arena size
of 512M. only 4 of the previous arenas were sealed, with
the fifth still active. i also added another 10G worth of
arenas at 1G each. (the dd counts below work out to the five
old 512M arenas in arenas1.img and ten 1G arenas in
arenas2.img, each image slightly larger than the arenas
themselves to leave room for venti's arena partition header.)
this is roughly what i did.

on plan9:

for (i in 0 1 2 3 4)
	venti/rdarena /dev/sdC0/arenas arenas0$i > /tmp/arenas0$i

on the mac:

dd if=/dev/zero of=arenas1.img bs=1024 count=2622212
dd if=/dev/zero of=arenas2.img bs=1024 count=10486532
dd if=/dev/zero of=isect.img bs=1024k count=1024
dd if=/dev/zero of=bloom.img bs=1024k count=128
fmtarenas arenas1 arenas1.img
fmtarenas -a 1024m arenas2 arenas2.img
fmtisect isect isect.img
fmtbloom -N 24 bloom.img
fmtindex venti.conf
buildindex -b venti.conf
$PLAN9/bin/venti/venti -c venti.conf

back to plan9:

for (i in /tmp/arenas*)
	venti/wrarena -h tcp!$ventiaddr!17034 $i

----------

i keep the venti files on an apple raid mirror. this
is my venti.conf:

index main
arenas /Volumes/venti/arenas1.img
arenas /Volumes/venti/arenas2.img
isect /Volumes/venti/isect.img
bloom /Volumes/venti/bloom.img
mem 48m
bcmem 96m
icmem 128m
addr tcp!*!17034
httpaddr tcp!*!8000
webroot /Volumes/venti/webroot

----------

the end result is that i had a 12.5G venti with a 1G isect
and all my previous data content. the venti memory footprint
is approximately 650M. it works quite well for me.

in all the examples below, i have adjusted my configuration
files to what one would use with separate machines.
we have static ip addressing here, so my config reflects
that. substitute $ventiaddr for the ip address of
your p9p venti server, $gateway for your router, $ipmask
for your netmask, and $clientip for the machine(s) using
the p9p venti.
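a quick way to sanity-check the copy before pointing a fossil
at the new venti (a rough, untested sketch): fossil/last prints
the score of the last archived root in the fossil partition,
and venti/read should be able to fetch that block back from the
p9p venti. $score below stands for the score fossil/last prints,
minus the vac: prefix.

fossil/last /dev/sdC0/fossil
venti/read -h tcp!$ventiaddr!17034 $score > /dev/null && echo ok

----------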
my auth+fs vm (the main p9p venti client) has the usual
9fat, nvram, swap, and fossil partitions, with plan9.ini
as follows:

bootfile=sdC0!9fat!9pccpu
console=0 b115200 l8 pn s1
*noe820scan=1
bootargs=local!#S/sdC0/fossil
bootdisk=local!#S/sdC0/fossil
nobootprompt=local!#S/sdC0/fossil
venti=tcp!$ventiaddr!17034 -g $gateway ether /net/ether0 $clientip $ipmask
sysname=p9auth

----------

the other cpu vm's pxe boot off the auth+fs vm using
pxe config files like this:

bootfile=ether0!/386/9pccpu
console=0 b115200 l8 pn s1
readparts=
nvram=/dev/sdC0/nvram
*noe820scan=1
nobootprompt=tcp
sysname=whatever

----------

since two of my cpu vm's have local fossils that i would
like to keep archived, i added the following to
/bin/cpurc.local:

venti=tcp!$ventiaddr!17034

and then i start the fossils from /cfg/$sysname/cpurc.

hope this is helpful in some way.
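p.s. for those local fossils, the relevant pieces end up looking
roughly like this (a sketch only; the fossil partition name is
just an example):

in /bin/cpurc.local:

venti=tcp!$ventiaddr!17034

in /cfg/$sysname/cpurc:

fossil/fossil -f /dev/sdC0/fossil

as far as i understand, fossil falls back to the venti
environment variable when its own configuration doesn't name a
venti server, so the cpurc.local line is what points the local
fossils at the p9p venti.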