* [9fans] file server speed @ 2014-07-15 6:13 kokamoto 2014-07-15 13:02 ` Anthony Sorace 2014-07-15 16:30 ` cinap_lenrek 0 siblings, 2 replies; 31+ messages in thread From: kokamoto @ 2014-07-15 6:13 UTC (permalink / raw) To: 9fans I've experienced three kinds of Plan9 file servers, Lab's one, 9atom and plan9front. I felt that the file server speed is 9front > 9atom >> lab's. This is based on various different machines for each. Is my feeling wrong, or does it have some basis in fact? This is to prepare the fastest file server on a planned, relatively weak energy-saving machine. Kenji ^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [9fans] file server speed 2014-07-15 6:13 [9fans] file server speed kokamoto @ 2014-07-15 13:02 ` Anthony Sorace 2014-07-15 16:30 ` cinap_lenrek 1 sibling, 0 replies; 31+ messages in thread From: Anthony Sorace @ 2014-07-15 13:02 UTC (permalink / raw) To: Fans of the OS Plan 9 from Bell Labs [-- Attachment #1: Type: text/plain, Size: 453 bytes --] On Jul 15, 2014, at 2:13 , kokamoto@hera.eonet.ne.jp wrote: > I've experienced three kinds of Plan9 file servers, > Lab's one, 9atom and plan9front. Can you clarify which file server, specifically, you're comparing for each of these? The Labs doesn't distribute kenfs any more, and venti+fossil is a very different model. The old kenfs should reliably trounce it on performance. Then there are newer options in both 9atom and 9front. Anthony [-- Attachment #2: Message signed with OpenPGP using GPGMail --] [-- Type: application/pgp-signature, Size: 169 bytes --] ^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [9fans] file server speed 2014-07-15 6:13 [9fans] file server speed kokamoto 2014-07-15 13:02 ` Anthony Sorace @ 2014-07-15 16:30 ` cinap_lenrek 2014-07-15 16:56 ` erik quanstrom 2014-07-16 0:04 ` kokamoto 1 sibling, 2 replies; 31+ messages in thread From: cinap_lenrek @ 2014-07-15 16:30 UTC (permalink / raw) To: 9fans not a fair comparison. i'd look into kenfs for a fileserver-only machine. might require some time to get it to work with your hardware tho. -- cinap ^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [9fans] file server speed 2014-07-15 16:30 ` cinap_lenrek @ 2014-07-15 16:56 ` erik quanstrom 2014-07-16 0:04 ` kokamoto 1 sibling, 0 replies; 31+ messages in thread From: erik quanstrom @ 2014-07-15 16:56 UTC (permalink / raw) To: 9fans On Tue Jul 15 12:31:57 EDT 2014, cinap_lenrek@felloff.net wrote: > not a fair comparison. > > i'd look into kenfs for a fileserver-only machine. might > require some time to get it to work with your hardware tho. if you have a recent 64-bit intel machine, the hardware support should be nearly the same as plan 9 for networking, and ahci and aoe are supported for disks. getting kenfs going seems to trip a lot of people up. but 9atom's mkfsboot (see mkfsconf(8)) should make life easier. perhaps an image of the current system bootable from the first ahci device might be better.... - erik ^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [9fans] file server speed 2014-07-15 16:30 ` cinap_lenrek 2014-07-15 16:56 ` erik quanstrom @ 2014-07-16 0:04 ` kokamoto 2014-07-16 0:36 ` kokamoto ` (2 more replies) 1 sibling, 3 replies; 31+ messages in thread From: kokamoto @ 2014-07-16 0:04 UTC (permalink / raw) To: 9fans > not a fair comparison. Yes, I should have been more specific. my intention was cwfs > fossil+venti of 9atom >> fossil+venti labs. I did not consider kenfs itself, because I consider it should be file+auth+cpu server. The last is not important, but for drawterm from others. Recent kenfs can be such a machine? Please remember I plan it for my private home machine, not any sophisticated office use. Kenji ^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [9fans] file server speed 2014-07-16 0:04 ` kokamoto @ 2014-07-16 0:36 ` kokamoto 2014-07-16 17:26 ` erik quanstrom 2014-07-16 14:23 ` Steven Stallion 2014-07-16 17:23 ` erik quanstrom 2 siblings, 1 reply; 31+ messages in thread From: kokamoto @ 2014-07-16 0:36 UTC (permalink / raw) To: 9fans kenfs(of course 64 bit)+auth server +++9pi terminal/cpu server may be best for home use... Kenji ^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [9fans] file server speed 2014-07-16 0:36 ` kokamoto @ 2014-07-16 17:26 ` erik quanstrom 0 siblings, 0 replies; 31+ messages in thread From: erik quanstrom @ 2014-07-16 17:26 UTC (permalink / raw) To: 9fans On Wed Jul 16 13:06:16 EDT 2014, kokamoto@hera.eonet.ne.jp wrote: > kenfs(of course 64 bit)+auth server +++9pi terminal/cpu server > may be best for home use... i would go ahead and use two raspberry pi machines. having a dedicated cpu server is quite nice, and of course ken's file server is not an auth server. - erik ^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [9fans] file server speed 2014-07-16 0:04 ` kokamoto 2014-07-16 0:36 ` kokamoto @ 2014-07-16 14:23 ` Steven Stallion 2014-07-16 14:53 ` Ramakrishnan Muthukrishnan ` (2 more replies) 2014-07-16 17:23 ` erik quanstrom 2 siblings, 3 replies; 31+ messages in thread From: Steven Stallion @ 2014-07-16 14:23 UTC (permalink / raw) To: Fans of the OS Plan 9 from Bell Labs On Tue, Jul 15, 2014 at 7:04 PM, <kokamoto@hera.eonet.ne.jp> wrote: >> not a fair comparison. > > Yes, I should have been more specific. > my intention was cwfs > fossil+venti of 9atom >> fossil+venti labs. > I did not consider kenfs itself, because I consider it should be > file+auth+cpu server. The last is not important, but for drawterm > from others. > > Recent kenfs can be such a machine? > Please remember I plan it for my private home machine, not > any sophisticated office use. kenfs works well, but you have to be well prepared to maintain it. Invest in a decent UPS - preferably one that is supported by the auto-shutdown (ISTR support was added for that a while back). You need to be careful when sizing your cache - I would invest in a pair of decent SSDs for cache, and two or more drives for housing the WORM. Be prepared for failure. The last large kenfs I maintained (around 16TB usable, 48TB raw) worked very well but would still crash once every year or two. Make sure you keep hard copies of your fsconfig and get comfortable with scripting as erik mentioned. That was in an office environment. At home I use fossil+(plan9port)venti running on a linux-based NAS. This ends up working very well for me since I have resources to spare on that machine. This also lets me back up my arenas via CrashPlan. I use a cheap SSD for fossil in my file server and a small SATA DOM for booting (the idea is I can throw away the SSD at any time and still recover). Speed has been on par with kenfs, but it takes a little work to get there.
I hope this is useful to you - maintaining an fs shouldn't be taken on lightly, but it is one of the best ways to understand what it means to maintain a plan9 installation. Cheers, Steve ^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [9fans] file server speed 2014-07-16 14:23 ` Steven Stallion @ 2014-07-16 14:53 ` Ramakrishnan Muthukrishnan 2014-07-17 1:13 ` Steven Stallion 2014-07-17 21:44 ` cam 2014-07-16 17:41 ` erik quanstrom 2014-07-16 23:15 ` kokamoto 2 siblings, 2 replies; 31+ messages in thread From: Ramakrishnan Muthukrishnan @ 2014-07-16 14:53 UTC (permalink / raw) To: Fans of the OS Plan 9 from Bell Labs On Wed, Jul 16, 2014 at 7:53 PM, Steven Stallion <sstallion@gmail.com> wrote: > On Tue, Jul 15, 2014 at 7:04 PM, <kokamoto@hera.eonet.ne.jp> wrote: >>> not a fair comparison. >> >> Yes, I should have been more specific. >> my intention was cwfs > fossil+venti of 9atom >> fossil+venti labs. >> I did not consider kenfs itself, because I consider it should be >> file+auth+cpu server. The last is not important, but for drawterm >> from others. >> >> Recent kenfs can be such a machine? >> Please remember I plan it for my private home machine, not >> any sophisticated office use. > > kenfs works well, but you have to be well prepared to maintain it. > Invest in a decent UPS - preferably one that is supported by the > auto-shutdown (ISTR support was added for that a while back). You need > to be careful when sizing your cache - I would invest in a pair of > decent SSDs for cache, and two or more drives for housing the WORM. Be > prepared for failure. The last large kenfs I maintained (around 16TB > usable, 48TB raw) worked very well but would still crash once every > year or two. Make sure you keep hard copies of your fsconfig and get > comfortable with scripting as erik mentioned. > > That was in an office environment. At home I use > fossil+(plan9port)venti running on a linux-based NAS. This ends up > working very well for me since I have resources to spare on that > machine. This also lets me back up my arenas via CrashPlan. I use a I am very interested in using such a setup. Could you please add more about the setup? What hardware do you use for the NAS? Any scripts etc?
-- Ramakrishnan ^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [9fans] file server speed 2014-07-16 14:53 ` Ramakrishnan Muthukrishnan @ 2014-07-17 1:13 ` Steven Stallion 2014-07-17 17:13 ` Ramakrishnan Muthukrishnan 2014-07-17 21:44 ` cam 1 sibling, 1 reply; 31+ messages in thread From: Steven Stallion @ 2014-07-17 1:13 UTC (permalink / raw) To: Fans of the OS Plan 9 from Bell Labs On Wed, Jul 16, 2014 at 9:53 AM, Ramakrishnan Muthukrishnan <vu3rdd@gmail.com> wrote: > I am very interested to use such a setup. Could you please add more > about the setup? What hardware do you use for the NAS? Any scripts > etc? Sure thing - I've copied everything you should need under sources/contrib/stallion/venti (http://plan9.bell-labs.com/sources/contrib/stallion/venti/) Steve ^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [9fans] file server speed 2014-07-17 1:13 ` Steven Stallion @ 2014-07-17 17:13 ` Ramakrishnan Muthukrishnan 0 siblings, 0 replies; 31+ messages in thread From: Ramakrishnan Muthukrishnan @ 2014-07-17 17:13 UTC (permalink / raw) To: Fans of the OS Plan 9 from Bell Labs On Thu, Jul 17, 2014 at 6:43 AM, Steven Stallion <sstallion@gmail.com> wrote: > On Wed, Jul 16, 2014 at 9:53 AM, Ramakrishnan Muthukrishnan > <vu3rdd@gmail.com> wrote: >> I am very interested to use such a setup. Could you please add more >> about the setup? What hardware do you use for the NAS? Any scripts >> etc? > > Sure thing - I've copied everything you should need under > sources/contrib/stallion/venti > (http://plan9.bell-labs.com/sources/contrib/stallion/venti/) Thanks, Steve. -- Ramakrishnan ^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [9fans] file server speed 2014-07-16 14:53 ` Ramakrishnan Muthukrishnan 2014-07-17 1:13 ` Steven Stallion @ 2014-07-17 21:44 ` cam 1 sibling, 0 replies; 31+ messages in thread From: cam @ 2014-07-17 21:44 UTC (permalink / raw) To: vu3rdd, 9fans >> That was in an office environment. At home I use >> fossil+(plan9port)venti running on linux-based NAS. This ends up >> working very well for me since I have resources to spare on that >> machine. This also lets me backup my arenas via CrashPlan. I use a > > I am very interested to use such a setup. Could you please add more > about the setup? What hardware do you use for the NAS? Any scripts > etc? (disclaimer: i am fairly new to the plan9 universe, so my apologies for anything i missed. most of this is from memory.) my setup is slightly different, but similar in concept. my only machine is a mac, so i run plan9 under qemu. i have a p9p venti running on the mac with a few qemu vm's using it. qemu also kind of solves the problem of the availability of compatible hardware for me. when i first installed plan9, i installed using the fossil+venti option. when i wanted to expand beyond a simple terminal, i decided to get venti out of the vm and run it off the mac using p9p. i also wanted to keep the data from the original venti. the old venti was ~8G, but used the default arena size of 512M. only 4 of the previous arenas were sealed, with the fifth active. i also added another 10G worth of arenas at 1G each, so this is roughly what i did.
on plan9:

for (i in 0 1 2 3 4) rdarena /dev/sdC0/arenas arenas0$i > /tmp/arenas0$i

on the mac:

dd if=/dev/zero of=arenas1.img bs=1024 count=2622212
dd if=/dev/zero of=arenas2.img bs=1024 count=10486532
dd if=/dev/zero of=isect.img bs=1024k count=1024
dd if=/dev/zero of=bloom.img bs=1024k count=128
fmtarenas arenas1 arenas1.img
fmtarenas -a 1024m arenas2 arenas2.img
fmtisect isect isect.img
fmtbloom -N 24 bloom.img
fmtindex venti.conf
buildindex -b venti.conf
$PLAN9/bin/venti/venti -c venti.conf

back to plan9:

for (i in /tmp/arenas*) venti/wrarena -h tcp!$ventiaddr!17034 $i

----------

i keep the files on an apple raid mirror for venti. this is my venti.conf:

index main
arenas /Volumes/venti/arenas1.img
arenas /Volumes/venti/arenas2.img
isect /Volumes/venti/isect.img
bloom /Volumes/venti/bloom.img
mem 48m
bcmem 96m
icmem 128m
addr tcp!*!17034
httpaddr tcp!*!8000
webroot /Volumes/venti/webroot

----------

the end result is that i had a 12.5G venti with a 1G isect and all my previous data content. the venti memory footprint is approximately 650M. it works quite well for me. in all instances below, i have adjusted my configuration files to what one would use if using separate machines. we have static ip addressing here, so my config will reflect that. substitute $ventiaddr for the ip address of your p9p venti server, $gateway for your router, $ipmask for your netmask, and $clientip for the machine(s) using the p9p venti.
my auth+fs vm (the main p9p venti client) has the usual 9fat, nvram, swap, and fossil partitions with plan9.ini as follows:

bootfile=sdC0!9fat!9pccpu
console=0 b115200 l8 pn s1
*noe820scan=1
bootargs=local!#S/sdC0/fossil
bootdisk=local!#S/sdC0/fossil
nobootprompt=local!#S/sdC0/fossil
venti=tcp!$ventiaddr!17034 -g $gateway ether /net/ether0 $clientip $ipmask
sysname=p9auth

----------

the other cpu vm's pxe boot off the auth+fs vm using pxe config files like this:

bootfile=ether0!/386/9pccpu
console=0 b115200 l8 pn s1
readparts=
nvram=/dev/sdC0/nvram
*noe820scan=1
nobootprompt=tcp
sysname=whatever

----------

since two of my cpu vm's have local fossils that i would like to keep archived, i added the following to /bin/cpurc.local:

venti=tcp!$ventiaddr!17034

and then i start the fossils from /cfg/$sysname/cpurc. hope this is helpful in some way. ^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [9fans] file server speed 2014-07-16 14:23 ` Steven Stallion 2014-07-16 14:53 ` Ramakrishnan Muthukrishnan @ 2014-07-16 17:41 ` erik quanstrom 2014-07-16 23:15 ` kokamoto 2 siblings, 0 replies; 31+ messages in thread From: erik quanstrom @ 2014-07-16 17:41 UTC (permalink / raw) To: 9fans > kenfs works well, but you have to be well prepared to maintain it. > Invest in a decent UPS - preferably one that is supported by the > auto-shutdown (ISTR support was added for that a while back). You need > to be careful when sizing your cache - I would invest in a pair of > decent SSDs for cache, and two or more drives for housing the WORM. Be > prepared for failure. The last large kenfs I maintained (around 16TB > usable, 48TB raw) worked very well but would still crash once every > year or two. Make sure you keep hard copies of your fsconfig and get > comfortable with scripting as erik mentioned. the cache comment is spot on. i've stuck with 20GB caches. this is because a very large cache will use too many buckets. buckets are stored on disk in the cache, but a bucket needs to be in memory to use that part of the cache. with too many buckets, most of the i/o will be thrashing through different buckets. i'm very familiar with doing this wrong, as i set up one machine (kibbiee, for those who remember) with a 750G cache. that was a bad idea. i'd have to add that after running ken's file server at home since 2005, and at least half a dozen in work environments, the file system has been pretty good with my data. from 2006 to 2008 i ran with local scsi disks on an old va linux box. in that location i had no ups, and lightning took out power about once a week. surprisingly, this didn't cause any trouble. the only time i did have trouble was not the file server's fault entirely. the storage was disconnected during a dump, and then the file server was rebooted. in that case, only the corrupt dump was lost.
clearly any file server is ideally on a ups, with generator backup and automatic shutdown. :-) in my experience, using ssds for cache doesn't speed up the file server much, since one is usually rtt limited, and by craftiness with the cache, files are typically written sequentially to the disk. i have ssds in my current file server, and am a bit disappointed in the performance. - erik ^ permalink raw reply [flat|nested] 31+ messages in thread
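erik's bucket argument can be put in back-of-the-envelope form. The sketch below is illustrative only: the constants (cache bytes mapped per bucket, RAM per resident bucket, RAM budget for buckets) are assumptions for the sake of the arithmetic, not kenfs's real on-disk parameters.

```python
# Hypothetical constants, chosen only to illustrate the scaling argument.
BUCKET_COVERS   = 8 * 1024 * 1024        # cache bytes mapped per bucket (assumed)
BUCKET_MEM      = 4 * 1024               # RAM used per in-core bucket (assumed)
RAM_FOR_BUCKETS = 128 * 1024 * 1024      # RAM set aside for buckets (assumed)

def resident_fraction(cache_bytes):
    # fraction of the cache whose bucket can stay resident in memory;
    # below 1.0, i/o starts thrashing through bucket reloads
    buckets = cache_bytes // BUCKET_COVERS
    in_core = min(buckets, RAM_FOR_BUCKETS // BUCKET_MEM)
    return in_core / buckets

# a 20 GB cache keeps every bucket in memory; a 750 GB cache cannot
small = resident_fraction(20 * 2**30)
large = resident_fraction(750 * 2**30)
```

With these numbers a 20 GB cache has every bucket resident, while a 750 GB cache fits only about a third of its buckets, which matches erik's experience with the 750G cache being a bad idea.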
* Re: [9fans] file server speed 2014-07-16 14:23 ` Steven Stallion 2014-07-16 14:53 ` Ramakrishnan Muthukrishnan 2014-07-16 17:41 ` erik quanstrom @ 2014-07-16 23:15 ` kokamoto 2014-07-17 0:29 ` Steven Stallion 2 siblings, 1 reply; 31+ messages in thread From: kokamoto @ 2014-07-16 23:15 UTC (permalink / raw) To: 9fans > That was in an office environment. At home I use > fossil+(plan9port)venti running on linux-based NAS. Do you use wireless LAN? If so, do you also need a wireless bridge? The combination of NAS and venti sounds like a charm, because the smallest config is two machines. How about the power consumption of that machine? Can a recent low-power machine do that task? Kenji ^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [9fans] file server speed 2014-07-16 23:15 ` kokamoto @ 2014-07-17 0:29 ` Steven Stallion 2014-07-17 0:31 ` Steven Stallion ` (3 more replies) 0 siblings, 4 replies; 31+ messages in thread From: Steven Stallion @ 2014-07-17 0:29 UTC (permalink / raw) To: Fans of the OS Plan 9 from Bell Labs On Wed, Jul 16, 2014 at 6:15 PM, <kokamoto@hera.eonet.ne.jp> wrote: >> That was in an office environment. At home I use >> fossil+(plan9port)venti running on linux-based NAS. > > Do you use wireless LAN? > If so you also need wireless bridge? > The combination of NAS and venti sounds like charm, > because the snmallest config is two machines. > > How about the power-eating of that machine? > Recent low-power machine can do that task? I've used ReadyNAS appliances at home for almost 10 years. The current product line is made up of low-power Atoms. I'm running a RAID5 across 4 500G enterprise SATA drives (that should indicate how old this unit is pretty well...) I have a wired network primarily in the rack in the office at home - I absolutely would not use wireless to connect fossil to venti (fossil does *not* cope well with the connection to venti dropping). I switched over to fossil on the ReadyNAS a little over a year ago and have had really good luck; not a single crash. Performance has also been very good. It just so happens I wrote a README at the time since it was non-obvious how to set it up correctly: https://dl.dropboxusercontent.com/u/102312978/FOSSIL Cheers, Steve ^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [9fans] file server speed 2014-07-17 0:29 ` Steven Stallion @ 2014-07-17 0:31 ` Steven Stallion 2014-07-17 1:10 ` john francis lee 2014-07-17 1:09 ` john francis lee ` (2 subsequent siblings) 3 siblings, 1 reply; 31+ messages in thread From: Steven Stallion @ 2014-07-17 0:31 UTC (permalink / raw) To: Fans of the OS Plan 9 from Bell Labs On Wed, Jul 16, 2014 at 7:29 PM, Steven Stallion <sstallion@gmail.com> wrote: > On Wed, Jul 16, 2014 at 6:15 PM, <kokamoto@hera.eonet.ne.jp> wrote: > It just so happens I wrote a README at the time since it was > non-obvious how to set it up correctly: Corrected link: https://dl.dropboxusercontent.com/u/102312978/FOSSIL%2BVENTI ^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [9fans] file server speed 2014-07-17 0:31 ` Steven Stallion @ 2014-07-17 1:10 ` john francis lee 0 siblings, 0 replies; 31+ messages in thread From: john francis lee @ 2014-07-17 1:10 UTC (permalink / raw) To: Fans of the OS Plan 9 from Bell Labs Sorry, found it now. On 07/17/2014 07:31 AM, Steven Stallion wrote: > On Wed, Jul 16, 2014 at 7:29 PM, Steven Stallion <sstallion@gmail.com> wrote: >> On Wed, Jul 16, 2014 at 6:15 PM, <kokamoto@hera.eonet.ne.jp> wrote: >> It just so happens I wrote a README at the time since it was >> non-obvious how to set it up correctly: > > Corrected link: https://dl.dropboxusercontent.com/u/102312978/FOSSIL%2BVENTI > -- John Francis Lee Thanon Sanam Gila, Ban Fa Sai 79/151 Moo 22 T. Ropwieng Mueang Chiangrai 57000 Thailand ^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [9fans] file server speed 2014-07-17 0:29 ` Steven Stallion 2014-07-17 0:31 ` Steven Stallion @ 2014-07-17 1:09 ` john francis lee 2014-07-17 1:10 ` Bakul Shah 2014-07-17 16:56 ` erik quanstrom 3 siblings, 0 replies; 31+ messages in thread From: john francis lee @ 2014-07-17 1:09 UTC (permalink / raw) To: Fans of the OS Plan 9 from Bell Labs Not Found The resource could not be found. WSGI Server On 07/17/2014 07:29 AM, Steven Stallion wrote: > On Wed, Jul 16, 2014 at 6:15 PM, <kokamoto@hera.eonet.ne.jp> wrote: >>> That was in an office environment. At home I use >>> fossil+(plan9port)venti running on linux-based NAS. >> >> Do you use wireless LAN? >> If so you also need wireless bridge? >> The combination of NAS and venti sounds like charm, >> because the snmallest config is two machines. >> >> How about the power-eating of that machine? >> Recent low-power machine can do that task? > > I've used ReadyNAS appliances at home for almost 10 years. The current > product line is made up of low-power Atoms. I'm running a RAID5 across > 4 500G enterprise SATA drives (that should indicate how old this unit > is pretty well...) I have a wired network primarily in the rack in the > office at home - I absolutely would not use wireless to connect fossil > to venti (fossil does *not* cope well with the connection to venti > dropping). > > I switched over to fossil on the ReadyNAS a little over a year ago and > have had really good luck; not a single crash. Performance has also > been very good. > > It just so happens I wrote a README at the time since it was > non-obvious how to set it up correctly: > https://dl.dropboxusercontent.com/u/102312978/FOSSIL > > Cheers, > > Steve > -- John Francis Lee Thanon Sanam Gila, Ban Fa Sai 79/151 Moo 22 T. Ropwieng Mueang Chiangrai 57000 Thailand ^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [9fans] file server speed 2014-07-17 0:29 ` Steven Stallion 2014-07-17 0:31 ` Steven Stallion 2014-07-17 1:09 ` john francis lee @ 2014-07-17 1:10 ` Bakul Shah 2014-07-17 16:56 ` erik quanstrom 3 siblings, 0 replies; 31+ messages in thread From: Bakul Shah @ 2014-07-17 1:10 UTC (permalink / raw) To: Fans of the OS Plan 9 from Bell Labs On Wed, 16 Jul 2014 19:29:43 CDT Steven Stallion <sstallion@gmail.com> wrote: > I absolutely would not use wireless to connect fossil > to venti (fossil does *not* cope well with the connection to venti > dropping). To deal with this you can use a local venti proxy like contrib/vsrinvas/vtrc.c. One change I would like to see is to cache all the blocks for the last (or last N) snapshots so that disconnected operation is possible. Not sure if there is a way to do this within fossil. ^ permalink raw reply [flat|nested] 31+ messages in thread
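The caching-proxy idea lends itself to a tiny sketch. This is not vtrc itself, just an illustration of the principle; the `remote` object with `read`/`write` methods is a hypothetical stand-in for the real venti connection.

```python
import hashlib

class VentiCache:
    """Write-through local cache in front of a remote venti (sketch only)."""
    def __init__(self, remote):
        self.remote = remote   # hypothetical object with read(score)/write(block)
        self.local = {}        # score -> block

    def write(self, block):
        score = hashlib.sha1(block).digest()
        self.local[score] = block
        self.remote.write(block)           # write through so the archive stays authoritative
        return score

    def read(self, score):
        if score in self.local:            # cached blocks survive a dropped connection
            return self.local[score]
        block = self.remote.read(score)
        # CAS blocks are self-verifying: recompute the score before trusting them
        assert hashlib.sha1(block).digest() == score
        self.local[score] = block
        return block
```

Caching the blocks of the last N snapshots, as the message suggests, would amount to pre-populating `local` with every score reachable from those snapshots' root scores.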
* Re: [9fans] file server speed 2014-07-17 0:29 ` Steven Stallion ` (2 preceding siblings ...) 2014-07-17 1:10 ` Bakul Shah @ 2014-07-17 16:56 ` erik quanstrom 2014-07-17 18:14 ` Bakul Shah 3 siblings, 1 reply; 31+ messages in thread From: erik quanstrom @ 2014-07-17 16:56 UTC (permalink / raw) To: 9fans > I've used ReadyNAS appliances at home for almost 10 years. The current > product line is made up of low-power Atoms. I'm running a RAID5 across > 4 500G enterprise SATA drives (that should indicate how old this unit > is pretty well...) I have a wired network primarily in the rack in the > office at home - I absolutely would not use wireless to connect fossil > to venti (fossil does *not* cope well with the connection to venti > dropping). so this is absolutely a potential problem with ken's file server using aoe storage. the way i dealt with the problem is to wait forever for aoe to come back. since there's no connection, there is no procedure for reestablishing a connection. there is also no overhead like a proxy connection. i gave a little paper at iwp9 about the "diskless file server" and a few weeks after we got back, a bug in the storage caused it to stop responding. we were able to fix the storage, load a new image, and restart the storage kernel. the file server didn't miss a beat and none of the cpu servers had any issues. i would think the same approach would work with fossil. of course one would need a more sophisticated solution than "just wait forever", due to the tcp connection. - erik ^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [9fans] file server speed 2014-07-17 16:56 ` erik quanstrom @ 2014-07-17 18:14 ` Bakul Shah 2014-07-17 18:39 ` erik quanstrom 2014-07-17 19:01 ` Bakul Shah 0 siblings, 2 replies; 31+ messages in thread From: Bakul Shah @ 2014-07-17 18:14 UTC (permalink / raw) To: Fans of the OS Plan 9 from Bell Labs On Jul 17, 2014, at 9:56 AM, erik quanstrom <quanstro@quanstro.net> wrote: > i would think the same approach would work with fossil. of course one > would need a more sophisticated solution than "just wait forever", due to > the tcp connection. There is no particular reason for it to be tcp given lack of any authentication. The beauty of CAS is that it need not even talk to the same server! ^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [9fans] file server speed 2014-07-17 18:14 ` Bakul Shah @ 2014-07-17 18:39 ` erik quanstrom 2014-07-17 19:01 ` Bakul Shah 1 sibling, 0 replies; 31+ messages in thread From: erik quanstrom @ 2014-07-17 18:39 UTC (permalink / raw) To: bakul, 9fans > There is no particular reason for it to be tcp given lack of any authentication. The beauty of CAS is that it need not even talk to the same server but it is! even when venti and fossil are on the same machine. - erik ^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [9fans] file server speed 2014-07-17 18:14 ` Bakul Shah 2014-07-17 18:39 ` erik quanstrom @ 2014-07-17 19:01 ` Bakul Shah 2014-07-17 19:10 ` erik quanstrom 1 sibling, 1 reply; 31+ messages in thread From: Bakul Shah @ 2014-07-17 19:01 UTC (permalink / raw) To: Fans of the OS Plan 9 from Bell Labs On Thu, 17 Jul 2014 11:14:01 PDT Bakul Shah <bakul@bitblocks.com> wrote: > > On Jul 17, 2014, at 9:56 AM, erik quanstrom <quanstro@quanstro.net> wrote: > > > i would think the same approach would work with fossil. of course one > > would need a more sophisticated solution than "just wait forever", due to > > the tcp connection. > > There is no particular reason for it to be tcp given lack of any > authentication. The beauty of CAS is that it need not even talk > to the same server! To expand this a bit: So long as a server returns a block corresponding to its SHA1 score, you (the client) don't care whether it is the same server you wrote the original block to or another (and you can always verify the returned block). This opens up some interesting choices. For instance, you can multicast each read or write or you can set up a hierarchy of servers and switch to another one (or even get different blocks from different instances). How the writes are handled/replicated/stored is entirely up to the server (cluster). Or you can implement a bittorrent-like facility on top of venti (I call it bitventi or b20). I have been playing with Lucho's 'govt' package to explore stuff like this (though staying with the current venti wire protocol for now). ^ permalink raw reply [flat|nested] 31+ messages in thread
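The "you can always verify the returned block" property is what makes the server interchangeable, and it is essentially a one-liner. A sketch of the client-side check (illustrative, not venti's actual client code):

```python
import hashlib

def verify(score, block):
    # a block fetched from ANY server is acceptable iff its SHA-1
    # recomputes to the requested score; a wrong, stale, or tampered
    # block is rejected regardless of which server sent it
    return hashlib.sha1(block).digest() == score

block = b"some venti block"
score = hashlib.sha1(block).digest()
ok = verify(score, block)           # genuine block passes
bad = verify(score, b"tampered")    # anything else fails
```

This is also why the transport need not be authenticated or even connection-oriented, as the message argues: integrity travels with the score, not with the channel.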
* Re: [9fans] file server speed 2014-07-17 19:01 ` Bakul Shah @ 2014-07-17 19:10 ` erik quanstrom 2014-07-17 19:26 ` Bakul Shah 0 siblings, 1 reply; 31+ messages in thread From: erik quanstrom @ 2014-07-17 19:10 UTC (permalink / raw) To: 9fans > So long as a server returns a block corresponding to its SHA1 > score, you (the client) don't care whether it is the same > server you wrote the original block to or another (and you can > always verify the returned block). This opens up some but this isn't unique to content-addressed storage. as long as the block contents are the same, normal block storage can be served from anywhere. > interesting choices. For instance, you can multicast each read > or write or you can set up a hierarchy of servers and switch > to another one (or even get different blocks from different > instances). How the writes are handled/replicated/stored is > entirely up to the server (cluster). Or you can implement a > bittorrent-like facility on top of venti (I call it bitventi > or b20). why not distributed hashing? distributed hashes have existing fast implementations. venti does not. - erik ^ permalink raw reply [flat|nested] 31+ messages in thread
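The distributed hashing erik mentions usually means something like consistent hashing over the scores: each score maps to a server, and adding or removing a server only remaps the scores that landed on it. A minimal sketch, with hypothetical node names:

```python
import bisect
import hashlib

def h(key):
    # hash ring position: first 8 bytes of SHA-1, as an integer
    return int.from_bytes(hashlib.sha1(key).digest()[:8], "big")

class Ring:
    """Consistent-hash ring mapping venti scores to servers (sketch)."""
    def __init__(self, nodes, vnodes=64):
        # each node gets several virtual positions for better balance
        self.entries = sorted((h(("%s#%d" % (n, i)).encode()), n)
                              for n in nodes for i in range(vnodes))
        self.keys = [k for k, _ in self.entries]

    def node_for(self, score):
        # the first ring position clockwise from the score's hash owns it
        i = bisect.bisect(self.keys, h(score)) % len(self.entries)
        return self.entries[i][1]
```

The useful property for a venti cluster is stability: when a server is removed, scores owned by the surviving servers keep the same owner, so only the departed server's share of blocks must be re-fetched (and, being content-addressed, they can be re-fetched from anywhere and verified).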
* Re: [9fans] file server speed 2014-07-17 19:10 ` erik quanstrom @ 2014-07-17 19:26 ` Bakul Shah 0 siblings, 0 replies; 31+ messages in thread From: Bakul Shah @ 2014-07-17 19:26 UTC (permalink / raw) To: Fans of the OS Plan 9 from Bell Labs On Thu, 17 Jul 2014 15:10:34 EDT erik quanstrom <quanstro@quanstro.net> wrote: > > So long as a server returns a block corresponding to its SHA1 > > score, you (the client) don't care whether it is the same > > server you wrote the original block to or another (and you can > > always verify the returned block). This opens up some > > but this isn't unique to content-addressed storage. as long > as the block contents are the same, normal block storage can > be served from anywhere. But this is a much stricter requirement for normal block storage (every replica has to have the same content for a given block number N, and if you have disks of different sizes, you either waste storage or have to add another mapping layer). With CAS you have much more freedom. > > interesting choices. For instance, you can multicast each read > > or write or you can set up a hierarchy of servers and switch > > to another one (or even get different blocks from different > > instances). How the writes are handled/replicated/stored is > > entirely up to the server (cluster). Or you can implement a > > bittorrent-like facility on top of venti (I call it bitventi > > or b20). > > why not distributed hashing? distributed hashes have existing > fast implementations. venti does not. You can change implementations. Russ's "ventino" and Lucho's govt's "vtmm" server use different storage schemes from plan9's venti server. I am just playing with this stuff, but if you want clustered, distributed storage you'd probably need some server-side protocol to manage storage sanely. And add more complications if you want to improve performance. ^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [9fans] file server speed 2014-07-16 0:04 ` kokamoto 2014-07-16 0:36 ` kokamoto 2014-07-16 14:23 ` Steven Stallion @ 2014-07-16 17:23 ` erik quanstrom 2014-07-16 23:01 ` kokamoto 2 siblings, 1 reply; 31+ messages in thread From: erik quanstrom @ 2014-07-16 17:23 UTC (permalink / raw) To: 9fans > Recent kenfs can be such a machine? > Please remember I plan it for my private home machine, not > any sofisticated office use. i use ken's file server for personal use. i enjoy the fact that a cpu kernel panic does not impact the file server. - erik ^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [9fans] file server speed 2014-07-16 17:23 ` erik quanstrom @ 2014-07-16 23:01 ` kokamoto 2014-07-17 17:03 ` erik quanstrom 0 siblings, 1 reply; 31+ messages in thread From: kokamoto @ 2014-07-16 23:01 UTC (permalink / raw) To: 9fans > i use ken's file server for personal use. i enjoy the > fact that a cpu kernel panic does not impact the file server. My desire is to have one file server with auth server and any number of terminals which can also be used as a cpu server (for drawterm). In this case the smallest config is a file server and a terminal/cpu server. Ken's file server is standalone and has a special user space. Then can we add auth functionality on this file server machine? Kenji ^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [9fans] file server speed
  2014-07-16 23:01 ` kokamoto
@ 2014-07-17 17:03 ` erik quanstrom
  2014-07-18 16:50 ` hiro
  0 siblings, 1 reply; 31+ messages in thread

From: erik quanstrom @ 2014-07-17 17:03 UTC (permalink / raw)
To: 9fans

> My desire is to have one file server with an auth server, and any
> number of terminals which can also be used as cpu servers
> (for drawterm).
>
> In this case the smallest config is a file server and a terminal/cpu
> server. Ken's file server is standalone and has its own special user
> space. Can we then add auth functionality to this file server machine?

sure one could.  but there is no user space, so this would need
to be bespoke code.

i'm surprised that running one extra raspberry pi only when working
would make much of a difference.  the total cost couldn't be much
more than 10w, even with the power supply losses.  the monitor
likely uses much more power.  perhaps high-efficiency wall warts
could make up much of the difference.  picked at random (first
link) ...

http://www.amazon.com/Nokia-AC-10U-Micro-USB-Efficiency-Charger/dp/B00DP0TQLG

- erik

^ permalink raw reply	[flat|nested] 31+ messages in thread
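erik's 10 W figure above is easy to sanity-check with back-of-the-envelope arithmetic. The electricity rate below is an assumed example value (not from the thread), just to put the running cost in perspective:

```python
# Rough annual energy use and cost of a ~10 W device running 24/7.
# The $0.15/kWh rate is an assumed example rate, not a figure from the thread.
watts = 10
hours_per_year = 24 * 365                       # 8760 hours
kwh_per_year = watts * hours_per_year / 1000    # 87.6 kWh/year
rate_usd_per_kwh = 0.15
annual_cost = kwh_per_year * rate_usd_per_kwh   # roughly $13/year
print(kwh_per_year, round(annual_cost, 2))
```

At that scale the pi's supply-conversion losses and the monitor, as the message notes, easily dominate the board itself.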
* Re: [9fans] file server speed
  2014-07-17 17:03 ` erik quanstrom
@ 2014-07-18 16:50 ` hiro
  2014-07-18 19:36 ` erik quanstrom
  0 siblings, 1 reply; 31+ messages in thread

From: hiro @ 2014-07-18 16:50 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs

> perhaps high-efficiency wall warts could make up much of the difference.
> picked at random (first link) ...
>
> http://www.amazon.com/Nokia-AC-10U-Micro-USB-Efficiency-Charger/dp/B00DP0TQLG

Given that the rpi has some weird power issues, and without a
specification of the charger's amperage, this might be bad advice.

^ permalink raw reply	[flat|nested] 31+ messages in thread
* Re: [9fans] file server speed
  2014-07-18 16:50 ` hiro
@ 2014-07-18 19:36 ` erik quanstrom
  2014-07-18 20:11 ` Bakul Shah
  0 siblings, 1 reply; 31+ messages in thread

From: erik quanstrom @ 2014-07-18 19:36 UTC (permalink / raw)
To: 9fans

On Fri Jul 18 12:51:49 EDT 2014, 23hiro@gmail.com wrote:
> > perhaps high-efficiency wall warts could make up much of the difference.
> > picked at random (first link) ...
> >
> > http://www.amazon.com/Nokia-AC-10U-Micro-USB-Efficiency-Charger/dp/B00DP0TQLG
>
> Given that the rpi has some weird power issues and without
> specification of the amperage of the charger this might be bad advice.

good point.  thanks for pointing that out.

- erik

^ permalink raw reply	[flat|nested] 31+ messages in thread
* Re: [9fans] file server speed
  2014-07-18 19:36 ` erik quanstrom
@ 2014-07-18 20:11 ` Bakul Shah
  0 siblings, 0 replies; 31+ messages in thread

From: Bakul Shah @ 2014-07-18 20:11 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs

On Fri, 18 Jul 2014 15:36:09 EDT erik quanstrom <quanstro@quanstro.net> wrote:
> On Fri Jul 18 12:51:49 EDT 2014, 23hiro@gmail.com wrote:
> > > perhaps high-efficiency wall warts could make up much of the difference.
> > > picked at random (first link) ...
> > >
> > > http://www.amazon.com/Nokia-AC-10U-Micro-USB-Efficiency-Charger/dp/B00DP0TQLG
> >
> > Given that the rpi has some weird power issues and without
> > specification of the amperage of the charger this might be bad advice.
>
> good point.  thanks for pointing that out.

The new B+ has a switching regulator. It uses less power and is more
tolerant of voltage changes: below 5V, USB will stop working, but there
are reports of booting Linux to the login screen on as low as 2.6V. So
power issues should be resolved.

^ permalink raw reply	[flat|nested] 31+ messages in thread
end of thread, other threads:[~2014-07-18 20:11 UTC | newest]

Thread overview: 31+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-07-15  6:13 [9fans] file server speed kokamoto
2014-07-15 13:02 ` Anthony Sorace
2014-07-15 16:30 ` cinap_lenrek
2014-07-15 16:56 ` erik quanstrom
2014-07-16  0:04 ` kokamoto
2014-07-16  0:36 ` kokamoto
2014-07-16 17:26 ` erik quanstrom
2014-07-16 14:23 ` Steven Stallion
2014-07-16 14:53 ` Ramakrishnan Muthukrishnan
2014-07-17  1:13 ` Steven Stallion
2014-07-17 17:13 ` Ramakrishnan Muthukrishnan
2014-07-17 21:44 ` cam
2014-07-16 17:41 ` erik quanstrom
2014-07-16 23:15 ` kokamoto
2014-07-17  0:29 ` Steven Stallion
2014-07-17  0:31 ` Steven Stallion
2014-07-17  1:10 ` john francis lee
2014-07-17  1:09 ` john francis lee
2014-07-17  1:10 ` Bakul Shah
2014-07-17 16:56 ` erik quanstrom
2014-07-17 18:14 ` Bakul Shah
2014-07-17 18:39 ` erik quanstrom
2014-07-17 19:01 ` Bakul Shah
2014-07-17 19:10 ` erik quanstrom
2014-07-17 19:26 ` Bakul Shah
2014-07-16 17:23 ` erik quanstrom
2014-07-16 23:01 ` kokamoto
2014-07-17 17:03 ` erik quanstrom
2014-07-18 16:50 ` hiro
2014-07-18 19:36 ` erik quanstrom
2014-07-18 20:11 ` Bakul Shah
This is a public inbox, see mirroring instructions for how to clone and mirror all data and code used for this inbox; as well as URLs for NNTP newsgroup(s).