9fans - fans of the OS Plan 9 from Bell Labs
* [9fans] Help with two small shared file servers
@ 2011-08-17 10:09 Aram Hăvărneanu
  2011-08-17 14:11 ` Anthony Sorace
                   ` (2 more replies)
  0 siblings, 3 replies; 17+ messages in thread
From: Aram Hăvărneanu @ 2011-08-17 10:09 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

Hello,

I'm looking for advice on how to build a small network of two file
servers. I'm hoping most of the servers will be Plan 9; the clients are
Windows and Mac OS X.

I have 2 houses separated by about 40ms of network latency. I want to
set up some servers in each location and have all data accessible from
anywhere. I'll have about 2TB of data at each location, and one location
will probably scale up.

What's the best option for RAID in Plan9? I understand I can use
either Ken's fileserver or the Plan9 '#k' device. Can anyone shed some
light on why I might want one and not the other? Are there any other
options?

Do you recommend fossil+venti or kfs? I've seen that a lot of people prefer kfs.

Are there any options for block level encryption on Plan9?

Is 9p suitable for this? How will the 40ms latency affect 9p
operation? (I have 100Mbit.) I assume there are no good 9p drivers for
Windows that would let me access my data directly. How good are the
various Plan9 SMB servers? (In the long term I might write a 9p
filesystem driver for Windows; I've been a Windows filesystem developer
in a past life.) Is it better if I use the Plan9 boxen only for AoE,
with a Linux on top that serves SMB?

I'm hoping to use some laptops for this. I have a lot of unused
ThinkPads around here that can run Plan9. Are there external
enclosures that can hold a reasonable number of drives and connect via
eSATA or something? Are they fast enough?

On an unrelated note, what do you recommend for a VPN? In Plan9 it's
very easy, as I can just import a namespace, but I have Windows and Mac
clients. Are there any Plan9 options? If not, what else would you
recommend?

Right now (only one location) I am using a Solaris server with ZFS
that serves SMB and iSCSI. It works great, there's nothing wrong with
it, but I would like a Plan9 solution. Or do you think iSCSI would
perform better with this latency? (I can't use AoE as AoE is not
routable).

Any tips are welcome :-),

Thanks,

-- 
Aram Hăvărneanu




* Re: [9fans] Help with two small shared file servers
  2011-08-17 10:09 [9fans] Help with two small shared file servers Aram Hăvărneanu
@ 2011-08-17 14:11 ` Anthony Sorace
  2011-08-17 14:32   ` John Floren
                     ` (2 more replies)
  2011-08-17 21:00 ` Bakul Shah
  2011-08-30 21:40 ` Ethan Grammatikidis
  2 siblings, 3 replies; 17+ messages in thread
From: Anthony Sorace @ 2011-08-17 14:11 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

On Aug 17, 2011, at 6:09 AM, Aram Hăvărneanu wrote:

> What's the best option for RAID in Plan9? I understand I can use
> either Ken's fileserver or the Plan9 '#k' device. 

note that neither of these is RAID in the way most people expect. failure
notification, in particular, can be lacking, and they're more restricted in what
they try to do than many RAID systems. that said, I've used #k quite happily for
years with no issues (and periodic correctness checks).
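
for the record, configuring #k is just a couple of writes to its ctl file.
a rough sketch, with made-up disk names (see fs(3) for the exact syntax):

    bind -a '#k' /dev    # if it isn't already in your namespace
    echo mirror m0 /dev/sdC0/data /dev/sdD0/data >/dev/fs/ctl
    # /dev/fs/m0 now mirrors the two partitions and can be handed to
    # fossil, venti, or kfs like any other disk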

i'm not sure what support, if any, we have for hardware RAID controllers other
than ones which present a single logical disk. those, though, should be an
option if you're concerned.

> Can anyone shed some light on why I might want one and not the other?
> Are there any other options?

ken's fs is a kernel, and essentially gives you a 9p-accessible file storage
appliance; it takes a dedicated box to run it. #k is part of the standard
cpu server kernel, and is typically used in conjunction with fossil and/or
venti. really, you usually want to decide between Ken's fs and fossil (with or
without venti). the basics:

Ken's FS used to be the default file server in Plan 9. It's a kernel, and takes over
a box. It is only directly accessible via 9p and il, which pretty much means via
Plan 9 (cpu servers can share the resources as they like, of course). It does
automatic daily archives of the whole system. It does limited de-duplicating.
It is very fast and extremely stable.

fossil is a newer file server, designed for use with venti (but usable on its own,
as well). it can be configured to do daily archives and/or periodic ephemeral
snapshots (you pick how often and how long they last). It can be used stand-
alone (can't do archives that way, but can still do snapshots) or with venti, in
which case you get venti's block-level deduplication (and direct access to
block-level storage). fossil performs reasonably and is relatively stable, but
less so than ken's fs (and, personally, seems more sensitive to unexpected
shutdown). venti is very stable. typically, if fossil goes belly-up, you can
reconstruct it from venti (in every case i've seen).
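
(for the curious, that reconstruction amounts to reformatting the fossil
partition against the score of the last archival snapshot; with a made-up
disk name, it's roughly

    fossil/flfmt -v $score /dev/sdC0/fossil

where $score is the vac score of the last archive, which fossil/last or
the console output from the archive can tell you. see fossil(4) for the
real procedure.)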

personally, if you (a) have the extra box to spare, (b) can put in a little extra
work up front, (c) don't care about direct access to block-level storage, and
(d) can live without ephemeral snapshots, i'd suggest ken's fs; otherwise, i'd
suggest fossil backed by venti.

(note that (d) above is slightly tricky, as the ephemeral snapshots in fossil
seem to trigger a bug that causes fossil to hang periodically; most reports
have that happening every ~2 months or so)

this gets asked often, and is worth a wiki page.

> Do you recommend fossil+venti or kfs? I've seen that a lot of people prefer kfs.

note that "kfs", confusingly, is not "ken's fs". that's sometimes (more confusingly)
informally called "kenfs". kfs is a quite old offshoot that used to be the standard
install on things like laptops or other stand-alone systems before fossil came
along.

of kenfs, kfs, and fossil (with or without venti), kfs is your most "traditional" file
system. no snapshots or archives, no connection to venti or similar, no block or
file level de-duplicating. it still has a place, but doesn't sound like what you want.

> Are there any options for block level encryption on Plan9?

there have been a handful of such experiments, but i'm pretty sure there's nothing
in the stock distribution. check the 9fans archives for discussion of the various
experiments. i'm not sure if any were quite "there" yet.

> Is 9p suitable for this? How will the 40ms latency affect 9p operation?

i expect that latency will be low enough for most uses.

> I assume there are no good 9p drivers for Windows so I can
> access my data directly.

there was at least one good one, but (last i checked) it's not generally available
(search the archives for "rangboom" for more info).

> How good are the various
> Plan9 SMB servers? (On the long term I might write a 9p filesystem
> driver for Windows, I've been a Windows filesystem developer in a past
> life). Is it better if I use the Plan9 boxen only for AoE with a Linux
> on top that serves SMB?

i've used the plan9 smb stuff only lightly, but it's worked well for me. there are
folks actively paying attention to it and interested in having it work well, which
is always good. the linux stuff will doubtless be better tested, but then you
(a) have an extra layer of stuff to set up and configure, and (b) have linux.

i'd give the plan9 smb stuff a shot and see how it works for you.

> Are there some kind of external enclosures that can hold a reasonable
> number of drives and connect via eSATA or something? Are they fast enough?

"fast enough" depends on your needs. you've said you're on 100Mbit, though, so
i'd expect most decent eSATA enclosures to be fast enough to saturate that link.

i have no recommendations on specific enclosures.

anthony


* Re: [9fans] Help with two small shared file servers
  2011-08-17 14:11 ` Anthony Sorace
@ 2011-08-17 14:32   ` John Floren
  2011-08-17 15:39   ` erik quanstrom
       [not found]   ` <CAL4LZyiEk35Kfq_wezUaEvJWsYX3ONeordrD7sQjFr+45fQiWg@mail.gmail.c>
  2 siblings, 0 replies; 17+ messages in thread
From: John Floren @ 2011-08-17 14:32 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

On Wed, Aug 17, 2011 at 7:11 AM, Anthony Sorace <a@9srv.net> wrote:
> On Aug 17, 2011, at 6:09 AM, Aram Hăvărneanu wrote:
>
>> Can anyone shed some light on why I might want one and not the other?
>> Are there any other options?
>
> ken's fs is a kernel, and essentially gives you a 9p-accessible file storage
> appliance. it takes another box to be able to run it. #k is part of the standard
> cpu server kernel, and is typically used in conjunction with fossil and/or
> venti. really, you usually want to decide between Ken's fs and fossil (with or
> without venti). the basics:
>
> Ken's FS used to be the default file server in Plan 9. It's a kernel, and takes over
> a box. It is only directly accessible via 9p and il, which pretty much means via
> Plan 9 (cpu servers can share the resources as they like, of course). It does
> automatic daily archives of the whole system. It does limited de-duplicating.
> It is very fast and extremely stable.
>
> fossil is a newer file server, designed for use with venti (but usable on its own,
> as well). it can be configured to do daily archives and/or periodic ephemeral
> snapshots (you pick how often and how long they last). It can be used stand-
> alone (can't do archives that way, but can still do snapshots) or with venti, in
> which case you get venti's block-level deduplication (and direct access to
> block-level storage). fossil performs reasonably and is relatively stable, but
> less so than ken's fs (and, personally, seems more sensitive to unexpected
> shutdown). venti is very stable. typically, if fossil goes belly-up, you can
> reconstruct it from venti (in every case i've seen).
>
> personally, if you (a) have the extra box to spare, (b) can put in a little extra
> work up front, (c) don't care about direct access to block-level storage, and
> (d) can live without ephemeral snapshots, i'd suggest ken's fs; otherwise, i'd
> suggest fossil backed by venti.
>
>(note that (d) above is slightly tricky, as the ephemeral snapshots in fossil
>seem to trigger a bug that causes fossil to hang periodically; most reports
>have that happening every ~2 months or so)

Ken's FS serves only 9P and can be a hassle to set up and get going.
In my opinion, if you can live without ephemeral snapshots, just run
fossil+venti with snapshots turned off, which should eliminate the
stability complaint.
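
If I remember the fossil console syntax right (this is from memory, so
check fossilcons(4) for the exact arguments), keeping the daily archive
while turning the ephemeral snapshots off is one line at fscons:

    fsys main snaptime -a 0500 -s none -t none

i.e. archive at 05:00, no ephemeral snapshots.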

I ran Ken's FS for a while, but in the end it wasn't worth the extra
trouble. We're back with fossil + venti now, with fossil on the local
disk and Venti on a 16 TB Coraid.


>> Is 9p suitable for this? How will the 40ms latency affect 9p operation?
>
> i expect that'll be well low enough for most uses.
>

It's going to be slow. See my thesis
(https://bitbucket.org/floren/tstream/src/67c7419ad84a/documents/Thesis.pdf)
for details, but I'd expect file transfers to be at least 6 times
slower than transferring via HTTP. When I tested 9P vs. HTTP over
connections with 25 ms latency (50ms RTT), I saw a 4x slowdown versus
HTTP. Even at 15 ms RTT, transfers took twice as long.


John




* Re: [9fans] Help with two small shared file servers
  2011-08-17 14:11 ` Anthony Sorace
  2011-08-17 14:32   ` John Floren
@ 2011-08-17 15:39   ` erik quanstrom
       [not found]   ` <CAL4LZyiEk35Kfq_wezUaEvJWsYX3ONeordrD7sQjFr+45fQiWg@mail.gmail.c>
  2 siblings, 0 replies; 17+ messages in thread
From: erik quanstrom @ 2011-08-17 15:39 UTC (permalink / raw)
  To: 9fans

> > What's the best option for RAID in Plan9? I understand I can use
> > either Ken's fileserver or the Plan9 '#k' device.
>
> note that neither of these are RAID in the way most people expect. failure
> notification, in particular, can be lacking, and they're more restricted in what
> they try to do that many RAID systems. that said, I've used #k quite happily for
> years with no issues (and periodic correctness checks).

the way we use ken's file server is to connect it to
aoe storage which internally provides raid.  in fact,
we use the properties of the mirrors in the file server
to provide off-site backup.  since the mirror always reads
from the first element, which is very close, and since writes
are always done in the background and have no user
impact, the mirror doesn't change user-visible performance
at all.

- erik




* Re: [9fans] Help with two small shared file servers
  2011-08-17 10:09 [9fans] Help with two small shared file servers Aram Hăvărneanu
  2011-08-17 14:11 ` Anthony Sorace
@ 2011-08-17 21:00 ` Bakul Shah
  2011-08-17 21:19   ` John Floren
                     ` (2 more replies)
  2011-08-30 21:40 ` Ethan Grammatikidis
  2 siblings, 3 replies; 17+ messages in thread
From: Bakul Shah @ 2011-08-17 21:00 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

On Wed, 17 Aug 2011 13:09:47 +0300 Aram Hăvărneanu <aram.h@mgk.ro> wrote:
> Hello,
>
> I'm looking for advice on how to build a small network of two file
> servers. I'm hoping most servers to be Plan9, clients are Windows and
> Mac OS X.
>
> I have 2 houses separated by about 40ms of network latency. I want to
> set some servers in each location and have all data accessible from
> anywhere. I'll have about 2TB of data at each location, one location
> will probably scale up.
	...
> Is 9p suitable for this? How will the 40ms latency affect 9p
> operation? (I have 100Mbit).

With a strict request/response protocol you will get no more
than 64KB once every 80ms, so your throughput at best will be
6.55Mbps, or about 15 times slower than using HTTP/FTP on a
100Mbps link for large files.  [John, what was the link speed
for the tests in your thesis?]
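
(Quick check with hoc(1):

    echo '64*1024*8/0.080' | hoc

prints 6553600, i.e. about 6.55 Mbps.)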

> Right now (only one location) I am using a Solaris server with ZFS
> that serves SMB and iSCSI.

Using venti in place of (or over) ZFS on spinning disks would
incur further performance degradation.

> Any tips are welcomed :-),

Since you want everything accessible from both sites, how about
temporarily caching remote files locally?  There was a usenix
paper about `nache', a caching proxy for nfs4, that may be of
interest. Or maybe ftpfs with a local cache, if remote access
is read-only?




* Re: [9fans] Help with two small shared file servers
  2011-08-17 21:00 ` Bakul Shah
@ 2011-08-17 21:19   ` John Floren
  2011-08-17 21:22   ` Skip Tavakkolian
  2011-08-18  5:29   ` erik quanstrom
  2 siblings, 0 replies; 17+ messages in thread
From: John Floren @ 2011-08-17 21:19 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

On Wed, Aug 17, 2011 at 2:00 PM, Bakul Shah <bakul@bitblocks.com> wrote:
> On Wed, 17 Aug 2011 13:09:47 +0300 Aram Hăvărneanu <aram.h@mgk.ro> wrote:
>> Hello,
>>
>> I'm looking for advice on how to build a small network of two file
>> servers. I'm hoping most servers to be Plan9, clients are Windows and
>> Mac OS X.
>>
>> I have 2 houses separated by about 40ms of network latency. I want to
>> set some servers in each location and have all data accessible from
>> anywhere. I'll have about 2TB of data at each location, one location
>> will probably scale up.
>        ...
>> Is 9p suitable for this? How will the 40ms latency affect 9p
>> operation? (I have 100Mbit).
>
> With a strict request/response protocol you will get no more
> than 64KB once every 80ms so your throughput at best will be
> 6.55Mbps or about 15 times slower than using HTTP/FTP on
> 100Mbps link for large files.  [John, what was the link speed
> for the tests in your thesis?]
>

10 Mbps was the bottleneck--I had a 100 Mbit switch connected to one
side of the Linux gateway and a 10 Mbit hub on the other. I know a 10
Mbit hub is not modern networking equipment, but the only traffic on
it was the tests, and I also figured that 1. You're not likely to do
much better on a cross-country net link anyway, and 2. The HTTP tests
would suffer just as badly :)

If you'd like to Dare to Be Stupid, you can download my source and run
both servers with streaming enabled, then make some sort of
client-side caching filesystem which can stream from the remote (40 ms
away) server on behalf of applications, avoiding the necessity of
re-coding all your applications to understand streaming.

Or just serve your files over SMB, NFS, or HTTP.

I guess I didn't check, but if you *have* 2 TB of data but don't plan
to routinely *access* that much of it at any given time, you can
probably get away with using 9P.

I do not think there is any way a fossil+venti or kenfs file server
serving 9P will outperform your Solaris server. Not with our current
software. The way I use it, Plan 9 and 9P are "fast enough", but for
serious data transfer over such a high-latency link they won't cut it.
Plan 9 + HTTP would be a reasonable option if you just want to share
movies and music between your houses, I suppose.


John




* Re: [9fans] Help with two small shared file servers
  2011-08-17 21:00 ` Bakul Shah
  2011-08-17 21:19   ` John Floren
@ 2011-08-17 21:22   ` Skip Tavakkolian
  2011-08-18  5:29   ` erik quanstrom
  2 siblings, 0 replies; 17+ messages in thread
From: Skip Tavakkolian @ 2011-08-17 21:22 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

if the link is stable, cfs(4) might be useful.

-Skip

On Wed, Aug 17, 2011 at 2:00 PM, Bakul Shah <bakul@bitblocks.com> wrote:
> On Wed, 17 Aug 2011 13:09:47 +0300 Aram Hăvărneanu <aram.h@mgk.ro> wrote:
>> Hello,
>>
>> I'm looking for advice on how to build a small network of two file
>> servers. I'm hoping most servers to be Plan9, clients are Windows and
>> Mac OS X.
>>
>> I have 2 houses separated by about 40ms of network latency. I want to
>> set some servers in each location and have all data accessible from
>> anywhere. I'll have about 2TB of data at each location, one location
>> will probably scale up.
>        ...
>> Is 9p suitable for this? How will the 40ms latency affect 9p
>> operation? (I have 100Mbit).
>
> With a strict request/response protocol you will get no more
> than 64KB once every 80ms so your throughput at best will be
> 6.55Mbps or about 15 times slower than using HTTP/FTP on
> 100Mbps link for large files.  [John, what was the link speed
> for the tests in your thesis?]
>
>> Right now (only one location) I am using a Solaris server with ZFS
>> that serves SMB and iSCSI.
>
> Using venti in place of (or over) ZFS on spinning disks would
> incur further performance degradation.
>
>> Any tips are welcomed :-),
>
> Since you want everything accessble from both sites, how about
> temporarily caching remote files locally?  There was a usenix
> paper about `nache', a caching proxy for nfs4 that may be of
> interest. Or may be ftpfs with a local cache if remote access
> is readonly?
>
>




* Re: [9fans] Help with two small shared file servers
  2011-08-17 21:00 ` Bakul Shah
  2011-08-17 21:19   ` John Floren
  2011-08-17 21:22   ` Skip Tavakkolian
@ 2011-08-18  5:29   ` erik quanstrom
  2011-08-18  5:47     ` Tristan Plumb
  2011-08-18  6:25     ` Bakul Shah
  2 siblings, 2 replies; 17+ messages in thread
From: erik quanstrom @ 2011-08-18  5:29 UTC (permalink / raw)
  To: 9fans

> > Is 9p suitable for this? How will the 40ms latency affect 9p
> > operation? (I have 100Mbit).
>
> With a strict request/response protocol you will get no more
> than 64KB once every 80ms so your throughput at best will be
> 6.55Mbps or about 15 times slower than using HTTP/FTP on
> 100Mbps link for large files.  [John, what was the link speed
> for the tests in your thesis?]

i calculate 1/0.080s/rt * 64*1024 = 819 kbytes/s.  (i think your
conversion from mbits/s to bytes/s is faulty.)

also you do get 1 outstanding request per read or write, not per machine.
so the real limit is wire speed, but for most programs most of the
time the limit for a single file transfer would be as you specify.

- erik




* Re: [9fans] Help with two small shared file servers
       [not found]   ` <CAL4LZyiEk35Kfq_wezUaEvJWsYX3ONeordrD7sQjFr+45fQiWg@mail.gmail.c>
@ 2011-08-18  5:34     ` erik quanstrom
  2011-08-18  5:48       ` John Floren
       [not found]       ` <CAL4LZygoKQZoTvof4F_fBQhxqsQZb2r+FR_nkgf=YbU94WvoBQ@mail.gmail.c>
  0 siblings, 2 replies; 17+ messages in thread
From: erik quanstrom @ 2011-08-18  5:34 UTC (permalink / raw)
  To: 9fans

> Ken's FS serves only 9P and can be a hassle to set up and get going.
> In my opinion, if you can live without ephemeral snapshots, just run
> fossil+venti with snapshots turned off, which should eliminate the
> stability complaint.

since its job is to be a plan 9 file server, this should be neither
surprising nor limiting.

> slower than transferring via HTTP. When I tested 9P vs. HTTP over
> connections with 25 ms latency (50ms RTT), I saw a 4x slowdown versus
> HTTP. Even at 15 ms RTT, transfers took twice as long.

did you do testing at regular lan latencies?  what i see on my networks
is usually <= 30µs.

- erik




* Re: [9fans] Help with two small shared file servers
  2011-08-18  5:29   ` erik quanstrom
@ 2011-08-18  5:47     ` Tristan Plumb
  2011-08-18  6:25     ` Bakul Shah
  1 sibling, 0 replies; 17+ messages in thread
From: Tristan Plumb @ 2011-08-18  5:47 UTC (permalink / raw)
  To: 9fans

> > > Is 9p suitable for this? How will the 40ms latency affect 9p
> > > operation? (I have 100Mbit).
> >
> > With a strict request/response protocol you will get no more
> > than 64KB once every 80ms so your throughput at best will be
> > 6.55Mbps or about 15 times slower than using HTTP/FTP on
> > 100Mbps link for large files.  [John, what was the link speed
> > for the tests in your thesis?]

> i calculate 1/0.080s/rt * 64*1024 = 819 kbytes/s.  (i think your
> conversion from mbits/s to bytes/s is faulty.)

819 kbyte/s is just about 6.55 Mbps, at 8 bits a byte.

and

64*1024/0.08 * 8 = 6553600

or are you talking about something else entirely?

tristan

--
All original matter is hereby placed immediately under the public domain.




* Re: [9fans] Help with two small shared file servers
  2011-08-18  5:34     ` erik quanstrom
@ 2011-08-18  5:48       ` John Floren
       [not found]       ` <CAL4LZygoKQZoTvof4F_fBQhxqsQZb2r+FR_nkgf=YbU94WvoBQ@mail.gmail.c>
  1 sibling, 0 replies; 17+ messages in thread
From: John Floren @ 2011-08-18  5:48 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

On Aug 17, 2011 10:40 PM, "erik quanstrom" <quanstro@quanstro.net> wrote:
> > slower than transferring via HTTP. When I tested 9P vs. HTTP over
> > connections with 25 ms latency (50ms RTT), I saw a 4x slowdown versus
> > HTTP. Even at 15 ms RTT, transfers took twice as long.
>
> did you do testing at regular lan latencies?  what i see on my networks
> is usually <= 30µs.
>
> - erik
>

I induced latencies similar to those found on the Internet. My tests at LAN
speeds (500us in my sub-optimal setup) had 9P at essentially the same
transfer rate as HTTP. My work was motivated by the observation that
transferring files to and from sources is just deathly slow, and that while
fcp does a good job helping that, it is just a patch covering a deeper
issue.


* Re: [9fans] Help with two small shared file servers
  2011-08-18  5:29   ` erik quanstrom
  2011-08-18  5:47     ` Tristan Plumb
@ 2011-08-18  6:25     ` Bakul Shah
  1 sibling, 0 replies; 17+ messages in thread
From: Bakul Shah @ 2011-08-18  6:25 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

On Thu, 18 Aug 2011 01:29:29 EDT erik quanstrom <quanstro@labs.coraid.com>  wrote:
> > > Is 9p suitable for this? How will the 40ms latency affect 9p
> > > operation? (I have 100Mbit).
> >
> > With a strict request/response protocol you will get no more
> > than 64KB once every 80ms so your throughput at best will be
> > 6.55Mbps or about 15 times slower than using HTTP/FTP on
> > 100Mbps link for large files.  [John, what was the link speed
> > for the tests in your thesis?]
>
> i calculate 1/0.080s/rt * 64*1024 = 819 kbytes/s.  (i think your
> conversion from mbits/s to bytes/s is faulty.)

*8 for bits. 2^16*8/0.080/10^6 Mbps == 6.5536 Mbps.

MBps == 10^6 bytes/sec.
Mbps == 10^6 bits/sec.

But it is not accurate in another way: 40ms for the server to
receive the request, 40ms for the client to receive the first
byte of data, plus the transmission time for 64KB == 2^16*8/10^8 ==
about 5ms. Actually a bit more due to ethernet, ip & tcp
overhead. So the total time will be over 85ms, for a BW of
under 6.17Mbps.

> also you do get 1 oustanding per read or write, not per machine.
> so the real limit is wire speed, but for most programs most of the
> time the limit for a single file transfer would be as you specify.

It is three things:

RTT= round trip time = 2*latency
BW = bandwidth
DS = datasize that can be sent without waiting.

Time to deliver DS bytes will be at least
	RTT+DS/BW

Consider three cases:
1) local (small RTT)
2) remote slow (large RTT, small BW)
3) remote fast (large RTT, large BW)

When RTT effects are small (1 & 2) 9p does fine. For case 3,
to minimize RTT effects you must make DS large but that can't
be done in 9p beyond 64k.

In John's test it was 10Mbps @ 25ms latency. To transfer 64KB
takes 50ms rtt + 52ms, or about 102ms. He should see 5.1Mbps, or
about half the throughput of the HTTP streaming case, but he says
it was 1/4th, so there must be other inefficiencies.
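
If you want to play with the numbers, the model above fits in a few
lines of awk (the figures are just the ones used in this thread):

    awk 'BEGIN {
        ds = 64*1024*8                      # one 9p message, in bits
        print ds/(0.080 + ds/100e6)/1e6     # 100Mbps, 40ms each way: ~6.15 Mbps
        print ds/(0.050 + ds/10e6)/1e6      # John's test, 10Mbps @ 25ms: ~5.12 Mbps
    }'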




* Re: [9fans] Help with two small shared file servers
       [not found]       ` <CAL4LZygoKQZoTvof4F_fBQhxqsQZb2r+FR_nkgf=YbU94WvoBQ@mail.gmail.c>
@ 2011-08-18 13:26         ` erik quanstrom
  2011-08-18 14:21           ` Lucio De Re
  2011-08-19  8:15           ` cinap_lenrek
  0 siblings, 2 replies; 17+ messages in thread
From: erik quanstrom @ 2011-08-18 13:26 UTC (permalink / raw)
  To: 9fans

> > did you do testing at regular lan latencies?  what i see on my networks
> > is usually <= 30µs.
> >
> > - erik
> >
>
> I induced latencies similar to those found on the Internet. My tests at LAN
> speeds (500us in my sub-optimal setup) had 9P at essentially the same
> transfer rate as HTTP. My work was motivated by the observation that
> transferring files to and from sources is just deathly slow, and that while
> fcp does a good job helping that, it is just a patch covering a deeper
> issue.

i think fcp gets right at the heart of the matter.  issuing multiple outstanding
messages is how protocols like aoe, pcie, etc. get speed.  writes are posted.

the question for me is how does one make this easy and natural.  one way
is to allow the mnt driver to issue multiple concurrent reads or writes for
the same i/o.  another would be to use a system call ring to allow programs
to have an arbitrary number of outstanding system calls.
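
an easy way to see the difference on your own link is to copy a big
file off a high-latency mount both ways and compare (path made up,
obviously):

    time cp /n/remote/bigfile /dev/null
    time fcp /n/remote/bigfile /dev/null

cp waits out a round trip for every read; fcp keeps a number of reads
in flight, so it gets much closer to wire speed.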

- erik




* Re: [9fans] Help with two small shared file servers
  2011-08-18 13:26         ` erik quanstrom
@ 2011-08-18 14:21           ` Lucio De Re
  2011-08-19  8:15           ` cinap_lenrek
  1 sibling, 0 replies; 17+ messages in thread
From: Lucio De Re @ 2011-08-18 14:21 UTC (permalink / raw)
  To: 9fans

> issuing multiple outstanding
> messages is how protocols like aoe, pcie, etc. get speed.  writes are posted.
>
> the question for me is how does one make this easy and natural.  one way
> is to allow the mnt driver to issue multiple concurrent reads or writes for
> the same i/o.  another would be to use a system call ring to allow programs
> to have an arbitrary number of outstanding system calls.

The latter is the "windowing" feature that I seem to recall was first
introduced in HDLC or thereabouts (I'm no academic; I discovered this
feature many years after its invention).  But in those days
multithreading wasn't an option; maybe now it is possible to
consolidate both approaches into a single concept?  Both resemble
Inferno (Acme?) channels to me.

++L





* Re: [9fans] Help with two small shared file servers
  2011-08-18 13:26         ` erik quanstrom
  2011-08-18 14:21           ` Lucio De Re
@ 2011-08-19  8:15           ` cinap_lenrek
  1 sibling, 0 replies; 17+ messages in thread
From: cinap_lenrek @ 2011-08-19  8:15 UTC (permalink / raw)
  To: 9fans

%-W-NORML Normal Successful Completion

--
cinap :)




* Re: [9fans] Help with two small shared file servers
  2011-08-17 10:09 [9fans] Help with two small shared file servers Aram Hăvărneanu
  2011-08-17 14:11 ` Anthony Sorace
  2011-08-17 21:00 ` Bakul Shah
@ 2011-08-30 21:40 ` Ethan Grammatikidis
  2 siblings, 0 replies; 17+ messages in thread
From: Ethan Grammatikidis @ 2011-08-30 21:40 UTC (permalink / raw)
  To: 9fans

On Wed, 17 Aug 2011 13:09:47 +0300
Aram Hăvărneanu <aram.h@mgk.ro> wrote:

> I have 2 houses separated by about 40ms of network latency. I want to
> set some servers in each location and have all data accessible from
> anywhere. I'll have about 2TB of data at each location, one location
> will probably scale up.

You might want to look into Octopus. It has another remote resource
protocol which doesn't support all the features of 9p but which
performs better in high-latency conditions. I couldn't give you the
details; it's been a while since I read about it.

http://lsub.org/ls/octopus.html




* Re: [9fans] Help with two small shared file servers
       [not found] <CAEAzY3_Vx8WW1Oumt0t1_Ay6LtpTFFonpwMD+=0DYCM-yxXaeA@mail.gmail.c>
@ 2011-08-17 15:42 ` erik quanstrom
  0 siblings, 0 replies; 17+ messages in thread
From: erik quanstrom @ 2011-08-17 15:42 UTC (permalink / raw)
  To: 9fans

> Right now (only one location) I am using a Solaris server with ZFS
> that serves SMB and iSCSI. It works great, there's nothing wrong with
> it, but I would like a Plan9 solution. Or do you think iSCSI would
> perform better with this latency? (I can't use AoE as AoE is not
> routable).

use a layer-2 vpn or gre.

- erik



