9fans - fans of the OS Plan 9 from Bell Labs
From: "Eric Van Hensbergen" <ericvh@gmail.com>
To: "Fans of the OS Plan 9 from Bell Labs" <9fans@9fans.net>
Subject: Re: [9fans] new lguest port available
Date: Wed, 23 Apr 2008 19:40:40 -0500	[thread overview]
Message-ID: <a4e6962a0804231740u58eb1548gc18c787f675c813b@mail.gmail.com> (raw)
In-Reply-To: <371bee37a941420e21ffaf97c5e192b7@quanstro.net>

On Wed, Apr 23, 2008 at 7:29 PM, erik quanstrom <quanstro@quanstro.net> wrote:
> >>  just put it up on a tee: why not use aoe?
>  >>
>  >
>  > The problems of disk I/O are largely a focus issue -- all this stuff
>  > is pretty new and they focused on the network mechanisms first because
>  > those were the ones where the competition has published the most
>  > compelling benchmarks.  The disk stuff will get tuned out and will
>  > likely outperform network for I/O.  As an example, 9P directly over
>  > virtio beats NFS/TCP/virtio-net by 70% without caching or
>  > optimization in 9P (which is usually the opposite case on
>  > unvirtualized hardware due to caching and what not).
>
>  i wouldn't think that you could tune out rotational latency.  8.4ms is
>  pretty much forever when you're counting nanoseconds.
>
>  since aoe can do wirespeed (120MB/s) on typical physical gige
>  chipsets i would think it would have no trouble keeping up with
>  spinning media.  especially when not handicapped by having to
>  actually stuff bits through a phy.
>
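
For context, the 9P-over-virtio comparison quoted above comes down to
which transport the guest mounts the host's files over.  A minimal
sketch, assuming a Linux guest with the v9fs virtio transport; the
mount tag, server name, and paths here are illustrative:

    # 9P directly over the virtio transport
    mount -t 9p -o trans=virtio hostshare /mnt/host

    # NFS over TCP over the virtio network device
    mount -t nfs host:/export /mnt/host

The first mount talks 9P straight over the virtio channel; the second
drags the whole TCP/IP stack and the emulated network device along for
every request.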

You're focused on the wrong portion of the problem -- the disk solution
they have is effectively AOV (ATA over Virtio); you aren't going to do
better by putting a virtual network driver in between.  They just have
to tune their userspace gateway for disk access -- they put a lot of
work into making the virtio<->tun/tap gateway really efficient, and I
think they are just using the crappy Qemu block device at the moment.
Once they short out the gateway between the guest-virtio channel and
the in-kernel block driver, it'll be much faster than tunneling AOE
over the network device to the host.
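
As a rough illustration of the two paths being compared -- a hedged
sketch, assuming a hypothetical setup where the same backing image is
exported once as a virtio block device and once as an AoE target
reachable over the guest's virtio NIC; device names are illustrative:

    # inside the guest
    modprobe aoe                         # load the AoE initiator
    # read through the guest-virtio channel to the host block driver
    dd if=/dev/vda of=/dev/null bs=1M count=512 iflag=direct
    # read the same data tunneled as AoE over the virtual network device
    dd if=/dev/etherd/e0.0 of=/dev/null bs=1M count=512 iflag=direct

The second path pushes every request through the network emulation
first, which is exactly the extra layer being argued against here.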

Now -- if you are talking about supporting an off-server CORAID storage
array, then you should absolutely go AOE.  But I think he was talking
about communication between guest and host partitions on his laptop, in
which case you are adding extra layers for nothing.

         -eric


Thread overview: 15+ messages
2008-04-23 18:01 ron minnich
2008-04-23 18:10 ` Eric Van Hensbergen
2008-04-24 16:21   ` ron minnich
2008-04-23 19:21 ` Juan M. Mendez
2008-04-23 20:30   ` Stefan Hajnoczi
2008-04-23 22:56   ` ron minnich
2008-04-23 23:00     ` ron minnich
2008-04-23 23:22     ` erik quanstrom
2008-04-23 23:40       ` ron minnich
2008-04-23 23:55         ` erik quanstrom
2008-04-24  0:11       ` Eric Van Hensbergen
2008-04-24  0:29         ` erik quanstrom
2008-04-24  0:40           ` Eric Van Hensbergen [this message]
2008-04-24  0:53             ` erik quanstrom
2008-04-24  1:24               ` Eric Van Hensbergen
