9front - general discussion about 9front
From: Xiao-Yong Jin <meta.jxy@gmail.com>
To: 9front@9front.org
Subject: Re: [9front] vm disk performance
Date: Thu, 15 Apr 2021 20:04:46 -0500	[thread overview]
Message-ID: <A2F1C480-B5E9-472F-B051-B10772AB007B@gmail.com> (raw)
In-Reply-To: <CAFSF3XNXBwnN-Xx6_Wwyo7LpAVW=rT_js92jv6KCd75-miuzHA@mail.gmail.com>



> On Apr 15, 2021, at 4:52 AM, hiro <23hiro@gmail.com> wrote:
> 
> i realized that my 9front KVM cwfs install assumes 512-byte sectors.
> the physical disk actually uses 4k sectors.
> 
> the partition table also aligns things by sector size.
> up to nvram everything is aligned at multiples of 512 big enough to
> also be multiples of 4k.
> but nvram is one sector big, causing the next important partition
> (other) to be misaligned by those 512 bytes.
> 
> these 2 things together might affect performance considerably,
> especially on hdds.
> 
> personally i also use zvols from zfs between the VMs and my disk; they
> also happen to have a blocksize, and the rather low default should
> generally be increased to 16k to align with cwfs. i doubt anybody
> else here is using zvols, though :)
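
on the sector-size and alignment points above: a quick way to see what the
host disk really uses and what the guest gets shown, assuming a linux host
and a virtio disk (the device names and qemu options below are only an
illustration of where the knobs live, not taken from an actual setup):

    # on the linux host: logical and physical sector sizes of the backing device
    blockdev --getss --getpbsz /dev/sdX

    # have qemu present 4k sectors to the guest, so the partition table and
    # cwfs can be laid out on the real physical sector size
    qemu-system-x86_64 ... \
        -drive file=/dev/zvol/zroot/9front,format=raw,if=none,id=hd0,cache=none \
        -device virtio-blk-pci,drive=hd0,logical_block_size=4096,physical_block_size=4096

    # inside 9front: print partition offsets (in sectors) to check alignment;
    # sdF0 is just an example unit name
    disk/fdisk -p /dev/sdF0/data
    disk/prep -p /dev/sdF0/plan9

whether 9front's drivers are happy with 4k logical sectors is a separate
question, so treat this as a sketch of the settings involved.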

% zfs list -o space,type,compress,compressratio,volmode,volsize,volblocksize zroot/9front      
NAME          AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD  TYPE    COMPRESS  RATIO  VOLMODE  VOLSIZE  VOLBLOCK
zroot/9front   353G  45.9G     5.96G   15.1G          24.9G          0  volume       lz4  1.46x      dev      30G        8K
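
for what it's worth, volblocksize can only be set when a zvol is created, so
going from the 8K above to 16k means making a new volume and copying the data
over while the VM is shut down; a rough sketch (zroot/9front-16k is a made-up
name):

    # new sparse zvol with a 16k block size to match cwfs' 16k blocks
    zfs create -s -V 30G -o volblocksize=16k zroot/9front-16k

    # copy the old disk image across
    dd if=/dev/zvol/zroot/9front of=/dev/zvol/zroot/9front-16k bs=1M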

> i also remember discovering a long time ago (on shady forums, i don't
> remember where) that kvm on linux by default forces all normal (async)
> writes to the virtual disk to be synchronous for "legacy" operating
> systems, to protect us from ourselves.
> personally i disabled sync writes on my zfs as a good enough remedy,
> but i have *no clue* how much better it might work if kvm could be
> convinced we are a microsoft/redhat certified non-legacy system.
> 
> since i already managed to find 4 very probable big bottlenecks for
> common setups with 9front inside VMs, i'd like to share my preliminary
> view on this matter.
> 
> tests outstanding.
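
on the sync-write point, the two knobs involved would be the cache= option on
qemu's -drive and the zfs sync property; a sketch of both, with values as
examples rather than recommendations:

    # host zfs: treat guest flush/O_SYNC requests as ordinary async writes
    # (presumably what "disabled sync writes on my zfs" above amounts to)
    zfs set sync=disabled zroot/9front

    # qemu: cache=none bypasses the host page cache; cache=unsafe would
    # additionally ignore guest flushes entirely
    qemu-system-x86_64 ... \
        -drive file=/dev/zvol/zroot/9front,format=raw,if=virtio,cache=none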

I never worried about performance here, because network latency dominates
most of the use cases.  Perhaps you could produce some benchmarks for a
couple of the use cases you have in mind?
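
if anybody does want numbers, even a crude sequential test run inside the
guest would show whether the alignment and caching settings matter; for
example with plan 9's dd (file name and sizes are arbitrary, and the
read-back may be served from cache if the file fits in memory):

    # write 512MB in 16k blocks, then read it back
    time dd -if /dev/zero -of /tmp/ddtest -bs 16384 -count 32768
    time dd -if /tmp/ddtest -of /dev/null -bs 16384
    rm /tmp/ddtest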


Thread overview: 3 messages
2021-04-15  9:52 hiro
2021-04-16  1:04 ` Xiao-Yong Jin [this message]
2021-04-16 12:31   ` hiro
