9front - general discussion about 9front
From: hiro <23hiro@gmail.com>
To: 9front@9front.org
Subject: Re: [9front] vm disk performance
Date: Fri, 16 Apr 2021 14:31:51 +0200
Message-ID: <CAFSF3XMgGf6BQfsofJ_KEtfXJpUkzgf6np0NomLoV8+8CbVxSg@mail.gmail.com>
In-Reply-To: <A2F1C480-B5E9-472F-B051-B10772AB007B@gmail.com>

sadly i have no meaningful real-world tests.
sl has been seeing these issues with the mailing list and the 9front
web server, especially since they were moved from SSD to HDD.
my purely theoretical look at this suggests these are very likely
bottlenecks in any small-blocksize random-access scenario,
especially combined with the assumption that the cwfs cache is
not being used optimally.
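
for reference, a rough sketch of how the sector sizes and partition
alignment could be checked on a linux KVM host; /dev/sda is only a
placeholder and none of this has been tested here:

    # logical vs. physical sector size of the backing disk
    cat /sys/block/sda/queue/logical_block_size
    cat /sys/block/sda/queue/physical_block_size

    # partition start sectors; with 512-byte logical sectors, a start
    # that is not a multiple of 8 is misaligned on a 4k-sector disk
    fdisk -l /dev/sda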

On 4/16/21, Xiao-Yong Jin <meta.jxy@gmail.com> wrote:
>
>
>> On Apr 15, 2021, at 4:52 AM, hiro <23hiro@gmail.com> wrote:
>>
>> i realized that my 9front KVM cwfs install assumes 512-byte sectors.
>> the physical disk actually uses 4k sectors.
>>
>> the partition table also aligns things by sector size.
>> up to nvram everything is aligned to multiples of 512 that are big
>> enough to also be multiples of 4k,
>> but nvram is one sector long, so the next important partition
>> (other) ends up misaligned by those 512 bytes.
>>
>> these two things together might affect performance considerably,
>> especially on hdds.
>>
>> personally i also use zfs zvols between the VMs and my disk; they
>> also have a block size, and the rather low default should generally
>> be increased to 16k to align with cwfs. even though i doubt anybody
>> else here is using zvols :)
>
> % zfs list -o space,type,compress,compressratio,volmode,volsize,volblocksize zroot/9front
> NAME          AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD  TYPE    COMPRESS  RATIO  VOLMODE  VOLSIZE  VOLBLOCK
> zroot/9front   353G  45.9G     5.96G   15.1G          24.9G          0  volume  lz4       1.46x  dev      30G      8K
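
a minimal sketch of the 16k variant suggested above, reusing a dataset
name and size like the ones in that listing purely as placeholders;
volblocksize is fixed at creation time, so an existing zvol would have
to be recreated and the data copied over:

    # create a fresh zvol with a 16k block size to match cwfs
    zfs create -V 30G -o volblocksize=16k zroot/9front-16k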
>
>> i remember also a long time ago i discovered (on shady forums, don't
>> remember) that kvm on linux by default forces all normal (async)
>> writes to the virtual disk for "legacy" operating systems to be
>> synchronous to protect us from ourselves.
>> personally i disabled sync writes on my zfs as a good enough remedy,
>> but i have *no clue* how much better it might work if kvm could be
>> convinced we are a microsoft/redhat certified non-legacy system.
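
a rough sketch of the knobs involved, assuming qemu/kvm with a zvol as
backing store; the paths and the exact cache mode are assumptions, not
something that has been verified here:

    # qemu: pick an explicit cache mode instead of relying on the default
    # (cache=none bypasses the host page cache, cache=writeback keeps it)
    qemu-system-x86_64 -drive file=/dev/zvol/zroot/9front,if=virtio,format=raw,cache=none

    # the zfs-side workaround mentioned above: ignore sync requests entirely
    zfs set sync=disabled zroot/9front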
>>
>> since i have already found 4 very probable big bottlenecks for
>> common setups running 9front inside VMs, i'd like to share my
>> preliminary view on the matter.
>>
>> tests outstanding.
>
> I never worried about performance here, because network latency dominates
> most of the use cases.  Perhaps you could produce some benchmarks for a
> couple of use cases you have in mind?
>
>

Thread overview: 3+ messages
2021-04-15  9:52 hiro
2021-04-16  1:04 ` Xiao-Yong Jin
2021-04-16 12:31   ` hiro [this message]
