9front - general discussion about 9front
* [9front] vm disk performance
@ 2021-04-15  9:52 hiro
  2021-04-16  1:04 ` Xiao-Yong Jin
  0 siblings, 1 reply; 3+ messages in thread
From: hiro @ 2021-04-15  9:52 UTC (permalink / raw)
  To: 9front

i realized that my 9front KVM cwfs install assumes 512-byte sectors.
the physical disk actually uses 4k sectors.

the partition table also aligns things by sector size.
up to nvram the partitions start at multiples of 512 that are large
enough to also be multiples of 4k, but nvram is only one sector long,
so the next important partition (other) ends up misaligned by those
512 bytes.
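
a quick way to check from inside the guest (just a sketch; i'm assuming
the virtio disk shows up as /dev/sdF0, and the sector number below is
hypothetical): print the plan 9 partition table and make sure every
start offset is a multiple of 8 sectors, i.e. 8*512 = 4k.

% disk/prep -p /dev/sdF0/plan9	# start/end offsets in 512-byte sectors
...
% awk 'BEGIN{print 204863 % 8}'	# hypothetical start sector of other
7				# nonzero remainder = not 4k-aligned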

these 2 things together might affect performance considerably,
especially on hdds.

personally i also use zfs zvols between the VMs and my disk. zvols
also have a blocksize, and the rather low default should generally be
increased to 16k to align with cwfs. though i doubt anybody else here
is using zvols :)
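
for reference, volblocksize is fixed when a zvol is created, so changing
it means making a new zvol and copying the data over. roughly, on the
host (pool and dataset names here are just placeholders):

% zfs create -V 30G -o volblocksize=16K pool/9front-16k
% dd if=/dev/zvol/pool/9front of=/dev/zvol/pool/9front-16k bs=1M

then point kvm at the new zvol.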

i also remember discovering a long time ago (on shady forums, i don't
remember which) that kvm on linux by default forces all normal (async)
writes to the virtual disk to be synchronous for "legacy" operating
systems, to protect us from ourselves.
personally i disabled sync writes on my zfs as a good enough remedy,
but i have *no clue* how much better it might work if kvm could be
convinced we are a microsoft/redhat certified non-legacy system.
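
for completeness, the kvm-side knob is the drive's cache mode; something
like this on the qemu command line (just a sketch, paths and ids are
placeholders) keeps async guest writes async and also advertises the 4k
physical sectors to the guest:

% qemu-system-x86_64 -enable-kvm ... \
	-drive if=none,id=hd0,file=/dev/zvol/pool/9front,format=raw,cache=writeback \
	-device virtio-blk-pci,drive=hd0,logical_block_size=512,physical_block_size=4096

and the zfs-side remedy i mentioned is just:

% zfs set sync=disabled pool/9front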

since i have already found 4 very probable big bottlenecks for
common setups with 9front inside VMs, i'd like to share my preliminary
view on this matter.

tests outstanding.


* Re: [9front] vm disk performance
  2021-04-15  9:52 [9front] vm disk performance hiro
@ 2021-04-16  1:04 ` Xiao-Yong Jin
  2021-04-16 12:31   ` hiro
  0 siblings, 1 reply; 3+ messages in thread
From: Xiao-Yong Jin @ 2021-04-16  1:04 UTC (permalink / raw)
  To: 9front



> On Apr 15, 2021, at 4:52 AM, hiro <23hiro@gmail.com> wrote:
> 
> i realized that my 9front KVM cwfs install assumes 512-byte sectors.
> the physical disk actually uses 4k sectors.
> 
> the partition table also aligns things by sector size.
> up to nvram the partitions start at multiples of 512 that are large
> enough to also be multiples of 4k, but nvram is only one sector long,
> so the next important partition (other) ends up misaligned by those
> 512 bytes.
> 
> these 2 things together might affect performance considerably,
> especially on hdds.
> 
> personally i also use zfs zvols between the VMs and my disk. zvols
> also have a blocksize, and the rather low default should generally be
> increased to 16k to align with cwfs. though i doubt anybody else here
> is using zvols :)

% zfs list -o space,type,compress,compressratio,volmode,volsize,volblocksize zroot/9front      
NAME          AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD  TYPE    COMPRESS  RATIO  VOLMODE  VOLSIZE  VOLBLOCK
zroot/9front   353G  45.9G     5.96G   15.1G          24.9G          0  volume       lz4  1.46x      dev      30G        8K

> i also remember discovering a long time ago (on shady forums, i don't
> remember which) that kvm on linux by default forces all normal (async)
> writes to the virtual disk to be synchronous for "legacy" operating
> systems, to protect us from ourselves.
> personally i disabled sync writes on my zfs as a good enough remedy,
> but i have *no clue* how much better it might work if kvm could be
> convinced we are a microsoft/redhat certified non-legacy system.
> 
> since i have already found 4 very probable big bottlenecks for
> common setups with 9front inside VMs, i'd like to share my preliminary
> view on this matter.
> 
> tests outstanding.

I never worried about performance here, because network latency dominates
most of the use cases.  Perhaps you could produce some benchmarks for a
couple of use cases you have in mind?



* Re: [9front] vm disk performance
  2021-04-16  1:04 ` Xiao-Yong Jin
@ 2021-04-16 12:31   ` hiro
  0 siblings, 0 replies; 3+ messages in thread
From: hiro @ 2021-04-16 12:31 UTC (permalink / raw)
  To: 9front

sadly i have no meaningful real-world tests.
sl has been having these issues with the mailing list and the 9front
web server, especially since they moved from SSD to HDD.
my purely theoretical engagement suggests these are very likely
bottlenecks in all small-blocksize random-access scenarios,
especially combined with the suspicion that the cwfs cache is
not being used optimally.
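
the best i can offer right now is a crude guest-side number, something
like this (just a sketch; i'm assuming the virtio disk shows up as
/dev/sdF0 in the guest, and it only measures sequential reads, so
random access on an hdd will look far worse):

% time dd -if /dev/sdF0/other -of /dev/null -bs 4096 -count 65536	# 256MB in 4k reads

a proper test would need small random reads and writes going through
cwfs itself.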

On 4/16/21, Xiao-Yong Jin <meta.jxy@gmail.com> wrote:
>
>
>> On Apr 15, 2021, at 4:52 AM, hiro <23hiro@gmail.com> wrote:
>>
>> i realized that my 9front KVM cwfs install assumes 512-byte sectors.
>> the physical disk actually uses 4k sectors.
>>
>> the partition table also aligns things by sector size.
>> up to nvram the partitions start at multiples of 512 that are large
>> enough to also be multiples of 4k, but nvram is only one sector long,
>> so the next important partition (other) ends up misaligned by those
>> 512 bytes.
>>
>> these 2 things together might affect performance considerably,
>> especially on hdds.
>>
>> personally i also use zfs zvols between the VMs and my disk. zvols
>> also have a blocksize, and the rather low default should generally be
>> increased to 16k to align with cwfs. though i doubt anybody else here
>> is using zvols :)
>
> % zfs list -o space,type,compress,compressratio,volmode,volsize,volblocksize zroot/9front
> NAME          AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD  TYPE    COMPRESS  RATIO  VOLMODE  VOLSIZE  VOLBLOCK
> zroot/9front   353G  45.9G     5.96G   15.1G          24.9G          0  volume       lz4  1.46x      dev      30G        8K
>
>> i also remember discovering a long time ago (on shady forums, i don't
>> remember which) that kvm on linux by default forces all normal (async)
>> writes to the virtual disk to be synchronous for "legacy" operating
>> systems, to protect us from ourselves.
>> personally i disabled sync writes on my zfs as a good enough remedy,
>> but i have *no clue* how much better it might work if kvm could be
>> convinced we are a microsoft/redhat certified non-legacy system.
>>
>> since i have already found 4 very probable big bottlenecks for
>> common setups with 9front inside VMs, i'd like to share my preliminary
>> view on this matter.
>>
>> tests outstanding.
>
> I never worried about performance here, because network latency dominates
> most of the use cases.  Perhaps you could produce some benchmarks for a
> couple of use cases you have in mind?
>
>

