From: hiro <23hiro@gmail.com>
Date: Fri, 16 Apr 2021 14:31:51 +0200
To: 9front@9front.org
Subject: Re: [9front] vm disk performance

sadly i have no real-world tests that are meaningful. sl has these
issues with the mailinglist and 9front web server, especially since
they moved from SSD to HDD.

my purely theoretical engagement reminded me that these are very
likely bottlenecks in all small-blocksize random access scenarios,
especially combined with the assumption that maybe the cwfs cache is
not optimally used.

On 4/16/21, Xiao-Yong Jin wrote:
>
>> On Apr 15, 2021, at 4:52 AM, hiro <23hiro@gmail.com> wrote:
>>
>> i realized that my 9front KVM cwfs install assumes 512-byte sectors.
>> the physical disk actually uses 4k sectors.
>>
>> the partition table also aligns things by sector size.
>> until nvram we are aligned by big enough multiples of 512 that they
>> happen to also be multiples of 4k.
>> but nvram is one sector big, causing the next important partition
>> (other) to be misaligned by those 512 bytes.
>>
>> these 2 things together might affect performance considerably,
>> especially on hdds.
>>
>> personally i also use zvols from zfs between the VMs and my disk.
>> they also happen to have a blocksize; the rather low default should
>> generally be increased to 16k to align with cwfs.
>> even though i doubt anybody else here is using zvols :)
>
> % zfs list -o space,type,compress,compressratio,volmode,volsize,volblocksize zroot/9front
> NAME          AVAIL  USED   USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD  TYPE
> COMPRESS  RATIO  VOLMODE  VOLSIZE  VOLBLOCK
> zroot/9front  353G   45.9G  5.96G     15.1G   24.9G          0          volume
> lz4       1.46x  dev      30G      8K
>
>> i remember also that a long time ago i discovered (on shady forums,
>> i don't remember where) that kvm on linux by default forces all
>> normal (async) writes to the virtual disk to be synchronous for
>> "legacy" operating systems, to protect us from ourselves.
>> personally i disabled sync writes on my zfs as a good-enough remedy,
>> but i have *no clue* how much better it might work if kvm could be
>> convinced we are a microsoft/redhat certified non-legacy system.
>>
>> since i managed to find already 4 very probable big bottlenecks for
>> common setups with 9front inside VMs, i'd like to share my
>> preliminary view on this matter.
>>
>> tests outstanding.
>
> I never worried about performance here, because network latency
> dominates most of the use cases. Perhaps you could produce some
> benchmarks for a couple of the use cases you have in mind?
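The 512-byte misalignment described above is easy to check arithmetically: a partition is 4k-aligned only when its byte offset is a multiple of 4096. A minimal sketch (the start sector value here is a made-up example, not a figure from the thread):

```shell
# check whether a partition's byte offset is 4k-aligned,
# given its start position in 512-byte sectors
start_sector=63              # hypothetical example value
offset=$((start_sector * 512))
if [ $((offset % 4096)) -eq 0 ]; then
    echo "aligned"
else
    echo "misaligned by $((offset % 4096)) bytes"
fi
```

Any start sector that is a multiple of 8 passes this test, which is why a one-sector nvram partition shifts everything after it off 4k boundaries.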
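For the zvol side, the relevant knob is `volblocksize`, which can only be set when the volume is created, not changed afterwards; matching it to cwfs as suggested above would look roughly like this (dataset name and size are illustrative, and `sync=disabled` trades crash safety for speed):

```shell
# create a zvol with a 16k block size to match cwfs;
# volblocksize is fixed at creation time
zfs create -V 30G -o volblocksize=16k zroot/9front-new

# optionally stop honoring sync writes on the volume,
# as described in the thread (data-loss risk on power failure)
zfs set sync=disabled zroot/9front-new
```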
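On the kvm side, the write-caching behavior hiro mentions is controlled per disk by qemu's cache mode rather than negotiated with the guest; a hedged sketch of an invocation that opts into writeback caching (file name and virtio choice are assumptions, not from the thread):

```shell
# run a 9front guest with writeback caching on its disk;
# cache=none (O_DIRECT) or cache=unsafe are the other common choices
qemu-system-x86_64 -enable-kvm -m 2G \
    -drive file=9front.qcow2,format=qcow2,if=virtio,cache=writeback
```

Whether this fully removes the forced-sync behavior for a given guest is exactly the open question in the thread, so it is worth benchmarking rather than assuming.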