From: erik quanstrom
Date: Sun, 10 May 2015 13:18:09 -0700
To: 9fans@9fans.net
Subject: Re: [9fans] fossil+venti performance question

On Sun May 10 10:58:55 PDT 2015, 0intro@gmail.com wrote:
> >> however, after fixing things so the initial cwind isn't hosed, i get a little better story:
> >
> > so, actually, i think this is the root cause. the initial cwind is misset for loopback.
> > i bet that the symptom folks will see is that /net/tcp/stats shows fragmentation when
> > performance sucks. evidently there is a backoff bug in sources' tcp, too.
>
> What is your cwind change?
>

the patch is here: /n/atom/patch/tcpmss

note i applied a patch to nettest(8) to simulate an rpc-style protocol. i still get
~500MB/s with my test machine simulating rpc-style transactions, or 15µs per 8k
transaction. we're at least an order of magnitude off the performance mark for this.

a similar test using pipe(2) shows a latency of 5.7µs (!) for a pipe-based rpc,
which limits us to about 1.4 GB/s for 8k pipe-based ping-pong rpcs.

- erik
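
p.s. for anyone who wants to reproduce the pipe number, here's a minimal
sketch of the kind of measurement i mean. this is not the actual nettest(8)
patch; the buffer size and iteration count are illustrative. parent and child
bounce an 8k buffer across a pipe(2) and we report the mean round trip:

	#include <u.h>
	#include <libc.h>

	/* illustrative sketch, not the nettest(8) patch:
	 * time 8k ping-pong rpcs over a pipe. */
	enum { Bufsz = 8192, N = 10000 };

	void
	main(void)
	{
		int fd[2], i;
		static char buf[Bufsz];
		vlong t0, t1;

		if(pipe(fd) < 0)
			sysfatal("pipe: %r");
		switch(rfork(RFPROC|RFFDG)){
		case -1:
			sysfatal("rfork: %r");
		case 0:		/* echo side: bounce each message back */
			close(fd[0]);
			for(i = 0; i < N; i++){
				if(readn(fd[1], buf, Bufsz) != Bufsz)
					sysfatal("read: %r");
				if(write(fd[1], buf, Bufsz) != Bufsz)
					sysfatal("write: %r");
			}
			exits(nil);
		}
		close(fd[1]);
		t0 = nsec();
		for(i = 0; i < N; i++){
			if(write(fd[0], buf, Bufsz) != Bufsz)
				sysfatal("write: %r");
			if(readn(fd[0], buf, Bufsz) != Bufsz)
				sysfatal("read: %r");
		}
		t1 = nsec();
		waitpid();
		print("%lld ns/rpc\n", (t1-t0)/N);
		exits(nil);
	}

dividing the message size by the printed latency gives the equivalent
one-stream throughput: 8192 bytes / 5.7µs ≈ 1.4 GB/s, and 8192 bytes / 15µs
≈ 550 MB/s, which matches the tcp number above.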