> On Jun 20, 2018, at 7:22 PM, Lonnie Abelbeck <lists@lonnie.abelbeck.com> wrote:
>
>> On Jun 20, 2018, at 6:47 PM, Jason A. Donenfeld <Jason@zx2c4.com> wrote:
>>
>> Hey Lonnie,
>>
>> Thanks for helping to debug this.
>>
>> On Thu, Jun 21, 2018 at 12:37 AM Lonnie Abelbeck
>> <lists@lonnie.abelbeck.com> wrote:
>>> Hunk #1 alone does the trick, though overall performance is ever so slightly slower than before.
>>
>> It's good to hear that hunks #2 and #3 don't have much of an effect,
>> though they do still seem to have _some_ effect.
>>
>> Looks like hunk #1 is rather worrisome though. Can you try out
>> https://א.cc/eaxxpxbB and let me know if it has any effect?
>
> That patch, as is, is very bad:
> --
> [SUM]   0.00-30.00  sec  1.26 GBytes   360 Mbits/sec   98             sender
> [SUM]   0.00-30.03  sec  1.25 GBytes   358 Mbits/sec                  receiver
>
> I then edited the patch to add back in local_bh_disable() / local_bh_enable(), which is much better:
> --
> [SUM]   0.00-30.00  sec  2.62 GBytes   751 Mbits/sec  1389             sender
> [SUM]   0.00-30.00  sec  2.61 GBytes   748 Mbits/sec                  receiver
>
> That is essentially back to 0.0.20180531 performance, using hunk #1 from the previous patch plus hunk #1 from the latest patch.
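
For reference, here is a minimal sketch of the shape of that edit; the function names and packet work below are made-up stand-ins, not the actual patch hunk, and only the local_bh_disable()/local_bh_enable() pairing is the point:

#include <linux/bottom_half.h>
#include <linux/skbuff.h>

/* Hypothetical stand-in for whatever per-packet work the hunk wraps. */
static void do_packet_work(struct sk_buff *skb)
{
	consume_skb(skb);
}

static void process_packet(struct sk_buff *skb)
{
	/* Softirqs stay disabled across the whole per-packet section,
	 * rather than being toggled around each individual operation;
	 * this is the pair I added back into the patch. */
	local_bh_disable();
	do_packet_work(skb);
	local_bh_enable();
}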

Hey Jason,

I'm not sure if this helps, but the 0.0.20180620 performance loss is most noticeable on the box running "iperf3 -s"; a rough sketch of the test shape is below.
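
For context, the tests are plain iperf3 runs across the WireGuard tunnel, along these lines (the 30-second duration matches the [SUM] lines above; the address and parallel stream count are placeholders):

  # on the box that shows the regression:
  iperf3 -s

  # on the other box, aimed at the server's WireGuard address:
  iperf3 -c 10.0.0.1 -P 4 -t 30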

Also, here are a couple of .png screenshots showing CPU utilization on the "iperf3 -s" box, captured with htop during the iperf3 test ...

Test: 0.0.20180620
[PNG attachment: htop CPU utilization on the "iperf3 -s" box]

Test: 0.0.20180620 + hunk #1 from previous patch and hunk #1 from the latest patch
[PNG attachment: htop CPU utilization on the "iperf3 -s" box]

Lonnie