Development discussion of WireGuard
* multiple wireguard interfaces and kworker resources
From: nicolas prochazka @ 2017-06-13 12:38 UTC
  To: WireGuard mailing list

Hello,
for i in `seq 1 1000` ; do ip link add dev wg${i} type wireguard ; done
=> the kworker kernel thread runs at 50% CPU time; with 5000 interfaces,
it reaches 100% CPU time

for i in `seq 1 1000` ; do ip link add dev ifb${i} type ifb ; done
          (the same holds for dummy interfaces)
=> the kworker kernel thread stays below 1% CPU time
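
For reference, one way to watch the kworker load while these loops run
(a sketch assuming procps top in batch mode; adjust the iteration count
as needed):

for i in `seq 1 30` ; do top -b -n 1 | grep -m 3 kworker ; sleep 1 ; done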

Is this normal behavior?
Version: WireGuard 0.0.20170409
Kernel: 4.9.23

Regards,
Nicolas Prochazka


* Re: multiple wireguard interfaces and kworker resources
From: Jason A. Donenfeld @ 2017-06-13 12:48 UTC
  To: nicolas prochazka; +Cc: WireGuard mailing list

Hi Nicolas,

I'll look into this. However, you need to update WireGuard to the
latest version, which is 0.0.20170613. I can't provide help for
outdated versions.

Jason


* Re: multiple wireguard interfaces and kworker resources
From: nicolas prochazka @ 2017-06-13 14:55 UTC
  To: Jason A. Donenfeld; +Cc: WireGuard mailing list

Hello again,
with 0.0.20170613, I can still reproduce the high kworker CPU consumption.
Regards,
nicolas

2017-06-13 14:48 GMT+02:00 Jason A. Donenfeld <Jason@zx2c4.com>:
> Hi Nicolas,
>
> I'll look into this. However, you need to update WireGuard to the
> latest version, which is 0.0.20170613. I can't provide help for
> outdated versions.
>
> Jason


* Re: multiple wireguard interfaces and kworker resources
From: Jason A. Donenfeld @ 2017-06-13 21:47 UTC
  To: nicolas prochazka; +Cc: WireGuard mailing list

Hi Nicolas,

It looks to me like some resources are indeed expended in adding those
interfaces. Not so much that it should be problematic -- are you seeing
an actual problem? -- but still a non-trivial amount.

I tracked it down to WireGuard's instantiation of xt_hashlimit, which
does some ugly vmalloc, and its call into the power state
notification system, which uses a naive O(n) algorithm for insertion.
I might have a way of amortizing this work at module insertion time,
which would speed things up. But I wonder -- what is the practical
detriment of spending a few extra cycles on `ip link add`? What's your
use case where this would actually be a problem?
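
(A sketch of how one might confirm this, assuming perf is available:
profile system-wide while the creation loop runs, then look at where
kernel time accumulates --

perf record -a -g -- sleep 10
perf report

-- expecting the hot symbols to fall in xt_hashlimit and the notifier
chain, per the analysis above.)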

Thanks,
Jason


* Re: multiple wireguard interfaces and kworker resources
From: nicolas prochazka @ 2017-06-14  7:52 UTC
  To: Jason A. Donenfeld; +Cc: WireGuard mailing list

Hello,
after creating the wg interfaces, the kworker thread does not return to
a normal state in my case; the kernel thread continues to consume a lot
of CPU. I must delete the wireguard interfaces for the kworker load to
decrease.
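
(For reference, the teardown mirroring the creation loop:

for i in `seq 1 1000` ; do ip link del dev wg${i} ; done

after which the kworker load drops back to normal.)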

Nicolas

2017-06-13 23:47 GMT+02:00 Jason A. Donenfeld <Jason@zx2c4.com>:
> Hi Nicolas,
>
> It looks to me like some resources are indeed expended in adding those
> interfaces. Not so much that it should be problematic -- are you seeing
> an actual problem? -- but still a non-trivial amount.
>
> I tracked it down to WireGuard's instantiation of xt_hashlimit, which
> does some ugly vmalloc, and its call into the power state
> notification system, which uses a naive O(n) algorithm for insertion.
> I might have a way of amortizing this work at module insertion time,
> which would speed things up. But I wonder -- what is the practical
> detriment of spending a few extra cycles on `ip link add`? What's your
> use case where this would actually be a problem?
>
> Thanks,
> Jason


* Re: multiple wireguard interfaces and kworker resources
From: nicolas prochazka @ 2017-06-14 13:50 UTC
  To: Jason A. Donenfeld; +Cc: WireGuard mailing list

At the moment we are using 3000 wg tunnels on a single wireguard
interface, but now we want to divide the tunnels by interface and by
client group, to manage QoS per wireguard interface, among other tasks.
With a single interface it works well, but the test with 3000
interfaces causes trouble with CPU usage, load average, and the
performance of the VM.
Regards,
Nicolas

2017-06-14 9:52 GMT+02:00 nicolas prochazka <prochazka.nicolas@gmail.com>:
> Hello,
> after creating the wg interfaces, the kworker thread does not return to
> a normal state in my case; the kernel thread continues to consume a lot
> of CPU. I must delete the wireguard interfaces for the kworker load to
> decrease.
>
> Nicolas
>
> 2017-06-13 23:47 GMT+02:00 Jason A. Donenfeld <Jason@zx2c4.com>:
>> Hi Nicolas,
>>
>> It looks to me like some resources are indeed expended in adding those
>> interfaces. Not so much that it should be problematic -- are you seeing
>> an actual problem? -- but still a non-trivial amount.
>>
>> I tracked it down to WireGuard's instantiation of xt_hashlimit, which
>> does some ugly vmalloc, and its call into the power state
>> notification system, which uses a naive O(n) algorithm for insertion.
>> I might have a way of amortizing this work at module insertion time,
>> which would speed things up. But I wonder -- what is the practical
>> detriment of spending a few extra cycles on `ip link add`? What's your
>> use case where this would actually be a problem?
>>
>> Thanks,
>> Jason


* Re: multiple wireguard interfaces and kworker resources
From: Jason A. Donenfeld @ 2017-06-14 14:05 UTC
  To: nicolas prochazka; +Cc: WireGuard mailing list

On Wed, Jun 14, 2017 at 9:52 AM, nicolas prochazka
<prochazka.nicolas@gmail.com> wrote:
> after creating the wg interfaces, the kworker thread does not return to
> a normal state in my case; the kernel thread continues to consume a lot
> of CPU. I must delete the wireguard interfaces for the kworker load to
> decrease.

So you're telling me that simply running:

for i in `seq 1 1000` ; do ip link add dev wg${i} type wireguard ; done

will keep a kworker at 100% CPU, even after that command completes?
I'm unable to reproduce this here. Could you give me detailed
environmental information so that I can reproduce this precisely?
Something minimal and easily reproducible would be preferred.
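
(For instance: kernel version, distribution, CPU, and whether this runs
inside a VM; something along the lines of

uname -a
grep -m 1 'model name' /proc/cpuinfo
ip -V

would be a helpful start.)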


* Re: multiple wireguard interfaces and kworker resources
From: Jason A. Donenfeld @ 2017-06-14 14:13 UTC
  To: nicolas prochazka; +Cc: WireGuard mailing list

On Wed, Jun 14, 2017 at 4:05 PM, Jason A. Donenfeld <Jason@zx2c4.com> wrote:
> will keep a kworker at 100% CPU, even after that command completes?
> I'm unable to reproduce this here. Could you give me detailed
> environmental information so that I can reproduce this precisely?
> Something minimal and easily reproducible would be preferred.

Okay, I have a working reproducer now, and I've isolated the issue to
the xt_hashlimit garbage collector, an issue outside the WireGuard
code base. I might look for a workaround within WireGuard, however.
I'll keep you posted.


* Re: multiple wireguard interfaces and kworker resources
From: Jason A. Donenfeld @ 2017-06-14 14:15 UTC
  To: nicolas prochazka; +Cc: WireGuard mailing list

On Wed, Jun 14, 2017 at 3:50 PM, nicolas prochazka
<prochazka.nicolas@gmail.com> wrote:
> At the moment we are using 3000 wg tunnels on a single wireguard
> interface, but now we want to divide the tunnels by interface and by
> client group, to manage QoS per wireguard interface, among other tasks.
> With a single interface it works well, but the test with 3000
> interfaces causes trouble with CPU usage, load average, and the
> performance of the VM.

This seems like a bad idea. Everything will be much better if you
continue to use one tunnel. If you want to do QoS or any other type of
management, you can safely do this per-IP, since the allowed IPs
concept gives a strong binding between public key and IP address.
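
For example, to shape one peer's traffic (a sketch assuming the
interface is named wg0, with a hypothetical peer IP 10.0.0.2 taken from
that peer's allowed-ips, and placeholder rates):

tc qdisc add dev wg0 root handle 1: htb default 30
tc class add dev wg0 parent 1: classid 1:10 htb rate 10mbit ceil 20mbit
tc filter add dev wg0 parent 1: protocol ip u32 \
    match ip dst 10.0.0.2/32 flowid 1:10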


* Re: multiple wireguard interfaces and kworker resources
From: nicolas prochazka @ 2017-06-14 14:17 UTC
  To: Jason A. Donenfeld; +Cc: WireGuard mailing list

thanks :)
Nicolas

2017-06-14 16:13 GMT+02:00 Jason A. Donenfeld <Jason@zx2c4.com>:
> On Wed, Jun 14, 2017 at 4:05 PM, Jason A. Donenfeld <Jason@zx2c4.com> wrote:
>> will keep a kworker at 100% CPU, even after that command completes?
>> I'm unable to reproduce this here. Could you give me detailed
>> environmental information so that I can reproduce this precisely?
>> Something minimal and easily reproducible would be preferred.
>
> Okay, I have a working reproducer now, and I've isolated the issue to
> the xt_hashlimit garbage collector, an issue outside the WireGuard
> code base. I might look for a workaround within WireGuard, however.
> I'll keep you posted.


* Re: multiple wireguard interfaces and kworker resources
From: nicolas prochazka @ 2017-06-14 18:08 UTC
  To: Jason A. Donenfeld; +Cc: WireGuard mailing list

Hello,
one interface = one public key.
With multiple interfaces we can manage multiple IPs without aliasing,
which is more comfortable for binding specific services, and
statistics (bandwidth, errors) are easier to manage with separate
interfaces.

We are talking about ~1000 wireguard interfaces with 500 tunnels
(peers) each.

Nicolas

2017-06-14 16:15 GMT+02:00 Jason A. Donenfeld <Jason@zx2c4.com>:
> On Wed, Jun 14, 2017 at 3:50 PM, nicolas prochazka
> <prochazka.nicolas@gmail.com> wrote:
>> At the moment we are using 3000 wg tunnels on a single wireguard
>> interface, but now we want to divide the tunnels by interface and by
>> client group, to manage QoS per wireguard interface, among other tasks.
>> With a single interface it works well, but the test with 3000
>> interfaces causes trouble with CPU usage, load average, and the
>> performance of the VM.
>
> This seems like a bad idea. Everything will be much better if you
> continue to use one tunnel. If you want to do QoS or any other type of
> management, you can safely do this per-IP, since the allowed IPs
> concept gives a strong binding between public key and IP address.


* Re: multiple wireguard interfaces and kworker resources
From: Jason A. Donenfeld @ 2017-06-21 13:54 UTC
  To: nicolas prochazka; +Cc: WireGuard mailing list

Hi Nicolas,

1000 interfaces with 500 peers each. That's a very impressive quantity
of 500,000 wireguard deployments! Please do let me know how that goes.
I'd be interested to learn what the name of this project/company is.

With regards to your problem, I've fixed it by completely rewriting
ratelimiter.c to not use xt_hashlimit, a piece of decaying Linux code
from the 1990s, and to instead use my own token bucket implementation.
The result performs much better, is easier on RAM usage, and requires
far fewer lines of code. Most importantly for you, all interfaces will
now be able to share the same netns-keyed hashtable, so that the
cleanup routines are always fast, no matter how many interfaces you
have.
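
(For readers unfamiliar with the technique: the real implementation
lives in C in ratelimiter.c, but the token-bucket idea itself is simple
enough to sketch in a few lines of shell --

capacity=20 ; rate=5 ; tokens=$capacity ; last=$(date +%s)
while read -r event ; do
    now=$(date +%s)
    # refill tokens for the elapsed time, capped at the bucket capacity
    tokens=$(( tokens + (now - last) * rate ))
    [ "$tokens" -gt "$capacity" ] && tokens=$capacity
    last=$now
    # each event spends one token; events with an empty bucket are dropped
    if [ "$tokens" -gt 0 ] ; then
        tokens=$(( tokens - 1 )) ; echo "allow: $event"
    else
        echo "drop: $event"
    fi
done

-- the capacity and rate above are placeholders for illustration, not
WireGuard's real parameters.)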

I'll likely sit on it for a bit longer while I verify it and make sure
it works, but if you'd like to try it now, it's sitting in the git
master. Please let me know how it goes.

Regards,
Jason
