Development discussion of WireGuard
* [RFC] wireguard RX packet processing.
@ 2021-12-08 17:32 Sebastian Andrzej Siewior
  2021-12-20 17:29 ` Jason A. Donenfeld
  0 siblings, 1 reply; 5+ messages in thread
From: Sebastian Andrzej Siewior @ 2021-12-08 17:32 UTC (permalink / raw)
  To: wireguard, netdev
  Cc: Jason A. Donenfeld, David S. Miller, Jakub Kicinski,
	Thomas Gleixner, Peter Zijlstra

I didn't understand everything; I just stumbled upon this while looking
for something else and don't have the time to figure everything out.
Also, I might have taken a wrong turn somewhere…

need_resched() is something you want to avoid unless you are writing core
code. On a PREEMPT kernel you never observe true here and cond_resched() is a
nop. On non-PREEMPT kernels need_resched() can return true or false, _and_
should_resched() (which is part of cond_resched()) only returns true if the
same bit is set. This means invoking only cond_resched() saves one read
access. Bonus points: on x86 that bit is folded into the preemption counter,
so you avoid reading the bit entirely, plus the whole thing is optimized away
on a PREEMPT kernel.
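
To illustrate (just a sketch of the two patterns, not a patch against the
driver):

	/* Guarded form: need_resched() reads TIF_NEED_RESCHED, and
	 * should_resched() inside cond_resched() tests the same state
	 * again, so the guard buys nothing:
	 */
	if (need_resched())
		cond_resched();

	/* Equivalent, one read fewer on !PREEMPT, and the whole call is
	 * optimized away on PREEMPT kernels:
	 */
	cond_resched();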

wg_queue_enqueue_per_peer_rx() somehow enqueues the skb for NAPI processing
(this bit I haven't figured out yet, but it has to) and then invokes
napi_schedule(). That napi_schedule() wasn't meant to be invoked from
preemptible context, only from an actual IRQ handler:
- if NAPI is already active (which can only happen if it is running on a
  remote CPU) then nothing happens. Good.

- if NAPI is idle then __napi_schedule() will "schedule" it. Here is
  the thing: you are in process context (a kworker) so nothing happens
  right away: NET_RX_SOFTIRQ is set for the local CPU and the NAPI struct
  is added to the poll list. Now you have to wait until a random interrupt
  shows up, notices that a softirq bit is set and processes it. So it will
  happen eventually… (see the sketch below).
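
Roughly, the path looks like this (a heavily simplified sketch of the net
core internals, locking and details elided):

/* Simplified sketch, not the real implementation: */
static inline void napi_schedule_sketch(struct napi_struct *n)
{
	if (napi_schedule_prep(n)) {	/* NAPI was idle -> mark it scheduled */
		/* ____napi_schedule(): put it on this CPU's poll list ... */
		list_add_tail(&n->poll_list,
			      &this_cpu_ptr(&softnet_data)->poll_list);
		/* ... and merely OR NET_RX_SOFTIRQ into the pending mask.
		 * Nothing runs the softirq here; in process context it sits
		 * until the next irq_exit() or local_bh_enable().
		 */
		__raise_softirq_irqoff(NET_RX_SOFTIRQ);
	}
}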

I would suggest one of the following:
- add a comment that this is known and does not matter because
  $REASON. I would imagine you might want to batch multiple skbs, but…

- add a BH-disabled section around wg_queue_enqueue_per_peer_rx() (see
  below). The local_bh_enable() will invoke pending softirqs, which in your
  case should invoke wg_packet_rx_poll(), where you see only one skb.

diff --git a/drivers/net/wireguard/receive.c b/drivers/net/wireguard/receive.c
index 7b8df406c7737..64e4ca1ded108 100644
--- a/drivers/net/wireguard/receive.c
+++ b/drivers/net/wireguard/receive.c
@@ -507,9 +507,11 @@ void wg_packet_decrypt_worker(struct work_struct *work)
 		enum packet_state state =
 			likely(decrypt_packet(skb, PACKET_CB(skb)->keypair)) ?
 				PACKET_STATE_CRYPTED : PACKET_STATE_DEAD;
+		local_bh_disable();
 		wg_queue_enqueue_per_peer_rx(skb, state);
-		if (need_resched())
-			cond_resched();
+		local_bh_enable();
+
+		cond_resched();
 	}
 }
 

Sebastian

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [RFC] wireguard RX packet processing.
  2021-12-08 17:32 [RFC] wireguard RX packet processing Sebastian Andrzej Siewior
@ 2021-12-20 17:29 ` Jason A. Donenfeld
  2022-01-05  0:14   ` Toke Høiland-Jørgensen
  2022-01-11 15:40   ` Sebastian Andrzej Siewior
  0 siblings, 2 replies; 5+ messages in thread
From: Jason A. Donenfeld @ 2021-12-20 17:29 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: WireGuard mailing list, Netdev, David S. Miller, Jakub Kicinski,
	Thomas Gleixner, Peter Zijlstra

Hi Sebastian,

Seems like you've identified two things, the use of need_resched, and
potentially surrounding napi_schedule in local_bh_{disable,enable}.

Regarding need_resched, I pulled that out of other code that seemed to
have the "same requirements", as vaguely conceived. It indeed might
not be right. The intent is to have that worker running at maximum
throughput for extended periods of time, but not preventing other
threads from running elsewhere, so that, e.g., a user's machine
doesn't have a janky mouse when downloading a file.

What are the effects of unconditionally calling cond_resched() without
checking for if (need_resched())? Sounds like you're saying none at
all?

Regarding napi_schedule, I actually wasn't aware that its requirement
to _only_ ever run from softirq was a strict one. When I switched to
using napi_schedule in this way, throughput really jumped up
significantly. Part of this indeed is from the batching, so that the
napi callback can then handle more packets in one go later. But I
assumed it was something inside of NAPI that was batching and
scheduling it, rather than a mistake on my part to call this from a wq
and not from a softirq.

What, then, are the effects of surrounding that in
local_bh_{disable,enable} as you've done in the patch? You mentioned
one aspect is that it will "invoke wg_packet_rx_poll() where you see
only one skb." It sounds like that'd be bad for performance, though,
given that the design of napi is really geared toward batching.

Jason

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [RFC] wireguard RX packet processing.
  2021-12-20 17:29 ` Jason A. Donenfeld
@ 2022-01-05  0:14   ` Toke Høiland-Jørgensen
  2022-01-11 15:40   ` Sebastian Andrzej Siewior
  1 sibling, 0 replies; 5+ messages in thread
From: Toke Høiland-Jørgensen @ 2022-01-05  0:14 UTC (permalink / raw)
  To: Jason A. Donenfeld, Sebastian Andrzej Siewior
  Cc: WireGuard mailing list, Netdev, David S. Miller, Jakub Kicinski,
	Thomas Gleixner, Peter Zijlstra

"Jason A. Donenfeld" <Jason@zx2c4.com> writes:

> Hi Sebastian,
>
> Seems like you've identified two things, the use of need_resched, and
> potentially surrounding napi_schedule in local_bh_{disable,enable}.
>
> Regarding need_resched, I pulled that out of other code that seemed to
> have the "same requirements", as vaguely conceived. It indeed might
> not be right. The intent is to have that worker running at maximum
> throughput for extended periods of time, but not preventing other
> threads from running elsewhere, so that, e.g., a user's machine
> doesn't have a janky mouse when downloading a file.
>
> What are the effects of unconditionally calling cond_resched() without
> checking for if (need_resched())? Sounds like you're saying none at
> all?

I believe so: AFAIU, you use need_resched() if you need to do some kind
of teardown before the schedule point, like this example I was recently
looking at:

https://elixir.bootlin.com/linux/latest/source/net/bpf/test_run.c#L73

If you just need to maybe reschedule, you can just call cond_resched()
and it'll do what it says on the tin: do a schedule if needed, and
return immediately otherwise.
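
I.e. need_resched() is for a pattern like this (sketch only, with a
hypothetical ctx and setup_context()/teardown_context() helpers standing in
for what that test_run code actually does):

	if (need_resched()) {
		/* undo per-iteration state that must not be held across
		 * a schedule point */
		teardown_context(&ctx);
		cond_resched();
		/* and rebuild it before continuing the loop */
		setup_context(&ctx);
	}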

> Regarding napi_schedule, I actually wasn't aware that its requirement
> to _only_ ever run from softirq was a strict one. When I switched to
> using napi_schedule in this way, throughput really jumped up
> significantly. Part of this indeed is from the batching, so that the
> napi callback can then handle more packets in one go later. But I
> assumed it was something inside of NAPI that was batching and
> scheduling it, rather than a mistake on my part to call this from a wq
> and not from a softirq.
>
> What, then, are the effects of surrounding that in
> local_bh_{disable,enable} as you've done in the patch? You mentioned
> one aspect is that it will "invoke wg_packet_rx_poll() where you see
> only one skb." It sounds like that'd be bad for performance, though,
> given that the design of napi is really geared toward batching.

Heh, I wrote a whole long explanation here about variable batch sizes
because you don't control when the NAPI poll is scheduled, etc... And then I
noticed the while loop is calling ptr_ring_consume_bh(), which means that
there's already a local_bh_disable/enable pair on every loop
invocation. So you already have this :)

Which of course raises the question of whether there's anything to gain
from *adding* batching to the worker. Something like:

#define BATCH_SIZE 8
void wg_packet_decrypt_worker(struct work_struct *work)
{
	struct crypt_queue *queue = container_of(work, struct multicore_worker,
						 work)->ptr;
	void *skbs[BATCH_SIZE];
	bool again;
	int i, n;

restart:
	local_bh_disable();
	/* grab up to BATCH_SIZE entries; n is how many we actually got */
	n = ptr_ring_consume_batched(&queue->ring, skbs, BATCH_SIZE);

	for (i = 0; i < n; i++) {
		struct sk_buff *skb = skbs[i];
		enum packet_state state;

		state = likely(decrypt_packet(skb, PACKET_CB(skb)->keypair)) ?
				PACKET_STATE_CRYPTED : PACKET_STATE_DEAD;
		wg_queue_enqueue_per_peer_rx(skb, state);
	}

	again = !ptr_ring_empty(&queue->ring);
	local_bh_enable();

	if (again) {
		cond_resched();
		goto restart;
	}
}


Another thing that might be worth looking into is whether it makes sense
to enable threaded NAPI for WireGuard. See:
https://lore.kernel.org/r/20210208193410.3859094-1-weiwan@google.com
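
With that series it should just be a matter of something like the following
(the exact call site in WireGuard is hypothetical; it can also be flipped at
runtime via /sys/class/net/<dev>/threaded):

	/* Hypothetical: ask the core to run this device's NAPI instances
	 * in dedicated kthreads instead of in softirq context. */
	dev_set_threaded(wg->dev, true);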

-Toke

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [RFC] wireguard RX packet processing.
  2021-12-20 17:29 ` Jason A. Donenfeld
  2022-01-05  0:14   ` Toke Høiland-Jørgensen
@ 2022-01-11 15:40   ` Sebastian Andrzej Siewior
  2022-02-10  0:21     ` Feature Request :: Configuration-Option "ospf = yes|no" on Multi-Peer-Interfaces markus
  1 sibling, 1 reply; 5+ messages in thread
From: Sebastian Andrzej Siewior @ 2022-01-11 15:40 UTC (permalink / raw)
  To: Jason A. Donenfeld
  Cc: WireGuard mailing list, Netdev, David S. Miller, Jakub Kicinski,
	Thomas Gleixner, Peter Zijlstra

On 2021-12-20 18:29:49 [+0100], Jason A. Donenfeld wrote:
> Hi Sebastian,
> 
> Seems like you've identified two things, the use of need_resched, and
> potentially surrounding napi_schedule in local_bh_{disable,enable}.
> 
> Regarding need_resched, I pulled that out of other code that seemed to
> have the "same requirements", as vaguely conceived. It indeed might
> not be right. The intent is to have that worker running at maximum
> throughput for extended periods of time, but not preventing other
> threads from running elsewhere, so that, e.g., a user's machine
> doesn't have a janky mouse when downloading a file.
>
> What are the effects of unconditionally calling cond_resched() without
> checking for if (need_resched())? Sounds like you're saying none at
> all?

I stand to be corrected, but "if (need_resched()) cond_resched()" is not
something one should do. If you hold a lock that you need to drop first,
and you don't want to drop the lock if there is no need for
scheduling, then there is cond_resched_lock() for instance. If you need
to do something more complex (say, set a marker if you drop the lock)
then okay, _but_ in that case you do more than just the "if …" from
above.
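
For instance (sketch only; the lock and the helpers are made up):

	/* cond_resched_lock() drops the given spinlock, schedules if
	 * needed, and re-takes the lock -- instead of open-coding the
	 * need_resched()/unlock/schedule/relock dance: */
	spin_lock(&queue->lock);
	while (have_work(queue)) {
		process_one_item(queue);
		cond_resched_lock(&queue->lock);
	}
	spin_unlock(&queue->lock);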

cond_resched() gets optimized away on a preemptible kernel. The side
effect of calling it unconditionally is merely that you always have a branch
(into cond_resched()), plus possibly the report of an urgently needed RCU
quiescent state.

> Regarding napi_schedule, I actually wasn't aware that its requirement
> to _only_ ever run from softirq was a strict one. When I switched to
> using napi_schedule in this way, throughput really jumped up
> significantly. Part of this indeed is from the batching, so that the
> napi callback can then handle more packets in one go later. But I
> assumed it was something inside of NAPI that was batching and
> scheduling it, rather than a mistake on my part to call this from a wq
> and not from a softirq.

There is no strict requirement to invoke napi_schedule() from hard-IRQ
context, but it actually makes sense there. napi_schedule() invokes
__raise_softirq_irqoff(), which only ORs a bit into the softirq state.
Nothing else. The only reason the softirq is then invoked in a
deterministic way is that irq_exit() has this "if
(local_softirq_pending()) invoke_softirq()" check before returning (to the
interrupted user/kernel code).

So if you use it in a worker (for instance), the NAPI call is delayed
until the next IRQ (due to the irq_exit() part) or until some random
local_bh_enable() user comes along.
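
For completeness, the two spots where that pending bit is actually acted on
look roughly like this (simplified sketch, not verbatim kernel code):

	/* 1) hard-IRQ exit path (irq_exit()): */
	if (!in_interrupt() && local_softirq_pending())
		invoke_softirq();	/* runs NET_RX_SOFTIRQ -> NAPI poll */

	/* 2) end of a BH-disabled section (local_bh_enable()): */
	if (unlikely(local_softirq_pending()))
		do_softirq();		/* same thing, synchronously, right here */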

> What, then, are the effects of surrounding that in
> local_bh_{disable,enable} as you've done in the patch? You mentioned
> one aspect is that it will "invoke wg_packet_rx_poll() where you see
> only one skb." It sounds like that'd be bad for performance, though,
> given that the design of napi is really geared toward batching.

As Toke Høiland-Jørgensen wrote in the previous reply, I missed the BH
disable/enable in ptr_ring_consume_bh(). So what happens is that
ptr_ring_consume_bh() gives you one skb, you do
wg_queue_enqueue_per_peer_rx() which raises NAPI, and then the following
ptr_ring_consume_bh() (its local_bh_enable(), to be exact) invokes the
NAPI callback (wg_packet_rx_poll() I guess, but as I wrote earlier, I
didn't figure out how the skbs move from here to the other queue for
that callback).

So there is probably no batching, assuming that only one skb is processed
per NAPI callback invocation.

> Jason

Sebastian

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Feature Request :: Configuration-Option "ospf = yes|no" on Multi-Peer-Interfaces
  2022-01-11 15:40   ` Sebastian Andrzej Siewior
@ 2022-02-10  0:21     ` markus
  0 siblings, 0 replies; 5+ messages in thread
From: markus @ 2022-02-10  0:21 UTC (permalink / raw)
  To: wireguard

Hi!

Basically, I understand that WireGuard does a kind of packet forwarding
(routing) via "allowed-ips". From the beginning I have always used a
dedicated interface with "..., 224.0.0.5/32" for each peer, and it works.
However, I believe an option is missing to allow 224.0.0.5 per peer on a
multi-peer interface; since it is multicast traffic, it shouldn't be a
problem, right? Alternatively, an option "ospf = yes|no" could decide
whether OSPF is carried to a given peer or not. This would allow even
greater flexibility between peer nodes with regard to dynamic routing.
Opinions on this?
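
For reference, this is roughly what I do today with one dedicated interface
per peer (all keys and addresses below are placeholders):

# wg-peer1.conf -- dedicated interface carrying OSPF to a single peer
[Interface]
PrivateKey = <this node's private key>
ListenPort = 51820

[Peer]
PublicKey = <peer's public key>
# 224.0.0.5 is the OSPF AllSPFRouters multicast group
AllowedIPs = 10.0.0.2/32, 224.0.0.5/32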

Best regards,
Markus


^ permalink raw reply	[flat|nested] 5+ messages in thread

end of thread, other threads:[~2022-02-10  0:21 UTC | newest]

Thread overview: 5+ messages
-- links below jump to the message on this page --
2021-12-08 17:32 [RFC] wireguard RX packet processing Sebastian Andrzej Siewior
2021-12-20 17:29 ` Jason A. Donenfeld
2022-01-05  0:14   ` Toke Høiland-Jørgensen
2022-01-11 15:40   ` Sebastian Andrzej Siewior
2022-02-10  0:21     ` Feature Request :: Configuration-Option "ospf = yes|no" on Multi-Peer-Interfaces markus
