Development discussion of WireGuard
* CPU round-robin and isolated cores
@ 2022-03-29 22:16 Charles-François Natali
From: Charles-François Natali @ 2022-03-29 22:16 UTC (permalink / raw)
  To: wireguard


We've run into an issue where wireguard doesn't play nice with
isolated cores (`isolcpus` kernel parameter).

Basically we use `isolcpus` to isolate cores and explicitly bind our
low-latency processes to those cores, in order to minimize latency due
to the kernel and userspace.
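
For context, our setup looks roughly like this (a sketch only — the
core numbers and the program name are made up, not our actual config):

```shell
# Kernel command line: take cores 4-7 out of the scheduler's general
# balancing so that only explicitly pinned tasks run there.
#   isolcpus=4-7

# Then pin a latency-sensitive process to one of the isolated cores:
taskset -c 4 ./lowlatency-app
```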

It worked great until we started using wireguard; in particular, the
problem seems to be due to the way work is allocated to the workqueues
that wireguard creates.

I'm not familiar with the wireguard code at all so I might be missing
something, but the RX path appears to use round-robin to dispatch
decryption work to all online CPUs, including isolated ones:

void wg_packet_receive(struct wg_device *wg, struct sk_buff *skb)

which ends up selecting the target CPU like this:

    /* Then we queue it up in the device queue, which consumes the
     * packet as soon as it can.
     */
    cpu = wg_cpumask_next_online(next_cpu);
    if (unlikely(ptr_ring_produce_bh(&device_queue->ring, skb)))
        return -EPIPE;
    queue_work_on(cpu, wq, &per_cpu_ptr(device_queue->worker, cpu)->work);
    return 0;

Where `wg_cpumask_next_online` is defined like this:

static inline int wg_cpumask_next_online(int *next)
{
    int cpu = *next;

    while (unlikely(!cpumask_test_cpu(cpu, cpu_online_mask)))
        cpu = cpumask_next(cpu, cpu_online_mask) % nr_cpumask_bits;
    *next = cpumask_next(cpu, cpu_online_mask) % nr_cpumask_bits;
    return cpu;
}

It's a problem for us because it causes significant latency. See e.g.
this ftrace output showing a kworker - bound to an isolated core -
spending over 240 usec inside wg_packet_decrypt_worker; we've seen much
higher, up to 500 usec or even more:

kworker/47:1-2373323 [047] 243644.756405: funcgraph_entry: | process_one_work() {
kworker/47:1-2373323 [047] 243644.756406: funcgraph_entry: |   wg_packet_decrypt_worker() {
kworker/47:1-2373323 [047] 243644.756647: funcgraph_exit:  0.591 us | }
kworker/47:1-2373323 [047] 243644.756647: funcgraph_exit:  ! 242.655 us | }

If it was for example a physical NIC, typically what we'd do would be
to set the IRQ affinity to avoid those isolated cores, which would also
keep the corresponding softirqs off those cores, avoiding such latency
spikes.
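
For reference, the NIC-style workaround is just an affinity-mask write
(a config sketch — the IRQ number 123 and the 0x0f mask for an 8-core
box with cores 4-7 isolated are made up; requires root, and irqbalance
may rewrite the value):

```shell
# Allow the (hypothetical) NIC queue IRQ 123 only on housekeeping
# cores 0-3, i.e. hex mask 0f:
echo 0f > /proc/irq/123/smp_affinity

# Equivalent, using a CPU list instead of a hex mask:
echo 0-3 > /proc/irq/123/smp_affinity_list
```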

However it seems that there's currently no way to tell wireguard to
avoid those cores.

I was wondering if it would make sense for wireguard to ignore
isolated cores in order to avoid this kind of issue. As far as I can
tell, it should just be a matter of replacing uses of `cpu_online_mask`
with `housekeeping_cpumask(HK_TYPE_DOMAIN)` or even
`housekeeping_cpumask(HK_TYPE_DOMAIN | HK_TYPE_WQ)`.

We could potentially run with a patched kernel but would very much
prefer using an upstream fix if that's acceptable.

Thanks in advance!

