Development discussion of WireGuard
From: "Toke Høiland-Jørgensen" <toke@toke.dk>
To: Daniel Golle <daniel@makrotopia.org>
Cc: "Jason A. Donenfeld" <Jason@zx2c4.com>,
	Florent Daigniere <nextgens@freenetproject.org>,
	WireGuard mailing list <wireguard@lists.zx2c4.com>
Subject: Re: passing-through TOS/DSCP marking
Date: Mon, 05 Jul 2021 18:59:10 +0200
Message-ID: <87v95oy9wh.fsf@toke.dk>
In-Reply-To: <YOMtyz6GJdjJ4xuy@makrotopia.org>

Daniel Golle <daniel@makrotopia.org> writes:

> On Mon, Jul 05, 2021 at 05:21:25PM +0200, Toke Høiland-Jørgensen wrote:
>> Daniel Golle <daniel@makrotopia.org> writes:
>> ...
>> > I have managed to test your solution and it seems to do the job.
>> > Remaining issues:
>> >  * What to do if there are many tunnels all sharing the same upstream
>> >    interface? In this case I'm thinking of doing:
>> >    preserve-dscp wg0 eth0
>> >    preserve-dscp wg1 eth0
>> >    preserve-dscp wg2 eth0
>> >    ...
>> >    But I'm unsure whether this is intended or if further details need
>> >    to be implemented in order to make that work.
>> 
>> Hmm, not sure whether that will work out of the box, actually. Would
>> definitely be doable to make the userspace utility understand how to do
>> this properly, though. There's nothing in principle preventing this from
>> working; the loader should just be smart enough to do incremental
>> loading of multiple "ingress" programs while still sharing the map
>> between all of them.
>
> You make it at least sound easy :)

I'd say the implementation is relatively straightforward for anyone
familiar with how BPF works; figuring out how it's *supposed* to work is
the hard bit ;)
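
To make the incremental loading a bit more concrete, the idea is
roughly the following (an untested sketch; the map name and pin path
are made up, so the actual preserve-dscp utility will differ in
detail):

  #include <bpf/libbpf.h>
  #include <bpf/bpf.h>

  /* Load one more ingress program while sharing a single map between
   * all loaded instances: the first loader pins the map, every later
   * loader reuses the pinned fd before loading its program. */
  int load_shared(const char *obj_file)
  {
      static const char *pin = "/sys/fs/bpf/dscp_hash"; /* made-up path */
      struct bpf_object *obj;
      struct bpf_map *map;
      int map_fd;

      obj = bpf_object__open_file(obj_file, NULL);
      if (!obj)
          return -1;

      map = bpf_object__find_map_by_name(obj, "dscp_hash"); /* made-up name */
      if (!map)
          goto err;

      map_fd = bpf_obj_get(pin); /* already pinned by an earlier load? */
      if (map_fd >= 0 && bpf_map__reuse_fd(map, map_fd) < 0)
          goto err;

      if (bpf_object__load(obj) < 0)
          goto err;

      if (map_fd < 0 && bpf_map__pin(map, pin) < 0)
          goto err; /* first load: pin the map for subsequent loads */

      return 0; /* the tc attach itself happens separately */
  err:
      bpf_object__close(obj);
      return -1;
  }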

>> The only potential operational issue with using it on multiple wg
>> interfaces is if they share IP space; because in that case you might
>> have packets from different tunnels ending up with identical hashes,
>> confusing the egress side. Fixing this would require the outer BPF
>> program to know about wg endpoint addresses and map the packets back to
>> their inner ifindexes using that. But as long as the wireguard tunnels
>> are using different IP subnets (or mostly forwarding traffic without the
>> inner addresses as sources or destinations), the hash collision
>> probability should be no higher than for traffic on a single tunnel, I
>> suppose.
>> 
>> One particular thing to watch out for here is IPv6 link-local traffic;
>> since wg doesn't generate link-local addresses automatically, they are
>> commonly configured with (the same) static address (like fe80::1 or
>> fe80::2), which would make link-local traffic identical across wg
>> interfaces. But this is only used for particular setups (I use it for
>> running Babel over wg, for instance), so just make sure it won't be an
>> issue for your deployment scenario :)
>
> All this is good to know, but from what I can see now shouldn't be
> a problem in our deployment -- it's multiple wireguard links which are
> (using fwmark and ip rules) routed over several uplinks. We then use
> mwan3 to balance most of the gateway traffic across the available
> wireguard interfaces, using MASQ/SNAT on each tunnel which has a
> unique transfer network assigned, and no IPv6 at all.
> Hence it should be ok to go under the restrictions you described.

Alright, so the wireguard-to-physical interface mapping is always many-to-one?
I.e., each wireguard interface is always routed out the same physical
interface, but there may be multiple wg interfaces sharing the same
uplink?

I'm asking because in that case it does make sense to keep separate
instances of the whole setup per physical interface to limit hash
collisions; otherwise, the lookup table could also be made global and
shared between all physical interfaces, so you'd avoid having to specify
the relationship explicitly...
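
For context, the lookup table is conceptually just a map keyed by the
skb hash, which wireguard carries over from the inner packet to the
encrypted one. A very simplified sketch of the recording side (header
parsing omitted, names made up; not the actual program):

  #include <linux/bpf.h>
  #include <linux/pkt_cls.h>
  #include <bpf/bpf_helpers.h>

  /* skb->hash of the flow -> DSCP value to restore on the other side.
   * Two tunnels carrying identical inner headers produce identical
   * keys, which is exactly the collision case described above. */
  struct {
      __uint(type, BPF_MAP_TYPE_LRU_HASH);
      __uint(max_entries, 16384);
      __type(key, __u32);
      __type(value, __u8);
  } dscp_hash SEC(".maps");

  SEC("tc")
  int record_dscp(struct __sk_buff *skb)
  {
      __u32 hash = bpf_get_hash_recirc(skb);
      __u8 dscp = 0;

      /* ... parse the IP header and extract the DSCP field here ... */

      bpf_map_update_elem(&dscp_hash, &hash, &dscp, BPF_ANY);
      return TC_ACT_OK;
  }

  char _license[] SEC("license") = "GPL";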

>> >  * Once a wireguard interface goes down, one cannot unload the
>> >    remaining program on the upstream interface, as
>> >    preserve-dscp wg0 eth0 --unload
>> >    would fail in case of 'wg0' having gone missing.
>> >    What do you suggest to do in this case?
>> 
>> Just fixing the userspace utility to deal with this case properly as
>> well is probably the easiest. How are you thinking you'd deploy this?
>> Via ifup hooks on openwrt, or something different?
>
> Yes, I use ifup hooks configured in an init script for procd and have
> it tied to the wireguard config sections in /etc/config/network:
>
> https://git.openwrt.org/?p=openwrt/staging/dangole.git;a=blob;f=package/network/utils/bpf-examples/files/wireguard-preserve-dscp.init;h=f1e5e25e663308e057285e2bd8e3bcb9560bdd54;hb=5923a78d74be3f05e734b0be0a832a87be8d369b#l56
>
> Passing multiple inner interfaces to one call to the to-be-modified
> preserve-dscp tool could be achieved by some shell magic dealing with
> the configuration...

Not necessary: it's perfectly fine to attach them one at a time.

> We will have to restart the filter for all inner interfaces in case of
> one being added or removed, right?

Nope, that's not necessary either. We can just re-attach the same filter
program to each additional interface.
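
Roughly like this, assuming the loader pins the program somewhere on
first load (pin path made up, using libbpf's tc attach API):

  #include <errno.h>
  #include <net/if.h>
  #include <bpf/libbpf.h>
  #include <bpf/bpf.h>

  /* Attach the already-loaded (pinned) filter program to one more
   * inner interface, without reloading anything. */
  int attach_one_more(const char *ifname)
  {
      DECLARE_LIBBPF_OPTS(bpf_tc_hook, hook,
                          .ifindex = if_nametoindex(ifname),
                          .attach_point = BPF_TC_INGRESS);
      DECLARE_LIBBPF_OPTS(bpf_tc_opts, opts);
      int err;

      opts.prog_fd = bpf_obj_get("/sys/fs/bpf/dscp_prog"); /* made-up path */
      if (opts.prog_fd < 0)
          return -1;

      err = bpf_tc_hook_create(&hook); /* clsact may already exist */
      if (err && err != -EEXIST)
          return err;

      return bpf_tc_attach(&hook, &opts);
  }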

> And maybe I'll come up with some state tracking so orphaned filters can
> be removed after configuration changes...

The userspace loader could be made to detect this and automatically
clean up the program on the physical interface after the last internal
interface goes away. At least as long as we can rely on an ifdown hook
this will be fairly straightforward (it just requires a lock to avoid
races). Detecting it after interfaces are automatically removed from the
kernel is a bit more cumbersome, as it would require some way to trigger
the garbage collection.
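
The state tracking doesn't have to be fancy, by the way; a
per-physical-interface counter file guarded by flock() would be enough
for the ifup/ifdown hooks. A sketch (file format and semantics made
up):

  #include <stdio.h>
  #include <fcntl.h>
  #include <unistd.h>
  #include <sys/file.h>

  /* Called from the ifdown hook: decrement the number of inner
   * interfaces using the physical one. Returns the new count; when it
   * hits zero the caller detaches the outer program. flock() keeps
   * concurrent hook invocations from racing. */
  int release_ref(const char *statefile)
  {
      int fd, count = 0;
      FILE *f;

      fd = open(statefile, O_RDWR | O_CREAT, 0600);
      if (fd < 0)
          return -1;
      if (flock(fd, LOCK_EX) < 0) {
          close(fd);
          return -1;
      }

      f = fdopen(fd, "r+");
      if (!f) {
          close(fd);
          return -1;
      }
      if (fscanf(f, "%d", &count) == 1 && count > 0)
          count--;

      if (ftruncate(fd, 0) == 0) { /* rewrite the counter */
          rewind(f);
          fprintf(f, "%d\n", count);
      }

      fclose(f); /* releases the lock */
      return count; /* 0 => detach from the physical interface */
  }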

-Toke

Thread overview: 22+ messages
2021-06-16 13:24 Daniel Golle
2021-06-16 16:28 ` Jason A. Donenfeld
2021-06-16 19:26   ` Daniel Golle
2021-06-16 23:33     ` Toke Høiland-Jørgensen
2021-06-17  7:55       ` Florent Daigniere
2021-06-17  9:41         ` Daniel Golle
2021-06-17 12:24           ` Toke Høiland-Jørgensen
     [not found]             ` <CAMaqUZ09KRtp01OK3u-Di52X_kH9eT4E-wmnPc6QzjSCd5dEiw@mail.gmail.com>
2021-06-17 20:54               ` Toke Høiland-Jørgensen
2021-06-17 23:04             ` Toke Høiland-Jørgensen
2021-06-18 12:24               ` Jason A. Donenfeld
2021-06-21 12:36                 ` Daniel Golle
2021-06-21 14:27                   ` Toke Høiland-Jørgensen
2021-06-30 17:23                     ` Daniel Golle
2021-06-30 20:55                       ` Toke Høiland-Jørgensen
2021-07-04 14:15                         ` Daniel Golle
2021-07-05 15:21                           ` Toke Høiland-Jørgensen
2021-07-05 16:05                             ` Daniel Golle
2021-07-05 16:59                               ` Toke Høiland-Jørgensen [this message]
2021-07-05 17:26                                 ` Daniel Golle
2021-07-05 21:20                                   ` Toke Høiland-Jørgensen
2021-07-06  7:00   ` Florent Daigniere
2021-07-06 20:08     ` Luiz Angelo Daros de Luca
