From: David Schinazi
Date: Thu, 7 Mar 2024 17:30:06 -0800
To: Rich Felker
Cc: musl@lists.openwall.com
Subject: Re: [musl] mDNS in musl

On Thu, Mar 7, 2024 at 4:08 PM Rich Felker <dalias@libc.org> wrote:
> On Thu, Mar 07, 2024 at 02:50:53PM -0800, David Schinazi wrote:
> > On Wed, Mar 6, 2024 at 6:42 PM Rich Felker <dalias@libc.org> wrote:
> > >
> > > On Wed, Mar 06, 2024 at 04:17:44PM -0800, David Schinazi wrote:
> > > > As Jeffrey points out, when the IETF decided to standardize
> > > > mDNS, they published it (RFC 6762) at the same time as the
> > > > Special-Use Domain Registry (RFC 6761), which created a process
> > > > for reserving domain names for custom purposes, and ".local"
> > > > was one of the initial entries into that registry.
> > > > The UTF-8 vs punycode issue when it comes to mDNS and DNS is
> > > > somewhat of a mess. It was discussed in Section 16 of RFC 6762,
> > > > but at the end of the day punycode won. Even Apple's
> > > > implementation of getaddrinfo will perform punycode conversion
> > > > for .local instead of sending the UTF-8. So in practice you
> > > > wouldn't need to special-case anything here.
> > >
> > > OK, these are both really good news!
> > >
> > > > > There's also very much a policy matter of what "locally over
> > > > > multicast" means (what the user wants it to mean). Which
> > > > > interfaces should be queried? Wired and wireless ethernet?
> > > > > VPN links or other sorts of tunnels? Just one local interface
> > > > > (which one to prioritize) or all of them? Only if the network
> > > > > is "trusted"? Etc.
> > > >
> > > > You're absolutely right. Most mDNS systems try all non-loopback,
> > > > non-p2p, multicast-supporting interfaces, but sending to the
> > > > default route interface would be a good start; more on that
> > > > below.
> > >
> > > This is really one thing that suggests a need for configurability
> > > outside of what libc might be able to offer. With normal DNS
> > > lookups, they're something you can block off and prevent from
> > > going to the network at all by policy (and in fact they don't go
> > > past the loopback by default, in the absence of a resolv.conf
> > > file). Adding mDNS that's on-by-default and not configurable
> > > would make a vector for network traffic being generated that's
> > > probably not expected and that could be a privacy leak.
> >
> > Totally agree. I was thinking through this both in terms of RFCs
> > and in terms of minimal code changes, and had a potential idea.
> > Conceptually, sending DNS to localhost is musl's IPC mechanism to a
> > more feature-rich resolver running in user-space.
> > So when that's happening, we don't want to mess with it because
> > that could cause a privacy leak. Conversely, when there's a
> > non-loopback IP configured in resolv.conf, then musl acts as a DNS
> > stub resolver and the server in resolv.conf acts as a DNS recursive
> > resolver. In that scenario, sending the .local query over DNS to
> > that other host violates the RFCs. This lets us treat the
> > configured resolver address as an implicit configuration mechanism
> > that selectively enables this without impacting anyone doing their
> > own DNS locally.
>
> This sounds like an odd overloading of one thing to have a very
> different meaning, and would break builtin mDNS for anyone doing
> DNSSEC right (which requires a validating nameserver on localhost).
> Inventing a knob that's an overload of an existing knob is still
> inventing a knob, just worse.

Sorry, I was suggesting the other way around: to only enable the mDNS
mode if resolver != 127.0.0.1. But on the topic of DNSSEC, that doesn't
really make sense in the context of mDNS because the names aren't
globally unique and signed. In theory you could exchange DNSSEC keys
out of band and use DNSSEC with mDNS, but I've never heard of anyone
doing that. At that point people exchange TLS certificates out of band
and use mTLS. But overall I can't argue that overloading configs to
mean multiple things is janky :-)

> > > > > When you do that, how do you control which interface(s) it
> > > > > goes over? I think that's an important missing ingredient.
> > > >
> > > > You're absolutely right. In IPv4, sending to a link-local
> > > > multicast address like this will send it over the IPv4 default
> > > > route interface. In IPv6, the interface needs to be specified
> > > > in the scope_id. So we'd need to pull that out of the kernel
> > > > with rtnetlink.
> > > There's already code to enumerate interfaces, but it's a decent
> > > bit of additional machinery to pull in as a dep for the stub
> > > resolver,
> >
> > Yeah, we'd need lookup_name.c to include netlink.h - it's not huge
> > though; netlink.c is 50 lines long and statically linked anyway,
> > right?
>
> I was thinking in terms of using if_nameindex or something, but
> indeed that's not desirable because it's allocating. So it looks like
> it wouldn't share code but use netlink.c directly if it were done
> this way.
>
> BTW if there's a legacy ioctl that tells you the number of interfaces
> (scope_ids), it seems like you could just iterate over the whole
> numeric range without actually doing netlink enumeration.

That would also work. The main limitation I was working around was
that you can only pass around MAXNS (3) name servers without making
more changes.

> > > and it's not clear how to do it properly for IPv4 (do scope ids
> > > work with v4-mapped addresses by any chance?)
> >
> > Scope IDs unfortunately don't work for IPv4. There's the
> > SO_BINDTODEVICE socket option, but that requires elevated
> > privileges. For IPv4 I'd just use the default route interface.
>
> But the default route interface is almost surely *not* the LAN where
> you expect .local things to live, except in the case where there is
> only one interface. If you have a network that's segmented into
> separate LAN and outgoing interfaces, the LAN, not the route to the
> public internet, is where you would want mDNS going.

In the case of a router, definitely. In the case of most end hosts or
VMs, though, they often have only one or two routable interfaces, and
the default route is also the LAN.

> With that said, SO_BINDTODEVICE is not the standard way to do this,
> and the correct/standard way doesn't need root.
> What it does need is binding to the local address on each device,
> which is still rather undesirable because it means you need N sockets
> for N interfaces, rather than one socket that can send/receive all
> addresses.

Oh, you're absolutely right - I knew there was a non-privileged way to
do this but couldn't remember it earlier.

This is giving me an idea, though: we could use the "connect a UDP
socket to get a route lookup" trick. Say we're configured with a
nameserver that's not 127.0.0.1 (which is the case where I'd like to
enable this) - say it's set to 192.0.2.33. Today, foobar.local would
be sent to 192.0.2.33 over whichever interface has a route to it (in
most cases the default interface, but not always). We could open an
AF_INET/SOCK_DGRAM socket, connect it to 192.0.2.33:53, use
getsockname to get the local address, and then close that socket. We
can then create a new socket and bind it to that local address. That
would ensure we send the mDNS traffic on the same interface where we
would have sent the unicast query. The downside is that since all
queries share the same socket, we'd bind everything to the interface
of the first resolver, or need multiple sockets.

> You answered for v4, but are you sure scope ids don't work for
> *v4-mapped*? That is, IPv6 addresses of the form
> ::ffff:aaa.bbb.ccc.ddd. I guess I could check this. I'm not very
> hopeful, but it would be excellent if this worked to send v4
> multicast to a particular interface.

Huh, I hadn't thought of that - worth a try? RFC 4007 doesn't really
allow using scope IDs for globally routable addresses, but I'm not
sure if Linux does.

> > > Another issue you haven't mentioned: how does TCP fallback work
> > > with mDNS? Or are answers too large for standard UDP replies
> > > just illegal?
> >
> > Good point, I hadn't thought of that. That handling for mDNS is
> > defined in [1].
> > In the ephemeral query mode that we'd use here, it works the same
> > as for regular DNS: when you receive a response with the TC bit,
> > retry the query with TCP. The slight difference is that you send
> > the TCP to the address you got the response from (not to the
> > multicast address that you sent the original query to). From
> > looking at the musl code, we'd need a small tweak to
> > __res_msend_rc() to use that address. Luckily that code already
> > looks at the sender address, so we don't need any additional calls
> > to get it.
>
> Yes, that's what I figured you might do. I guess that works
> reasonably well.
>
> > > > > Reason for that is that that is the most generic way to
> > > > > support any other name service besides DNS. It avoids the
> > > > > dependency on dynamic loading that something like glibc's
> > > > > nsswitch would create, and would avoid having multiple
> > > > > backends in libc. I really don't think anyone wants to open
> > > > > that particular door. Once mDNS is in there, someone will add
> > > > > NetBIOS, just you wait.
> > > >
> > > > I'm definitely supportive of the slippery slope argument, but
> > > > I think there's still a real line between mDNS and NetBIOS.
> > > > mDNS uses a different transport but lives inside the DNS
> > > > namespace, whereas NetBIOS is really its own thing - NetBIOS
> > > > names aren't valid DNS hostnames.
> > > >
> > > > Let me know what you think of the above. If you think of mDNS
> > > > as its own beast, then I can see how including it wouldn't
> > > > really make sense. But if you see it as an actual part of the
> > > > DNS, then it might be worth a small code change :-)
> > >
> > > I'm not worried about slippery slopes to NetBIOS.
> > > :-P I am concerned about unwanted network traffic that can't be
> > > suppressed, privacy leaks, inventing new configuration knobs,
> > > potentially pulling in more code & more fragility, getting stuck
> > > supporting something that turns out to have hidden problems we
> > > haven't thought about, etc.
> >
> > Those are great reasons, and I totally agree with those goals. If
> > we scope the problem down with the details higher up in this email,
> > we have a way to turn this off (set the resolver to localhost), we
> > avoid privacy leaks in cases where the traffic wasn't going out in
> > the first place, we don't have to add more configuration knobs
> > because we're reusing an existing one, and
>
> As mentioned above, I don't think "reusing an existing one" is an
> improvement.

Fair, my goal was minimizing change size, but that's not the only goal.

> > the amount of added code would be quite small. Limiting things to
> > the default interface isn't a full multi-network solution, but for
> > those I think it makes more sense to recommend running your own
> > resolver on loopback (you'd need elevated privileges to make this
> > work fully anyway). Coding wise, I think this would be pretty
> > robust. The only breakage I foresee is cases where someone built a
> > custom resolver that runs on a different machine and somehow
> > handles .local differently than what the RFCs say. That config
> > sounds like a bad idea, and a violation of the RFCs, but that
> > doesn't mean there isn't someone somewhere who's doing it. So
> > there's a non-zero risk there. But to me that's manageable risk.
> >
> > What do you think?
>
> I think a more reasonable approach might be requiring an explicit
> knob to enable mDNS, in the form of an options field like ndots,
> timeout, retries, etc. in resolv.conf. This ensures that it doesn't
> become attack surface/change-of-behavior in network environments
> where peers are not supposed to be able to define network names.
That would work. I'm not sure who maintains the list of options,
though. From a quick search it looks like they came out of 4.3BSD like
many networking features, but it's unclear if POSIX owns it or just no
one does (which would be the same; POSIX is not around as a standards
body any more).

> One further advantage of such an approach is that it could also
> solve the "which interface(s)" problem by letting the answer just be
> "whichever one(s) the user configured" (with the default list being
> empty). That way we wouldn't even need netlink, just if_nametoindex
> to convert interface name strings to scope ids, or alternatively
> (does this work for v6 in the absence of an explicit scope_id?) one
> or more local addresses to bind and send from.

I definitely would avoid putting local addresses in the config,
because it would break for any non-static addresses like DHCP or v6
RAs. The interface name would require walking the getifaddrs list to
map it to a corresponding source address, but it would work if the
interface name is stable.

I guess we're looking at two ways to go about this:

(1) the simpler but less clean option - key off of
"resolver != 127.0.0.1" - a very limited code size change, but it only
handles a small subset of scenarios

(2) the cleaner option that involves more work - a new config option
and multiple sockets - cleaner design-wise, but it would change quite
a bit more code

Another aspect to consider is that in a lot of cases resolv.conf is
overwritten by various components like NetworkManager, so we'd need to
modify them to also understand the option.

I'm always in favor of doing the right thing, unless the right thing
ends up being so much effort that it doesn't happen. Then I'm a fan of
doing the easy thing ;-)

David