Date: Thu, 7 Mar 2024 19:08:19 -0500
From: Rich Felker
To: David Schinazi
Cc: musl@lists.openwall.com
Subject: Re: [musl] mDNS in musl
Message-ID: <20240308000818.GJ4163@brightrain.aerifal.cx>

On Thu, Mar 07, 2024 at 02:50:53PM -0800, David Schinazi wrote:
> On Wed, Mar 6, 2024 at 6:42 PM Rich Felker wrote:
> > On Wed, Mar 06, 2024 at 04:17:44PM -0800, David Schinazi wrote:
> > > As Jeffrey points out, when the IETF decided to standardize mDNS,
> > > they published it (RFC 6762) at the same time as the Special-Use
> > > Domain Registry (RFC 6761), which created a process for reserving
> > > domain names for custom purposes; ".local" was one of the initial
> > > entries in that registry. The UTF-8 vs punycode issue when it
> > > comes to mDNS and DNS is somewhat of a mess.
> > > It was discussed in Section 16 of RFC 6762, but at the end of
> > > the day punycode won. Even Apple's implementation of getaddrinfo
> > > will perform punycode conversion for .local instead of sending
> > > the UTF-8. So in practice you wouldn't need to special-case
> > > anything here.
> >
> > OK, these are both really good news!
> >
> > > > There's also very much a policy matter of what "locally over
> > > > multicast" means (what the user wants it to mean). Which
> > > > interfaces should be queried? Wired and wireless ethernet? VPN
> > > > links or other sorts of tunnels? Just one local interface
> > > > (which one to prioritize) or all of them? Only if the network
> > > > is "trusted"? Etc.
> > >
> > > You're absolutely right. Most mDNS systems try all non-loopback,
> > > non-p2p, multicast-supporting interfaces, but sending to the
> > > default route interface would be a good start; more on that
> > > below.
> >
> > This is really one thing that suggests a need for configurability
> > outside of what libc might be able to offer. With normal DNS
> > lookups, they're something you can block off and prevent from
> > going to the network at all by policy (and in fact they don't go
> > past the loopback by default, in the absence of a resolv.conf
> > file). Adding mDNS that's on-by-default and not configurable would
> > make a vector for network traffic being generated that's probably
> > not expected and that could be a privacy leak.
>
> Totally agree. I was thinking through this both in terms of RFCs and
> in terms of minimal code changes, and had a potential idea.
> Conceptually, sending DNS to localhost is musl's IPC mechanism to a
> more feature-rich resolver running in user space. So when that's
> happening, we don't want to mess with it, because that could cause a
> privacy leak. Conversely, when there's a non-loopback IP configured
> in resolv.conf, musl acts as a DNS stub resolver and the server in
> resolv.conf acts as a DNS recursive resolver.
> In that scenario, sending the .local query over DNS to that other
> host violates the RFCs. This lets us treat the configured resolver
> address as an implicit configuration mechanism that selectively
> enables this without impacting anyone doing their own DNS locally.

This sounds like an odd overloading of one thing to have a very
different meaning, and would break builtin mDNS for anyone doing
DNSSEC right (which requires a validating nameserver on localhost).
Inventing a knob that's an overload of an existing knob is still
inventing a knob, just worse.

> > > > When you do that, how do you control which interface(s) it
> > > > goes over? I think that's an important missing ingredient.
> > >
> > > You're absolutely right. In IPv4, sending to a link-local
> > > multicast address like this will send it over the IPv4 default
> > > route interface. In IPv6, the interface needs to be specified in
> > > the scope_id. So we'd need to pull that out of the kernel with
> > > rtnetlink.
> >
> > There's already code to enumerate interfaces, but it's a decent
> > bit of additional machinery to pull in as a dep for the stub
> > resolver,
>
> Yeah, we'd need lookup_name.c to include netlink.h - it's not huge
> though; netlink.c is 50 lines long and statically linked anyway,
> right?

I was thinking in terms of using if_nameindex or something, but
indeed that's not desirable because it's allocating. So it looks like
it wouldn't share code but would use netlink.c directly if it were
done this way.

BTW, if there's a legacy ioctl that tells you the number of
interfaces (scope_ids), it seems like you could just iterate over the
whole numeric range without actually doing netlink enumeration.

> > and it's not clear how to do it properly for IPv4 (do scope ids
> > work with v4-mapped addresses by any chance?)
>
> Scope IDs unfortunately don't work for IPv4. There's the
> SO_BINDTODEVICE socket option, but that requires elevated
> privileges.
> For IPv4 I'd just use the default route interface.

But the default route interface is almost surely *not* the LAN where
you expect .local things to live, except in the case where there is
only one interface. If you have a network that's segmented into
separate LAN and outgoing interfaces, the LAN, not the route to the
public internet, is where you would want mDNS going.

With that said, SO_BINDTODEVICE is not the standard way to do this,
and the correct/standard way doesn't need root. What it does need is
binding to the local address on each device, which is still rather
undesirable because it means you need N sockets for N interfaces,
rather than one socket that can send/receive on all addresses.

You answered for v4, but are you sure scope ids don't work for
*v4-mapped* addresses? That is, IPv6 addresses of the form
::ffff:aaa.bbb.ccc.ddd. I guess I could check this. I'm not very
hopeful, but it would be excellent if this worked to send v4
multicast to a particular interface.

> > Another issue you haven't mentioned: how does TCP fallback work
> > with mDNS? Or are answers too large for standard UDP replies just
> > illegal?
>
> Good point, I hadn't thought of that. The handling for mDNS is
> defined in [1]. In the ephemeral query mode that we'd use here, it
> works the same as for regular DNS: when you receive a response with
> the TC bit, retry the query over TCP. The slight difference is that
> you send the TCP query to the address you got the response from (not
> to the multicast address that you sent the original query to). From
> looking at the musl code, we'd need a small tweak to
> __res_msend_rc() to use that address. Luckily that code already
> looks at the sender address, so we don't need any additional calls
> to get it.

Yes, that's what I figured you might do. I guess that works
reasonably well.

> > > > Reason for that is that it is the most generic way to support
> > > > any other name service besides DNS.
> > > > It avoids the dependency on dynamic loading that something
> > > > like glibc's nsswitch would create, and would avoid having
> > > > multiple backends in libc. I really don't think anyone wants
> > > > to open that particular door. Once mDNS is in there, someone
> > > > will add NetBIOS, just you wait.
> > >
> > > I'm definitely supportive of the slippery slope argument, but I
> > > think there's still a real line between mDNS and NetBIOS. mDNS
> > > uses a different transport but lives inside the DNS namespace,
> > > whereas NetBIOS is really its own thing - NetBIOS names aren't
> > > valid DNS hostnames.
> > >
> > > Let me know what you think of the above. If you think of mDNS as
> > > its own beast, then I can see how including it wouldn't really
> > > make sense. But if you see it as an actual part of the DNS, then
> > > it might be worth a small code change :-)
> >
> > I'm not worried about slippery slopes to NetBIOS. :-P I am
> > concerned about unwanted network traffic that can't be suppressed,
> > privacy leaks, inventing new configuration knobs, potentially
> > pulling in more code & more fragility, getting stuck supporting
> > something that turns out to have hidden problems we haven't
> > thought about, etc.
>
> Those are great reasons, and I totally agree with those goals. If we
> scope the problem down with the details higher up in this email, we
> have a way to turn this off (set the resolver to localhost), we
> avoid privacy leaks in cases where the traffic wasn't going out in
> the first place, and we don't have to add more configuration knobs
> because we're reusing an existing one.

As mentioned above, I don't think "reusing an existing one" is an
improvement.

> The amount of added code would be quite small. Limiting things to
> the default interface isn't a full multi-network solution, but for
> those cases I think it makes more sense to recommend running your
> own resolver on loopback (you'd need elevated privileges to make
> this work fully anyway).
> Coding-wise, I think this would be pretty robust. The only breakage
> I foresee is cases where someone built a custom resolver that runs
> on a different machine and somehow handles .local differently than
> what the RFCs say. That config sounds like a bad idea, and a
> violation of the RFCs, but that doesn't mean there isn't someone
> somewhere who's doing it. So there's a non-zero risk there. But to
> me that's a manageable risk.
>
> What do you think?

I think a more reasonable approach might be requiring an explicit
knob to enable mDNS, in the form of an options field like ndots,
timeout, retries, etc. in resolv.conf. This ensures that it doesn't
become attack surface/change-of-behavior in network environments
where peers are not supposed to be able to define network names.

One further advantage of such an approach is that it could also solve
the "which interface(s)" problem by letting the answer just be
"whichever one(s) the user configured" (with the default list being
empty). That way we wouldn't even need netlink, just if_nametoindex
to convert interface name strings to scope ids, or alternatively
(does this work for v6 in the absence of an explicit scope_id?) one
or more local addresses to bind and send from.

Rich