That's not quite right: the security properties of mDNS and unicast DNS are the same. DNS is inherently insecure, regardless of unicast vs multicast. If I'm on a coffee shop Wi-Fi, all my DNS queries go in the clear to whatever IP address the DHCP server handed me, so the stack has to assume that any DNS response can be spoofed.

The most widely deployed solution is TLS: a successful DNS hijack can prevent you from reaching a TLS service, but it can't impersonate it. That's true of both mDNS and regular unicast DNS. As an example, all Apple devices have mDNS enabled on all interfaces with no security impact - the features that rely on it (AirDrop, AirPlay, contact sharing, etc.) all use mTLS to ensure they're talking to the right device regardless of whether DNS told the truth. (Printing remains completely insecure, but that's also independent of DNS - your coffee shop Wi-Fi access point can attack you at the IP layer too.)

One might think DNSSEC could save us here, but it doesn't. DNSSEC was unfortunately built with a fundamental design flaw: it requires you to trust every resolver on the path, including recursive resolvers. So even if you ask for DNSSEC validation of the records for www.example.com, your coffee shop's recursive resolver can tell you "I checked, and example.com does not support DNSSEC - here's the IP address for www.example.com though," and you have to accept it.

The world has accepted that DNS can be spoofed, deployed TLS in applications, and moved on. The important remaining bit was improving the privacy of DNS, and that was addressed by DNS-over-TLS, DNS-over-HTTPS, and DNS-over-QUIC - none of which provide end-to-end DNS integrity checking, but all of which encrypt your DNS packets between the client and a trusted recursive resolver.

Going back to your example, http://mything.local is only safe on trusted networks regardless of any changes to the DNS stack, because the coffee shop router can inspect and modify your HTTP request and response. The issue there is that unencrypted HTTP is insecure, not the DNS.
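To make the TLS point concrete, here's a minimal Python sketch (www.example.com is just a placeholder): the stdlib's default client context requires a valid certificate chain and checks the hostname you asked for, so an attacker who controls DNS can only steer you to a server that will fail verification.

```python
import ssl

# The default client-side TLS context from the Python stdlib.
ctx = ssl.create_default_context()

# The certificate chain must validate against the trust store...
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
# ...and the certificate must match the hostname we intended to reach.
print(ctx.check_hostname)                    # True

# A DNS spoofer can point "www.example.com" at any IP it likes, but a
# handshake like the one below fails unless the server presents a cert
# for www.example.com signed by a trusted CA - the hijack becomes denial
# of service, not impersonation. (Not executed here; illustration only.)
#
# with socket.create_connection(("www.example.com", 443)) as sock:
#     with ctx.wrap_socket(sock, server_hostname="www.example.com") as tls:
#         ...
```

The same reasoning covers mDNS names: trust comes from the certificate and the name you dialed, not from whichever resolver answered.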
And that's totally fair. If your argument is that "defaults should reflect the previous behavior", then I can't argue. That said, I disagree that this is measurably safer.
Yeah, caching negative results in DNS has been tricky from the start. You could probably hack something together by installing a fake SOA record for .local. in a recursive resolver running on localhost. But the RFC-compliant answer (RFC 6762) is for stub resolvers to treat .local specially and know that those queries often never get an answer. (musl doesn't cache DNS results, so in a way we avoid this problem altogether at the stub resolver.)
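For what it's worth, the localhost-resolver hack might look roughly like this with unbound - a sketch, not something I've tested, and the SOA values are made up:

```
# unbound.conf - hypothetical snippet
server:
    # Claim authority over .local so these queries never leave this box;
    # anything not in local-data gets NXDOMAIN.
    local-zone: "local." static
    # Supply an SOA so clients can negative-cache the NXDOMAINs
    # (RFC 2308 derives the negative TTL from the SOA).
    local-data: "local. 10800 IN SOA localhost. nobody.invalid. 1 3600 1200 604800 10800"
```

Which of course only helps clients whose stub resolvers actually honor negative caching - hence the appeal of just special-casing .local in the stub.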