mailing list of musl libc
From: Hamish Forbes <>
To: Rich Felker <>
Subject: Re: [musl] DNS answer buffer is too small
Date: Wed, 5 Jul 2023 12:18:11 +1200	[thread overview]
Message-ID: <>
In-Reply-To: <>

On Wed, 5 Jul 2023 at 04:06, Rich Felker <> wrote:
> On Tue, Jul 04, 2023 at 04:19:16PM +1200, Hamish Forbes wrote:
> > On Tue, 4 Jul 2023 at 15:29, Rich Felker <> wrote:
> It's as safe as it was before, and it's always been the intended
> behavior. Aside from the CNAME chain issue which wasn't even realized
> at the time, use of TCP for getaddrinfo is not about getting more
> answers than fit in the UDP response size. It's about handling the
> case where the recursive server returns a truncated response with zero
> answer records instead of the max number that fit. It turned out this
> could also occur with a single CNAME where both the queried name and
> the CNAME target take up nearly the full 255 length.
>
> As for why not to care about more results, getaddrinfo does not
> provide a precise view of DNS space. It takes a hostname and gives you
> a set of addresses you can use to attempt to connect to that host (or
> bind if that name is your own, etc.). There's very little utility in
> timing out more than 47 times then continuing to try more addresses
> rather than just failing. "Our name resolves to 100 addresses and you
> have to try all of them to find the one that works" is not a viable
> configuration. (A lot of software does not even iterate and try
> fallbacks at all, but only attempts to use the first one, typically
> round-robin rotated by the nameserver.)
>
> Anyway, if there are objections to this behavior, it's a completely
> separate issue from handling long CNAME chains.

Ah yeah, OK, that makes sense.
I wasn't thinking about it as "we just need any address".
No objections to that from me!
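For context on the truncated-UDP case Rich describes above: truncation is signaled by the TC bit in the DNS header flags (RFC 1035 §4.1.1). A minimal sketch of the check, in plain C — this is not musl's actual code, and the helper name is mine:

```c
#include <stddef.h>

/* A DNS message starts with a 12-byte header; the TC (truncation)
 * bit is 0x02 of the third byte. A recursive server that cannot fit
 * the answer in the UDP response may set TC with zero answer records,
 * which is the case that forces the retry over TCP. */
static int dns_is_truncated(const unsigned char *msg, size_t len)
{
    if (len < 12)
        return 0;                /* too short to be a DNS message */
    return (msg[2] & 0x02) != 0; /* TC bit set? */
}
```

A resolver that sees this return nonzero would re-issue the same query over TCP rather than trusting the (possibly empty) UDP answer section.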

> > From my reading of your links, and
> I don't think max-recursion-depth is related to CNAMEs. It's the depth
> of delegation recursion. The max CNAME chain length is separate, and
> in unbound terminology is the number of "restarts". Unbound's limit as
> you've found is 11. BIND's is supposedly hard-coded at 16.
>
> Assuming the recursive server uses pointers properly, max size of a
> length-N CNAME chain is (N+1)*(255+epsilon). This comes out to a
> little over 4k for the BIND limit, and that's assuming max-length
> names with no further redundancy. I would expect the real-world need
> is considerably lower than this, and that the Unbound default limit on
> chain length also suffices in practice (or it wouldn't be the default
> for a widely used recursive server). So, for example, using a 4k
> buffer (adding a little over 3k to what we have now, which already had
> enough for one CNAME) should solve the problem entirely.
>
> Does this sound like an okay fix to you?

Sounds good to me!
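Rich's sizing arithmetic above is easy to check. A small sketch — the function and the 16-byte per-link overhead standing in for "epsilon" are my assumptions for illustration, not anything from musl:

```c
#include <stddef.h>

/* Worst-case wire size of a length-n CNAME chain, assuming the
 * recursive server compresses repeated names with pointers: each of
 * the n CNAME records plus the final answer carries at most one
 * 255-byte name plus a small fixed record overhead (epsilon). */
static size_t cname_chain_max_bytes(size_t n, size_t epsilon)
{
    return (n + 1) * (255 + epsilon);
}
```

With BIND's supposed chain limit of 16 and an assumed epsilon of 16 bytes per record, this gives 17 * 271 = 4607 bytes — "a little over 4k" — while Unbound's limit of 11 comes out to 12 * 271 = 3252 bytes, so a roughly 4k buffer covers both with the margin Rich describes.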

> Rich


Thread overview: 7+ messages
2023-07-04  0:17 Hamish Forbes
2023-07-04  3:29 ` Rich Felker
     [not found]   ` <>
2023-07-04  4:19     ` Hamish Forbes
2023-07-04 16:07       ` Rich Felker
2023-07-05  0:18         ` Hamish Forbes [this message]
2023-07-05  3:41           ` Rich Felker
2023-07-05 11:24             ` A. Wilcox
