On March 9, 2021, Plan 9 from Bell Labs <9fans@9fans.net> wrote:
I was trying to use 9fans/go/plan9/client.Fid.Dirreadall
[https://github.com/9fans/go/blob/master/plan9/client/fid.go#L54-L60]
and found that it returned fewer entries than a single Dirread. The
reason is that Dirreadall uses ioutil.ReadAll, which doesn't work
well with a fid corresponding to a directory: it spuriously
detects io.EOF when more dir entries remain but none fits into
the read buffer.

Fixed, thanks.
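To illustrate the failure mode, here is a minimal sketch (not the actual library code) of 9P directory-read semantics and a ReadAll-free loop. The `mockDirFid` type is hypothetical; it stands in for a `client.Fid` open on a directory, where the server returns only whole serialized entries and answers with a 0-byte Rread when the next entry doesn't fit in the requested count:

```go
package main

import "fmt"

// mockDirFid simulates 9P directory reads: each Read returns only
// whole serialized dir entries, and returns 0 bytes (not an error)
// when the next entry does not fit in the caller's buffer.
// Hypothetical stand-in for 9fans/go/plan9/client.Fid.
type mockDirFid struct {
	entries [][]byte // pre-serialized dir entries
	next    int
}

func (f *mockDirFid) Read(b []byte) (int, error) {
	n := 0
	for f.next < len(f.entries) {
		e := f.entries[f.next]
		if n+len(e) > len(b) {
			break // next entry doesn't fit; return what we have (possibly 0)
		}
		copy(b[n:], e)
		n += len(e)
		f.next++
	}
	return n, nil
}

// dirReadAll always reads with a full-size buffer and treats a 0-byte
// read as end of directory, avoiding ReadAll's varying buffer sizes.
func dirReadAll(f *mockDirFid, bufSize int) ([]byte, error) {
	var all []byte
	buf := make([]byte, bufSize)
	for {
		n, err := f.Read(buf)
		if n > 0 {
			all = append(all, buf[:n]...)
		}
		if err != nil || n == 0 {
			return all, err
		}
	}
}

func main() {
	// Three 100-byte "entries"; the buffer holds only two at a time,
	// so a reader that treats the first short read as EOF would stop early.
	f := &mockDirFid{entries: [][]byte{
		make([]byte, 100), make([]byte, 100), make([]byte, 100),
	}}
	data, _ := dirReadAll(f, 250)
	fmt.Println(len(data)) // 300: all entries recovered
}
```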

As for Linux,

When it gets a 0-byte Rread, it tries a new Tread with the largest
possible buffer size (msize=8192 in this connection). You see this
behavior twice above. Only after getting 0-byte Rread twice in a
row, it gives up.

QUESTION. The last Tread/Rread seems superfluous to me. After
enlarging the buffer size from 2758 to 8168 and still getting 0, I'd
think the client could detect EOF. I don't see the point of an
additional Tread of 8168 bytes.

Probably the code just always retries with the maximum count, even if the buffer was already at the maximum. I agree that it seems unnecessary.

QUESTION. But other than that, is that what a client should do for
reading directories? Enlarge the read buffer and see if it still
gets 0 bytes? I'm asking because my initial fix to Dirreadall was to
always issue Treads with count=msize-24, and because the above
approach incurs twice the necessary round trips.

Properly, a client should always make sure there are STATMAX bytes available in the read (perhaps limited by msize).
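That rule can be sketched as a one-liner: use a count large enough for any dir entry (STATMAX), capped by what the connection allows. The constant values below match the thread (8192 - 24 = 8168); `IOHDRSZ` is the 9P per-read message overhead:

```go
package main

import "fmt"

const (
	STATMAX = 65535 // max size of a serialized dir entry (plan9.STATMAX)
	IOHDRSZ = 24    // 9P message overhead subtracted from msize for a read
)

// dirReadCount returns the count a client should use in a Tread on a
// directory: STATMAX bytes, limited by the connection's msize.
func dirReadCount(msize int) int {
	n := STATMAX
	if m := msize - IOHDRSZ; m < n {
		n = m
	}
	return n
}

func main() {
	fmt.Println(dirReadCount(8192))   // 8168: limited by msize
	fmt.Println(dirReadCount(131072)) // 65535: limited by STATMAX
}
```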


QUESTION. Similarly, is the server supposed to silently return a
0-byte Rread when no dir entries fit in the response buffer, despite
there being more dir entries available?

It would appear that that's what keeps things working.
The original Plan 9 dirread and dirreadall did not exercise this case,
so there's not likely to be agreement between servers on the answer.

Finally, I noticed that plan9.STATMAX (65535) is the buffer size used
in Dirread (a few lines above Dirreadall). That suggests it is, in
theory, the maximum size of a serialized dir entry, which leads me to
ask one last

QUESTION. What happens, and what should happen, when a dir entry is
larger than msize-24? For instance, one written over a connection
with a large msize and later read over a connection with a smaller one?

Probably that should return an error, rather than a silent EOF.
It honestly never came up, because first a server would have to
let you create a file with an almost-8k name (or else have very long
user and group names). disk/kfs limited names to 27 bytes, and
fossil limited all strings to 1k. So the max directory entry size in
any server anyone ran was under 4k. 
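A server-side sketch of that distinction, with hypothetical names (no existing server is known to do this): when building an Rread for a directory, separate "entry doesn't fit in this buffer, retry with a larger count" (0-byte Rread) from "entry can never fit on this connection" (an error, rather than a silent EOF):

```go
package main

import "fmt"

const IOHDRSZ = 24 // 9P message overhead; max read payload is msize-24

// dirRreadSize decides how a server should answer a directory Tread
// whose next entry has the given serialized size. Hypothetical sketch.
func dirRreadSize(entrySize, count, msize int) (int, error) {
	if entrySize > msize-IOHDRSZ {
		// The entry cannot fit in any Rread on this connection:
		// report an error instead of a silent EOF.
		return 0, fmt.Errorf("dir entry (%d bytes) exceeds max read (%d) on this connection",
			entrySize, msize-IOHDRSZ)
	}
	if entrySize > count {
		// Doesn't fit in this buffer: 0-byte Rread, client should
		// retry with a larger count.
		return 0, nil
	}
	return entrySize, nil
}

func main() {
	n, err := dirRreadSize(2758, 2048, 8192)
	fmt.Println(n, err)        // 0 <nil>: retry with a larger count
	_, err = dirRreadSize(9000, 8168, 8192)
	fmt.Println(err != nil)    // true: entry can never be delivered
}
```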

Best,
Russ