9fans - fans of the OS Plan 9 from Bell Labs
From: "nicolagi via 9fans" <9fans@9fans.net>
To: 9fans@9fans.net
Subject: [9fans] 9P reading directories client/server behavior
Date: Tue, 9 Mar 2021 13:32:39 +0000
Message-ID: <202103091332.129DWgIe017612@sdf.org>

Dear 9fans,

I'm confused regarding the correct behavior for a Tread handler for
directories, and the corresponding correct behavior for a client
that wants to read the whole contents of a directory.

I was trying to use 9fans/go/plan9/client.Fid.Dirreadall
[https://github.com/9fans/go/blob/master/plan9/client/fid.go#L54-L60]
and found that it returned fewer entries than a single Dirread. The
reason is that Dirreadall uses ioutil.ReadAll, which doesn't work
well with a fid referring to a directory: it spuriously detects
io.EOF when there are in fact more dir entries, but none fits into
the remaining read buffer. Example trace:

        Tread tag 65535 fid 2 offset 0 count 512
        Rread tag 65535 count 499
        Tread tag 65535 fid 2 offset 499 count 13
        Rread tag 65535 count 0

There are more dir entries, but the next one, at offset 499, is
longer than 13 bytes, so none is returned.

It follows that Dirreadall is buggy, but let's set the fix aside for
a moment. I also wondered whether my server was at fault: perhaps it
should respond with some kind of Rerror instead of a 0-byte Rread.
The rationale was that it is risky for directory contents to
silently disappear whenever the client's buffers aren't large
enough.

When I tried that change in my fs, I found that I could no longer
list directories when it was mounted under Linux. After restoring
the original code, which responds with a 0-byte Rread, I traced the
Linux driver's behaviour when listing a directory:

        Tread tag 0 fid 2 offset 0 count 8168
        Rread tag 0 count 8109
        Tread tag 0 fid 2 offset 8109 count 59
        Rread tag 0 count 0
        Tread tag 0 fid 2 offset 8109 count 8168
        Rread tag 0 count 5410
        Tread tag 0 fid 2 offset 13519 count 2758
        Rread tag 0 count 0
        Tread tag 0 fid 2 offset 13519 count 8168
        Rread tag 0 count 0
        Tread tag 0 fid 2 offset 13519 count 8168
        Rread tag 0 count 0

When it gets a 0-byte Rread, it retries with the largest possible
buffer (count=8168, i.e. msize-24 on this msize=8192 connection).
You can see this behavior twice above. Only after getting a 0-byte
Rread twice in a row does it give up.

QUESTION. The last Tread/Rread pair seems superfluous to me. After
enlarging the buffer from 2758 to 8168 bytes and still getting 0,
the client could already conclude EOF. I don't see the point of an
additional 8168-byte Tread.

QUESTION. But other than that, is this what a client should do when
reading directories: enlarge the read buffer and see whether it
still gets 0 bytes? I ask because my initial fix to Dirreadall was
to always issue Treads with count=msize-24, and because the approach
above incurs twice the necessary round trips.

QUESTION. Similarly, is the server supposed to silently return a
0-byte Rread when no whole dir entry fits in the response buffer,
despite more dir entries being available?

Finally, I noticed that plan9.STATMAX (65535) is the buffer size
used in Dirread (a few lines above Dirreadall). That suggests it is,
in theory, the maximum size of a serialized dir entry, which leads
me to one last

QUESTION. What happens, and what should happen, when a dir entry is
larger than msize-24? For example, one written over a connection
with a large msize and later read over a connection with a smaller
msize?

Thank you,

Nicola


------------------------------------------
Permalink: https://9fans.topicbox.com/groups/9fans/T36fa75fb83c81d6d-Ma709ac1a88a1167f9a50ce45

Thread overview: 6+ messages
2021-03-09 13:32 nicolagi via 9fans [this message]
2021-04-09 20:00 ` Russ Cox
2021-04-14 18:06   ` nicolagi via 9fans
2021-03-09 14:05 nicolagi via 9fans
2021-03-09 20:01 ` Anthony Martin
2021-03-10 19:36 nicolagi via 9fans
