From: Russ Cox <rsc@swtch.com>
To: Plan 9 from Bell Labs <9fans@9fans.net>
Date: Fri, 09 Apr 2021 16:00:35 -0400
Subject: Re: [9fans] 9P reading directories client/server behavior
Message-ID: <6eafc6eebda4beda2237ec6990485bb3c78b444e@hey.com>
In-Reply-To: <202103091332.129DWgIe017612@sdf.org>

On March 9, 2021, Plan 9 from Bell Labs <9fans@9fans.net> wrote:
> I was trying to use 9fans/go/plan9/client.Fid.Dirreadall
> [https://github.com/9fans/go/blob/master/plan9/client/fid.go#L54-L60]
> and found that it returned fewer entries than a single Dirread. The
> reason is that Dirreadall uses ioutil.ReadAll, which doesn't work
> well with a fid corresponding to a directory, because it spuriously
> detects io.EOF when in fact there are more dir entries, but none
> fits into the read buffer.

Fixed, thanks.

As for Linux,

> When it gets a 0-byte Rread, it tries a new Tread with the largest
> possible buffer size (msize=8192 in this connection). You see this
> behavior twice above. Only after getting a 0-byte Rread twice in a
> row does it give up.
>
> QUESTION. The last Tread/Rread seems superfluous to me. After
> enlarging the buffer size from 2758 to 8168 and still getting 0, I'd
> think the client could detect EOF. I don't see the point of an
> additional Tread of 8168 bytes.

Probably the code just always retries with max, even if the buffer
was already maxed. I agree that it seems unnecessary.
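For concreteness, here is a minimal sketch of the client-side rule, as
a hypothetical ReadAll replacement rather than the actual fix committed
to 9fans/go: issue every Tread with the maximum payload (msize-24), so
that a 0-byte Rread can safely be taken as EOF. The name readDirRaw and
the inline stat parsing are illustrative assumptions only; each
returned record could then be unmarshaled into a Dir.

package nine

import (
	"encoding/binary"
	"fmt"
	"io"
)

// readDirRaw reads every directory entry from r, which is assumed to
// wrap an open 9P directory fid so that each Read issues one
// Tread/Rread with the largest payload the connection allows
// (msize-24). Under that assumption a 0-byte Rread can only mean "no
// entries left", never "none of the remaining entries fit", so it is
// safe to treat as EOF. Each returned element is one marshaled stat
// record.
func readDirRaw(r io.Reader, msize int) ([][]byte, error) {
	var entries [][]byte
	buf := make([]byte, msize-24)
	for {
		n, err := r.Read(buf)
		if n > 0 {
			// An Rread on a directory holds a whole number of stat
			// records, each prefixed by a 2-byte little-endian size
			// that excludes the size field itself.
			for b := buf[:n]; len(b) > 0; {
				if len(b) < 2 {
					return entries, fmt.Errorf("truncated stat record")
				}
				sz := int(binary.LittleEndian.Uint16(b)) + 2
				if sz > len(b) {
					return entries, fmt.Errorf("stat record crosses read boundary")
				}
				entries = append(entries, append([]byte(nil), b[:sz]...))
				b = b[sz:]
			}
		}
		if err == io.EOF || (err == nil && n == 0) {
			return entries, nil
		}
		if err != nil {
			return entries, err
		}
	}
}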
> QUESTION. But other than that, is that what a client should do for
> reading directories? Enlarge the read buffer and see if it still
> gets 0 bytes? I'm asking because my initial fix to Dirreadall was to
> always issue Treads with count=msize-24, and because I find the
> above approach incurs twice the necessary round trips.

Properly, a client should always make sure there are STATMAX bytes
available in the read (perhaps limited by msize).

> QUESTION. Similarly, is the server supposed to silently return a
> 0-byte Rread when no dir entries fit in the response buffer, despite
> there being more dir entries available?

It would appear that that's what keeps things working.
The original Plan 9 dirread and dirreadall did not exercise this case,
so there's not likely to be agreement among servers on the answer.

> Finally, I noticed plan9.STATMAX (65535) used as the buffer size
> in Dirread (a few lines above Dirreadall). That points to the fact
> that in theory that's the max size of a serialized dir entry, which
> leads me to ask one last
>
> QUESTION. What happens, and what should happen, when a dir entry is
> larger than msize-24? Possibly written from a connection with a
> large msize, and to be read from a connection with a smaller msize?

Probably that should return an error, rather than a silent EOF.
It honestly never came up, because first a server would have to
let you create a file with an almost-8k name (or else have very long
user and group names). disk/kfs limited names to 27 bytes, and
fossil limited all strings to 1k. So the max directory entry size in
any server anyone ran was under 4k.
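A server implementing both of those answers explicitly might fill its
directory Rread replies along these lines. This is a sketch under
assumptions, not any existing server's code: packDirEntries and the
pending queue of pre-marshaled stat records are hypothetical.

package nine

import "fmt"

// packDirEntries fills at most count bytes of a directory Rread with
// whole stat records taken in order from pending, returning the reply
// payload and how many records were consumed. count is the client's
// requested count; maxPayload is the connection's ceiling (msize-24).
func packDirEntries(pending [][]byte, count, maxPayload int) ([]byte, int, error) {
	var reply []byte
	used := 0
	for _, rec := range pending {
		if len(reply)+len(rec) > count {
			break // record doesn't fit; hold it for the next Tread
		}
		reply = append(reply, rec...)
		used++
	}
	if used == 0 && len(pending) > 0 && len(pending[0]) > maxPayload {
		// No count the client can ever offer will hold this record:
		// report it rather than leaving the client at a silent EOF.
		return nil, 0, fmt.Errorf("directory entry is %d bytes, exceeds msize-24 = %d",
			len(pending[0]), maxPayload)
	}
	// used == 0 with records still pending yields a 0-byte Rread; the
	// client is expected to retry with a larger count.
	return reply, used, nil
}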
Best,
Russ

------------------------------------------
9fans: 9fans
Permalink: https://9fans.topicbox.com/groups/9fans/T36fa75fb83c81d6d-M1168d8ea3f8e0caf83fb6f27
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription