As always, I found several of Bryan Ford's ideas here interesting, particularly Structured Streams (https://bford.info/pub/net/sst-abs/) and Breaking Up the Transport Logjam (https://bford.info/pub/net/logjam-abs/).

On Wed, Jan 27, 2021 at 4:53 PM <ori@eigenstate.org> wrote:
Quoth David Arroyo <droyo@aqwari.net>:
> On Tue, Dec 29, 2020, at 18:50, cigar562hfsp952fans@icebubble.org wrote:
> > It's well-known that 9P has trouble transferring large files (high
> > volume/high bandwidth) over high-latency networks, such as the Internet.
>
> From what I know of 9P, I don't think this is the fault of the protocol
> itself. In fact, since 9P lets the clients choose Fid and Tag identifiers,
> it should be uniquely well suited for "long fat pipes". You could avoid
> waiting for round-trips by optimistically assuming your requests succeed.
> For example, you could do the following to optimistically read the first
> 8K bytes of a file without needing to wait for a response from the server.
>
> * Twalk tag=1 fid=0 newfid=1 /path/to/somefile
> * Topen tag=2 fid=1 o_read
> * Tread tag=3 fid=1 off=0 count=4096
> * Tread tag=4 fid=1 off=4096 count=4096
> * Tclunk tag=5 fid=1
>
> I'm not aware of any client implementations that do this kind of
> pipelining, though.
>
> David
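
For concreteness, here's a minimal sketch of what that pipeline looks
like on the wire, assuming a 9P2000 connection on which Tversion and
Tattach have already completed and fid 0 is the attach fid. The host
name, the single-element walk path, and all the Go helper names are
made up for illustration; the message layouts (size[4] type[1] tag[2]
body, all integers little-endian) follow intro(5).

package main

import (
	"log"
	"net"
)

// Little-endian append helpers; every integer in 9P is little-endian.
func p16(b []byte, v uint16) []byte { return append(b, byte(v), byte(v>>8)) }
func p32(b []byte, v uint32) []byte {
	return append(b, byte(v), byte(v>>8), byte(v>>16), byte(v>>24))
}
func p64(b []byte, v uint64) []byte  { return p32(p32(b, uint32(v)), uint32(v>>32)) }
func pstr(b []byte, s string) []byte { return append(p16(b, uint16(len(s))), s...) }

// msg appends one 9P message: size[4] type[1] tag[2] body.
func msg(b []byte, typ byte, tag uint16, body []byte) []byte {
	b = p32(b, uint32(7+len(body)))
	b = append(b, typ)
	b = p16(b, tag)
	return append(b, body...)
}

func main() {
	// 564 is the registered 9P port; the host is hypothetical.
	conn, err := net.Dial("tcp", "fileserver:564")
	if err != nil {
		log.Fatal(err)
	}

	var out []byte
	// Twalk(110) tag=1 fid=0 newfid=1 nwname=1 "somefile"
	out = msg(out, 110, 1, pstr(p16(p32(p32(nil, 0), 1), 1), "somefile"))
	// Topen(112) tag=2 fid=1 mode=0 (OREAD)
	out = msg(out, 112, 2, append(p32(nil, 1), 0))
	// Tread(116) tag=3 fid=1 offset=0 count=4096
	out = msg(out, 116, 3, p32(p64(p32(nil, 1), 0), 4096))
	// Tread(116) tag=4 fid=1 offset=4096 count=4096
	out = msg(out, 116, 4, p32(p64(p32(nil, 1), 4096), 4096))
	// Tclunk(120) tag=5 fid=1
	out = msg(out, 120, 5, p32(nil, 1))

	// All five T-messages leave in one burst: one round trip, not five.
	if _, err := conn.Write(out); err != nil {
		log.Fatal(err)
	}
	// A real client would now read the R-messages and match them by tag.
}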

This also has some hairy edge cases. For example,
what happens to the actions in the pipeline if one
of the operations fails?

I think that for this kind of pipelining to be
effective, 9p may need some notion of 'bundles',
where the first failure, short write, or other
exceptional operation would cause all other
commands in the bundle to be ignored.
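
A client-side approximation of those semantics, sketched below: on
the first Rerror, Tflush every tag still outstanding and discard
whatever replies trickle in anyway (Rflush guarantees no further
reply for a flushed tag). This reuses msg and p16 from the sketch
above; readMsg, the tag counter, and parseEname are more made-up
scaffolding, and a real client would have to serialize writes to the
connection. (Imports: errors, io, net.)

const (
	Rerror = 107
	Tflush = 108
	Rflush = 109
)

var tagSeq uint16 = 100

func nextTag() uint16 { tagSeq++; return tagSeq }

// readMsg reads one R-message; it trusts the server's size field.
func readMsg(conn net.Conn) (typ byte, tag uint16, body []byte, err error) {
	var hdr [7]byte
	if _, err = io.ReadFull(conn, hdr[:]); err != nil {
		return
	}
	size := uint32(hdr[0]) | uint32(hdr[1])<<8 | uint32(hdr[2])<<16 | uint32(hdr[3])<<24
	typ = hdr[4]
	tag = uint16(hdr[5]) | uint16(hdr[6])<<8
	body = make([]byte, size-7)
	_, err = io.ReadFull(conn, body)
	return
}

// parseEname turns an Rerror body (ename[s]) into a Go error.
func parseEname(b []byte) error {
	n := int(b[0]) | int(b[1])<<8
	return errors.New(string(b[2 : 2+n]))
}

// drainPipeline collects replies for tags; on the first Rerror it
// cancels everything still in flight and reports that first error.
// (A real client would also drain any Rflushes still in transit.)
func drainPipeline(conn net.Conn, tags []uint16) error {
	pending := make(map[uint16]bool) // original tags awaiting replies
	for _, t := range tags {
		pending[t] = true
	}
	flushing := make(map[uint16]uint16) // flush tag -> cancelled tag
	var firstErr error
	for len(pending) > 0 {
		typ, tag, body, err := readMsg(conn)
		if err != nil {
			return err
		}
		if old, ok := flushing[tag]; ok && typ == Rflush {
			delete(flushing, tag)
			delete(pending, old) // no further reply will come for old
			continue
		}
		delete(pending, tag)
		if typ == Rerror && firstErr == nil {
			firstErr = parseEname(body)
			for t := range pending { // cancel the rest of the bundle
				ft := nextTag()
				flushing[ft] = t
				conn.Write(msg(nil, Tflush, ft, p16(nil, t))) // write errors elided
			}
		}
	}
	return firstErr
}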

Another issue is that it's difficult to retrofit
this pipelining into the Plan 9 system call
interface: when do you return an error from
read(2)? What if there are multiple Treads?

Finally, there's the problem of flow control:
today, waiting out the round trip on each 9p
request limits starvation, at least to a degree.
But how do you handle congestion once you can
stuff as many 9p packets down a single connection
as you like? There's no packet loss, but you can
end up with very long delays as small reads like
/dev/mouse get queued behind bulk transfers.
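
One conventional mitigation, nothing 9p-specific: cap the number of
bulk requests in flight with a counting semaphore, so a small read
never waits behind more than a few msize-sized replies. sendTread and
awaitReply are hypothetical stand-ins for the marshalling and
demultiplexing above, and the window of 4 is arbitrary.

// window caps outstanding bulk Treads; small, latency-sensitive
// requests bypass it and go straight to the connection.
var window = make(chan struct{}, 4)

func bulkRead(conn net.Conn, fid uint32, offset uint64, count uint32) {
	window <- struct{}{} // blocks while 4 bulk reads are in flight
	tag := nextTag()
	sendTread(conn, tag, fid, offset, count) // hypothetical helper
	go func() {
		awaitReply(tag) // returns when the Rread for tag arrives
		<-window        // reopen the window for the next chunk
	}()
}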

There are some programs that try to handle this
for 'well behaved' file systems (e.g., fcp(1)
or http://src.a-b.xyz/clone/), but making it
happen in general is not trivial.

