On Thu, Apr 29, 2010 at 5:40 AM, roger peppe wrote:
> On 28 April 2010 19:42, ron minnich wrote:
> > We did a simple experiment recently: added a new 9p type called
> > Tstream, because this issue of streams vs. transactions has been
> > bugging me for years. The semantics are simple: it's a lot like Tread
> > (almost same packet) but a single Tstream results in any number of
> > Rstreams, until you hit no more data (/dev/mouse) or maybe EOF
> > (/usr/rminnich/movie). Andrey tossed a sample implementation into
> > newsham's 9p library. We saw a 27x improvement in performance from
> > calgary to sandia for a big file. Fcp did not come close.
>
> what happens if the consumer is slow and the Rstream writer
> blocks? how do you stop all the other replies on the connection
> waiting for the consumer to get on with it?
>
> in fact, how do you stop it deadlocking if the consumer is in
> fact waiting for one of those replies?
>
> i suppose this comes down to what the API looks like.
> isochronous might be easier, as a slow reader is a bad reader
> so you could just throw away some data.

It sounds ok on the surface so far. Does Rstream signal the end of the
stream chunks, or does the Tstream sender already know that answer? Is
there any sort of realtime constraint for handling incoming Rstream
chunks? I would think this could be ok, even if it forces the client to
put the streamed blocks somewhere handy while processing is going on.

Streaming audio and video over plan 9 links this way might be nice. But
then I start to wonder why we feel we want to compete with HTTP when it
already works, and is still fairly simple. Nothing wrong with improving
9P I suppose, but what's so wrong with HTTP transfers that it warrants
changing our beloved resource sharing protocol?

Maybe I'm being too practical, and not feeling adventurous or
something :-)

Dave
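
P.S. To make my end-of-stream question concrete for myself, here's a
minimal sketch of the exchange as I understand it: one Tstream eliciting
Rstream replies until a zero-length reply says the stream is done. To be
clear, everything below is guesswork on my part: the type codes, the cut-down
framing, and the zero-count EOF convention are not in any 9P spec, and ron's
real packets presumably carry the usual size/tag/fid fields. A socketpair
and a thread stand in for the connection and the server.

/*
 * Hypothetical sketch only: Tstream/Rstream are not standard 9P.
 * Assumed semantics, modelled on Tread/Rread: one Tstream elicits
 * Rstream replies until a zero-length Rstream marks end of stream.
 */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <pthread.h>
#include <sys/socket.h>

enum { Tstream = 120, Rstream = 121 };	/* assumed, unallocated codes */

/* toy frame: type[1] count[4] data[count]; real 9P framing is richer */
static void
sendframe(int fd, uint8_t type, uint32_t count, uint8_t *data)
{
	uint8_t hdr[5];

	hdr[0] = type;
	hdr[1] = count; hdr[2] = count>>8; hdr[3] = count>>16; hdr[4] = count>>24;
	write(fd, hdr, 5);
	if(count > 0)
		write(fd, data, count);
}

/* returns the payload length, 0 at end of stream, -1 on error */
static int
recvframe(int fd, uint8_t *type, uint8_t *data, uint32_t max)
{
	uint8_t hdr[5];
	uint32_t count;

	if(read(fd, hdr, 5) != 5)
		return -1;
	*type = hdr[0];
	count = hdr[1] | hdr[2]<<8 | hdr[3]<<16 | (uint32_t)hdr[4]<<24;
	if(count > max)
		return -1;
	if(count > 0 && read(fd, data, count) != (ssize_t)count)
		return -1;	/* ignoring short reads; this is a sketch */
	return count;
}

/* the "server": answer one Tstream with several Rstreams, then EOF */
static void*
serve(void *arg)
{
	int fd = *(int*)arg;
	uint8_t type, buf[64];
	int i;

	recvframe(fd, &type, buf, sizeof buf);	/* wait for the Tstream */
	for(i = 0; i < 3; i++){
		snprintf((char*)buf, sizeof buf, "chunk %d", i);
		sendframe(fd, Rstream, strlen((char*)buf), buf);
	}
	sendframe(fd, Rstream, 0, NULL);	/* zero count = end of stream */
	return NULL;
}

int
main(void)
{
	int sv[2], n;
	pthread_t t;
	uint8_t type, buf[64];

	socketpair(AF_UNIX, SOCK_STREAM, 0, sv);
	pthread_create(&t, NULL, serve, &sv[1]);

	sendframe(sv[0], Tstream, 0, NULL);	/* one request... */
	while((n = recvframe(sv[0], &type, buf, sizeof buf)) > 0)
		printf("got %d bytes: %.*s\n", n, n, (char*)buf);	/* ...many replies */
	printf("end of stream\n");

	pthread_join(t, NULL);
	return 0;
}

Note the client here drains replies in a tight loop, so nothing backs
up; with big chunks and a dawdling consumer, the server's writes would
eventually block, which is exactly roger's question about stalling the
rest of the connection.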