9fans - fans of the OS Plan 9 from Bell Labs
From: dorin bumbu <bumbudorin@gmail.com>
To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
Subject: Re: [9fans] πp
Date: Sat, 16 Oct 2010 02:54:20 +0300	[thread overview]
Message-ID: <AANLkTik084ne_GLhqGN7MU+3_Zsd93++DHdDbTcx7Xf3@mail.gmail.com> (raw)
In-Reply-To: <6d8e09991be97d61af9b495b8d56686f@gmx.de>

ok, let's say that i want to read a 2 GB file :)

2010/10/16  <cinap_lenrek@gmx.de>:
> when you are in kiev, video streams YOU!
>
> --
> cinap
>
>
> ---------- Forwarded message ----------
> From: dorin bumbu <bumbudorin@gmail.com>
> To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
> Date: Sat, 16 Oct 2010 02:30:42 +0300
> Subject: Re: [9fans] πp
> 2010/10/16 erik quanstrom <quanstro@quanstro.net>:
>> On Fri Oct 15 17:14:18 EDT 2010, bumbudorin@gmail.com wrote:
>>> 2010/10/15 erik quanstrom <quanstro@quanstro.net>:
>>> >> isn't the tag field intended for this?
>>> >
>>> > [...]
>>> >
>>> >> so this means to me that a client can send some T-messages and then
>>> >> (or concurrently) wait for the R-messages.
>>> >> in inferno, from mount(1) and styxmon(8), i deduced that this case is
>>> >> also considered.
>>> >> it's true that most of the servers i've seen so far don't take
>>> >> advantage of this feature; they respond to each T-message before
>>> >> processing the next one.
>>> >
>>> > there is no defined limit to the number of
>>> > outstanding tags allowed. [...]
>>>
>>> so the real problem is not 9p itself but the transport protocol: TCP/IP.
>>
>> the transport has nothing to do with it;
>
>
> consider the following (imaginary) example:
>
> let's say that i want to watch a live videocast: i am in kiev and the
> video is served from seattle as a 9p service.
> i'll mount that service in my local namespace, and my imaginary video
> player, instead of processing 9p messages in the traditional way:
>
> [...] TRead -> RRead -> TRead -> RRead -> TRead -> [...]
>
> will have one thread that sends:    [...] TRead -> TRead -> TRead -> TRead -> TRead -> [...]
> and another one that receives:      [...] <-- t0 --> RRead -> RRead -> RRead -> RRead -> RRead -> [...]
>
>
> if the transport protocol has nothing to do with it, please explain
> what drawbacks would be involved other than an initial t0 delay,
> assuming latency were constant?
>
>>
>>> i think that a solution might be to run 9p over an information
>>> dispersal based protocol, to eliminate round trips while guaranteeing
>>> reliability.
>>
>> the definition of 9p relies on having round trips.
>
> i was referring to TCP/IP's round trips for each packet acknowledgement,
> not to eliminating 9p's round trips.
> in the 9p case i suggested pipelining some of them, as i wrote on the
> previous lines, when high data traffic is involved, as in video streaming.
>
>>
>> - erik
>>
>>
>
>
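The pipelining argument in the quoted message can be made concrete with some back-of-the-envelope arithmetic. The sketch below is a minimal simulation, not a 9p implementation; the msize of 8192 bytes, the 150 ms Seattle-Kiev round-trip time, and the window of 64 outstanding tags are all illustrative assumptions, and only latency is counted, not bandwidth:

```python
# Model of strict Tread/Rread alternation vs. pipelined Treads with
# distinct tags, for reading a 2 GB file over a high-latency link.
# All numbers are illustrative assumptions, not measurements.

MSIZE = 8192                 # assumed 9p payload size per Tread, bytes
FILE_SIZE = 2 * 1024**3      # the 2 GB file from the message above
RTT = 0.150                  # assumed Seattle-Kiev round-trip time, s

nreads = -(-FILE_SIZE // MSIZE)   # ceiling division: Treads needed

# Traditional strict alternation: each Tread waits for its Rread,
# so every message pays a full round trip.
sequential = nreads * RTT

def pipelined(window):
    # Keep `window` Treads outstanding (distinct tags): the round
    # trips overlap, so only ~1/window of the latency is exposed.
    return (nreads / window) * RTT

print(f"Treads needed: {nreads}")
print(f"sequential:    {sequential / 3600:.1f} hours of pure latency")
print(f"window of 64:  {pipelined(64) / 60:.1f} minutes")
```

With these numbers, strict alternation pays roughly eleven hours of pure round-trip latency, while 64 outstanding tags cut the exposed latency to about ten minutes; which is the poster's point that 9p's tag field already permits this, and it is the strict request/response usage pattern, not the protocol, that serializes on the RTT.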




Thread overview: 46+ messages
2010-10-13 19:46 Eric Van Hensbergen
2010-10-13 22:13 ` David Leimbach
2010-10-13 22:36   ` roger peppe
2010-10-13 23:15     ` David Leimbach
2010-10-14  4:25       ` blstuart
2010-10-13 23:24     ` Venkatesh Srinivas
2010-10-14 21:12       ` Latchesar Ionkov
2010-10-14 21:48         ` erik quanstrom
2010-10-14 22:14         ` dorin bumbu
2010-10-15 14:07           ` erik quanstrom
2010-10-15 14:45             ` Russ Cox
2010-10-15 15:41               ` Latchesar Ionkov
2010-10-15 21:13             ` dorin bumbu
2010-10-15 22:13               ` erik quanstrom
2010-10-15 23:30                 ` dorin bumbu
2010-10-15 23:45                   ` cinap_lenrek
2010-10-15 23:54                     ` dorin bumbu [this message]
2010-10-15  7:24         ` Sape Mullender
2010-10-15 10:41           ` hiro
2010-10-15 12:11             ` Jacob Todd
2010-10-15 13:06               ` Iruatã Souza
2010-10-15 13:23             ` Sape Mullender
2010-10-15 14:57           ` David Leimbach
2010-10-15 14:54         ` David Leimbach
2010-10-15 14:59           ` Francisco J Ballesteros
2010-10-15 16:19             ` cinap_lenrek
2010-10-15 16:28               ` Lyndon Nerenberg
2010-10-15 17:54                 ` Nemo
2010-10-15 17:49                   ` ron minnich
2010-10-15 18:56                     ` erik quanstrom
2010-10-15 16:31               ` Latchesar Ionkov
2010-10-15 16:40                 ` cinap_lenrek
2010-10-15 16:43                   ` Latchesar Ionkov
2010-10-15 17:11                     ` cinap_lenrek
2010-10-15 17:15                       ` Latchesar Ionkov
2010-10-15 19:10                 ` erik quanstrom
2010-10-15 19:16                   ` Latchesar Ionkov
2010-10-15 21:43                     ` Julius Schmidt
2010-10-15 22:02                       ` David Leimbach
2010-10-15 22:05                       ` John Floren
2010-10-15 16:45               ` ron minnich
2010-10-15 17:26                 ` Eric Van Hensbergen
2010-10-15 17:45                 ` Charles Forsyth
2010-10-15 17:45                   ` ron minnich
2010-10-15 18:03               ` Bakul Shah
2010-10-15 18:45               ` David Leimbach
