9fans - fans of the OS Plan 9 from Bell Labs
* [9fans] πp
@ 2010-10-13 19:46 Eric Van Hensbergen
  2010-10-13 22:13 ` David Leimbach
  0 siblings, 1 reply; 46+ messages in thread
From: Eric Van Hensbergen @ 2010-10-13 19:46 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

For folks interested in more info on the πp portion of Noah's Osprey talk,
Anant's thesis is available online: http://proness.kix.in/misc/πp-v2.pdf

      -eric




* Re: [9fans] πp
  2010-10-13 19:46 [9fans] πp Eric Van Hensbergen
@ 2010-10-13 22:13 ` David Leimbach
  2010-10-13 22:36   ` roger peppe
  0 siblings, 1 reply; 46+ messages in thread
From: David Leimbach @ 2010-10-13 22:13 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

I guess I do not understand how 9p doesn't support pipelining.  All
requests are tagged and can be dealt with between client and server in
any order, right?

On Wednesday, October 13, 2010, Eric Van Hensbergen <ericvh@gmail.com> wrote:
> For folks interested in more info on the πp portion of Noah's Osprey talk,
> Anant's thesis is available online: http://proness.kix.in/misc/πp-v2.pdf
>
>       -eric
>
>




* Re: [9fans] πp
  2010-10-13 22:13 ` David Leimbach
@ 2010-10-13 22:36   ` roger peppe
  2010-10-13 23:15     ` David Leimbach
  2010-10-13 23:24     ` Venkatesh Srinivas
  0 siblings, 2 replies; 46+ messages in thread
From: roger peppe @ 2010-10-13 22:36 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

2010/10/13 David Leimbach <leimy2k@gmail.com>:
> I guess I do not understand how 9p doesn't support pipelining.   All
> requests are tagged and can be dealt with between client and server in
> any order right?

two issues (at least):

1) concurrently sent requests can be reordered (they're serviced in separate
threads on the server), which means that, when reading from a streaming file
which ignores file offsets, you don't necessarily get data back in the order
that you asked for it.

2) you can't pipeline requests if the result of one request depends on the
result of a previous one. for instance: walk to a file, open it, read it,
close it. if the first operation fails, then subsequent operations will be
invalid.
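
a minimal sketch of the second point in Go (message types invented for
illustration; this is not the real 9P wire format): a dependent chain
has to run one round trip at a time, and a failure anywhere means the
rest must not be sent.

package main

import "fmt"

type rpc struct{ name string }

// roundTrip stands in for one full T-message/R-message exchange.
// ok reports whether the reply was not an Rerror.
func roundTrip(r rpc, trips *int) (ok bool) {
	*trips++
	return true // pretend every step succeeds this time
}

func main() {
	chain := []rpc{{"Twalk"}, {"Topen"}, {"Tread"}, {"Tclunk"}}
	trips := 0
	for _, r := range chain {
		// each T-message waits for its R-message: had this step
		// failed, sending the remaining messages would be invalid.
		if !roundTrip(r, &trips) {
			fmt.Printf("%s failed; chain aborted\n", r.name)
			return
		}
	}
	fmt.Printf("dependent chain done in %d round trips\n", trips)
}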

>
> On Wednesday, October 13, 2010, Eric Van Hensbergen <ericvh@gmail.com> wrote:
>> For folks interested in more info on the πp portion of Noah's Osprey talk,
>> Anant's thesis is available online: http://proness.kix.in/misc/πp-v2.pdf
>>
>>       -eric
>>
>>
>
>




* Re: [9fans] πp
  2010-10-13 22:36   ` roger peppe
@ 2010-10-13 23:15     ` David Leimbach
  2010-10-14  4:25       ` blstuart
  2010-10-13 23:24     ` Venkatesh Srinivas
  1 sibling, 1 reply; 46+ messages in thread
From: David Leimbach @ 2010-10-13 23:15 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs


2010/10/13 roger peppe <rogpeppe@gmail.com>

> 2010/10/13 David Leimbach <leimy2k@gmail.com>:
> > I guess I do not understand how 9p doesn't support pipelining.   All
> > requests are tagged and can be dealt with between client and server in
> > any order right?
>
> two issues (at least):
>
> 1) concurrently sent requests can be reordered (they're serviced in
> separate
> threads on the server) which means that, when reading from a streaming file
> which ignores file offsets, you don't necessarily get data back in the
> order that
> you asked for it.
>
> 2) you can't pipeline requests if the result of one request depends on the
> result of a previous. for instance: walk to file, open it, read it, close
> it.
> if the first operation fails, then subsequent operations will be invalid.
>

I guess I'm trying to imagine how specifically you could pipeline, not the
general ways in which pipelining will fail with 9P.




>
> >
> > On Wednesday, October 13, 2010, Eric Van Hensbergen <ericvh@gmail.com>
> wrote:
> >> For folks interested in more info on the πp portion of Noah's Osprey
> talk,
> >> Anant's thesis is available online: http://proness.kix.in/misc/
> πp-v2.pdf
> >>
> >>       -eric
> >>
> >>
> >
> >
>
>



* Re: [9fans] πp
  2010-10-13 22:36   ` roger peppe
  2010-10-13 23:15     ` David Leimbach
@ 2010-10-13 23:24     ` Venkatesh Srinivas
  2010-10-14 21:12       ` Latchesar Ionkov
  1 sibling, 1 reply; 46+ messages in thread
From: Venkatesh Srinivas @ 2010-10-13 23:24 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs


>
> 2) you can't pipeline requests if the result of one request depends on the
> result of a previous. for instance: walk to file, open it, read it, close
> it.
> if the first operation fails, then subsequent operations will be invalid.
>

Given careful allocation of FIDs by a client, that can be dealt with -
operations on an invalid FID just get RErrors.
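
A minimal sketch of that trick in Go (all names invented for
illustration; this is not the go9p API), setting aside for the moment
the ordering caveat raised later in the thread: with a fresh FID the
whole chain can be fired back to back, and a failed walk just turns
the later replies into Rerrors that the client discards.

package main

import "fmt"

func main() {
	const fid = 7 // freshly allocated, used by no other outstanding request
	chain := []string{"Twalk", "Topen", "Tread", "Tclunk"}

	// fire the whole chain back to back: one round trip of latency in total
	for tag, kind := range chain {
		fmt.Printf("-> %s tag=%d fid=%d\n", kind, tag, fid)
	}

	// suppose the walk failed: fid 7 never became valid, so the rest
	// of the replies are just Rerrors the client throws away
	for _, r := range []string{
		"Rerror(no such file)", // reply to Twalk
		"Rerror(unknown fid)",  // reply to Topen
		"Rerror(unknown fid)",  // reply to Tread
		"Rerror(unknown fid)",  // reply to Tclunk
	} {
		fmt.Println("<-", r)
	}
}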

-- vs



* Re: [9fans] πp
  2010-10-13 23:15     ` David Leimbach
@ 2010-10-14  4:25       ` blstuart
  0 siblings, 0 replies; 46+ messages in thread
From: blstuart @ 2010-10-14  4:25 UTC (permalink / raw)
  To: 9fans

> I guess I'm trying to imagine how specifically you could pipeline, not the
> general ways in which pipelining will fail with 9P.

Well.... as it turns out, I got inspired by the discussions of
streaming and implemented one approach in Inferno on the
plane on the way back home.  Unfortunately, my laptop
battery ran down before I could do much testing.  If it
works the way I hope, I'll be back with more details and
do the same to Plan 9, so we can have multiple approaches
to compare.

BLS





* Re: [9fans] πp
  2010-10-13 23:24     ` Venkatesh Srinivas
@ 2010-10-14 21:12       ` Latchesar Ionkov
  2010-10-14 21:48         ` erik quanstrom
                           ` (3 more replies)
  0 siblings, 4 replies; 46+ messages in thread
From: Latchesar Ionkov @ 2010-10-14 21:12 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

It can't be dealt with in the current protocol. It doesn't guarantee
that Topen will be executed once Twalk is done, so you can get Rerrors
even if Twalk succeeds.

2010/10/13 Venkatesh Srinivas <me@acm.jhu.edu>:
>> 2) you can't pipeline requests if the result of one request depends on the
>> result of a previous. for instance: walk to file, open it, read it, close
>> it.
>> if the first operation fails, then subsequent operations will be invalid.
>
> Given careful allocation of FIDs by a client, that can be dealt with -
> operations on an invalid FID just get RErrors.
>
> -- vs
>




* Re: [9fans] πp
  2010-10-14 21:12       ` Latchesar Ionkov
@ 2010-10-14 21:48         ` erik quanstrom
  2010-10-14 22:14         ` dorin bumbu
                           ` (2 subsequent siblings)
  3 siblings, 0 replies; 46+ messages in thread
From: erik quanstrom @ 2010-10-14 21:48 UTC (permalink / raw)
  To: 9fans

On Thu Oct 14 17:21:35 EDT 2010, lucho@ionkov.net wrote:
> It can't be dealt with the current protocol. It doesn't guarantee that
> Topen will be executed once Twalk is done. So can get Rerrors even if
> Twalk succeeds.

exactly.  a set of outstanding requests can be overlapped in the current
protocol iff there are no ordering dependencies in the set.

so in particular, reads/writes on files that respect offsets are
always overlappable, as are reads/writes of different files.
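
a minimal sketch of the dependency-free case in Go (types invented for
illustration): four Treads at distinct offsets go out under distinct
tags, and the tags let the client reassemble the replies no matter
what order the server answers in.

package main

import "fmt"

type Tread struct {
	tag    uint16
	fid    uint32
	offset int64
	count  uint32
}

func main() {
	const fid, chunk = 1, 8192
	pending := make(map[uint16]Tread)

	// issue four overlapped reads covering the first 32k of the file
	for tag := uint16(0); tag < 4; tag++ {
		t := Tread{tag, fid, int64(tag) * chunk, chunk}
		pending[tag] = t
		fmt.Printf("-> Tread tag=%d offset=%d count=%d\n", t.tag, t.offset, t.count)
	}

	// replies can arrive in any order; the tag says which offset each
	// Rread belongs to, so the client can reassemble the stream
	for _, tag := range []uint16{2, 0, 3, 1} {
		t := pending[tag]
		delete(pending, tag)
		fmt.Printf("<- Rread tag=%d (data for offset %d)\n", tag, t.offset)
	}
}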

- erik




* Re: [9fans] πp
  2010-10-14 21:12       ` Latchesar Ionkov
  2010-10-14 21:48         ` erik quanstrom
@ 2010-10-14 22:14         ` dorin bumbu
  2010-10-15 14:07           ` erik quanstrom
  2010-10-15  7:24         ` Sape Mullender
  2010-10-15 14:54         ` David Leimbach
  3 siblings, 1 reply; 46+ messages in thread
From: dorin bumbu @ 2010-10-14 22:14 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

isn't the tag field intended for this?
so if Twalk succeeded and Topen failed, shouldn't the client be able to
track the failure of Topen based on the tag?

in intro(5) it says:
" Each T-message has a tag field, chosen and used by the
          client to identify the message.  The reply to the message
          will have the same tag.  Clients must arrange that no two
          outstanding messages on the same connection have the same
          tag."

so this means to me that a client can send some T-messages and then
(or concurrently) wait for the R-messages.
in inferno, from mount(1) and styxmon(8), i deduced that this case is
also considered.
it's true that most of the servers i've seen so far don't take
advantage of this feature; they respond to each T-message before
processing the next one.

Dorin

2010/10/15 Latchesar Ionkov <lucho@ionkov.net>:
> It can't be dealt with the current protocol. It doesn't guarantee that
> Topen will be executed once Twalk is done. So can get Rerrors even if
> Twalk succeeds.
>
> 2010/10/13 Venkatesh Srinivas <me@acm.jhu.edu>:
>>> 2) you can't pipeline requests if the result of one request depends on the
>>> result of a previous. for instance: walk to file, open it, read it, close
>>> it.
>>> if the first operation fails, then subsequent operations will be invalid.
>>
>> Given careful allocation of FIDs by a client, that can be dealt with -
>> operations on an invalid FID just get RErrors.
>>
>> -- vs
>>
>
>




* Re: [9fans] πp
  2010-10-14 21:12       ` Latchesar Ionkov
  2010-10-14 21:48         ` erik quanstrom
  2010-10-14 22:14         ` dorin bumbu
@ 2010-10-15  7:24         ` Sape Mullender
  2010-10-15 10:41           ` hiro
  2010-10-15 14:57           ` David Leimbach
  2010-10-15 14:54         ` David Leimbach
  3 siblings, 2 replies; 46+ messages in thread
From: Sape Mullender @ 2010-10-15  7:24 UTC (permalink / raw)
  To: 9fans; +Cc: 9fans

Are we talking about πP or 9P?

ΠP doesn't have Twalk.  It has open, which combines clone, walk, and open from
9P.  Before you start jumping up and down and telling me that you can't open
without reviewing the qids from the walk (to check them for mount points), let
me tell you that we are also tackling mount tables.  Mount tables will no
longer match qids but longest-prefix path names.
We know the semantics are different.  But you have to look hard to find
realistic situations where the difference matters.

I intend to write a πP design document that explains the whole concept in
excruciating detail.  There's a lot more to it than just changing walk and
open.

	Sape


> From: lucho@ionkov.net
> To: 9fans@9fans.net
> Reply-To: 9fans@9fans.net
> Date: Thu Oct 14 23:13:59 CES 2010
> Subject: Re: [9fans] πp
>
> It can't be dealt with the current protocol. It doesn't guarantee that
> Topen will be executed once Twalk is done. So can get Rerrors even if
> Twalk succeeds.
>
> 2010/10/13 Venkatesh Srinivas <me@acm.jhu.edu>:
> >> 2) you can't pipeline requests if the result of one request depends on the
> >> result of a previous. for instance: walk to file, open it, read it, close
> >> it.
> >> if the first operation fails, then subsequent operations will be invalid.
> >
> > Given careful allocation of FIDs by a client, that can be dealt with -
> > operations on an invalid FID just get RErrors.
> >
> > -- vs
> >




* Re: [9fans] πp
  2010-10-15  7:24         ` Sape Mullender
@ 2010-10-15 10:41           ` hiro
  2010-10-15 12:11             ` Jacob Todd
  2010-10-15 13:23             ` Sape Mullender
  2010-10-15 14:57           ` David Leimbach
  1 sibling, 2 replies; 46+ messages in thread
From: hiro @ 2010-10-15 10:41 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

Is it Pi-Ro, or do you actually call it Pipi?




* Re: [9fans] πp
  2010-10-15 10:41           ` hiro
@ 2010-10-15 12:11             ` Jacob Todd
  2010-10-15 13:06               ` Iruatã Souza
  2010-10-15 13:23             ` Sape Mullender
  1 sibling, 1 reply; 46+ messages in thread
From: Jacob Todd @ 2010-10-15 12:11 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs


Eh,  what's Πp?



* Re: [9fans] πp
  2010-10-15 12:11             ` Jacob Todd
@ 2010-10-15 13:06               ` Iruatã Souza
  0 siblings, 0 replies; 46+ messages in thread
From: Iruatã Souza @ 2010-10-15 13:06 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

2010/10/15 Jacob Todd <jaketodd422@gmail.com>:
> Eh,  what's Πp?

check http://iwp9.org/slides/osprey.pdf




* Re: [9fans] πp
  2010-10-15 10:41           ` hiro
  2010-10-15 12:11             ` Jacob Todd
@ 2010-10-15 13:23             ` Sape Mullender
  1 sibling, 0 replies; 46+ messages in thread
From: Sape Mullender @ 2010-10-15 13:23 UTC (permalink / raw)
  To: 9fans; +Cc: 9fans

We pronounce it pie pee.

> Is it Pi-Ro, or do you actually call it Pipi?




* Re: [9fans] πp
  2010-10-14 22:14         ` dorin bumbu
@ 2010-10-15 14:07           ` erik quanstrom
  2010-10-15 14:45             ` Russ Cox
  2010-10-15 21:13             ` dorin bumbu
  0 siblings, 2 replies; 46+ messages in thread
From: erik quanstrom @ 2010-10-15 14:07 UTC (permalink / raw)
  To: 9fans

> isn't tag field for this intended?

[...]

> so this means to me that a client can send some T-messages and then
> (or concurrently) wait the R-messages.
> in inferno from mount(1) and styxmon(8) i deduced that this case is
> also considered.
> it's true that most of the servers i seen until now doesn't take
> advantage of this feature, they respond to each T-message before
> processing next message.

there is no defined limit to the number of
outstanding tags allowed.  but keep in mind that the
server may process them in any order, so you can't
have dependencies between outstanding
requests.  servers can only respond to the requests
issued by the client.  in high-latency cases (eg the
network latency is greater than the time it takes to
service the request) the concurrency of the server
might not be very important.

- erik




* Re: [9fans] πp
  2010-10-15 14:07           ` erik quanstrom
@ 2010-10-15 14:45             ` Russ Cox
  2010-10-15 15:41               ` Latchesar Ionkov
  2010-10-15 21:13             ` dorin bumbu
  1 sibling, 1 reply; 46+ messages in thread
From: Russ Cox @ 2010-10-15 14:45 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

A simpler way to add pipelining to 9P is to
define that multiple outstanding messages
with the same tag are allowed and that the
server must process and respond to messages
with a given tag in the order it receives them.
This only requires changes to servers that
are actually multithreaded.  All the tiny servers
only do one message at a time anyway.

Given that extension, you can just send
Twalk Topen Twrite Tclunk back to back
with a single tag and then process the four
replies.
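
A minimal sketch in Go of what that rule asks of a multithreaded
server (all types invented for illustration, not any real 9P
library): one FIFO worker per in-use tag keeps same-tag messages in
arrival order, while different tags still run concurrently.

package main

import (
	"fmt"
	"sync"
)

type Tmsg struct {
	tag  uint16
	kind string
}

func main() {
	var wg sync.WaitGroup
	queues := make(map[uint16]chan Tmsg)

	dispatch := func(m Tmsg) {
		q, ok := queues[m.tag]
		if !ok {
			q = make(chan Tmsg, 16)
			queues[m.tag] = q
			wg.Add(1)
			go func() { // one worker per tag: in-order for that tag
				defer wg.Done()
				for t := range q {
					fmt.Printf("tag %d: handle %s, reply in order\n", t.tag, t.kind)
				}
			}()
		}
		q <- m
	}

	// a pipelined chain on tag 1, with an unrelated read on tag 2
	for _, m := range []Tmsg{
		{1, "Twalk"}, {1, "Topen"}, {1, "Twrite"}, {2, "Tread"}, {1, "Tclunk"},
	} {
		dispatch(m)
	}
	for _, q := range queues {
		close(q)
	}
	wg.Wait()
}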

This was proposed at the Spain IWP9.

Russ



* Re: [9fans] πp
  2010-10-14 21:12       ` Latchesar Ionkov
                           ` (2 preceding siblings ...)
  2010-10-15  7:24         ` Sape Mullender
@ 2010-10-15 14:54         ` David Leimbach
  2010-10-15 14:59           ` Francisco J Ballesteros
  3 siblings, 1 reply; 46+ messages in thread
From: David Leimbach @ 2010-10-15 14:54 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs


2010/10/14 Latchesar Ionkov <lucho@ionkov.net>

> It can't be dealt with the current protocol. It doesn't guarantee that
> Topen will be executed once Twalk is done. So can get Rerrors even if
> Twalk succeeds.
>
>
It can be dealt with if the scheduling of the pipeline is done properly.
 You just have to eliminate the dependencies.

I can imagine having a few concurrent queues of "requests" in a client that
contain items with dependencies, and running those queues in a pipelined way
against a 9P server.
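
A minimal sketch of that queueing scheme in Go (types invented for
illustration): items within a queue are issued one at a time, while
independent queues overlap freely against the same connection.

package main

import (
	"fmt"
	"sync"
)

type req struct{ kind, path string }

// runQueue issues one dependent chain serially: each item would wait
// for its reply before the next is sent.
func runQueue(name string, chain []req, wg *sync.WaitGroup) {
	defer wg.Done()
	for _, r := range chain {
		fmt.Printf("%s: %s %s\n", name, r.kind, r.path)
	}
}

func main() {
	var wg sync.WaitGroup
	queues := map[string][]req{
		"q1": {{"walk", "/env/path"}, {"open", "/env/path"}, {"read", "/env/path"}},
		"q2": {{"walk", "/dev/time"}, {"open", "/dev/time"}, {"read", "/dev/time"}},
	}
	for name, chain := range queues {
		wg.Add(1)
		go runQueue(name, chain, &wg) // independent chains overlap
	}
	wg.Wait()
}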



> 2010/10/13 Venkatesh Srinivas <me@acm.jhu.edu>:
> >> 2) you can't pipeline requests if the result of one request depends on
> the
> >> result of a previous. for instance: walk to file, open it, read it,
> close
> >> it.
> >> if the first operation fails, then subsequent operations will be
> invalid.
> >
> > Given careful allocation of FIDs by a client, that can be dealt with -
> > operations on an invalid FID just get RErrors.
> >
> > -- vs
> >
>
>



* Re: [9fans] πp
  2010-10-15  7:24         ` Sape Mullender
  2010-10-15 10:41           ` hiro
@ 2010-10-15 14:57           ` David Leimbach
  1 sibling, 0 replies; 46+ messages in thread
From: David Leimbach @ 2010-10-15 14:57 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs


2010/10/15 Sape Mullender <sape@plan9.bell-labs.com>

> Are we talking about πP or 9P?
>

It's about both.  I was just curious about how 9P was deficient in terms of
pipelining.  It might not be optimal for all cases of pipelining, but the
protocol seems to support it in certain cases just fine.

ΠP deals with it in a superior way, and I need to finish reading the paper
on it.


>
> ΠP doesn't have Twalk.  It has open, which combines clone, walk, and open
> from
> 9P.  Before you start jumping up and down and telling me that you can't
> open
> without revieing the qids from the walk (to check them for mount points),
> let
> me tell you that we are also tackling mount tables.  Mount tables will no
> longer
> match qids but longest prefix path names.
> We know the semantics are different.  But you have to look hard to find
> realistic situations where the difference matters.
>
> I intend to write a πP design document that explains the whole concept in
> excruciating detail.  There's a lot more to it than just changing walk and
> open.
>

I'm looking forward to it!


>
>        Sape
>
>
> > From: lucho@ionkov.net
> > To: 9fans@9fans.net
> > Reply-To: 9fans@9fans.net
> > Date: Thu Oct 14 23:13:59 CES 2010
> > Subject: Re: [9fans] πp
> >
> > It can't be dealt with the current protocol. It doesn't guarantee that
> > Topen will be executed once Twalk is done. So can get Rerrors even if
> > Twalk succeeds.
> >
> > 2010/10/13 Venkatesh Srinivas <me@acm.jhu.edu>:
> > >> 2) you can't pipeline requests if the result of one request depends on
> the
> > >> result of a previous. for instance: walk to file, open it, read it,
> close
> > >> it.
> > >> if the first operation fails, then subsequent operations will be
> invalid.
> > >
> > > Given careful allocation of FIDs by a client, that can be dealt with -
> > > operations on an invalid FID just get RErrors.
> > >
> > > -- vs
> > >
>
>



* Re: [9fans] πp
  2010-10-15 14:54         ` David Leimbach
@ 2010-10-15 14:59           ` Francisco J Ballesteros
  2010-10-15 16:19             ` cinap_lenrek
  0 siblings, 1 reply; 46+ messages in thread
From: Francisco J Ballesteros @ 2010-10-15 14:59 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

It's not just about whether you can stream requests or not.
If you have caches in the path to the server, you'd like to batch together (or
stream, or whatever you'd like to call that) requests, so that if a client is
reading a file and a single rpc suffices, the cache, in the worst case, knows
that it has to issue a single rpc to the server.

Somehow, you need to group requests to retain the idea that a bunch of
requests have some meaning as a whole.
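
Purely illustrative, and not πp's actual encoding: one way to keep
that grouping visible to an intermediate cache is an explicit
envelope, so the cache either satisfies the whole group locally or
forwards it to the server as a single rpc.

package main

import "fmt"

type Req struct{ kind, path string }

// Group is the unit a cache forwards or satisfies as a whole.
type Group struct {
	id   uint32
	reqs []Req
}

// forward serves the group from cache if every request hits;
// otherwise the whole group goes to the server as one rpc.
func forward(g Group, cached func(Req) bool) {
	for _, r := range g.reqs {
		if !cached(r) {
			fmt.Printf("group %d: miss on %s %s; one rpc to the server for the whole group\n",
				g.id, r.kind, r.path)
			return
		}
	}
	fmt.Printf("group %d: served entirely from the cache, zero rpcs\n", g.id)
}

func main() {
	g := Group{1, []Req{{"open", "/lib/ndb/local"}, {"read", "/lib/ndb/local"}}}
	forward(g, func(r Req) bool { return r.kind == "open" }) // the read misses
}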

2010/10/15 David Leimbach <leimy2k@gmail.com>:
>
>
> 2010/10/14 Latchesar Ionkov <lucho@ionkov.net>
>>
>> It can't be dealt with the current protocol. It doesn't guarantee that
>> Topen will be executed once Twalk is done. So can get Rerrors even if
>> Twalk succeeds.
>>
>
> It can be dealt with if the scheduling of the pipeline is done properly.
>  You just have to eliminate the dependencies.
> I can imagine having a few concurrent queues of "requests" in a client that
> contain items with dependencies, and running those queues in a pipelined way
> against a 9P server.
>
>>
>> 2010/10/13 Venkatesh Srinivas <me@acm.jhu.edu>:
>> >> 2) you can't pipeline requests if the result of one request depends on
>> >> the
>> >> result of a previous. for instance: walk to file, open it, read it,
>> >> close
>> >> it.
>> >> if the first operation fails, then subsequent operations will be
>> >> invalid.
>> >
>> > Given careful allocation of FIDs by a client, that can be dealt with -
>> > operations on an invalid FID just get RErrors.
>> >
>> > -- vs
>> >
>>
>
>




* Re: [9fans] πp
  2010-10-15 14:45             ` Russ Cox
@ 2010-10-15 15:41               ` Latchesar Ionkov
  0 siblings, 0 replies; 46+ messages in thread
From: Latchesar Ionkov @ 2010-10-15 15:41 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

I already implemented that behavior in go9p on the plane back from
Seattle. I need to test it a bit before I check it into the repository.
The good thing about it is that it doesn't break any existing clients
or servers.

How it is going to be exposed to the clients is another issue. I am
still not sure about the right solution.

Another nice thing about that approach is that it allows long-lived
"transactions" (can't think of a better word). The client can reserve
a tag, use it as long as it needs, and send as many messages as it
likes to the server to be processed serially. With the πp approach, you
need to know in advance the messages you want to group.
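
A minimal sketch of such a reserved-tag "transaction" in Go (types
invented for illustration, not the go9p API): the sequence under the
tag can grow as the client goes, which a pre-declared group cannot.

package main

import "fmt"

type Conn struct{ inUse map[uint16]bool }

// reserveTag picks the lowest tag not already outstanding.
func (c *Conn) reserveTag() uint16 {
	for t := uint16(0); ; t++ {
		if !c.inUse[t] {
			c.inUse[t] = true
			return t
		}
	}
}

func (c *Conn) send(tag uint16, kind string) {
	fmt.Printf("-> %s tag=%d\n", kind, tag)
}

func (c *Conn) releaseTag(tag uint16) { delete(c.inUse, tag) }

func main() {
	c := &Conn{inUse: map[uint16]bool{}}
	tag := c.reserveTag()
	c.send(tag, "Twalk")
	c.send(tag, "Topen")
	for i := 0; i < 3; i++ { // data arrives incrementally; no preplanned group
		c.send(tag, "Twrite")
	}
	c.send(tag, "Tclunk")
	c.releaseTag(tag)
}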

Thanks,
    Lucho

2010/10/15 Russ Cox <rsc@swtch.com>:
> A simpler way to add pipelining to 9P is to
> define that multiple outstanding messages
> with the same tag are allowed and that the
> server must process and respond to messages
> with a given tag in the order it receives them.
> This only requires changes to servers that
> are actually multithreaded.  All the tiny servers
> only do one message at a time anyway.
>
> Given that extension, you can just send
> Twalk Topen Twrite Tclunk back to back
> with a single tag and then process the four
> replies.
>
> This was proposed at the Spain IWP9.
>
> Russ
>
>




* Re: [9fans] πp
  2010-10-15 14:59           ` Francisco J Ballesteros
@ 2010-10-15 16:19             ` cinap_lenrek
  2010-10-15 16:28               ` Lyndon Nerenberg
                                 ` (4 more replies)
  0 siblings, 5 replies; 46+ messages in thread
From: cinap_lenrek @ 2010-10-15 16:19 UTC (permalink / raw)
  To: 9fans


i wonder if making 9p work better over high-latency connections is
even the right answer to the problem.  the real problem is that the
data your program wants to work on is miles away from you, and
transferring it all will suck.  would it not be cool to have a way to
teleport/migrate your process to a cpu server close to the data?

i know, this is a crazy blue-sky idea that has lots of problems of its
own...  but it popped up again when i read the "bring the computation
to the data" point from the osprey talk.

--
cinap

---------- Forwarded message ----------

From: Francisco J Ballesteros <nemo@lsub.org>
To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
Subject: Re: [9fans] πp
Date: Fri, 15 Oct 2010 16:59:02 +0200
Message-ID: <AANLkTin-8v4UZkxt1oYzS=mkkr79cRzS3T1EJyiKMCzN@mail.gmail.com>

It's not just that you can stream requests or not.
If you have caches in the path to the server, you'd like to batch together (or
stream or whatever you'd like to call that) requests so that if a client is
reading a file and a single rpc suffices, the cache, in the worst case, knows
that it has to issue a single rpc to the server.

Somehow, you need to group requests to retain the idea that a bunch of
requests have some meaning as a whole.

2010/10/15 David Leimbach <leimy2k@gmail.com>:
>
>
> 2010/10/14 Latchesar Ionkov <lucho@ionkov.net>
>>
>> It can't be dealt with the current protocol. It doesn't guarantee that
>> Topen will be executed once Twalk is done. So can get Rerrors even if
>> Twalk succeeds.
>>
>
> It can be dealt with if the scheduling of the pipeline is done properly.
>  You just have to eliminate the dependencies.
> I can imagine having a few concurrent queues of "requests" in a client that
> contain items with dependencies, and running those queues in a pipelined way
> against a 9P server.
>
>>
>> 2010/10/13 Venkatesh Srinivas <me@acm.jhu.edu>:
>> >> 2) you can't pipeline requests if the result of one request depends on
>> >> the
>> >> result of a previous. for instance: walk to file, open it, read it,
>> >> close
>> >> it.
>> >> if the first operation fails, then subsequent operations will be
>> >> invalid.
>> >
>> > Given careful allocation of FIDs by a client, that can be dealt with -
>> > operations on an invalid FID just get RErrors.
>> >
>> > -- vs
>> >
>>
>
>


* Re: [9fans] πp
  2010-10-15 16:19             ` cinap_lenrek
@ 2010-10-15 16:28               ` Lyndon Nerenberg
  2010-10-15 17:54                 ` Nemo
  2010-10-15 16:31               ` Latchesar Ionkov
                                 ` (3 subsequent siblings)
  4 siblings, 1 reply; 46+ messages in thread
From: Lyndon Nerenberg @ 2010-10-15 16:28 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

> i wonder if making 9p work better over high latency connections is
> even the right answer to the problem.  the real problem is that the
> data your program wants to work on in miles away from you and
> transfering it all will suck.  would it not be cool to have a way to
> teleport/migrate your process to a cpu server close to the data?

This works when you can colocate CPU with the data.  But think of sensor
networks where the data lives on very cpu- and power-challenged SOCs
behind, e.g., high-latency radio links.

--lyndon




* Re: [9fans] πp
  2010-10-15 16:19             ` cinap_lenrek
  2010-10-15 16:28               ` Lyndon Nerenberg
@ 2010-10-15 16:31               ` Latchesar Ionkov
  2010-10-15 16:40                 ` cinap_lenrek
  2010-10-15 19:10                 ` erik quanstrom
  2010-10-15 16:45               ` ron minnich
                                 ` (2 subsequent siblings)
  4 siblings, 2 replies; 46+ messages in thread
From: Latchesar Ionkov @ 2010-10-15 16:31 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

What if the data your process needs is located on more than one
server? Play ping-pong?

Thanks,
    Lucho

2010/10/15  <cinap_lenrek@gmx.de>:
> i wonder if making 9p work better over high latency connections is
> even the right answer to the problem.  the real problem is that the
> data your program wants to work on in miles away from you and
> transfering it all will suck.  would it not be cool to have a way to
> teleport/migrate your process to a cpu server close to the data?
>
> i know, this is a crazy blue sky idea that has lots of problems on its
> own...  but it poped up again when i read the "bring the computation
> to the data" point from the ospray talk.
>
> --
> cinap
>
>
> ---------- Forwarded message ----------
> From: Francisco J Ballesteros <nemo@lsub.org>
> To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
> Date: Fri, 15 Oct 2010 16:59:02 +0200
> Subject: Re: [9fans] πp
> It's not just that you can stream requests or not.
> If you have caches in the path to the server, you'd like to batch together (or
> stream or whatever you'd like to call that) requests so that if a client is
> reading a file and a single rpc suffices, the cache, in the worst case, knows
> that it has to issue a single rpc to the server.
>
> Somehow, you need to group requests to retain the idea that a bunch of
> requests have some meaning as a whole.
>
> 2010/10/15 David Leimbach <leimy2k@gmail.com>:
>>
>>
>> 2010/10/14 Latchesar Ionkov <lucho@ionkov.net>
>>>
>>> It can't be dealt with the current protocol. It doesn't guarantee that
>>> Topen will be executed once Twalk is done. So can get Rerrors even if
>>> Twalk succeeds.
>>>
>>
>> It can be dealt with if the scheduling of the pipeline is done properly.
>>  You just have to eliminate the dependencies.
>> I can imagine having a few concurrent queues of "requests" in a client that
>> contain items with dependencies, and running those queues in a pipelined way
>> against a 9P server.
>>
>>>
>>> 2010/10/13 Venkatesh Srinivas <me@acm.jhu.edu>:
>>> >> 2) you can't pipeline requests if the result of one request depends on
>>> >> the
>>> >> result of a previous. for instance: walk to file, open it, read it,
>>> >> close
>>> >> it.
>>> >> if the first operation fails, then subsequent operations will be
>>> >> invalid.
>>> >
>>> > Given careful allocation of FIDs by a client, that can be dealt with -
>>> > operations on an invalid FID just get RErrors.
>>> >
>>> > -- vs
>>> >
>>>
>>
>>
>
>




* Re: [9fans] πp
  2010-10-15 16:31               ` Latchesar Ionkov
@ 2010-10-15 16:40                 ` cinap_lenrek
  2010-10-15 16:43                   ` Latchesar Ionkov
  2010-10-15 19:10                 ` erik quanstrom
  1 sibling, 1 reply; 46+ messages in thread
From: cinap_lenrek @ 2010-10-15 16:40 UTC (permalink / raw)
  To: 9fans


fork!

--
cinap

---------- Forwarded message ----------

From: Latchesar Ionkov <lucho@ionkov.net>
To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
Subject: Re: [9fans] πp
Date: Fri, 15 Oct 2010 10:31:47 -0600
Message-ID: <AANLkTimO00vOzdEeX9fKAAd6QZ4Z-68ieJSLhaYSA0WE@mail.gmail.com>

What if the data your process needs is located on more than one
server? Play ping-pong?

Thanks,
    Lucho

2010/10/15  <cinap_lenrek@gmx.de>:
> i wonder if making 9p work better over high latency connections is
> even the right answer to the problem.  the real problem is that the
> data your program wants to work on in miles away from you and
> transfering it all will suck.  would it not be cool to have a way to
> teleport/migrate your process to a cpu server close to the data?
>
> i know, this is a crazy blue sky idea that has lots of problems on its
> own...  but it poped up again when i read the "bring the computation
> to the data" point from the ospray talk.
>
> --
> cinap
>
>
> ---------- Forwarded message ----------
> From: Francisco J Ballesteros <nemo@lsub.org>
> To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
> Date: Fri, 15 Oct 2010 16:59:02 +0200
> Subject: Re: [9fans] πp
> It's not just that you can stream requests or not.
> If you have caches in the path to the server, you'd like to batch together (or
> stream or whatever you'd like to call that) requests so that if a client is
> reading a file and a single rpc suffices, the cache, in the worst case, knows
> that it has to issue a single rpc to the server.
>
> Somehow, you need to group requests to retain the idea that a bunch of
> requests have some meaning as a whole.
>
> 2010/10/15 David Leimbach <leimy2k@gmail.com>:
>>
>>
>> 2010/10/14 Latchesar Ionkov <lucho@ionkov.net>
>>>
>>> It can't be dealt with the current protocol. It doesn't guarantee that
>>> Topen will be executed once Twalk is done. So can get Rerrors even if
>>> Twalk succeeds.
>>>
>>
>> It can be dealt with if the scheduling of the pipeline is done properly.
>>  You just have to eliminate the dependencies.
>> I can imagine having a few concurrent queues of "requests" in a client that
>> contain items with dependencies, and running those queues in a pipelined way
>> against a 9P server.
>>
>>>
>>> 2010/10/13 Venkatesh Srinivas <me@acm.jhu.edu>:
>>> >> 2) you can't pipeline requests if the result of one request depends on
>>> >> the
>>> >> result of a previous. for instance: walk to file, open it, read it,
>>> >> close
>>> >> it.
>>> >> if the first operation fails, then subsequent operations will be
>>> >> invalid.
>>> >
>>> > Given careful allocation of FIDs by a client, that can be dealt with -
>>> > operations on an invalid FID just get RErrors.
>>> >
>>> > -- vs
>>> >
>>>
>>
>>
>
>


* Re: [9fans] πp
  2010-10-15 16:40                 ` cinap_lenrek
@ 2010-10-15 16:43                   ` Latchesar Ionkov
  2010-10-15 17:11                     ` cinap_lenrek
  0 siblings, 1 reply; 46+ messages in thread
From: Latchesar Ionkov @ 2010-10-15 16:43 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

And how is fork going to help when the forked processes need to
exchange the data over the same high-latency link?

2010/10/15  <cinap_lenrek@gmx.de>:
> fork!
>
> --
> cinap
>
>
> ---------- Forwarded message ----------
> From: Latchesar Ionkov <lucho@ionkov.net>
> To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
> Date: Fri, 15 Oct 2010 10:31:47 -0600
> Subject: Re: [9fans] πp
> What if the data your process needs is located on more than one
> server? Play ping-pong?
>
> Thanks,
>    Lucho
>
> 2010/10/15  <cinap_lenrek@gmx.de>:
>> i wonder if making 9p work better over high latency connections is
>> even the right answer to the problem.  the real problem is that the
>> data your program wants to work on in miles away from you and
>> transfering it all will suck.  would it not be cool to have a way to
>> teleport/migrate your process to a cpu server close to the data?
>>
>> i know, this is a crazy blue sky idea that has lots of problems on its
>> own...  but it poped up again when i read the "bring the computation
>> to the data" point from the ospray talk.
>>
>> --
>> cinap
>>
>>
>> ---------- Forwarded message ----------
>> From: Francisco J Ballesteros <nemo@lsub.org>
>> To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
>> Date: Fri, 15 Oct 2010 16:59:02 +0200
>> Subject: Re: [9fans] πp
>> It's not just that you can stream requests or not.
>> If you have caches in the path to the server, you'd like to batch together (or
>> stream or whatever you'd like to call that) requests so that if a client is
>> reading a file and a single rpc suffices, the cache, in the worst case, knows
>> that it has to issue a single rpc to the server.
>>
>> Somehow, you need to group requests to retain the idea that a bunch of
>> requests have some meaning as a whole.
>>
>> 2010/10/15 David Leimbach <leimy2k@gmail.com>:
>>>
>>>
>>> 2010/10/14 Latchesar Ionkov <lucho@ionkov.net>
>>>>
>>>> It can't be dealt with the current protocol. It doesn't guarantee that
>>>> Topen will be executed once Twalk is done. So can get Rerrors even if
>>>> Twalk succeeds.
>>>>
>>>
>>> It can be dealt with if the scheduling of the pipeline is done properly.
>>>  You just have to eliminate the dependencies.
>>> I can imagine having a few concurrent queues of "requests" in a client that
>>> contain items with dependencies, and running those queues in a pipelined way
>>> against a 9P server.
>>>
>>>>
>>>> 2010/10/13 Venkatesh Srinivas <me@acm.jhu.edu>:
>>>> >> 2) you can't pipeline requests if the result of one request depends on
>>>> >> the
>>>> >> result of a previous. for instance: walk to file, open it, read it,
>>>> >> close
>>>> >> it.
>>>> >> if the first operation fails, then subsequent operations will be
>>>> >> invalid.
>>>> >
>>>> > Given careful allocation of FIDs by a client, that can be dealt with -
>>>> > operations on an invalid FID just get RErrors.
>>>> >
>>>> > -- vs
>>>> >
>>>>
>>>
>>>
>>
>>
>
>




* Re: [9fans] πp
  2010-10-15 16:19             ` cinap_lenrek
  2010-10-15 16:28               ` Lyndon Nerenberg
  2010-10-15 16:31               ` Latchesar Ionkov
@ 2010-10-15 16:45               ` ron minnich
  2010-10-15 17:26                 ` Eric Van Hensbergen
  2010-10-15 17:45                 ` Charles Forsyth
  2010-10-15 18:03               ` Bakul Shah
  2010-10-15 18:45               ` David Leimbach
  4 siblings, 2 replies; 46+ messages in thread
From: ron minnich @ 2010-10-15 16:45 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

2010/10/15  <cinap_lenrek@gmx.de>:
> i wonder if making 9p work better over high latency connections is
> even the right answer to the problem.

The reason I care is that the link from a CPU node to a file server on
blue gene is high latency. It might as well be cross-country, it's so
bad.

>  would it not be cool to have a way to
> teleport/migrate your process to a cpu server close to the data?

If only ... but Blue Gene is very heavy! It's too hard to carry around
like that.

> i know, this is a crazy blue sky idea that has lots of problems on its
> own...  but it poped up again when i read the "bring the computation
> to the data" point from the ospray talk.

it's a very attractive idea that has been tried extensively at
different times and places for several decades, and in almost all
cases it has not worked out.

Here's a streaming idea that worked out well. In 1994, Larry McVoy
while at SGI did something called "NFS bypass". Basically the code did
the standard NFS lookup/getattr cycle but when it came time to do the
read, what was returned was a TCP socket that in effect implemented
streamed data from server to client. Flow control became a TCP
problem, and SGI TCP did a good job with it. At that point the (fast
for the time) 800 mbit/second HIPPI channel ran at full rate, as
opposed to the much slower NFS file-oriented read. Simple
stream-oriented communications replaced all the games people play to
do prefetch, which in many cases is just making a packet-oriented
interface look like a stream anyway.
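
Loosely modeled on that bypass idea, with everything invented for
illustration (none of this is SGI's code): the control rpcs stay
request/reply, but the bulk read is handed off to a plain TCP stream,
so flow control becomes TCP's problem.

package main

import (
	"fmt"
	"io"
	"net"
	"os"
)

// serveFile accepts one connection and streams the file down it.
func serveFile(path string, ln net.Listener) {
	c, err := ln.Accept()
	if err != nil {
		return
	}
	defer c.Close()
	f, err := os.Open(path)
	if err != nil {
		return
	}
	defer f.Close()
	io.Copy(c, f) // one continuous stream; no per-read rpcs
}

func main() {
	// error handling elided throughout; this is a sketch
	ln, _ := net.Listen("tcp", "127.0.0.1:0")
	go serveFile("/etc/hosts", ln)

	// pretend the lookup/getattr rpcs already happened and the server
	// handed back this address for the data stream
	c, _ := net.Dial("tcp", ln.Addr().String())
	n, _ := io.Copy(io.Discard, c)
	fmt.Println("streamed", n, "bytes")
}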

Look at it this way. Start rio under ratrace. Watch all the file IO.
You'd be hard pressed to find the ones that don't look like streams.

I also don't buy the "oh but this fails for a 2 GB file" argument.
Find me the 2 GB file you're using and then let's talk about not
streaming files that are too big. But if you are using a 2 GB *output*
file then you're almost always better off streaming that file out --
also verified on the LLNL Blue Gene where the current checkpointing of
32 TB of memory looks like a minor explosion, because the file system
protocol is such a bad fit to what is needed -- a stream from memory
to the server.

Streaming has been shown to work for HPC systems because most of our
applications stream data in or out. The transition from a stream to
the packet-oriented file IO protocol has never been comfortable. That
doesn't mean one only does streams, just that one should not make them
impossible.

ron




* Re: [9fans] πp
  2010-10-15 16:43                   ` Latchesar Ionkov
@ 2010-10-15 17:11                     ` cinap_lenrek
  2010-10-15 17:15                       ` Latchesar Ionkov
  0 siblings, 1 reply; 46+ messages in thread
From: cinap_lenrek @ 2010-10-15 17:11 UTC (permalink / raw)
  To: 9fans


if it doesn't help, you applied the mechanism to the wrong problem :) or
the mechanism is not as useful as i thought...  thanks ron for your
comment!  i was just hoping to get some responses from the osprey
dudes as they had it on their slides :)

--
cinap

---------- Forwarded message ----------

From: Latchesar Ionkov <lucho@ionkov.net>
To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
Subject: Re: [9fans] πp
Date: Fri, 15 Oct 2010 10:43:34 -0600
Message-ID: <AANLkTi=71xM55yXEqbziLrpPpMDSFpbfNCwPwAHSaUP-@mail.gmail.com>

And how is fork going to help when the forked processes need to
exchange the data over the same high-latency link?

2010/10/15  <cinap_lenrek@gmx.de>:
> fork!
>
> --
> cinap
>
>
> ---------- Forwarded message ----------
> From: Latchesar Ionkov <lucho@ionkov.net>
> To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
> Date: Fri, 15 Oct 2010 10:31:47 -0600
> Subject: Re: [9fans] πp
> What if the data your process needs is located on more than one
> server? Play ping-pong?
>
> Thanks,
>    Lucho
>
> 2010/10/15  <cinap_lenrek@gmx.de>:
>> i wonder if making 9p work better over high latency connections is
>> even the right answer to the problem.  the real problem is that the
>> data your program wants to work on in miles away from you and
>> transfering it all will suck.  would it not be cool to have a way to
>> teleport/migrate your process to a cpu server close to the data?
>>
>> i know, this is a crazy blue sky idea that has lots of problems on its
>> own...  but it poped up again when i read the "bring the computation
>> to the data" point from the ospray talk.
>>
>> --
>> cinap
>>
>>
>> ---------- Forwarded message ----------
>> From: Francisco J Ballesteros <nemo@lsub.org>
>> To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
>> Date: Fri, 15 Oct 2010 16:59:02 +0200
>> Subject: Re: [9fans] πp
>> It's not just that you can stream requests or not.
>> If you have caches in the path to the server, you'd like to batch together (or
>> stream or whatever you'd like to call that) requests so that if a client is
>> reading a file and a single rpc suffices, the cache, in the worst case, knows
>> that it has to issue a single rpc to the server.
>>
>> Somehow, you need to group requests to retain the idea that a bunch of
>> requests have some meaning as a whole.
>>
>> 2010/10/15 David Leimbach <leimy2k@gmail.com>:
>>>
>>>
>>> 2010/10/14 Latchesar Ionkov <lucho@ionkov.net>
>>>>
>>>> It can't be dealt with the current protocol. It doesn't guarantee that
>>>> Topen will be executed once Twalk is done. So can get Rerrors even if
>>>> Twalk succeeds.
>>>>
>>>
>>> It can be dealt with if the scheduling of the pipeline is done properly.
>>>  You just have to eliminate the dependencies.
>>> I can imagine having a few concurrent queues of "requests" in a client that
>>> contain items with dependencies, and running those queues in a pipelined way
>>> against a 9P server.
>>>
>>>>
>>>> 2010/10/13 Venkatesh Srinivas <me@acm.jhu.edu>:
>>>> >> 2) you can't pipeline requests if the result of one request depends on
>>>> >> the
>>>> >> result of a previous. for instance: walk to file, open it, read it,
>>>> >> close
>>>> >> it.
>>>> >> if the first operation fails, then subsequent operations will be
>>>> >> invalid.
>>>> >
>>>> > Given careful allocation of FIDs by a client, that can be dealt with -
>>>> > operations on an invalid FID just get RErrors.
>>>> >
>>>> > -- vs
>>>> >
>>>>
>>>
>>>
>>
>>
>
>


* Re: [9fans] πp
  2010-10-15 17:11                     ` cinap_lenrek
@ 2010-10-15 17:15                       ` Latchesar Ionkov
  0 siblings, 0 replies; 46+ messages in thread
From: Latchesar Ionkov @ 2010-10-15 17:15 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

If the mechanism cannot be applied to every (even wrong) problem, then
it still doesn't solve the issue we started with: file I/O over
high-latency links.

2010/10/15  <cinap_lenrek@gmx.de>:
> if it doesnt help, you apply the mechanism to the wrong problem :) or
> the mechanism is not that usefull as i thought...  thanks ron for your
> comment!  i was just hoping to get some responses from the osprey
> dudes as they had it on ther slides :)
>
> --
> cinap
>
>
> ---------- Forwarded message ----------
> From: Latchesar Ionkov <lucho@ionkov.net>
> To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
> Date: Fri, 15 Oct 2010 10:43:34 -0600
> Subject: Re: [9fans] πp
> And how is fork going to help when the forked processes need to
> exchange the data over the same high-latency link?
>
> 2010/10/15  <cinap_lenrek@gmx.de>:
>> fork!
>>
>> --
>> cinap
>>
>>
>> ---------- Forwarded message ----------
>> From: Latchesar Ionkov <lucho@ionkov.net>
>> To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
>> Date: Fri, 15 Oct 2010 10:31:47 -0600
>> Subject: Re: [9fans] πp
>> What if the data your process needs is located on more than one
>> server? Play ping-pong?
>>
>> Thanks,
>>    Lucho
>>
>> 2010/10/15  <cinap_lenrek@gmx.de>:
>>> i wonder if making 9p work better over high latency connections is
>>> even the right answer to the problem.  the real problem is that the
>>> data your program wants to work on in miles away from you and
>>> transfering it all will suck.  would it not be cool to have a way to
>>> teleport/migrate your process to a cpu server close to the data?
>>>
>>> i know, this is a crazy blue sky idea that has lots of problems on its
>>> own...  but it poped up again when i read the "bring the computation
>>> to the data" point from the ospray talk.
>>>
>>> --
>>> cinap
>>>
>>>
>>> ---------- Forwarded message ----------
>>> From: Francisco J Ballesteros <nemo@lsub.org>
>>> To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
>>> Date: Fri, 15 Oct 2010 16:59:02 +0200
>>> Subject: Re: [9fans] πp
>>> It's not just that you can stream requests or not.
>>> If you have caches in the path to the server, you'd like to batch together (or
>>> stream or whatever you'd like to call that) requests so that if a client is
>>> reading a file and a single rpc suffices, the cache, in the worst case, knows
>>> that it has to issue a single rpc to the server.
>>>
>>> Somehow, you need to group requests to retain the idea that a bunch of
>>> requests have some meaning as a whole.
>>>
>>> 2010/10/15 David Leimbach <leimy2k@gmail.com>:
>>>>
>>>>
>>>> 2010/10/14 Latchesar Ionkov <lucho@ionkov.net>
>>>>>
>>>>> It can't be dealt with the current protocol. It doesn't guarantee that
>>>>> Topen will be executed once Twalk is done. So can get Rerrors even if
>>>>> Twalk succeeds.
>>>>>
>>>>
>>>> It can be dealt with if the scheduling of the pipeline is done properly.
>>>>  You just have to eliminate the dependencies.
>>>> I can imagine having a few concurrent queues of "requests" in a client that
>>>> contain items with dependencies, and running those queues in a pipelined way
>>>> against a 9P server.
>>>>
>>>>>
>>>>> 2010/10/13 Venkatesh Srinivas <me@acm.jhu.edu>:
>>>>> >> 2) you can't pipeline requests if the result of one request depends on
>>>>> >> the
>>>>> >> result of a previous. for instance: walk to file, open it, read it,
>>>>> >> close
>>>>> >> it.
>>>>> >> if the first operation fails, then subsequent operations will be
>>>>> >> invalid.
>>>>> >
>>>>> > Given careful allocation of FIDs by a client, that can be dealt with -
>>>>> > operations on an invalid FID just get RErrors.
>>>>> >
>>>>> > -- vs
>>>>> >
>>>>>
>>>>
>>>>
>>>
>>>
>>
>>
>
>




* Re: [9fans] πp
  2010-10-15 16:45               ` ron minnich
@ 2010-10-15 17:26                 ` Eric Van Hensbergen
  2010-10-15 17:45                 ` Charles Forsyth
  1 sibling, 0 replies; 46+ messages in thread
From: Eric Van Hensbergen @ 2010-10-15 17:26 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

2010/10/15 ron minnich <rminnich@gmail.com>:
> 2010/10/15  <cinap_lenrek@gmx.de>:
>> i wonder if making 9p work better over high latency connections is
>> even the right answer to the problem.
>
> The reason I care is that the link from a CPU node to a file server on
> blue gene is high latency. It might as well be cross-country, it's so
> bad.
>

a mere 6 GB/s (at the bottleneck) for folks who don't understand what
Ron's idea of bad is :)  In comparison to the torus in both latency
and bandwidth (whose aggregate bandwidth across all links could be
almost an order of magnitude higher), the path to persistent
storage is quite bad.

As a point of clarification, it's not actually the link that is
currently the main problem, it's the software infrastructure we
currently have deployed (since we are opting not to use the production
Blue Gene file servers/service).  Although with our desktop extension
model, which uses your laptop (or whatever) as part of the file server
namespace, latency can be quite long (in fact in many cases it is
across the country, and sometimes in a different one).

     -eric




* Re: [9fans] πp
  2010-10-15 17:45                 ` Charles Forsyth
@ 2010-10-15 17:45                   ` ron minnich
  0 siblings, 0 replies; 46+ messages in thread
From: ron minnich @ 2010-10-15 17:45 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

2010/10/15 Charles Forsyth <forsyth@terzarima.net>:
>>The transition from a stream to the packet-oriented file IO protocol has never been comfortable.
>
> `RPC-oriented' might be more accurate than `packet-oriented', given the
> way that streams are implemented.

Correct.

ron




* Re: [9fans] πp
  2010-10-15 16:45               ` ron minnich
  2010-10-15 17:26                 ` Eric Van Hensbergen
@ 2010-10-15 17:45                 ` Charles Forsyth
  2010-10-15 17:45                   ` ron minnich
  1 sibling, 1 reply; 46+ messages in thread
From: Charles Forsyth @ 2010-10-15 17:45 UTC (permalink / raw)
  To: 9fans

>The transition from a stream to the packet-oriented file IO protocol has never been comfortable.

`RPC-oriented' might be more accurate than `packet-oriented', given the
way that streams are implemented.




* Re: [9fans] πp
  2010-10-15 17:54                 ` Nemo
@ 2010-10-15 17:49                   ` ron minnich
  2010-10-15 18:56                     ` erik quanstrom
  0 siblings, 1 reply; 46+ messages in thread
From: ron minnich @ 2010-10-15 17:49 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

2010/10/15 Nemo <nemo.mbox@gmail.com>:

> So, when I hear migration, I just tend to see what happens after it has
> been implemented and faces the spaghethy phase.

And even if you get that right, it may not work well on hardware. We
saw cases with linux migration, while migrating from one x86 to
another, where valid FP values would cause the target to get an FP
trap. Made no sense, but it's what happened, because the two x86's
were different *implementations* of the same architecture. So,
migration works well for all the really easy cases -- the CPU you
migrate to was fabbed by the same vendor in the same place with the
same mask and microcode as the one you migrate from -- and can fail
on anything tricky. That's why I don't like it either :-)

ron




* Re: [9fans] πp
  2010-10-15 16:28               ` Lyndon Nerenberg
@ 2010-10-15 17:54                 ` Nemo
  2010-10-15 17:49                   ` ron minnich
  0 siblings, 1 reply; 46+ messages in thread
From: Nemo @ 2010-10-15 17:54 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

back to the 80s, or was it 70s?

many people have implemented migration.
the first i remember is swapping (yes, not paging) out a process, then swapping
it back into a different machine. iirc, it might have been Sprite unix, not quite sure.

the point is, the migrated process still needs all the connections it had with the
outer world in the old location, perhaps some of them to other processes still
at the old location. they got it working, but they were caught in the resulting
spaghetti.

So, when I hear migration, I just tend to see what happens after it has
been implemented and faces the spaghetti phase.

Of course it might be just that this word scares the hell out of me.


On Oct 15, 2010, at 6:28 PM, Lyndon Nerenberg <lyndon@orthanc.ca> wrote:

>> i wonder if making 9p work better over high latency connections is
>> even the right answer to the problem.  the real problem is that the
>> data your program wants to work on in miles away from you and
>> transfering it all will suck.  would it not be cool to have a way to
>> teleport/migrate your process to a cpu server close to the data?
> 
> This works when you can colocate CPU with the data.  But think of sensor networks where the data lives on very cpu- and power-challenged SOCs behind, e.g., high-latency radio links.
> 
> --lyndon
> 




* Re: [9fans] πp
  2010-10-15 16:19             ` cinap_lenrek
                                 ` (2 preceding siblings ...)
  2010-10-15 16:45               ` ron minnich
@ 2010-10-15 18:03               ` Bakul Shah
  2010-10-15 18:45               ` David Leimbach
  4 siblings, 0 replies; 46+ messages in thread
From: Bakul Shah @ 2010-10-15 18:03 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

On Fri, 15 Oct 2010 18:19:50 +0200 cinap_lenrek@gmx.de  wrote:
>
> i wonder if making 9p work better over high latency connections is
> even the right answer to the problem.  the real problem is that the
> data your program wants to work on in miles away from you and
> transfering it all will suck.  would it not be cool to have a way to
> teleport/migrate your process to a cpu server close to the data?

Pipelining solves the problem of inefficient use of long fat
pipes.  Migration just moves the problem, doesn't solve it.

In fact, if you don't use pipelining, even your migration
will be extremely slow :-)




* Re: [9fans] πp
  2010-10-15 16:19             ` cinap_lenrek
                                 ` (3 preceding siblings ...)
  2010-10-15 18:03               ` Bakul Shah
@ 2010-10-15 18:45               ` David Leimbach
  4 siblings, 0 replies; 46+ messages in thread
From: David Leimbach @ 2010-10-15 18:45 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs


2010/10/15 <cinap_lenrek@gmx.de>

> i wonder if making 9p work better over high latency connections is
> even the right answer to the problem.  the real problem is that the
> data your program wants to work on in miles away from you and
> transfering it all will suck.  would it not be cool to have a way to
> teleport/migrate your process to a cpu server close to the data?
>
> i know, this is a crazy blue sky idea that has lots of problems on its
> own...  but it poped up again when i read the "bring the computation
> to the data" point from the ospray talk.
>
>
I thought migrating processes was part of Osprey's story :-).


> --
> cinap
>
>
> ---------- Forwarded message ----------
> From: Francisco J Ballesteros <nemo@lsub.org>
> To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
> Date: Fri, 15 Oct 2010 16:59:02 +0200
> Subject: Re: [9fans] πp
> It's not just that you can stream requests or not.
> If you have caches in the path to the server, you'd like to batch together
> (or
> stream or whatever you'd like to call that) requests so that if a client is
> reading a file and a single rpc suffices, the cache, in the worst case,
> knows
> that it has to issue a single rpc to the server.
>
> Somehow, you need to group requests to retain the idea that a bunch of
> requests have some meaning as a whole.
>
> 2010/10/15 David Leimbach <leimy2k@gmail.com>:
> >
> >
> > 2010/10/14 Latchesar Ionkov <lucho@ionkov.net>
> >>
> >> It can't be dealt with the current protocol. It doesn't guarantee that
> >> Topen will be executed once Twalk is done. So can get Rerrors even if
> >> Twalk succeeds.
> >>
> >
> > It can be dealt with if the scheduling of the pipeline is done properly.
> >  You just have to eliminate the dependencies.
> > I can imagine having a few concurrent queues of "requests" in a client
> that
> > contain items with dependencies, and running those queues in a pipelined
> way
> > against a 9P server.
> >
> >>
> >> 2010/10/13 Venkatesh Srinivas <me@acm.jhu.edu>:
> >> >> 2) you can't pipeline requests if the result of one request
> >> >> depends on the result of a previous. for instance: walk to file,
> >> >> open it, read it, close it. if the first operation fails, then
> >> >> subsequent operations will be invalid.
> >> >
> >> > Given careful allocation of FIDs by a client, that can be dealt with -
> >> > operations on an invalid FID just get RErrors.
> >> >
> >> > -- vs
> >> >
> >>
> >
> >
>
>
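
A minimal sketch of the grouping nemo describes above: send a whole
walk/open/read/close sequence as one unit and stop at the first
failure, so a failed walk costs one round trip instead of four.  This
models the idea in memory only; the op layout and all names are
assumptions, not the actual πp wire format.

#include <stdio.h>
#include <string.h>

/* one step in a batched request; a real protocol would encode
   these on the wire, this is only an in-memory model */
enum optype { WALK, OPEN, READ, CLOSE };

struct op {
    enum optype type;
    const char *path;    /* used by WALK only */
};

/* stand-in for the remote server: fail the walk if the
   file doesn't exist, let everything else succeed */
static int
serve(struct op *o)
{
    if (o->type == WALK && strcmp(o->path, "/exists") != 0)
        return -1;
    return 0;
}

/* run the group as one exchange: later ops are skipped as
   soon as one fails */
static int
runbatch(struct op *ops, int n)
{
    int i;

    for (i = 0; i < n; i++)
        if (serve(&ops[i]) < 0) {
            printf("batch failed at op %d\n", i);
            return -1;
        }
    printf("batch of %d ops ok\n", n);
    return 0;
}

int
main(void)
{
    struct op good[] = { {WALK, "/exists"}, {OPEN, 0}, {READ, 0}, {CLOSE, 0} };
    struct op bad[]  = { {WALK, "/missing"}, {OPEN, 0}, {READ, 0}, {CLOSE, 0} };

    runbatch(good, 4);
    runbatch(bad, 4);
    return 0;
}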

[-- Attachment #2: Type: text/html, Size: 3572 bytes --]

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [9fans] πp
  2010-10-15 17:49                   ` ron minnich
@ 2010-10-15 18:56                     ` erik quanstrom
  0 siblings, 0 replies; 46+ messages in thread
From: erik quanstrom @ 2010-10-15 18:56 UTC (permalink / raw)
  To: 9fans

> And even if you get that right, it may not work well on hardware. We
> saw cases with linux migration, while migrating from one x86 to
> another, where valid FP values would cause the target to get an FP
> trap. Made no sense, but it's what happened, because the two x86's
> were different *implementations* of the same architecture. So,
> migration works well for all the really easy cases -- CPU you migrate
> to was fabbed by the same vendor in the same place with the same mask
> and microcode as the one you migrate from -- and can fail on anything
> tricky. That's why I don't like it either :-)

the existence of broken hardware doesn't say anything about the idea.
we don't give up on page tables because some mmus don't work.

- erik



^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [9fans] πp
  2010-10-15 16:31               ` Latchesar Ionkov
  2010-10-15 16:40                 ` cinap_lenrek
@ 2010-10-15 19:10                 ` erik quanstrom
  2010-10-15 19:16                   ` Latchesar Ionkov
  1 sibling, 1 reply; 46+ messages in thread
From: erik quanstrom @ 2010-10-15 19:10 UTC (permalink / raw)
  To: 9fans

On Fri Oct 15 12:33:19 EDT 2010, lucho@ionkov.net wrote:
> What if the data your process needs is located on more than one
> server? Play ping-pong?

one either plays ping pong with the process or data.  one
could imagine cases where the former case makes sense.

- erik



^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [9fans] πp
  2010-10-15 19:10                 ` erik quanstrom
@ 2010-10-15 19:16                   ` Latchesar Ionkov
  2010-10-15 21:43                     ` Julius Schmidt
  0 siblings, 1 reply; 46+ messages in thread
From: Latchesar Ionkov @ 2010-10-15 19:16 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

There are definitely cases when moving the code instead of the data
makes sense. But that discussion is mostly unrelated to the one on how
to make the file I/O work better over high-latency links.

2010/10/15 erik quanstrom <quanstro@quanstro.net>:
> On Fri Oct 15 12:33:19 EDT 2010, lucho@ionkov.net wrote:
>> What if the data your process needs is located on more than one
>> server? Play ping-pong?
>
> one either plays ping pong with the process or data.  one
> could imagine cases where the former case makes sense.
>
> - erik
>
>



^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [9fans] πp
  2010-10-15 14:07           ` erik quanstrom
  2010-10-15 14:45             ` Russ Cox
@ 2010-10-15 21:13             ` dorin bumbu
  2010-10-15 22:13               ` erik quanstrom
  1 sibling, 1 reply; 46+ messages in thread
From: dorin bumbu @ 2010-10-15 21:13 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

2010/10/15 erik quanstrom <quanstro@quanstro.net>:
>> isn't the tag field intended for this?
>
> [...]
>
>> so this means to me that a client can send some T-messages and then
>> (or concurrently) wait for the R-messages.
>> from mount(1) and styxmon(8) in inferno i deduced that this case is
>> also considered.
>> it's true that most of the servers i've seen until now don't take
>> advantage of this feature; they respond to each T-message before
>> processing the next message.
>
> there is no defined limit to the number of
> outstanding tags allowed.  but keep in mind the
> server may process them in any order, so you
> can't have dependencies between outstanding
> requests.  servers can only respond to the requests
> issued by the client.  in high latency cases (eg the
> network latency is greater than the time it takes to service
> the request) the concurrency of the server might not
> be very important.
>
> - erik
>
>

so the real problem is not 9p itself but the transport protocol: TCP/IP.
i think that a solution might be to run 9p over an information
dispersal based protocol, to eliminate roundtrips while guaranteeing
reliability.
in this case, if the server is concurrent, the effect of latency will
be almost a constant delay in the response.
also, an isochronous mode will not be necessary anymore, because the
protocol itself would be "isochronous".

Dorin

P.S. if no such protocol has been implemented yet, i think this would
be the way to implement 9p over UDP. i saw that inferno implements an
ida algorithm in ida(2); maybe this can be taken as a reference.
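
as a toy illustration of the dispersal idea (not inferno's ida(2),
which uses a real erasure code with arbitrary k of n): a 2-of-3 xor
scheme where any two of the three fragments rebuild the message.  all
names and sizes below are made up for this sketch.

#include <stdio.h>

enum { N = 8 };    /* fragment size, assumed */

/* split 2*N bytes into fragments a, b and parity p = a^b */
static void
disperse(const unsigned char *msg, unsigned char *a,
    unsigned char *b, unsigned char *p)
{
    int i;

    for (i = 0; i < N; i++) {
        a[i] = msg[i];
        b[i] = msg[N + i];
        p[i] = a[i] ^ b[i];
    }
}

/* rebuild from any two fragments (pass NULL for the lost one);
   a missing fragment is the xor of the surviving two */
static void
reconstruct(unsigned char *msg, const unsigned char *a,
    const unsigned char *b, const unsigned char *p)
{
    int i;

    for (i = 0; i < N; i++) {
        msg[i]     = a ? a[i] : (unsigned char)(b[i] ^ p[i]);
        msg[N + i] = b ? b[i] : (unsigned char)(a[i] ^ p[i]);
    }
}

int
main(void)
{
    unsigned char msg[2*N] = "fifteen bytes!.";
    unsigned char out[2*N], a[N], b[N], p[N];

    disperse(msg, a, b, p);
    reconstruct(out, a, NULL, p);    /* fragment b was lost */
    printf("%.*s\n", 2*N, (const char *)out);
    return 0;
}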



^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [9fans] πp
  2010-10-15 19:16                   ` Latchesar Ionkov
@ 2010-10-15 21:43                     ` Julius Schmidt
  2010-10-15 22:02                       ` David Leimbach
  2010-10-15 22:05                       ` John Floren
  0 siblings, 2 replies; 46+ messages in thread
From: Julius Schmidt @ 2010-10-15 21:43 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

[-- Attachment #1: Type: TEXT/PLAIN, Size: 945 bytes --]

Perhaps I'm getting this all wrong, but to me this seems like an
interesting idea, especially if you consider the impact of being "near
the files" on some classically considered computationally stressy tasks
like compiling (esp. with kencc). So moving the code near the data
definitely seems worth trying.

aiju


On Fri, 15 Oct 2010, Latchesar Ionkov wrote:

> There are definitely cases when moving the code instead of the data
> makes sense. But that discussion is mostly unrelated to the one on how
> to make the file I/O work better over high-latency links.
>
> 2010/10/15 erik quanstrom <quanstro@quanstro.net>:
>> On Fri Oct 15 12:33:19 EDT 2010, lucho@ionkov.net wrote:
>>> What if the data your process needs is located on more than one
>>> server? Play ping-pong?
>>
>> one either plays ping pong with the process or data.  one
>> could imagine cases where the former case makes sense.
>>
>> - erik
>>
>>
>
>

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [9fans] πp
  2010-10-15 21:43                     ` Julius Schmidt
@ 2010-10-15 22:02                       ` David Leimbach
  2010-10-15 22:05                       ` John Floren
  1 sibling, 0 replies; 46+ messages in thread
From: David Leimbach @ 2010-10-15 22:02 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

[-- Attachment #1: Type: text/plain, Size: 1225 bytes --]

CPUs have big caches to move the code closer to the data (well a copy of the
data anyway).

Closeness in general is good, the question is what to move and how :-)

Dave

2010/10/15 Julius Schmidt <aiju@phicode.de>

> Perhaps I'm getting this all wrong, but to me this seems like an
> interesting idea, especially if you consider the impact of being "near
> the files" on some classically considered computationally stressy tasks
> like compiling (esp. with kencc). So moving the code near the data
> definitely seems worth trying.
>
> aiju
>
>
>
> On Fri, 15 Oct 2010, Latchesar Ionkov wrote:
>
>  There are definitely cases when moving the code instead of the data
>> makes sense. But that discussion is mostly unrelated to the one on how
>> to make the file I/O work better over high-latency links.
>>
>> 2010/10/15 erik quanstrom <quanstro@quanstro.net>:
>>
>>> On Fri Oct 15 12:33:19 EDT 2010, lucho@ionkov.net wrote:
>>>
>>>> What if the data your process needs is located on more than one
>>>> server? Play ping-pong?
>>>>
>>>
>>> one either plays ping pong with the process or data.  one
>>> could imagine cases where the former case makes sense.
>>>
>>> - erik
>>>
>>>
>>>
>>

[-- Attachment #2: Type: text/html, Size: 2053 bytes --]

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [9fans] πp
  2010-10-15 21:43                     ` Julius Schmidt
  2010-10-15 22:02                       ` David Leimbach
@ 2010-10-15 22:05                       ` John Floren
  1 sibling, 0 replies; 46+ messages in thread
From: John Floren @ 2010-10-15 22:05 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

It is probably worth trying. However, it wouldn't make copying a file
from sources any faster, or help a Blue Gene node do a snapshot any
quicker.


John

2010/10/15 Julius Schmidt <aiju@phicode.de>:
> Perhaps I'm getting this all wrong, but to me this seems like an
> interesting idea, especially if you consider the impact of being "near
> the files" on some classically considered computationally stressy tasks
> like compiling (esp. with kencc). So moving the code near the data
> definitely seems worth trying.
>
> aiju
>
>
> On Fri, 15 Oct 2010, Latchesar Ionkov wrote:
>
>> There are definitely cases when moving the code instead of the data
>> makes sense. But that discussion is mostly unrelated to the one on how
>> to make the file I/O work better over high-latency links.
>>
>> 2010/10/15 erik quanstrom <quanstro@quanstro.net>:
>>>
>>> On Fri Oct 15 12:33:19 EDT 2010, lucho@ionkov.net wrote:
>>>>
>>>> What if the data your process needs is located on more than one
>>>> server? Play ping-pong?
>>>
>>> one either plays ping pong with the process or data.  one
>>> could imagine cases where the former case makes sense.
>>>
>>> - erik
>>>
>>>
>>
>



^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [9fans] πp
  2010-10-15 21:13             ` dorin bumbu
@ 2010-10-15 22:13               ` erik quanstrom
  2010-10-15 23:30                 ` dorin bumbu
  0 siblings, 1 reply; 46+ messages in thread
From: erik quanstrom @ 2010-10-15 22:13 UTC (permalink / raw)
  To: 9fans

On Fri Oct 15 17:14:18 EDT 2010, bumbudorin@gmail.com wrote:
> 2010/10/15 erik quanstrom <quanstro@quanstro.net>:
> >> isn't the tag field intended for this?
> >
> > [...]
> >
> >> so this means to me that a client can send some T-messages and then
> >> (or concurrently) wait for the R-messages.
> >> from mount(1) and styxmon(8) in inferno i deduced that this case is
> >> also considered.
> >> it's true that most of the servers i've seen until now don't take
> >> advantage of this feature; they respond to each T-message before
> >> processing the next message.
> >
> > there is no defined limit to the number of
> > outstanding tags allowed. [...]
>
> so the real problem is not 9p itself but the transport protocol: TCP/IP.

the transport has nothing to do with it;

> i think that a solution might be to run 9p over an information
> dispersal based protocol, to eliminate roundtrips while guaranteeing
> reliability.

the definition of 9p relies on having round trips.

- erik



^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [9fans] πp
  2010-10-15 22:13               ` erik quanstrom
@ 2010-10-15 23:30                 ` dorin bumbu
  2010-10-15 23:45                   ` cinap_lenrek
  0 siblings, 1 reply; 46+ messages in thread
From: dorin bumbu @ 2010-10-15 23:30 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

2010/10/16 erik quanstrom <quanstro@quanstro.net>:
> On Fri Oct 15 17:14:18 EDT 2010, bumbudorin@gmail.com wrote:
>> 2010/10/15 erik quanstrom <quanstro@quanstro.net>:
>> >> isn't the tag field intended for this?
>> >
>> > [...]
>> >
>> >> so this means to me that a client can send some T-messages and then
>> >> (or concurrently) wait for the R-messages.
>> >> from mount(1) and styxmon(8) in inferno i deduced that this case is
>> >> also considered.
>> >> it's true that most of the servers i've seen until now don't take
>> >> advantage of this feature; they respond to each T-message before
>> >> processing the next message.
>> >
>> > there is no defined limit to the number of
>> > outstanding tags allowed. [...]
>>
>> so the real problem is not 9p itself but the transport protocol: TCP/IP.
>
> the transport has nothing to do with it;


consider the following (imaginary) example:

let's say that i want to watch a live videocast: i am in kiev and
the video is emitted from seattle as a 9p service.
i'll mount that service in my local namespace, and my imaginary video
player, instead of processing 9p messages in the traditional way:

  [...] TRead -> RRead -> TRead -> RRead -> TRead -> [...]

will have one thread doing:

  [...] TRead -> TRead -> TRead -> TRead -> TRead -> TRead -> [...]

and another one doing:

  [...] <-- t0 --> RRead -> RRead -> RRead -> RRead -> RRead -> RRead -> [...]


if the transport protocol has nothing to do with it, please explain to
me what other drawbacks would be involved, beyond the initial t0 delay,
if latency were constant?
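
a minimal sketch of that two-thread pattern (posix threads and a local
pipe stand in for the network and the server so it runs on its own;
all names are made up, this is not a real 9p client):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

enum { WINDOW = 8, NREQ = 32, MSIZE = 8192 };

struct msg { int tag; long offset; };

static int rq[2], rs[2];    /* request and response pipes */
static int inflight;
static pthread_mutex_t mu = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cv = PTHREAD_COND_INITIALIZER;

/* stand-in for the remote file server: turn each TRead into an RRead */
static void *
server(void *arg)
{
    struct msg m;

    (void)arg;
    while (read(rq[0], &m, sizeof m) == sizeof m)
        write(rs[1], &m, sizeof m);
    return NULL;
}

/* sender thread: issue TReads back to back, keeping at most
   WINDOW outstanding, instead of waiting for each RRead */
static void *
sender(void *arg)
{
    int i;

    (void)arg;
    for (i = 0; i < NREQ; i++) {
        struct msg m = { i, (long)i * MSIZE };

        pthread_mutex_lock(&mu);
        while (inflight >= WINDOW)
            pthread_cond_wait(&cv, &mu);
        inflight++;
        pthread_mutex_unlock(&mu);
        write(rq[1], &m, sizeof m);
    }
    close(rq[1]);
    return NULL;
}

int
main(void)
{
    pthread_t s, t;
    struct msg m;
    int i;

    pipe(rq);
    pipe(rs);
    pthread_create(&s, NULL, server, NULL);
    pthread_create(&t, NULL, sender, NULL);

    /* receiver: drain RReads as they arrive; after the initial
       t0 the responses stream back without per-request stalls */
    for (i = 0; i < NREQ; i++) {
        read(rs[0], &m, sizeof m);
        printf("RRead tag %d offset %ld\n", m.tag, m.offset);
        pthread_mutex_lock(&mu);
        inflight--;
        pthread_cond_signal(&cv);
        pthread_mutex_unlock(&mu);
    }
    pthread_join(t, NULL);
    pthread_join(s, NULL);
    return 0;
}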

>
>> i think that a solution might be to run 9p over an information
>> dispersal based protocol, to eliminate roundtrips while guaranteeing
>> reliability.
>
>> the definition of 9p relies on having round trips.

i was referring to TCP/IP's roundtrips for each packet confirmation,
not to eliminating 9p's round trips.
in the 9p case i suggested processing some of them as i wrote on
previous lines, when high data traffic is involved, like in video
streaming.

>
> - erik
>
>



^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [9fans] πp
  2010-10-15 23:30                 ` dorin bumbu
@ 2010-10-15 23:45                   ` cinap_lenrek
  2010-10-15 23:54                     ` dorin bumbu
  0 siblings, 1 reply; 46+ messages in thread
From: cinap_lenrek @ 2010-10-15 23:45 UTC (permalink / raw)
  To: 9fans

[-- Attachment #1: Type: text/plain, Size: 55 bytes --]

when you are in kiev, video streams YOU!

--
cinap

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [9fans] πp
  2010-10-15 23:45                   ` cinap_lenrek
@ 2010-10-15 23:54                     ` dorin bumbu
  0 siblings, 0 replies; 46+ messages in thread
From: dorin bumbu @ 2010-10-15 23:54 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

ok, let's say that i want to read a 2 gb file :)

2010/10/16  <cinap_lenrek@gmx.de>:
> when you are in kiev, video streams YOU!
>
> --
> cinap
>
>
> ---------- Forwarded message ----------
> From: dorin bumbu <bumbudorin@gmail.com>
> To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
> Date: Sat, 16 Oct 2010 02:30:42 +0300
> Subject: Re: [9fans] πp
> 2010/10/16 erik quanstrom <quanstro@quanstro.net>:
>> On Fri Oct 15 17:14:18 EDT 2010, bumbudorin@gmail.com wrote:
>>> 2010/10/15 erik quanstrom <quanstro@quanstro.net>:
>>> >> isn't the tag field intended for this?
>>> >
>>> > [...]
>>> >
>>> >> so this means to me that a client can send some T-messages and then
>>> >> (or concurrently) wait for the R-messages.
>>> >> from mount(1) and styxmon(8) in inferno i deduced that this case is
>>> >> also considered.
>>> >> it's true that most of the servers i've seen until now don't take
>>> >> advantage of this feature; they respond to each T-message before
>>> >> processing the next message.
>>> >
>>> > there is no defined limit to the number of
>>> > outstanding tags allowed. [...]
>>>
>>> so the real problem is not 9p itself but the transport protocol: TCP/IP.
>>
>> the transport has nothing to do with it;
>
>
> consider the following (imaginary) example:
>
> let's say that i want to watch a live videocast: i am in kiev and
> the video is emitted from seattle as a 9p service.
> i'll mount that service in my local namespace, and my imaginary video
> player, instead of processing 9p messages in the traditional way:
>
>   [...] TRead -> RRead -> TRead -> RRead -> TRead -> [...]
>
> will have one thread doing:
>
>   [...] TRead -> TRead -> TRead -> TRead -> TRead -> TRead -> [...]
>
> and another one doing:
>
>   [...] <-- t0 --> RRead -> RRead -> RRead -> RRead -> RRead -> RRead -> [...]
>
>
> if the transport protocol has nothing to do with it, please explain to
> me what other drawbacks would be involved, beyond the initial t0 delay,
> if latency were constant?
>
>>
>>> i think that a solution might be to run 9p over an information
>> dispersal based protocol, to eliminate roundtrips while guaranteeing
>>> reliability.
>>
>> the definition of 9p relies on having round trips.
>
> i was referring to TCP/IP's roundtrips for each packet confirmation,
> not to eliminating 9p's round trips.
> in the 9p case i suggested processing some of them as i wrote on
> previous lines, when high data traffic is involved, like in video
> streaming.
>
>>
>> - erik
>>
>>
>
>



^ permalink raw reply	[flat|nested] 46+ messages in thread

end of thread, other threads:[~2010-10-15 23:54 UTC | newest]

Thread overview: 46+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2010-10-13 19:46 [9fans] πp Eric Van Hensbergen
2010-10-13 22:13 ` David Leimbach
2010-10-13 22:36   ` roger peppe
2010-10-13 23:15     ` David Leimbach
2010-10-14  4:25       ` blstuart
2010-10-13 23:24     ` Venkatesh Srinivas
2010-10-14 21:12       ` Latchesar Ionkov
2010-10-14 21:48         ` erik quanstrom
2010-10-14 22:14         ` dorin bumbu
2010-10-15 14:07           ` erik quanstrom
2010-10-15 14:45             ` Russ Cox
2010-10-15 15:41               ` Latchesar Ionkov
2010-10-15 21:13             ` dorin bumbu
2010-10-15 22:13               ` erik quanstrom
2010-10-15 23:30                 ` dorin bumbu
2010-10-15 23:45                   ` cinap_lenrek
2010-10-15 23:54                     ` dorin bumbu
2010-10-15  7:24         ` Sape Mullender
2010-10-15 10:41           ` hiro
2010-10-15 12:11             ` Jacob Todd
2010-10-15 13:06               ` Iruatã Souza
2010-10-15 13:23             ` Sape Mullender
2010-10-15 14:57           ` David Leimbach
2010-10-15 14:54         ` David Leimbach
2010-10-15 14:59           ` Francisco J Ballesteros
2010-10-15 16:19             ` cinap_lenrek
2010-10-15 16:28               ` Lyndon Nerenberg
2010-10-15 17:54                 ` Nemo
2010-10-15 17:49                   ` ron minnich
2010-10-15 18:56                     ` erik quanstrom
2010-10-15 16:31               ` Latchesar Ionkov
2010-10-15 16:40                 ` cinap_lenrek
2010-10-15 16:43                   ` Latchesar Ionkov
2010-10-15 17:11                     ` cinap_lenrek
2010-10-15 17:15                       ` Latchesar Ionkov
2010-10-15 19:10                 ` erik quanstrom
2010-10-15 19:16                   ` Latchesar Ionkov
2010-10-15 21:43                     ` Julius Schmidt
2010-10-15 22:02                       ` David Leimbach
2010-10-15 22:05                       ` John Floren
2010-10-15 16:45               ` ron minnich
2010-10-15 17:26                 ` Eric Van Hensbergen
2010-10-15 17:45                 ` Charles Forsyth
2010-10-15 17:45                   ` ron minnich
2010-10-15 18:03               ` Bakul Shah
2010-10-15 18:45               ` David Leimbach

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).