From mboxrd@z Thu Jan  1 00:00:00 1970
From: erik quanstrom
Date: Sat, 19 Feb 2011 16:15:47 -0500
To: 9fans@9fans.net
Message-ID: <98f813a8949fced7833e956973b97a99@brasstown.quanstro.net>
In-Reply-To: <20110219200901.575D75B63@mail.bitblocks.com>
References: <201102181445.41877.dexen.devries@gmail.com> <201102181753.30125.dexen.devries@gmail.com> <7769a67a9fbc1fae2186ff9315457e0d@ladd.quanstro.net> <20110219200901.575D75B63@mail.bitblocks.com>
Subject: Re: [9fans] Modern development language for Plan 9, WAS: Re: RESOLVED: recoving important header file rudely

On Sat Feb 19 15:10:58 EST 2011, bakul+plan9@bitblocks.com wrote:
> On Sat, 19 Feb 2011 10:09:08 EST erik quanstrom wrote:
> > > It is inherent to 9p (and RPC).
> >
> > please defend this.  i don't see any evidence for this bald claim.
>
> We went over latency issues multiple times in the past but
> let us take your 80ms latency.  You can get 12.5 rpc calls
> through in 1 sec even if you take 0 seconds to generate &
> process each request & response.  If each call transfers 64K,
> at most you get a throughput of 800KB/sec.  If you pipeline
> your requests, without waiting for each reply, you are using
> streaming.  To avoid `streaming' you can set up N parallel
> connections but that is again adding a lot of complexity to a
> relatively simple problem.

i think your analysis of 9p rpc is missing the fact that each
read() or write() can have one rpc outstanding through the mount
driver.  so the mountpoint can see more than 800kb/s in your
example if multiple reads or writes are outstanding.

also, sending concurrent rpcs doesn't seem the same to me as a
stream, since there's no requirement that the rpcs are sent,
arrive, or are processed in order (going by wikipedia's
definition of a stream).  in plan 9, one would assume they would
be sent in order.  and i don't really see that this is
complicated.
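to put numbers on that, here's a back-of-the-envelope sketch (python, using the thread's 80ms / 64k figures; the function name is just for illustration) of why one-rpc-at-a-time caps out near 800kb/s, and how overlapping outstanding rpcs raises the ceiling:

```python
# Figures from the thread: 80 ms RTT, 64 KB moved per read RPC.
rtt = 0.080          # seconds of round-trip latency
msize = 64 * 1024    # bytes per RPC

# One RPC at a time: each 64 KB costs a full round trip,
# so throughput is latency-bound regardless of link speed.
serial_bps = msize / rtt   # 819200.0 B/s -- the ~800 KB/s figure

# With N reads or writes outstanding at once, the round trips
# overlap, and throughput scales until the link saturates.
def throughput(outstanding, rtt=rtt, msize=msize):
    return outstanding * msize / rtt

print(serial_bps)      # 819200.0
print(throughput(8))   # 6553600.0 -- ~6.4 MB/s with 8 in flight
```

the point being that the 800kb/s ceiling is a property of serializing the rpcs, not of the rpc model itself.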
the same logic is in devaoe(3).  the logic is trivial.

> > what is the goal?
>
> Better handling of latency at a minimum?  If I were to do
> this I would experiment with extending the channel concept.

hmm.  let me try again ... do you have a concrete goal?  it's
hard to know why a new file protocol would be necessary given
the very abstract goal of "reduced latency".

for example, i would like to use 1 file server at 3 locations:
2 on the east coast, 1 on the west coast.  the rtts are a:b
80ms, a:c 20ms, b:c 100ms.  i don't think any sort of streaming
will help that much.  100ms is a long time.  i think
(pre)caching will need to be the answer.

> For things like remote copy of a whole bunch of files caching
> is not going to help you much but streaming will.  So will
> increasing parallelism (up to a point).  Compression might.

that depends entirely on your setup.  it may make sense to copy
the files before anyone asks for them.

- erik
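p.s. a rough sketch of the "increase parallelism" option for bulk copy over a high-latency link: fetch many files with a few workers so that each file's round trips hide behind the others'.  everything here (copy_file, parallel_copy) is hypothetical scaffolding, not anything in plan 9; copy_file stands in for whatever transport is actually used.

```python
import os
import shutil
from concurrent.futures import ThreadPoolExecutor

def copy_file(src, dst):
    # Stand-in for a latency-bound remote fetch (9p reads, etc.).
    os.makedirs(os.path.dirname(dst) or ".", exist_ok=True)
    shutil.copyfile(src, dst)
    return dst

def parallel_copy(pairs, workers=4):
    # Each worker blocks on its own file's round trips; with several
    # copies in flight, total time approaches the slowest file rather
    # than the sum of all the round trips.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda p: copy_file(*p), pairs))
```

past a point more workers just queue on the link, which is the "up to a point" caveat above.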