Message-ID: <8ccc8ba40709090552icfe0c22xfeebd72bf178672c@mail.gmail.com>
Date: Sun, 9 Sep 2007 14:52:13 +0200
From: "Francisco J Ballesteros"
To: "Fans of the OS Plan 9 from Bell Labs" <9fans@cse.psu.edu>
Subject: Re: [9fans] The Web9 Project
In-Reply-To:
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
Content-Disposition: inline
References: <8ccc8ba40709090503q38577a98v351702557dce67e1@mail.gmail.com>

>
> the way i read the protocol, 9p does support readahead. Tread
> takes tag, fid, offset, count. i can have up to 64k tags, so i think
> this means i can have up to 64k outstanding reads. what am i missing?
>
> why do we need to bundle requests? that seems like the wrong level.
> in effect, that is creating an execution environment on the fs.

Latency. I want all requests for the same file, even on two different
fids(!), issued within a tiny window to coalesce into just one rpc.
Yes, that breaks coherency a little bit, but AFAIK there's no other way.

An example: if you stat a directory, it would take about the same time
to also read all of its contents within the same rpc. If you later stat
a file in that directory, you could be done with just what you already
got. Latency gets a lot worse if, in this example, you stat the
top-level dir before you read it.

Because caching issues and rpc bundling seemed to mix only to achieve a
particular interaction pattern, it seemed clearer to use a protocol
built around that pattern than to force 9p into it.

You need a program on the server side anyway, because the client does
not know what it is retrieving before it retrieves it for the first
time. Thus, if you want to get an entire dir at a time, you'd have to
stat it (in 9p), and only once you know it's a dir could you read it
all ahead. By that time, the client has already stalled waiting for the
dir. On the other hand, if the "protocol" specifies that a "get" of a
dir sends it all to the client, things stay simple and a single rpc
suffices.

> if latency is that bad and readahead won't work because the files
> are too small, why not treat the remote storage as a block device
> and run the fs locally? the "fs" could be a simple flat file.

Because we wanted the protocol to work for things like o/mero, which is
not a real fs at all. Not to mention other synthetic file servers.

In any case, I'd like to get this thing screened by others. There might
be a simpler way that happens to work, and we'd like to learn which one
that is. I'd be more than happy to throw all this away and implement a
replacement, provided it simplifies things.
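
To make the coalescing idea concrete, here is a minimal Go sketch of it,
not the actual web9/o/mero code: the Coalescer type, the fetch callback
and the window value are all made up for illustration. Concurrent Get
calls for the same path, as would come from several fids touching the
same file within a tiny window, share a single in-flight rpc.

package main

import (
	"fmt"
	"sync"
	"time"
)

type result struct {
	data []byte
	err  error
}

type call struct {
	done chan struct{} // closed when the rpc completes
	res  result
}

// Coalescer folds concurrent (or nearly concurrent) requests for the
// same path into one outstanding rpc.
type Coalescer struct {
	mu     sync.Mutex
	calls  map[string]*call
	window time.Duration                     // how long a finished rpc keeps absorbing new requests
	fetch  func(path string) ([]byte, error) // the real remote "get"
}

func NewCoalescer(window time.Duration, fetch func(string) ([]byte, error)) *Coalescer {
	return &Coalescer{
		calls:  make(map[string]*call),
		window: window,
		fetch:  fetch,
	}
}

// Get returns the data for path, issuing at most one rpc per window no
// matter how many callers (fids) ask for it at nearly the same time.
func (c *Coalescer) Get(path string) ([]byte, error) {
	c.mu.Lock()
	if cl, ok := c.calls[path]; ok {
		c.mu.Unlock()
		<-cl.done // piggyback on the rpc already issued
		return cl.res.data, cl.res.err
	}
	cl := &call{done: make(chan struct{})}
	c.calls[path] = cl
	c.mu.Unlock()

	cl.res.data, cl.res.err = c.fetch(path) // the single rpc
	close(cl.done)

	// Keep serving the cached answer for a tiny window, then forget it.
	// This is the "breaks coherency a little bit" part.
	time.AfterFunc(c.window, func() {
		c.mu.Lock()
		delete(c.calls, path)
		c.mu.Unlock()
	})
	return cl.res.data, cl.res.err
}

func main() {
	rpcs := 0
	c := NewCoalescer(50*time.Millisecond, func(path string) ([]byte, error) {
		rpcs++
		time.Sleep(20 * time.Millisecond) // pretend network latency
		return []byte("contents of " + path), nil
	})

	var wg sync.WaitGroup
	for i := 0; i < 4; i++ { // four "fids" hitting the same file at once
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.Get("/usr/web/index.html")
		}()
	}
	wg.Wait()
	fmt.Println("rpcs issued:", rpcs) // prints 1, not 4
}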
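
And to make the "a get of a dir sends it all" point concrete, a
hypothetical sketch of what one such reply could carry; the GetReply
type and its fields are invented here, not the real wire format. The
reply for a directory bundles the stats (and, for small files, the
data) of its children, so later stats and small reads are answered from
what the client already got, with zero extra rpcs.

package main

import "fmt"

// Stat is a trimmed-down per-file record (name, type, size).
type Stat struct {
	Name   string
	IsDir  bool
	Length int64
}

// GetReply is what a single hypothetical "get" rpc returns: the stat of
// the requested file and, when it is a directory, the stats (and, for
// small plain files, the data) of its children, all in one message.
type GetReply struct {
	Stat     Stat
	Children []GetReply // filled only for directories
	Data     []byte     // filled for small plain files
}

func main() {
	// What the client could get back for a get of "/usr/nemo/doc" in a
	// single round trip (contents made up for illustration):
	reply := GetReply{
		Stat: Stat{Name: "doc", IsDir: true},
		Children: []GetReply{
			{Stat: Stat{Name: "paper.ms", Length: 12345}, Data: []byte("...")},
			{Stat: Stat{Name: "notes", Length: 210}, Data: []byte("...")},
		},
	}
	// A later stat (or small read) of doc/paper.ms needs no further rpc.
	fmt.Println(reply.Children[0].Stat.Name, reply.Children[0].Stat.Length)
}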