Date: Wed, 31 Oct 2007 20:25:43 -0500
From: "Eric Van Hensbergen"
To: "Fans of the OS Plan 9 from Bell Labs" <9fans@cse.psu.edu>
Subject: Re: [9fans] QTCTL?

On 10/31/07, erik quanstrom wrote:
> > If the file is "decent", the cache must still check out
> > that the file is up to date. It might not do so all the time
> > (as we do, to trade for performance, as you say). That means
> > that the cache would get further file data as soon as it sees
> > a new qid.vers for the file. And tail -f would still work.
>
> the problem is that no files are "decent" as long as concurrent
> access is allowed. "control" files have the decency at least
> to behave the same way all the time.
>

Sure - however, there is a case for loose caches as well. For
example, lots of remote file data is essentially read-only, or at
the very worst it's updated very infrequently. Brucee had
sessionfs, which, although more specialized (I'm going to
oversimplify here, Brzr, so don't shoot me), could essentially be
thought of as serving a snapshot of the system. You could cache to
your heart's content because you'd always be reading from the same
snapshot. If you ever wanted to roll the snapshot forward, you
could blow away the cache -- or, for optimum safety, restart the
entire node. With such a mechanism you could even keep the cache
around on disk for long periods of time (as long as the session was
still exported by the file server).

-eric
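
For what it's worth, here is a minimal sketch, in Plan 9 C, of the
qid.vers revalidation the quoted text describes. Only dirstat() and
the Qid fields are real Plan 9 library interfaces; Centry, refill(),
and cacheread() are hypothetical names for illustration, not any
actual cache implementation.

	/*
	 * Hypothetical sketch: serve cached file data, but revalidate
	 * against the server's qid.vers first.
	 */
	#include <u.h>
	#include <libc.h>

	typedef struct Centry Centry;
	struct Centry {
		char	*path;	/* file being cached */
		ulong	vers;	/* qid.vers observed when data was fetched */
		char	*data;	/* cached contents */
		long	ndata;
	};

	/* illustrative stub: refetch the file's contents from the server */
	static void
	refill(Centry *c)
	{
		USED(c);	/* would read the file afresh and replace c->data */
	}

	/*
	 * A new qid.vers means the file changed on the server, so refetch
	 * before answering from cache.  This is why tail -f keeps working:
	 * every append bumps qid.vers, and the cache notices.
	 */
	char*
	cacheread(Centry *c, long *n)
	{
		Dir *d;

		if((d = dirstat(c->path)) == nil)
			return nil;
		if(d->qid.vers != c->vers){
			c->vers = d->qid.vers;
			refill(c);
		}
		free(d);
		*n = c->ndata;
		return c->data;
	}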
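
And a similarly hedged sketch of the session-scoped caching idea: while
the snapshot id is unchanged, cached data never needs per-file
revalidation, and rolling the session forward discards the whole
cache. sessionid(), cacheclear(), and sessioncheck() are made-up names
standing in for whatever sessionfs actually exposes.

	/*
	 * Hypothetical sketch of session-scoped caching in the spirit
	 * of sessionfs; none of these names are sessionfs's real
	 * interface.
	 */
	#include <u.h>
	#include <libc.h>

	static ulong cachedsession;	/* snapshot id the cache was filled under */

	/* illustrative stub: ask the server which snapshot it is serving */
	static ulong
	sessionid(void)
	{
		return 1;
	}

	/* illustrative stub: discard every cached block ("blow away the cache") */
	static void
	cacheclear(void)
	{
	}

	/*
	 * Called before serving any cached data.  While the session id
	 * is unchanged the cache is trusted unconditionally; a new id
	 * invalidates everything at once.
	 */
	void
	sessioncheck(void)
	{
		ulong s;

		s = sessionid();
		if(s != cachedsession){
			cacheclear();
			cachedsession = s;
		}
	}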