From: "Ronald G. Minnich"
To: 9fans@cse.psu.edu
Subject: Re: [9fans] fidtab revisited
In-Reply-To: <20030217073038.BCFD19653.mta2-rme.xtra.co.nz@[210.54.206.123]>
Date: Mon, 17 Feb 2003 10:59:21 -0700

On Mon, 17 Feb 2003, Andrew Simmons wrote:

> Pondering further on fidtab, I was wondering if anyone had any
> statistics on the average and maximum number of fids in the table for
> any given connection to u9fs. Since a hash table was considered
> appropriate, I would guess that quite a few fids were anticipated, but
> since the current implementation is effectively a linked list, I would
> also guess that there are not that many in practice. My interest in
> this is that I'm tooling around with a knock-off of u9fs running as an
> NT service serving NT files, mainly as an exercise to nail down my
> understanding of how 9p works.

With v9fs on Linux, once the dcache came into the picture, we found fid
use grew into the hundreds, pretty much the sum of all files opened over
the uptime of the machine, until resources were needed elsewhere. We
capped fids at 256 to test our code for handling "out of fids" on the
client (v9fs) side, and there we ran into an interesting problem: Linux
dcache entry flushing is pretty primitive. You pretty much say "flush
all them there dcache entries for the superblock" and they all go away.
(Preferable would be something SunOS did in some areas, which was to say
"try to flush 100 or so of this thing", but maybe Linux will improve in
that area.)

Since we almost have the 9p2000 client working on Linux now, I expect to
get these numbers for u9fs -- I'll let you know.

ron
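
To make the fidtab under discussion concrete, here is a minimal sketch
in C of a hashed fid table of the sort a u9fs-style server keeps per
connection. With only a handful of live fids, each bucket holds at most
one or two entries, which is why the hash behaves like a linked list in
practice. All names here (Fid, fidtab, NHASH, MAXFID) are illustrative;
this is not u9fs's actual code.

	#include <stdio.h>
	#include <stdlib.h>

	enum {
		NHASH  = 64,	/* hash buckets */
		MAXFID = 256	/* cap, to exercise the out-of-fids path */
	};

	typedef struct Fid Fid;
	struct Fid {
		unsigned long	fid;	/* client-chosen fid from the T-message */
		void		*aux;	/* per-fid server state (open handle etc.) */
		Fid		*next;	/* hash chain */
	};

	static Fid *fidtab[NHASH];
	static int nfid;

	Fid*
	lookupfid(unsigned long fid)
	{
		Fid *f;

		for(f = fidtab[fid%NHASH]; f != NULL; f = f->next)
			if(f->fid == fid)
				return f;
		return NULL;
	}

	Fid*
	newfid(unsigned long fid)
	{
		Fid *f;

		if(lookupfid(fid) != NULL)
			return NULL;	/* fid already in use: protocol error */
		if(nfid >= MAXFID)
			return NULL;	/* out of fids: answer with Rerror */
		f = malloc(sizeof *f);
		if(f == NULL)
			return NULL;
		f->fid = fid;
		f->aux = NULL;
		f->next = fidtab[fid%NHASH];
		fidtab[fid%NHASH] = f;
		nfid++;
		return f;
	}

	void
	freefid(unsigned long fid)	/* on Tclunk/Tremove */
	{
		Fid **l, *f;

		for(l = &fidtab[fid%NHASH]; (f = *l) != NULL; l = &f->next)
			if(f->fid == fid){
				*l = f->next;
				free(f);
				nfid--;
				return;
			}
	}

	int
	main(void)
	{
		newfid(1);
		newfid(2);
		freefid(1);
		printf("fid 2 %s\n", lookupfid(2) ? "present" : "missing");
		return 0;
	}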
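
And a toy model of the client-side behaviour described above: fids come
from a fixed pool of 256, closed files leave their fids parked in a
cache, and when the pool runs dry the whole cache is flushed at once,
the way shrink_dcache_sb() drops all of a superblock's dentries. Again,
the names and structure are made up for illustration; the real v9fs
code differs.

	#include <stdio.h>

	enum { MAXFID = 256 };	/* the cap mentioned above */

	/* 0 = free, 1 = in active use, 2 = held only by the cache */
	static unsigned char state[MAXFID];

	static void
	flushcache(void)
	{
		int i;

		/* the all-or-nothing flush: every cache-held fid is clunked */
		for(i = 0; i < MAXFID; i++)
			if(state[i] == 2)
				state[i] = 0;
	}

	static long
	allocfid(void)
	{
		int i, tried;

		for(tried = 0; tried < 2; tried++){
			for(i = 0; i < MAXFID; i++)
				if(state[i] == 0){
					state[i] = 1;
					return i;
				}
			flushcache();	/* no way to flush "just 100 or so" */
		}
		return -1;		/* genuinely out of fids */
	}

	static void
	releasefid(long fid)	/* file closed; the cache keeps the fid warm */
	{
		state[fid] = 2;
	}

	int
	main(void)
	{
		long fid;
		int i;

		/* open/close far more files than there are fids; fid use
		   grows to the cap, then a wholesale flush reclaims them */
		for(i = 0; i < 1000; i++){
			fid = allocfid();
			if(fid < 0){
				printf("out of fids\n");
				break;
			}
			releasefid(fid);
		}
		return 0;
	}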