From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <42fe5d2488144155692f859a62a64b07@quanstro.net>
From: erik quanstrom
Date: Sun, 21 Dec 2008 09:45:55 -0500
To: 9fans@9fans.net
In-Reply-To: <1229835929.11463.55.camel@goose.sun.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: 7bit
Subject: Re: [9fans] 9pfuse and O_APPEND
Topicbox-Message-UUID: 6b13004a-ead4-11e9-9d60-3106f5b1d025

> > is your 9p server ever going to be running on an nfs-mounted
> > partition?
>
> As with any software -- it would be pretty difficult for me to prevent
> somebody from doing that, but in general -- no.

i use "in general" to mean the exact opposite of what you're saying
here: there is a case where it could happen.

> As I have indicated some time ago -- it's NOT a problem for me to have
> X clients do random access and Y clients doing DMAPPEND on the same
> file. The integrity of the file will NOT be affected. It is protected by
> the chunk_size that ALL I/O is coming to me at. The only
> thing that I have to guarantee at the server end is that all of the
> writes coming from clients asking for DMAPPEND will, in fact, be
> written at the offset corresponding to the EOF (whatever that might
> be at the moment when the I/O arrives).

okay, so you're using DMAPPEND like sbrk(2).  how do you avoid
clients caring about the offset of this hunk of the file?  that is,
the same problem malloc has in a multi-threaded app with sbrk.

sounds like distributed shared memory with lazy allocation.

why not put the things you're storing in separate files, or, if
there are an unwieldy number of things, use some sort of clone
interface to create a new $thing that the server carefully maps
into the big file?

- erik
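
p.s.  here is roughly the write path i understand you to be
describing, as a plan 9 c sketch.  servwrite, backingfd and
isappend are names i just made up; the point is only that for a
DMAPPEND file the server drops the offset in the Twrite and lands
the data at whatever eof is at that instant.

#include <u.h>
#include <libc.h>

static QLock eoflock;

/* write one client chunk to the backing file.  if the file was
 * opened DMAPPEND, ignore the offset the client sent and write
 * at the current end of file instead. */
long
servwrite(int backingfd, int isappend, void *buf, long count, vlong offset)
{
	Dir *d;
	long n;

	if(!isappend)
		return pwrite(backingfd, buf, count, offset);

	qlock(&eoflock);
	d = dirfstat(backingfd);
	if(d == nil){
		qunlock(&eoflock);
		return -1;
	}
	n = pwrite(backingfd, buf, count, d->length);	/* eof right now */
	free(d);
	qunlock(&eoflock);
	return n;
}

the qlock only keeps the server from racing itself; if the backing
file sits on an nfs-mounted partition, d->length is whatever nfs
cares to report, which is why i asked the question at the top.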
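
and here is the client's side of the clone interface i have in
mind, again just a sketch in plan 9 c; the mount point /mnt/thing
and the per-thing data file are made up.  the client never sees an
offset: it reads clone to get a fresh $thing and writes to that
thing's data file, and the server decides where that lands in the
big file.

#include <u.h>
#include <libc.h>

/* create a new $thing and store len bytes in it.
 * returns 0 on success, -1 on error. */
int
newthing(char *data, long len)
{
	char name[64], path[128];
	int cfd, dfd, n;

	/* reading clone allocates a fresh $thing and returns its name */
	cfd = open("/mnt/thing/clone", ORDWR);
	if(cfd < 0)
		return -1;
	n = read(cfd, name, sizeof name - 1);
	if(n <= 0){
		close(cfd);
		return -1;
	}
	name[n] = 0;

	/* the server, not the client, maps this write into the big file */
	snprint(path, sizeof path, "/mnt/thing/%s/data", name);
	dfd = open(path, OWRITE);
	if(dfd < 0){
		close(cfd);
		return -1;
	}
	if(write(dfd, data, len) != len){
		close(dfd);
		close(cfd);
		return -1;
	}
	close(dfd);
	close(cfd);
	return 0;
}

same shape as /net/tcp/clone; holding cfd open keeps the $thing
alive for as long as the client needs it.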