Message-ID: <13426df10708081420k15b295dfh1bf2357098c3b4d@mail.gmail.com>
Date: Wed, 8 Aug 2007 14:20:40 -0700
From: "ron minnich"
To: "Fans of the OS Plan 9 from Bell Labs" <9fans@cse.psu.edu>
Subject: Re: [9fans] Bay Area Plan 9 Users Group Meeting (August '07)
References: <13426df10708081341p63bd6637qdafd5d1766471108@mail.gmail.com>
 <13426df10708081408x48559821m1be70cacf3cc62bb@mail.gmail.com>

On 8/8/07, Eric Van Hensbergen wrote:
> That really doesn't seem right. I looked more at the block and the
> console code, but it seems like there should be code which "claims"
> posted DMA buffers, after which the guest has to re-post them.

That's how I would have designed it, and how I expected it to work, but
it doesn't work that way. Once you do a bind for a DMA descriptor,
that's it -- the enet code will happily overwrite that buffer. You have
to move fast and keep a few of them posted. See handle_tun_input() in
lguest.c.

> a) follow the console driver style and just shuffle data over a named
> pipe/socket to the 9p server
> b) follow the libos style and just use a shared memory buffer
> posted in dev->mem that is mmapped from shared memory with the server
> c) use dev->mem to store fcall slots and use the DMA buffers to
> shuffle the payload -- this should be the optimal zero-copy case
>
> (a) can theoretically target any 9p server, (b) will work against
> inferno-tx, and (c) will work against a modified spfs and will take
> some pretty heavy modifications to v9fs (no need to marshal the
> fcall, just stick it in a struct in shared memory). The idea is to
> compare the performance of the three approaches and see just how much
> cost (performance and complexity) is involved in each. Should have
> (a) done in a matter of hours.
When (a) is done let me know :-)

ron