Date: Tue, 4 Aug 2009 16:51:57 -0700
From: Roman V Shaposhnik
To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
Subject: Re: [9fans] ceph
Message-id: <1249429917.479.16567.camel@work.SFBay.Sun.COM>
In-reply-to: <13426df10908031956t5cb01b8bl611ef09669ca5f19@mail.gmail.com>
References: <1248914582.479.7837.camel@work.SFBay.Sun.COM>
 <140e7ec30907300931v3dbd8ebdl32a6b74792be144@mail.gmail.com>
 <28CF259C-21F4-4071-806E-6D5DA02C985D@sun.com>
 <13426df10907312241q409b9f9w412974485aec7fee@mail.gmail.com>
 <13426df10908010847v5f4f891fq9510ad4b671660ea@mail.gmail.com>
 <1249349522.479.15191.camel@work.SFBay.Sun.COM>
 <13426df10908031956t5cb01b8bl611ef09669ca5f19@mail.gmail.com>

On Mon, 2009-08-03 at 19:56 -0700, ron minnich wrote:
> > 2. do we have anybody successfully managing that much storage that is
> > also spread across the nodes? And if so, what's the best practices
> > out there to make the client not worry about where does the storage
> > actually come from (IOW, any kind of proxying of I/O, etc)
>
> Google?

By "we" I mostly meant this community, but even if we don't focus on
9fans, Google is a non-example. They have no clients for this
filesystem per se.

> >> The request: for each of the (lots of) compute nodes, have them mount
> >> over 9p to, say 100x fewer io nodes, each of those to run lustre.
> >
> > Sorry for being dense, but what exactly is going to be accomplished
> > by proxying I/O in such a way?
>
> it makes the unscalable distributed lock manager and other such stuff
> work, because you stop asking it to scale.

So strictly speaking you are not really using 9P as a filesystem
protocol, but rather as a convenient way of doing RPC, right?

Thanks,
Roman.
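
P.S. Just so I'm picturing the setup right: each compute node would do a
plain 9P mount of its assigned io node (which has Lustre mounted locally),
so only the ~100 io nodes ever talk to the Lustre lock manager while the
thousands of clients just speak 9P. On a Linux client with v9fs that would
look roughly like this (the io node name is made up, of course):

   # hypothetical io node "ionode07" exporting its local Lustre mount over 9P
   mount -t 9p -o trans=tcp,port=564,msize=65536 ionode07 /mnt/lustre
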