From: ron minnich
To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
Date: Mon, 3 Aug 2009 19:56:15 -0700
Subject: Re: [9fans] ceph

On Mon, Aug 3, 2009 at 6:32 PM, Roman V Shaposhnik wrote:
> Is all of this storage attached to a very small number of IO nodes, or
> is it evenly spread across the cluster?

It's on a server: a big Lustre server using a DDN array over
(currently, I believe) Fibre Channel.

>  2. do we have anybody successfully managing that much storage that is
>     also spread across the nodes? And if so, what's the best practices
>     out there to make the client not worry about where does the storage
>     actually come from (IOW, any kind of proxying of I/O, etc)

Google?

>> The request: for each of the (lots of) compute nodes, have them mount
>> over 9p to, say 100x fewer io nodes, each of those to run lustre.
>
> Sorry for being dense, but what exactly is going to be accomplished
> by proxying I/O in such a way?

It makes the unscalable distributed lock manager and other such stuff
work, because you stop asking it to scale.

ron
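
(A minimal sketch of the fan-in described above, assuming Linux compute
nodes with the kernel 9p client (v9fs) and a user-space 9P server such
as u9fs or diod re-exporting the Lustre tree on each I/O node; the
hostname, port, path, and mount point are illustrative, not from the
thread.)

  # Each I/O node mounts Lustre natively and re-exports that tree over
  # 9P (e.g. with u9fs or diod; the exact server invocation depends on
  # which server and version you run).
  #
  # Each compute node then mounts its assigned I/O node over 9P instead
  # of talking to the Lustre servers directly:
  mount -t 9p -o trans=tcp,port=564,aname=/lustre ionode17 /mnt/lustre

With that split, only the roughly 100x fewer I/O nodes are Lustre
clients, so the distributed lock manager sees a client count that
scales with the number of I/O nodes rather than the number of compute
nodes.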