From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 3 Aug 2009 21:23:38 -1000
From: Tim Newsham
To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
In-Reply-To: <1249349522.479.15191.camel@work.SFBay.Sun.COM>
Message-ID:
References: <1248914582.479.7837.camel@work.SFBay.Sun.COM>
 <140e7ec30907300931v3dbd8ebdl32a6b74792be144@mail.gmail.com>
 <28CF259C-21F4-4071-806E-6D5DA02C985D@sun.com>
 <13426df10907312241q409b9f9w412974485aec7fee@mail.gmail.com>
 <13426df10908010847v5f4f891fq9510ad4b671660ea@mail.gmail.com>
 <1249349522.479.15191.camel@work.SFBay.Sun.COM>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed
Subject: Re: [9fans] ceph
Topicbox-Message-UUID: 379ad598-ead5-11e9-9d60-3106f5b1d025

> 2. do we have anybody successfully managing that much storage that is
> also spread across the nodes? And if so, what's the best practices
> out there to make the client not worry about where does the storage
> actually come from (IOW, any kind of proxying of I/O, etc)

http://labs.google.com/papers/gfs.html
http://hadoop.apache.org/common/docs/current/hdfs_design.html

> I'm trying to see how the life after NFSv4 or AFS might look like for
> the clients still clinging to the old ways of doing things, yet
> trying to cooperatively use hundreds of T of storage.

The two I mention above are both used in conjunction with distributed
map/reduce computations: the calculations are scheduled on the nodes
where the data is stored, so the bulk of the data never has to cross
the network...

> Thanks,
> Roman.

Tim Newsham
http://www.thenewsh.com/~newsham/
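
PS: a rough sketch of the client side of that locality story, using
the Hadoop Java API (FileSystem.getFileBlockLocations). Untested and
from memory, but it shows the idea: the client asks the namenode
where each block of a file lives, and a map/reduce scheduler uses the
same answer to place tasks on (or near) those hosts. The class name
and command-line usage are made up for the example.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Ask HDFS which datanodes hold each block of the named file.
public class WhereIsMyData {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration(); // core-site.xml etc.
        FileSystem fs = FileSystem.get(conf);
        FileStatus stat = fs.getFileStatus(new Path(args[0]));
        // One BlockLocation per block, each listing the hosts
        // holding a replica of that block.
        BlockLocation[] blocks =
            fs.getFileBlockLocations(stat, 0, stat.getLen());
        for (BlockLocation b : blocks) {
            System.out.printf("offset %d len %d on:",
                b.getOffset(), b.getLength());
            for (String host : b.getHosts())
                System.out.print(" " + host);
            System.out.println();
        }
    }
}

Note there's no proxying of the bulk I/O through a central server:
the namenode only answers metadata questions like this one, and the
client then reads block data straight from the datanodes it names.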