From mboxrd@z Thu Jan  1 00:00:00 1970
MIME-version: 1.0
Content-transfer-encoding: 7BIT
Content-type: text/plain
Date: Tue, 4 Aug 2009 17:27:06 -0700
From: Roman V Shaposhnik
In-reply-to: 
To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
Message-id: <1249432026.479.16637.camel@work.SFBay.Sun.COM>
References: <1248914582.479.7837.camel@work.SFBay.Sun.COM>
 <140e7ec30907300931v3dbd8ebdl32a6b74792be144@mail.gmail.com>
 <28CF259C-21F4-4071-806E-6D5DA02C985D@sun.com>
 <13426df10907312241q409b9f9w412974485aec7fee@mail.gmail.com>
 <13426df10908010847v5f4f891fq9510ad4b671660ea@mail.gmail.com>
 <1249349522.479.15191.camel@work.SFBay.Sun.COM>
Subject: Re: [9fans] ceph
Topicbox-Message-UUID: 38acade4-ead5-11e9-9d60-3106f5b1d025

On Mon, 2009-08-03 at 21:23 -1000, Tim Newsham wrote:
> > 2. do we have anybody successfully managing that much storage that is
> > also spread across the nodes? And if so, what are the best practices
> > out there to make the client not worry about where the storage
> > actually comes from (IOW, any kind of proxying of I/O, etc)
>
> http://labs.google.com/papers/gfs.html
> http://hadoop.apache.org/common/docs/current/hdfs_design.html
>
> > I'm trying to see what life after NFSv4 or AFS might look like for
> > the clients still clinging to the old ways of doing things, yet
> > trying to cooperatively use hundreds of T of storage.
>
> the two I mention above are both used in conjunction with
> distributed map/reduce calculations. Calculations are done
> on the nodes where the data is stored...

Hadoop and GFS are good examples, and they work great for the single
distributed application that is *written* with them in mind.
Unfortunately, I cannot stretch my imagination hard enough to see them
as general-purpose filesystems backing data for gazillions of
non-cooperative applications, which is the sort of thing NFS and AFS
were built to accomplish.

In that respect, ceph is closer to what I have in mind: it assembles
storage from clusters of unrelated OSDs into a hierarchy with a single
point of entry for every user/application. The question, however, is
how to avoid the complexity of ceph and still have it look like a
humongous kenfs or fossil from the outside.

Thanks,
Roman.