From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <40b85517d889fadc8df2c782817f6bef@quanstro.net>
From: erik quanstrom
Date: Fri, 10 Aug 2007 08:26:45 -0400
To: weigelt@metux.de, 9fans@cse.psu.edu
Subject: Re: [9fans] FS to skip/put-together duplicate files
In-Reply-To: <20070810113921.GE18939@nibiru.local>
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: 7bit
Topicbox-Message-UUID: a3d6cb2a-ead2-11e9-9d60-3106f5b1d025

> I host a lot of web applications which share 99% of their code.
> Disk space is not the issue, but bandwidth to the remote backup is.
> So my idea is to let a filesystem automatically link together
> equal files in storage, but present them as separate ones.
> Once a file gets changed, it will be unlinked/copied automatically.
>
> Is there already such a filesystem?

no.  however, there are updatedb/compactdb, which can be used to
create a list of changed files, and replica/applylog, which can be
used to apply them.  i used these tools to copy history from one
kenfs to a new one.  (i actually used cphist,
/n/sources/patch/saved/cphist, and not applylog.)

you could also use the log on the generating machine to build a
mkfs archive, compress that, ftp it, and apply it on the other end.

- erik
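
[editor's note: a minimal sketch of the "link equal files together" idea from the
question, written in Python since no code appears in the thread.  The function
name `dedup_hardlink` and the use of SHA-256 are illustrative choices, not
anything from the original post.  Note that plain hardlinks do not give the
copy-on-change behaviour the poster asks for (editing one name edits all of
them), which is exactly why a filesystem-level solution is wanted.]

```python
import hashlib
import os


def dedup_hardlink(root):
    """Hardlink files with identical content under root.

    Sketch only: equal files end up sharing one inode (so a backup
    tool that understands hardlinks transfers the content once), but
    each keeps its own name.  Unlike the filesystem the poster wants,
    a write through one name is visible through all of them.
    """
    seen = {}   # content digest -> canonical path
    linked = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, 'rb') as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            if digest in seen:
                # duplicate: replace this copy with a hardlink
                os.remove(path)
                os.link(seen[digest], path)
                linked += 1
            else:
                seen[digest] = path
    return linked
```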
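
[editor's note: erik's second suggestion is a pipeline: use the file server's log
of changed files to build an archive of just those files, compress it, ship it,
and unpack it on the far end.  The Plan 9 tools he names (updatedb, compactdb,
replica/applylog, mkfs) are not reproduced here; below is a rough, hedged
analogue of the same shape in portable Python, with the manifest format and
directory names being invented for illustration.]

```python
import os
import shutil
import tarfile


def pack_changed(root, since, out_path):
    """Archive files under root modified after timestamp `since`.

    Plays the role of the changed-file log + mkfs-archive step: only
    files whose mtime is newer than `since` go into the compressed
    tarball, which can then be shipped (ftp, scp, ...) to the backup
    host.  Returns the list of archived paths (relative to root).
    """
    changed = []
    with tarfile.open(out_path, 'w:gz') as tar:
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                if os.path.getmtime(path) > since:
                    rel = os.path.relpath(path, root)
                    tar.add(path, arcname=rel)
                    changed.append(rel)
    return changed


def apply_archive(archive, dest):
    """Unpack the shipped delta over the tree on the far end
    (the role applylog plays in erik's description)."""
    os.makedirs(dest, exist_ok=True)
    with tarfile.open(archive, 'r:gz') as tar:
        tar.extractall(dest)
```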