From mboxrd@z Thu Jan 1 00:00:00 1970
From: erik quanstrom
Date: Sat, 14 Jan 2012 10:29:21 -0500
To: 9fans@9fans.net
Message-ID:
In-Reply-To:
References: <20120113113026.GA419@polynum.com> <20120114003032.1C08F1CC8F@mail.bitblocks.com> <201201140201.51504.dexen.devries@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: 7bit
Subject: Re: [9fans] fossil pb: FOUND!
Topicbox-Message-UUID: 5c18775c-ead7-11e9-9d60-3106f5b1d025

On Sat Jan 14 10:07:25 EST 2012, charles.forsyth@gmail.com wrote:
> Although drives are larger now, even SSDs, there is great satisfaction in
> being able to make copies of large trees arbitrarily, without having to
> worry about them adding any more than just the changed files to the
> write-once stored set.
> I do this fairly often during testing.

(as an aside, one assumes changed files + the directory tree, since the
a/mtimes change.)

such satisfaction is not deniable, but is it a good tradeoff?  my /sys/src
is 191mb.  i could make a complete copy of it every day, and never delete
any of them, for the next 2739 years (give or take :)) and not run out of
disk space; that's 191mb * 365 * 2739, or about 191tb.  (most of the
copies i make are deleted before they are committed to the worm.)

i think it would be fair to argue that source and executables are
negligible users of storage.  media files, which are already compressed,
tend to dominate.  the tradeoff for this compression is a large amount of
memory, fragmentation, and cpu usage; that is to say, storage latency.
so i wonder if we're not spending all our resources trying to optimize
only a few percent of our storage needs.

- erik