From mboxrd@z Thu Jan  1 00:00:00 1970
From: Lucio De Re
To: 9fans mailing list <9fans@cse.psu.edu>
Message-ID: <20040210124725.K17981@cackle.proxima.alt.za>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Subject: [9fans] Replica - just a thought.
Date: Tue, 10 Feb 2004 12:47:25 +0200
Topicbox-Message-UUID: dc576dca-eacc-11e9-9e20-41e7f4b1d025

This is probably boring _and_ ignorant, but maybe just once I've hit
on a good idea.  You be the judge.

I've been downloading a fresh Plan 9 image weekly, religiously adding
it to a 9660 "dump" image that has now grown to a little less than 500
megabytes.  When I get an opportunity, I burn the dump image onto a
rewritable CD, and so I have a little bit of development history that
may come in handy one day.

In the meantime, I've been wondering how hard it would be to unwind
the CD dump for various uses, but I have always refrained from
inspecting the actual dump9660 sources, assuming them to be much too
complex for my limited understanding.

The recent oddities with replica, whatever their nature, as well as a
personal need for replication that does not seem to coincide exactly
with replica's capabilities, have made me think about a convergence of
replica and dump9660 that might just be preferable to the present
situation.

As I see it, replica effectively records externally what dump9660
records implicitly.  Again, without having studied either of these I
may be barking up the wrong tree, but those in the know may be able to
correct me.

If we imagine dump9660 as two parts communicating via some channel (in
a figurative sense), the first comparing the fresh filesystem against
an archive copy and the second actually recording the differences by
allocating new disk space to new copies of the affected files, do we
not come pretty close to replica's operation, at least in principle?
The next step would then be to define the protocol between these two
portions of dump9660 formally and to arrange for it to be exchanged
between replica masters and slaves.  It may need enhancing to cater
for changes to the metadata (and then again it may not), but largely
there would no longer be two or more replication mechanisms: one would
be sufficient, and it would merely be a formalisation of a feature
(the dump) that Plan 9 has had for many years.  Whether we can keep
the granularity of the FS dump, I'm not sure, but file-level
granularity may even be preferable.

Am I missing something important and therefore spouting nonsense, or
have I hit on an idea that hasn't yet been obvious to anyone else?
Please let me know.  I don't mind trying to implement this myself
(along with more USB devices and SCSI LUs and one or two new SCSI
controller drivers and S/MIME and, and, and - life's too short by far)
if it isn't flawed in some fatal way.

++L

PS: To put it another way: from the CD as described above, I can
update my system by selecting the image I want, the most recent being
the norm.  But the update uses the replica database information, and I
believe that, possibly with some adjustments to the format of the
metadata, the information required is already present in the structure
of the dump and ought to be used instead.  The reason I can't do that
is that there is no formal specification; such a specification would
perhaps also be more complete than what we have at present.  Being
able to derive the information instead of recording it separately
should provide a greater degree of reliability.  And one could then
make an external copy at any stage as well.