Message-ID: <14ec7b180901201552v2250c9a0k98b23daaae79b74d@mail.gmail.com>
Date: Tue, 20 Jan 2009 16:52:03 -0700
From: "andrey mirtchovski"
To: "Fans of the OS Plan 9 from Bell Labs" <9fans@9fans.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
Content-Disposition: inline
References: <1a605cf7ccd9e5ba7aaf6f3ad42e0f4b@terzarima.net>
 <140e7ec30901031403y66a3d67epac5a9800026e7609@mail.gmail.com>
 <1231131954.11463.459.camel@goose.sun.com>
 <14ec7b180901042124v45e6e5a0x4a9f405f78b4e49b@mail.gmail.com>
 <14ec7b180901060622q2a179705g53c7fe62e70ee90a@mail.gmail.com>
Subject: Re: [9fans] Changelogs & Patches?

> Is it how it was from the get go, or did you use venti-based solutions
> before?

it's how i found it.

>> i have two zfs servers and about 10 pools of
>> different sizes with several hundred different zfs filesystems and
>> volumes of raw disk exported via iscsi.
>
> What kind of clients are on the other side of iscsi?

linux machines.

> You're using it on Linux?

the zfs servers are OpenSolaris boxes.

> Aha! And here are my first questions: you say that I can run multiple
> fossils off of the same venti and thus have a setup that is very close
> to zfs clones:
> 1. how do you do that exactly? fossil -f doesn't work for me (nor
> should it according to the docs)

i meant formatting the fossil disk with flfmt -v, sorry. it had been
quite a while since i last had to restart from an old venti score :)

> 2. how do you work around the fact that each fossil needs its own
> partition (unlike ZFS where all the clones can share the same pool
> of blocks)?

ultimately all blocks are shared on the same venti server unless you use
separate ones. fossil does have the functionality to serve two different
file systems from two different disks, but i don't think anyone has used
that (but see the example at the end).

> I think I understand it now (except for the fossil -f part), but how do
> you promote (zfs promote) such a clone?

i'm unconvinced that 'promoting' is a genuine feature: it seems to me
that the designers had to invent 'promoting' because they decided to
make snapshots read-only in the first place. perhaps i'm wrong, but if
the purpose of promoting something is to make it a true member of the
filesystem community (with all the capabilities that entails), then the
corresponding feature in fossil would be to instantiate one from the
venti score of the particular dump, i.e., flfmt -v.

> I see what you mean, but in case of venti -- nothing disappears,
> really. From that perspective you can sort of make those zfs clones
> linger. The storage consumption won't be any different, right?

the storage consumption should be the same, i presume. my problem is
that in the case of zfs, having several hundred snapshots significantly
degrades the performance of the management tools, to the extent that
zfs list takes 30 seconds with about a thousand entries, compared to
fossil handling 5 years' worth of daily dumps in less than a second.
but that's not really a serious argument ;)

> Great! I tried to do as much homework as possible (hence the delay) but
> I still have some questions left:
> 0. A dumb one: what's the proper way of cleanly shutting down fossil
> and venti?

see fshalt. it used to be, like most other things, that one could just
turn the machine off without worry. then some bad things happened and
fshalt was written.
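roughly, from a terminal on the machine you're shutting down (fshalt(8)
has the details; the -r flag reboots instead of halting):

	fshalt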
> 1. What's the use of copying arenas to CD/DVD? Is it purely for
> backup, since they have to stay on-line forever?

people who back up to cd/dvd can answer that :)

> 3. Do you think it's a good idea to have volume management be part of
> filesystems, since that way you can try to heal the data on-the-fly?

i don't know...

> 4. If I have a venti server and a bunch of sha1 codes, can I somehow
> instantiate a single fossil serving all of them under /archive?

not sure if this will work. you'll need as many partitions as you have
sha1 scores. then for each one do fossil/flfmt -v score partition. once
you've started fossil, type on the console, for each partition/score:

	fsys somename config partition
	fsys somename venti ventiserver
	fsys somename open

it's convoluted, yes. there may be an easier way.

i know of people using vacfs and vac to back up their linux machines to
venti. actions like the ones you're describing would be much easier
there, although i am not sure vacfs has all the functionality needed to
be a usable file system (for example, it's read-only).

for my personal $0.02 i will say that this argument seems to revolve
around trying to bend fossil and venti to match the functionality of
zfs and the design decisions of the team that wrote it. i, frankly,
think that it should be the other way around: zfs should provide the
equivalent of the fossil/venti snapshot/dump functionality to its
users. that, to me, would be a benefit (of course it gets you sued by
netapp too, but that's beside the point). all these
filesystem/snapshot/clone games are just a bunch of toys to make the
admins happy; they have little effective use for the end user.
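p.s. to make the multiple-fossils recipe above concrete, here's a
minimal sketch, untested, with made-up names: score1 and score2 stand
in for real 40-character sha1 dump scores, and the partitions are
whatever spares you have lying around. first, from rc:

	fossil/flfmt -v score1 /dev/sdC0/arch1
	fossil/flfmt -v score2 /dev/sdC0/arch2

then, on the fossil console:

	fsys arch1 config /dev/sdC0/arch1
	fsys arch1 venti ventiserver
	fsys arch1 open
	fsys arch2 config /dev/sdC0/arch2
	fsys arch2 venti ventiserver
	fsys arch2 open

each score is then served as its own file system; whether you can bind
them all neatly under a single /archive is the part i'm not sure about.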