Hello,

On Thu, 28 Feb 2008 20:14:27 -0500 (EST), Brian Hurt wrote:
> So, mistake number one: either use the data, and structure your data
> (at that layer) to take advantage of it, or don't use a database.
> [...]
> So that's mistake number two: you're communicating between different
> versions of the program with an ill-defined (at best) and not
> generic protocol/file format.

Right. But imagine you're communicating with yourself (saving and
restoring data), and you need to retrieve the data *efficiently*.
Converting from a generic file format is not efficient - not if you
have to retrieve the data 100 times per second (imagine a CMS on a
popular website). It would be faster to use an internal file format,
while of course keeping a backup in a generic file format.

But now, here is the real problem: when your internal data structure
changes (and it might not even be under your control - imagine you're
using a third-party library), you have to convert your generic backups
to the newer internal format. And if you have, say, an awful lot of
backups, that might take a very long time. Of course, this only happens
once in a while, but when it does, how do you deal with it? Remember
that efficiency is the key point here.

I don't think there is a generic solution to this problem. I'm just
pointing out the underlying requirements, in case someone has one.

Regards,

--
Gabriel Kerneis
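
PS: to make the two paths concrete, here is a rough OCaml sketch of the
scheme I have in mind (the `article` type, the version tag and the file
layout are only an illustration, not a proposal):

    (* Fast path: internal snapshot via Marshal, tagged with a format
       version.  Slow path: a generic textual backup that survives
       changes to the internal representation. *)

    type article = {
      title : string;
      body  : string;
    }

    let current_version = 2

    (* Internal format: written and read many times per second. *)
    let save_internal file (articles : article list) =
      let oc = open_out_bin file in
      output_value oc current_version;   (* version tag first *)
      output_value oc articles;          (* then the payload  *)
      close_out oc

    let load_internal file : article list =
      let ic = open_in_bin file in
      let version : int = input_value ic in
      if version <> current_version then begin
        close_in ic;
        (* Refuse to unmarshal a payload written for an older type:
           it must be rebuilt from the generic backup instead. *)
        failwith "stale snapshot: regenerate it from the generic backup"
      end;
      let articles : article list = input_value ic in
      close_in ic;
      articles

    (* Generic backup: plain text, cheap to re-parse even after the
       internal representation has changed. *)
    let save_backup file (articles : article list) =
      let oc = open_out file in
      List.iter
        (fun a -> Printf.fprintf oc "title: %s\nbody: %s\n\n" a.title a.body)
        articles;
      close_out oc

The expensive step I am worried about is exactly the one this sketch
leaves out: when `current_version` changes, every old generic backup has
to be re-parsed and fed through `save_internal` again, and I see no way
to make that conversion both generic and fast.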