From: Giacomo Tesio
To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
Date: Mon, 24 Nov 2008 15:13:22 +0100
Subject: Re: [9fans] dbfs and web framework for plan 9

On Mon, Nov 24, 2008 at 11:58 AM, matt <mattmobile@proweb.co.uk> wrote:

>> That's exactly what constraints, rules in SQL etc are for. Maybe some
>> similar ruling system for filesystems would be fine :)
>> (any suggestions ?)
>
> That's what I was driving at. To map SQL <> FS you end up replicating lots
> of SQL logic in your client FS. Reads are *always* out of date. Writes can
> happen across tables but need to be atomic and able to roll back.

Probably I was not clear about what I have in mind.

I think that rebuilding a relational database on top of a filesystem is (quite) pointless.

What I'm proposing is to design/develop an interface for interacting with (any?) RDBMS through a filesystem.

A kind of "proxy" to the db with a filesystem interface.

A "draft" could look like this (even though I've already found some problems with it); a hypothetical shell session sketching its use follows the list:

  • a "ctrl" file which accepts only COMMIT, ROLLBACK and ABORT
  • an "error" file
  • a "notice" file (PostgreSQL has RAISE NOTICE... maybe others have it too)
  • a "data" file (append-only within the transaction, but not outside it) where the INSERT, UPDATE, DELETE and all the DDL and DCL commands will be written
  • each SELECT has to be written to a different file (named sequentially): after writing the query, the same file can be read to get the results (in XML...)
  • on errors, subsequent writes fail and the "error" file becomes readable
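To make this concrete, here is a minimal sketch of how a client session might look, assuming the proxy (call it dbfs) is posted to /srv and mounted at /n/db, and assuming each transaction is represented by a directory, as the problem list below suggests. All the names (dbfs, tx0, the users table, the query file "0") are hypothetical and only for illustration:

    ; mount -c /srv/dbfs /n/db
    ; mkdir /n/db/tx0               # a new directory opens a new transaction (and, see below, a new backend connection)
    ; cd /n/db/tx0
    ; echo 'INSERT INTO users(name) VALUES (''glenda'')' >> data
    ; echo 'SELECT id, name FROM users' > 0
    ; cat 0                         # reading the query file back returns the results as XML
    <resultset><row><id>1</id><name>glenda</name></row></resultset>
    ; echo COMMIT > ctrl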
The problems: 
  • the transaction -> directory mapping requires creating a new connection to the backend for each transaction (if I'm right in thinking that transactions are connection-wide)
  • the XML output of fetchable results (SELECT, FETCH, SHOW...) requires a tool to easily query such output. Plan 9 seems to lack such a tool, and xmlfs is currently unsuitable.
    (I'm thinking of an xpath command that accepts XML on stdin and the XPath query as an argument, and writes the results to stdout; a usage sketch follows)
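For example, such a hypothetical xpath command could be used to pick values out of a query file (the file names and XML layout are the same invented ones used in the sketch above):

    ; xpath '/resultset/row/name' </n/db/tx0/0
    glenda
    ; cat /n/db/tx0/0 | xpath '/resultset/row[id=''1'']/name'
    glenda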

Giacomo