From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 25 Aug 1998 12:13:19 -0400
From: rob@plan9.bell-labs.com
Subject: [9fans] X device
Topicbox-Message-UUID: 7cfc9a70-eac8-11e9-9e20-41e7f4b1d025
Message-ID: <19980825161319.d775btZoApG1pN9Q74mJE8nh2rQWbnrMkLN9vGFAG9o@z>

I did this for Inferno, which has a different graphics protocol but the
same overall architecture, so it can be done.  I don't wish the job on
anyone; it's a nasty mess to get two unrelated graphics models to
function together.  I did it by modifying the underlying library, not
the driver, although I probably had to tweak the driver a bit; it was a
while ago and my memory has faded.

The basic notion was to allocate a bitmap on the server for each image
and then, for each operation, try to use remote operations to do the
work.  If they couldn't be done there, the operation was done locally
and sent over as an array of pixels.

Things like ldepth conversion were the hardest to handle.  In the end,
I lazily allocated a different ldepth pixmap (?) on the server for each
image in Inferno, which I could at least use as a valid source.  But X
doesn't mix ldepths much, other than having special pattern operators
defined with ldepth 0, so any of the myriad things we do with mixing
ldepths had to be done cleverly or go slow.

I did not use X fonts, but translated all character drawing into
already-implemented operations, to keep things simple and portable.

Getting all this to go fast involved much trickery and analysis of X
internals.  The result is barely satisfactory.  Plus, a huge piece of
the code deals with the psychotic pas de deux of allocating a screen
and color map in X.  When it came to 16-bit X screens, we cut bait; it
wasn't worth any more time.

Similar games can be done on Windows, but it's much easier there, at
least conceptually, because you can write code that runs on (in X
terminology) the server and also just paint data on the screen, in
other words, decode the Plan 9 protocol on the machine with the frame
buffer.

-rob
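
A minimal sketch of the remote-versus-local split described above, in C
against plain Xlib.  This is not the Inferno or Plan 9 driver source;
the Memimage structure, the srvpix field, and the srvalloc/drawimage
functions are hypothetical names invented for illustration, and only
standard Xlib calls (XCreatePixmap, XCopyArea, XCreateImage, XPutImage)
are used.  The point is the shape of the fallback: copy on the server
when the operation can be expressed remotely, otherwise compute the
pixels locally and ship them across the connection.

/*
 * Hypothetical sketch, not the Inferno/Plan 9 driver code: each
 * client-side image may have a lazily allocated server-side Pixmap.
 * Operations that can be expressed remotely run on the server
 * (XCopyArea); anything else is computed locally and the resulting
 * pixels are shipped over with XPutImage.
 */
#include <stdlib.h>
#include <X11/Xlib.h>
#include <X11/Xutil.h>

typedef struct Memimage Memimage;
struct Memimage {		/* illustrative client-side image */
	int	width;
	int	height;
	int	depth;		/* bits per pixel on the X side */
	char	*pixels;	/* locally computed pixel data */
	Pixmap	srvpix;		/* server copy, or None until needed */
};

/* Lazily allocate a server-side pixmap matching the image's depth. */
static Pixmap
srvalloc(Display *dpy, Memimage *m)
{
	if(m->srvpix == None)
		m->srvpix = XCreatePixmap(dpy, DefaultRootWindow(dpy),
			m->width, m->height, m->depth);
	return m->srvpix;
}

/*
 * Draw src onto dst at (x, y).  If both images already live on the
 * server at the same depth, the copy happens entirely on the server;
 * otherwise the pixels (assumed already converted locally to dst's
 * depth) are pushed across the wire.
 */
void
drawimage(Display *dpy, GC gc, Memimage *dst, Memimage *src, int x, int y)
{
	XImage *xi;

	if(src->srvpix != None && dst->srvpix != None && src->depth == dst->depth){
		/* remote path: no pixel data crosses the connection */
		XCopyArea(dpy, src->srvpix, dst->srvpix, gc,
			0, 0, src->width, src->height, x, y);
		return;
	}

	/* local fallback: wrap the local pixels and send them */
	xi = XCreateImage(dpy, DefaultVisual(dpy, DefaultScreen(dpy)),
		dst->depth, ZPixmap, 0, src->pixels,
		src->width, src->height, 8, 0);
	XPutImage(dpy, srvalloc(dpy, dst), gc, xi,
		0, 0, x, y, src->width, src->height);
	xi->data = NULL;	/* pixels belong to the Memimage, not the XImage */
	XDestroyImage(xi);
}

A real driver would also keep a GC per depth and per screen and would
have to do the colormap and visual negotiation the message complains
about; both are omitted here for brevity.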