From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 19 Jul 2000 14:27:31 +0200
From: Lucio De Re
To: 9fans@cse.psu.edu
Subject: Re: [9fans] mothra
Message-ID: <20000719142731.F3081@cackle.proxima.alt.za>
References: , <200007181831.TAA12571@whitecrow.demon.co.uk> <86vgy33uej.fsf@gollum.esys.ca> <006f01bff176$dab42fe0$62356887@HWTPC>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
In-Reply-To: <006f01bff176$dab42fe0$62356887@HWTPC>; from Howard Trickey on Wed, Jul 19, 2000 at 07:45:36AM -0400
Topicbox-Message-UUID: e4c5bd44-eac8-11e9-9e20-41e7f4b1d025

On Wed, Jul 19, 2000 at 07:45:36AM -0400, Howard Trickey wrote:
>
> You're kidding yourself if you think this comes anywhere near solving
> the big problems in writing a web browser. Actually fetching the bits
> and passing them along is trivial. (And, in any case, fetching the bits
> is more closely tied to the logic of the browser than you might think:
> you have to deal with redirections, errors, and authorization requests.
> And, it is good to be able to start rendering before all of the HTML
> has arrived, and certainly before all of the images have arrived.)
>
I'm sure Howard is as good a judge of difficulty here as any. But
there is one key issue that we are a little luckier with: we do not
have "clients" to satisfy. We _need_ a browser for mundane operations,
but we are not dependent on it, nor are we _here_ hell-bent on having
our pages delivered exactly like Netscape or IE5.

> I wrote the first version of the charon browser with a "webget" filesystem
> to serve the pages. I abandoned it in later rewrites, mainly for speed
> reasons, but also because it wasn't buying me anything. We only ever
> had one web client attached to the damn thing anyway. But that could
> change in a Plan 9 environment...
>
I keep thinking SQUID here. Squid does a hell of a lot of useful
work, without having the foggiest idea what it's about. Webget,
presumably, was along the same lines.
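Howard's point, that the fetcher is entangled with browser logic
because it must follow redirects, surface errors and authorization
failures, and hand the body to the renderer incrementally, can be
sketched roughly as below (a minimal illustration in modern Python;
this is not webget or charon code, and every name in it is invented
for the example):

```python
# Sketch only: a fetcher that follows redirects itself, raises on
# errors, and *streams* the body so rendering can start before the
# whole page has arrived.
import urllib.parse
import http.client

def fetch(url, max_redirects=5):
    """Yield chunks of the response body, following redirects."""
    for _ in range(max_redirects):
        parts = urllib.parse.urlsplit(url)
        cls = (http.client.HTTPSConnection if parts.scheme == "https"
               else http.client.HTTPConnection)
        conn = cls(parts.netloc)
        path = parts.path or "/"
        if parts.query:
            path += "?" + parts.query
        conn.request("GET", path)
        resp = conn.getresponse()
        if resp.status in (301, 302, 303, 307, 308):
            # Redirect: handled here, invisibly to the renderer.
            url = urllib.parse.urljoin(url, resp.getheader("Location"))
            resp.read()
            conn.close()
            continue
        if resp.status == 401:
            raise PermissionError("authorization required: " + url)
        if resp.status >= 400:
            raise IOError("HTTP %d for %s" % (resp.status, url))
        # Stream the body in chunks instead of buffering it whole.
        while True:
            chunk = resp.read(8192)
            if not chunk:
                break
            yield chunk
        conn.close()
        return
    raise IOError("too many redirects")
```

Even this toy version shows why the transport can't be a dumb pipe:
redirect policy, error reporting, and incremental delivery are all
decisions the browser cares about.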
> The first real hard part is lexing/parsing the html in a way that is
> forgiving of errors in exactly the same ways as Netscape and IE. The
> next real hard part is getting the layout (especially tables!) exactly
> the same as Netscape and IE.

html2ps gets this bit done well enough to be a useful tool. _I_ have
little to complain about that. Again, the audience isn't a commercial
buyer.

> Another hard part is SSL, just because ASN1 is a pain in the butt.

No, that can't be hard. Tedious, certainly, but useful.

> The hardest hard part is making Javascript objects and methods that behave
> exactly the same as Netscape and IE (especially if you want to do something
> different with respect to the concepts of "top level windows" or "frames").
>
This, and Java, naturally, are bugbears. But we can perhaps refine
these as conditions demand. The possibility of adding plug-ins seems
the only useful route. Perhaps that is particularly hard, but not yet
daunting.

> And don't say "it doesn't have to be exactly the same as Netscape and IE"
> until you've had users.
>
You made me say it. We have users, but not clients. Often, all I
want is a single page, preferably stripped of images and banner
adverts. What I do think is invaluable is a protocol that interacts
more intelligently with the proxy server.

++L

PS: another point worth making is that IE-5 is far trimmer than
Netscape. Netscape carries far too much baggage, even Navigator. I
have little idea how this is reflected in Mozilla.