From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <4232834834cb615e6196fd51ce05806e@collyer.net>
To: 9fans@cse.psu.edu
Subject: Re: [9fans] new links binary+source
From: Geoff Collyer
In-Reply-To: <6f9f894bbd181956d09c161d6e561004@granite.cias.osakafu-u.ac.jp>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Date: Wed, 17 Dec 2003 18:33:39 -0800
Content-Transfer-Encoding: quoted-printable
Topicbox-Message-UUID: a8b979a4-eacc-11e9-9e20-41e7f4b1d025

Links is still quite slow on some pages, notably nytimes.com and
cnn.com.  I haven't looked into why, but I suspect it's not good at
retrieving lots of small images.  There are no doubt lots of bugs
left in links.  One workaround is to select "Kill all connections"
from the File menu and then go to that URL again.  Whatever has
already been cached should be displayed while it fetches the rest.

I believe Andrey did all the font work and the faster rendering.

The fundamental structure of links is just wrong: it tries to
simulate threads or processes sharing memory by using select to
trigger functions when input arrives, and using a convention that
upon completion of some activity, there's always a `completion
routine' to be called next, possibly by select.  It would be better
on Plan 9 to just rfork a process for each http request and let it
run to completion, but doing that may require doing considerable
violence to links (not that I'm opposed to that ☺).

A few of us are also looking at using webfs, at Russ's suggestion.
The more we can move out of links, the better.
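Purely as an illustration of the rfork-per-request shape described
above (this is not links code; fetchurl is a hypothetical stand-in
for the real HTTP work), a minimal Plan 9 C sketch:

	#include <u.h>
	#include <libc.h>

	/* hypothetical worker: open the connection, read the reply,
	 * hand the bytes back (or let webfs do all of this) */
	static void
	fetchurl(char *url)
	{
		print("fetched %s\n", url);
	}

	void
	main(int argc, char *argv[])
	{
		int i;

		for(i = 1; i < argc; i++){
			switch(rfork(RFPROC|RFFDG|RFNOWAIT)){
			case -1:
				sysfatal("rfork: %r");
			case 0:	/* child: run to completion, then quit */
				fetchurl(argv[i]);
				exits(nil);
			}
			/* parent goes on at once; no select loop,
			 * no completion routines */
		}
		exits(nil);
	}

Each child would still have to hand its data back to the renderer
somehow (a pipe, or a file served by webfs), which is presumably
where the violence to links would come in.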