* Often you go from one page to the next to the next on a website, all calling the same js and css files. Yes, we issue a head request and then pull them out of cache, which is pretty good, but they're already in edbrowse, and these files are very static, so if we pulled or verified them within the past few hours, it's probably fine to reuse the data that is already sitting in the parent page. That would save time and also space. If more than a few hours have passed, then ok, issue the head request to confirm, but then point to the earlier data. We should reuse it at both the C and javascript levels. Implement a new mini-cache at the end of cache.c, apart from the disk cache, just for js and css files: have we seen this url, and if so, here is the data we are holding. Subsequent script tags can use this as their script data, and on the js side we can put it in mw0 and let the javascript tag's script.data point to it. A sketch follows after these two items.

* Download large files in background, and the experimental downloading of all the js and css files in background and in parallel, which all modern browsers do because it speeds things up, especially on a new uncached website. Today we do this by forking processes, which is bad on many levels; so bad that it doesn't work at all for https under gnutls, though it does work under openssl. We need to convert all this to threads. I'm not worried about windows any more, so we only need to understand the standard unix thread manager, pthreads. A sketch of that follows as well.
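Here is a minimal sketch of the mini-cache idea, assuming hypothetical names (MINIENTRY, miniLookup, miniStore, FRESH_WINDOW); none of these exist in cache.c today, and the real thing would tie into our url and buffer structures.

/* Hypothetical in-memory mini-cache for js and css files,
 * sketched apart from the disk cache in cache.c.
 * All names here are illustrative, not existing edbrowse identifiers. */
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define MINI_SLOTS 40
#define FRESH_WINDOW (3 * 3600)	/* a few hours, in seconds */

struct MINIENTRY {
	char *url;		/* absolute url of the js or css file */
	char *data;		/* the file, held in memory */
	int length;
	time_t fetched;		/* when we pulled or last verified it */
};
static struct MINIENTRY miniCache[MINI_SLOTS];

/* Return the held copy if we have it.
 * *stale is set if the entry is older than the freshness window and
 * should be reverified with a head request; if nothing changed, the
 * caller just refreshes .fetched and keeps the same data pointer. */
const struct MINIENTRY *miniLookup(const char *url, int *stale)
{
	int i;
	time_t now = time(0);
	*stale = 0;
	for (i = 0; i < MINI_SLOTS; ++i) {
		if (!miniCache[i].url || strcmp(miniCache[i].url, url))
			continue;
		if (now - miniCache[i].fetched > FRESH_WINDOW)
			*stale = 1;
		return miniCache + i;
	}
	return NULL;
}

void miniStore(const char *url, const char *data, int length)
{
	/* evict the oldest slot; a real version might do better */
	int i, oldest = 0;
	for (i = 1; i < MINI_SLOTS; ++i)
		if (miniCache[i].fetched < miniCache[oldest].fetched)
			oldest = i;
	free(miniCache[oldest].url), free(miniCache[oldest].data);
	miniCache[oldest].url = strdup(url);
	miniCache[oldest].data = memcpy(malloc(length), data, length);
	miniCache[oldest].length = length;
	miniCache[oldest].fetched = time(0);
}

On a fresh hit the script tag points its data at the held buffer directly, no copy; the js side can stash the same pointer in mw0 and let script.data point at it.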
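And here is a minimal sketch of the threads conversion, assuming we stay with libcurl: the documented thread-safe pattern is one easy handle per thread, with curl_global_init called once up front. struct FETCHJOB and fetchOne are illustrative names, not existing edbrowse code. Recent gnutls releases do their own locking, so the https failure we see with forked processes shouldn't arise here.

/* Fetch several js/css files in parallel with pthreads instead of
 * fork().  One curl easy handle per thread; no handle is shared.
 * This is a sketch with no error checking. */
#include <curl/curl.h>
#include <pthread.h>
#include <stdlib.h>
#include <string.h>

struct FETCHJOB {
	const char *url;
	char *data;		/* grows as the file comes in */
	size_t length;
};

/* curl write callback: append each chunk to the job's buffer */
static size_t grab(void *chunk, size_t size, size_t n, void *v)
{
	struct FETCHJOB *j = v;
	size_t bytes = size * n;
	j->data = realloc(j->data, j->length + bytes);
	memcpy(j->data + j->length, chunk, bytes);
	j->length += bytes;
	return bytes;
}

static void *fetchOne(void *v)
{
	struct FETCHJOB *j = v;
	CURL *h = curl_easy_init();	/* one handle per thread */
	curl_easy_setopt(h, CURLOPT_URL, j->url);
	curl_easy_setopt(h, CURLOPT_WRITEFUNCTION, grab);
	curl_easy_setopt(h, CURLOPT_WRITEDATA, j);
	curl_easy_perform(h);
	curl_easy_cleanup(h);
	return NULL;
}

int main(int argc, char **argv)
{
	int i, n = argc - 1;
	pthread_t *tid = malloc(n * sizeof(pthread_t));
	struct FETCHJOB *job = calloc(n, sizeof(struct FETCHJOB));
	curl_global_init(CURL_GLOBAL_ALL);	/* once, before any threads */
	for (i = 0; i < n; ++i) {
		job[i].url = argv[i + 1];
		pthread_create(tid + i, NULL, fetchOne, job + i);
	}
	for (i = 0; i < n; ++i)
		pthread_join(tid[i], NULL);	/* all files now in memory */
	curl_global_cleanup();
	return 0;
}

Compile with -lcurl -lpthread. Hooking the results into the buffer structures is left out; the point is just that each download runs in its own thread and we join them all before proceeding.

Anyone feel like tackling one of these?

Karl Dahlke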