On Thu, Dec 24, 2015 at 08:15:38AM -0500, Karl Dahlke wrote:
> > Also, we're going to *have to* make js run asynchronously one of these days
>
> Well I hope we don't, in the most literal sense of "asynchronous",
> Because it's 10 times harder.

I agree it's harder, but yes, we will need this eventually I think.

> Here's what I mean.
>
> Let http requests run in the background,
> truly in the background, in parallel with everything else,
> because these are the only tasks that might take more than a millisecond.

Not necessarily: there are js confirmation prompts which could block, and
all sorts of other things. For example encryption in js (yes, this is
actually a thing), and many more besides.

> We already do this with our background downloads, and it is truly asynchronous.
> When a request is complete, from xhr for example,
> then a piece of js could run, serialized with other pieces of js
> and with edbrowse itself.
> Separate instances of js never run together, and how could they?
> Think AutoCompartment in mozjs 24, which puts you in a specific context.
> So js from two different windows can't really run in parallel, it all has to be serialized,
> this chunk running in this window in this context in this compartment,
> now that chunk in that compartment etc.
> It has to be a timeshare system,
> which is better for us anyways, easier to design,
> amenable to messages and pipes,
> and *much* more threadsafe if we go with threads in a process.

Unfortunately that will start to have problems with ajax etc., so we really
will need asynchronous in the truest sense of the word, and fairly soon now.
This is why I've been pushing for 1 buffer 1 process (well, 2 processes: one
for js and one for edbrowse) as a design. In that case we have serialisable
time sharing etc., async windows, and, if we do it right, an interface in
which one buffer can block whilst you go on doing other things in other
buffers. It's going to be a lot of work and we'll need a message passing
interface which is a bit more than pipes, but it can be done. (There's a
rough sketch of the sort of thing I mean further down.)

> Timers work this way now.
> It looks like they're completely asynchronous, and they work just like
> they're supposed to, but in reality it's timeshared
> to keep the code simple and the interactions down to a minimum.
> A snippet of js runs under a timer, or some js code runs on behalf
> of edbrowse, or edbrowse itself is editing or formatting or posting
> or whatever. All serialized.
> The big test is
> http://www.eklhad.net/async
> run three of these in parallel and edit something in buffer 4.
> Timers queue up with the work you are doing but it all serializes,
> so when a timer fires in buffer 1, js runs in context 1,
> including AutoCompartment on the js side, and then it finishes and something else happens,
> and the buffers and js and doms don't get mixed up,
> and they don't get confused with the work you are doing in buffer 4 in the foreground,
> and this was a fair piece of work to get right
> and it's not trivial, but now that it's timeshared
> I'm pretty confident the windows and contexts won't collide.
> Oh another test which I have done is to run async as above
> but in the same buffer edit or browse something else on top of it,
> which pushes async down on the stack.
> But it's still there and its timer still runs
> and when you use your back key to go back
> it has the right time on the display.

That's cool, but what if async starts taking a long time, like some sort of
exponential algorithm or something?
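To make the message passing comment above a bit more concrete, here is a
minimal sketch of the sort of framed protocol I'm imagining between edbrowse
and a per-buffer js process, over a pipe or socketpair. Every name and
message type here is invented for the sake of the example; nothing like this
exists in the tree today.

#include <stdint.h>
#include <unistd.h>

enum eb_msg_type {
	EB_MSG_RUN_SCRIPT,	/* edbrowse -> js: run this snippet in this window */
	EB_MSG_SCRIPT_DONE,	/* js -> edbrowse: snippet finished, result follows */
	EB_MSG_TIMER_FIRE,	/* edbrowse -> js: a timer for this window came due */
	EB_MSG_HTTP_DONE	/* edbrowse -> js: a background xhr fetch completed */
};

struct eb_msg_header {
	uint32_t type;		/* one of eb_msg_type */
	uint32_t window;	/* which buffer / js compartment this belongs to */
	uint32_t length;	/* byte length of the payload that follows */
};

/* Send one framed message: fixed-size header, then length bytes of payload.
 * The receiver reads the header first, then exactly length bytes, so the
 * messages stay framed even though a pipe is just a byte stream. */
static int eb_send_msg(int fd, uint32_t type, uint32_t window,
		       const void *payload, uint32_t length)
{
	struct eb_msg_header h;
	h.type = type;
	h.window = window;
	h.length = length;
	if (write(fd, &h, sizeof h) != (ssize_t) sizeof h)
		return -1;
	if (length && write(fd, payload, length) != (ssize_t) length)
		return -1;
	return 0;
}

The point of the explicit window field is that edbrowse can keep handing
work to whichever buffer's process is ready while a slow one blocks, which
is exactly the case a single timeshared thread can't cover.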
> This timeshare approach keeps things manageable, and I hope we are able to
> continue with it.
> It also makes the one process design at least feasible,
> which we can continue to bat back and forth.

I agree time sharing in a single thread is a very nice design where it
works; indeed I've a lot of experience working with such a design. However
it has certain limitations, which is where true multi-threading and/or
multi-processing comes in for parallelisation. Unfortunately, one can't
guarantee that js will execute in a "short" amount of time, hence the need
for some form of truly parallel design.

> As for xhr I would probably create a mock timer that fires 5 times a second
> and does nothing if the background http is still in progress,
> but if it's done then the xhr js snippet runs, under the timer,
> and the timer deletes and goes away.
> It keeps everything timeshared as above.
> At first this design seems awkward and contrived,
> but it might be a bit like democracy, the worst,
> except for all the others.

That has some merit, actually, though I'd do it outside the js world somehow.

> If we do retain our two processes, or even more,
> it would be easier if they were the same image, as they are now, and they just
> do different things.
> We tried to keep edbrowse-js lean and mean but found that it needed 30%, 50%, 60% of edbrowse,
> so now it's all there,
> with a global variable that reminds us we're really the js process.
> cmake under windows really works better if each source file is mentioned once,
> as Geoff explained to me, so putting things together in libraries,
> no problem for two processes but becomes combinatorially difficult for more,
> so again one image is easy.
> Only downside is you can't even build the basic edbrowse without the js library.
> That is a downer,
> but fortunately Chris maintains statics so people can run the program
> even if they have trouble building it for any reason.
> And we don't have to make our libraries shared or dll,
> one image on disk and in memory, just running different ways.

I agree with this; it also makes things easier from a packaging and general
sanity point of view. Of course, if we want to get fancier in the future we
could support launching edbrowse and having it plug into the (for example)
comms manager of an existing edbrowse running under the same user. That
would fix the parallel-instances-in-different-consoles issue.

> > I think that maintaining two concurrent DOMs is only sustainable to a point.
>
> I think we'll have to forever, in that we want edbrowse to browse even
> if there is no js, yet when js does run it mucks with the tree
> directly with native methods and that tree changes line by line,
> moment by moment, before we can resync with edbrowse.
> Anyways it's not easy no matter how you slice it.

I agree, but I'd like to move toward something where js communicates what it
wants to do to the DOM, or any html it wants parsed, back to edbrowse, and
edbrowse then informs js of the DOM as and when it needs to know. What this
basically means is having all the js DOM operations as wrappers which
actually query the main edbrowse process for the state of the DOM when
they're performed (there's a rough sketch in the P.S. below). This also gets
us closer to the DOM idea of certain things being truly read-only, host
objects, etc. Of course there'll be performance impacts, but let's get
things standards compliant and working, then optimise later.

Cheers,
Adam.
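P.S. A very rough sketch of what I mean by the DOM wrappers, for
concreteness only. The struct, the node ids and the wire format are all
invented, and a real version would loop on short reads rather than trusting
a single read(). The point is just that the js side never keeps its own copy
of the tree: it asks edbrowse for the current state every time a property is
read.

#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

struct dom_prop_request {
	uint32_t window;	/* which buffer's tree we are asking about */
	uint32_t nodeId;	/* edbrowse's id for the node */
	char propName[32];	/* e.g. "innerHTML", "value", "checked" */
};

/* Blocking query behind a DOM property getter in the js process.
 * Returns a malloc'ed copy of the property value, or NULL on error.
 * The reply on the wire is a 4-byte length followed by that many bytes. */
static char *dom_query_property(int fd, uint32_t window, uint32_t nodeId,
				const char *propName)
{
	struct dom_prop_request req;
	uint32_t len;
	char *value;

	memset(&req, 0, sizeof req);
	req.window = window;
	req.nodeId = nodeId;
	strncpy(req.propName, propName, sizeof req.propName - 1);

	if (write(fd, &req, sizeof req) != (ssize_t) sizeof req)
		return NULL;
	if (read(fd, &len, sizeof len) != (ssize_t) sizeof len)
		return NULL;
	value = malloc(len + 1);
	if (!value)
		return NULL;
	if (len && read(fd, value, len) != (ssize_t) len) {
		free(value);
		return NULL;
	}
	value[len] = 0;
	return value;
}

The setter side would be the mirror image: js sends the mutation over,
edbrowse applies it to its one authoritative tree, and nothing is
duplicated.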