On Wed, Dec 23, 2015 at 03:44:50PM -0500, Karl Dahlke wrote:
> > The biggest concern is that we have to share a whole bunch of
> > HTTP-related state across process boundaries.
>
> Multi-threading sounds like the answer.
>
> Yes, and the existing message serializing protocol
> should prevent the kind of threading nightmares that we've
> all experienced in the past.
> One thread runs while the other waits.

Not necessarily: there are lots of globals and static buffers in the
current code base, any one of which is a potential threading issue.

Also, we're going to *have to* make js run asynchronously one of these
days, and then all the assumptions about the synchronising nature of
the protocol will go out the window. If we're multi-threading based on
those assumptions, that's going to make a difficult task even harder.
In addition, such segfaults are usually horrible to debug, since they
tend to be difficult to reproduce (e.g. due to timing issues).

> > wrap all of our networking code in its own process.
>
> Pipes everywhere, looks like a mess,
> and where does it end, how many more processes?
> Remember that IPC is limited on Windows.
> I think this creates entropy, whereas one process would probably work just fine.

No, because we're going to have to pull comms out anyway eventually,
in order to get async networking, which is increasingly required.
Managing this by encapsulating the comms into a (possibly completely
thread-safe and thus multi-threaded) comms manager makes a lot of
sense in this case.

> > allowing edbrowse-js to call into the main edbrowse
> > process to make HTTP requests?
>
> We thought about this one before,
> when we first learned js had to parse html and create corresponding js objects
> right now, as a native method, before the next line of js ran.
> Those objects have to be there.
> So native code would have to stop and send a message back to edbrowse,
> but it's waiting for the return from running the native code,
> it's not waiting for a message that says
> "hey hold up and here's some html and parse it
> and then call me reentrantly so I can create those javascript objects
> and then we unwind the stack together and hope that the reentrant calls into js
> have not disturbed the thread of execution that was running,
> which I can't guarantee for edbrowse or whatever
> js engine we're using this week or next week."
> Anyways it was way too convoluted, and much easier
> to make sure both processes had access to tidy and could do that work.
> In the same way, both processes, or both threads,
> as you prefer, now have access to httpConnect for making that call.

Actually, although this was a quick fix, I think that maintaining two
concurrent DOMs is only sustainable to a point. Since the interface is
going to have to become more asynchronous anyway, this should be less
of an issue in future.

> > Does js really need to have direct, unmediated access to HTTP,
>
> It needs to make the http call before the next line of js code runs.
> A native method that does not disturb the flow of execution.

Actually, that's incorrect. As far as I know, although synchronous
calls are implemented on top of it, AJAX is fundamentally
asynchronous, event-driven networking; that's what the state variables
are used for. Hence the Asynchronous part. If we keep designing as if
the world of the internet is synchronous, then fixing our design to be
asynchronous is going to become increasingly difficult.

Regards,
Adam.

PS: I'm also going to look into portable IPC options to replace the
pipes-everywhere mechanism we're currently using.