On Thu, Aug 13, 2015 at 11:37:54PM -0400, Karl Dahlke wrote:
> > In terms of an architecture I'm thinking of aiming to have the DOM as an
> > abstraction which can be used by both the rendering code and the js. Thus:
> > html is parsed into a node tree which is converted to our DOM objects
> > These objects are exposed to js via wrapper objects in the js world such that
> > any changes js makes are automatically passed through to the DOM
> > The renderer renders the DOM automatically on page load,
> > with support for re-rendering on a user command (with some sort of
> > notifications for js induced changes)
> > Form fields are altered in the DOM, which may or may not trigger a re-rendering
>
> Yes this can cause a rerender, example onchange or onselect code,
> as exercised by the regression tests in jsrt.

Yeah, that was my thinking.

> > Any re-rendering would be partial, i.e.
> > only the changed segments of the DOM are re-rendered
>
> This sounds like a diff between the old dom and the new,
> but it's easier to just rerender and then diff the old buffer against the new,
> and then report the lines that have changed, which is how edbrowse works today.
> Realize that a small change in dom could change the buffer
> on down the page, even into dom elements that have not changed.
> So I think you always want to just call render() and then
> diff the two buffers.
> Maybe even a diff library we can use, if not /bin/diff itself.

Yeah, I guess I'm just concerned about the js-intensive pages which are
becoming much more common taking a long time to re-render, but I can see
that always doing a full re-render is the easiest and probably most robust
approach. I'd quite like to have some smart approach to avoid making copies
of unchanged buffer lines whilst rendering, but I'm not too sure how that'd
work.

> These are minor points; and you are definitely on track.
> This is where we need to be.

Thanks.

Regards,

Adam.
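
[Editor's note: the render-then-diff idea above can be sketched briefly. This is a minimal illustration in Python using the standard-library difflib module (one example of the kind of "diff library" Karl mentions), not edbrowse's actual C implementation; the function name `report_changed_lines` and the sample buffers are hypothetical. Note that a line diff also identifies the runs of equal lines, which is exactly the information a smarter renderer would need to share unchanged lines instead of copying them.]

```python
import difflib

def report_changed_lines(old_buffer, new_buffer):
    """Diff two fully rendered buffers (lists of lines) and report
    which line ranges changed.  Hypothetical sketch of the
    "rerender, then diff old buffer against new" approach."""
    matcher = difflib.SequenceMatcher(a=old_buffer, b=new_buffer)
    changes = []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag != "equal":
            # (operation, old line range, new line range), 1-based inclusive
            changes.append((tag, i1 + 1, i2, j1 + 1, j2))
    return changes

# A small js-induced change mid-buffer is reported as one changed range;
# the unchanged Title and Footer lines fall into "equal" runs and are skipped.
old = ["Title", "First paragraph.", "Footer"]
new = ["Title", "First paragraph, edited by js.", "New line.", "Footer"]
print(report_changed_lines(old, new))  # [('replace', 2, 2, 2, 3)]
```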