On Sat, Dec 26, 2015 at 12:45:24AM -0500, Karl Dahlke wrote:
> In follow up to my previous post on processes and threads,
> I propose the following small project / test.
>
> At present, http or ftp can download in the background, at least on unix.
> This is useful for large files, slow internet, limited ram, etc.
> It works by forking the process and reissuing the http request.
> This is harmless 99% of the time, but even here you wonder
> if the separate processes and separate curl instances could mismatch.
> You go on to look at 3 more websites during the download,
> more cookies are added to the jar,
> then the download finishes and that process writes out the jar
> and clobbers the new cookies.
> Since I don't really understand when curl writes the jar,
> it's hard to say whether this could happen or not.

Neither do I, to be honest. It seems like it could, though, unless curl uses some sort of shared memory when a curl handle is shared across processes.

> With this in mind, how about this modest project.
>
> A download determined to be in the background
> spins up a thread, rather than forking a process.
> The thread runs the http request, and when that is done
> it sets state = 0 in the appropriate BG_JOB structure,
> then the thread exits.
> Seems manageable, and it has the advantage of working on windows,
> which currently does not support the download-in-background feature at all.
> Also, and more important, we can test the threadsafe nature of curl.
> While downloading a large file in the background, pull up some other small web pages.
> Do these all run in their own threads, and does the whole assemblage
> still use one cookie space,
> making transient cookies available to all threads
> and consistently writing the cookie jar?
> We really need to know this.

Yeah, that sounds like a good idea. In fact, I'm wondering how feasible it would be to make all network activity spin up threads by default, and then choose whether or not to background each thread.
> The only parts we need to make thread safe for this
> are the curl buffers and that job structure.
> It's incremental, and as Adam writes,
> it increments us in the right direction.
> What do you think?

I think if this works, it'll certainly help with a lot of things. I'd still quite like to have a shared comms process in the end, and to keep js in a separate process, but I was thinking of multi-threading the comms stuff anyway. That works out to 3 processes for the moment, and it fixes the issues of js and curl writing the cookie jar in parallel, etc. I'm going to look into IPC on Windows (though I can't test it) to see if there is *anything* we can do to make this happen.

Cheers,
Adam.