On Friday, 19.09.2014 at 11:18 -0400, Yaron Minsky wrote:
> We had a fair number of problems with omake.
>
> - We've run into lots of performance problems on large builds (2-3
>   million lines, many thousands of targets): omake took a very long
>   time (a few minutes) to restart.

Well, I never got into those dimensions. The largest builds had only
around 250 Kloc, and omake worked well at that size. I don't know much
about the internals of omake, but I can imagine that certain things are
recomputed too often, so that you run into performance problems beyond
a certain size. But why can't this be nailed down?

> - The build itself has limited parallelism because of various
>   bottlenecks inside omake, like the fact that it computes its
>   md5sums in a single process.

Hmm, is that really the bottleneck? Maybe switching to a faster hash
algorithm could solve this. (E.g. just use RC4 and XOR up the output
stream; that should be extremely fast.)
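To make that concrete, here is a quick, untested sketch - the function
is made up for illustration, not a vetted design, and of course not
part of omake. It only indicates that something much cheaper than MD5
is conceivable for change detection:

    (* Sketch of an RC4-based checksum. NOT cryptographically sound
       and not part of omake - illustration only. *)
    let rc4_checksum (data : string) : string =
      (* start from the identity permutation *)
      let s = Array.init 256 (fun i -> i) in
      let swap i j = let t = s.(i) in s.(i) <- s.(j); s.(j) <- t in
      (* absorb the input by running the RC4 key schedule once per
         256-byte chunk *)
      let absorb off len =
        let j = ref 0 in
        for i = 0 to 255 do
          j := (!j + s.(i) + Char.code data.[off + i mod len]) land 0xff;
          swap i !j
        done
      in
      let n = String.length data in
      let off = ref 0 in
      while !off < n do
        let len = min 256 (n - !off) in
        absorb !off len;
        off := !off + len
      done;
      (* emit 16 keystream bytes as the digest *)
      let i = ref 0 and j = ref 0 in
      let out = Bytes.create 16 in
      for k = 0 to 15 do
        i := (!i + 1) land 0xff;
        j := (!j + s.(!i)) land 0xff;
        swap !i !j;
        Bytes.set out k (Char.chr s.((s.(!i) + s.(!j)) land 0xff))
      done;
      Bytes.to_string out

RC4 needs only a handful of array operations per input byte, so a loop
like this should be considerably cheaper than MD5 - and nothing would
stop omake from hashing several files in parallel on top of that.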
> - The default rules didn't do a good job of clearing out stale build
>   artifacts (important for getting reliable incremental builds), and
>   we had to put quite a lot of painful engineering in place to make
>   that work. We needed to do similar work in Jenga to make the same
>   thing happen, but it was a lot more fun writing that code in OCaml!

If you just stick to the built-in macros, incremental builds are very
reliable. If you start writing your own rules, there is some chance
that you overlook dependencies. But that is independent of which build
system you use.

> I am not convinced that putting more complicated calculations into
> programs will work well. I know of no system taking this approach
> that does a good job of dealing with complex dependencies that are
> discovered over the course of a build (which you get with staged
> programming). It seems that a more expressive, monadic structure
> fits the bill, and hammering that into the round peg of shell
> invocations won't work well, I suspect.

In the end, I think it would be naive to assume that one size fits
all. When you have a large build like yours, you probably have quite
special requirements and are willing to do without comfort features
like a macro language. So: no single build system to rule them all.
Nevertheless, I'm very much in favor of improving omake.

Gerd

> y
>
> On Fri, Sep 19, 2014 at 10:00 AM, Alain Frisch wrote:
> > On 09/19/2014 03:36 PM, Gerd Stolpmann wrote:
> >> Well, I frequently run into the difficulty that I need some
> >> special omake function that would be trivial to develop in OCaml
> >> (e.g. associating some data with other data, filtering things,
> >> doing string transformations), but writing it in the omake
> >> language costs me time for development and testing. I have a
> >> quite simple idea to improve this: besides the OMakefile there
> >> could also be an OMakefile.ml, in which you can define arbitrary
> >> helper functions that would be automatically callable from the
> >> OMakefile. I think this is not really complicated to do - you'd
> >> need to build a custom omake executable whenever OMakefile.ml
> >> changes, scan the OMakefile.ml interface for function signatures
> >> of the callable form, and generate some glue code. (Later this
> >> idea could be extended by allowing OCaml code to emit new rules,
> >> as described in an earlier post.)
> >
> > I can see some cases where it would indeed be more comfortable to
> > implement build-system calculations in OCaml. But couldn't most of
> > these cases be implemented as external programs that are called by
> > omake functions and written e.g. in OCaml? This forces you to pass
> > all the data required by the calculation explicitly to those
> > external programs, but how often is this a problem? With some
> > convention, the external program could even return a description
> > of new dependencies, to be interpreted by some omake code and
> > injected into the actual dependency graph. AFAICT, all of this is
> > already possible.
> >
> >> I see what you mean. In a recent project I had to define all
> >> variables with library names, findlib names, intra-project
> >> library dependencies etc. in the global OMakefile, because they
> >> are needed in potentially all sub-OMakefiles. That is of course
> >> not where these things are most naturally defined.
> >
> > A variant is to have e.g. an OPreOMakefile file in each directory
> > and to arrange for the toplevel OMakefile to include all of them
> > (in a proper order) without processing the rest of the project.
> > This way, you only need to "lift" the full list of directories,
> > and the actual data definitions can be put where they belong.
> >
> >> Maybe we should allow switching to the global context anywhere?
> >> I think this is solvable.
> >
> > I'm not sure this would easily fit into the current functional
> > semantics.
> >
> >> It could be something simple, like matching the wildcard rules
> >> against the real files.
> >
> > Reading the directory content should be quite cheap, and the rest
> > is just string processing, which should be even cheaper (if done
> > properly). It would be great to identify such hot spots; maybe
> > some very local tweaks to the algorithms or data structures could
> > improve performance a lot.
> >
> > Alain

------------------------------------------------------------
Gerd Stolpmann, Darmstadt, Germany    gerd@gerd-stolpmann.de
My OCaml site:          http://www.camlcity.org
Contact details:        http://www.camlcity.org/contact.html
Company homepage:       http://www.gerd-stolpmann.de
------------------------------------------------------------
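P.S. To make the OMakefile.ml idea from the quoted thread a bit more
concrete, here is the kind of file I have in mind. Everything below is
hypothetical - no such mechanism exists in omake today, and the glue
code generator would of course restrict the exported functions to
types it understands (say, string -> string):

    (* Hypothetical OMakefile.ml. A custom omake executable would be
       rebuilt whenever this file changes, and generated glue code
       would make the functions below callable from the OMakefile. *)

    (* map an intra-project library name to its findlib package;
       lookups like this are clumsy in the omake language
       (the mapping below is a made-up example) *)
    let findlib_of_lib = function
      | "netstring" -> "ocamlnet"
      | name -> name

    (* a simple string transformation: source file to module name
       (chop_extension raises if there is no extension - good
       enough for a sketch) *)
    let module_of_file file =
      String.capitalize (Filename.chop_extension (Filename.basename file))

In the OMakefile one would then write, say, $(module_of_file src/foo.ml)
and get Foo back - again, this calling syntax is invented and only
shows the intended direction.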
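P.P.S. Regarding the wildcard matching: to give a feeling for how
cheap "readdir plus string processing" is, here is a rough sketch of
such a hot loop. It only handles the common "*<suffix>" form and is
certainly not omake's actual implementation:

    (* match the entries of a directory against a "*<suffix>" pattern,
       e.g. "*.ml": one readdir plus a suffix test per entry *)
    let matches_suffix pat file =
      (* assumes pat starts with '*'; the rest is the suffix *)
      let suf = String.sub pat 1 (String.length pat - 1) in
      let ls = String.length suf and lf = String.length file in
      ls <= lf && String.sub file (lf - ls) ls = suf

    let wildcard_matches dir pat =
      List.filter (matches_suffix pat) (Array.to_list (Sys.readdir dir))

I would expect this to be fast even for directories with thousands of
entries, so if omake spends noticeable time here, the cost probably
comes from somewhere else (stat calls, digests, or repeated
re-evaluation, for instance).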