* Rich Felker [2013-07-02 03:49:20 -0400]:
> What about a mix? Have the makefile include another makefile fragment
> with a rule to generate that fragment, where the fragment is generated
> from comments in the source files. Then you have full dependency
> tracking via make, and self-contained tests.

i wrote some tests, but the build system became a bit nasty. i attached
the current layout with most of the test cases removed, so someone can
take a look and/or propose a better build system before i do too much
work in the wrong direction.

each directory has a separate makefile because they work differently.
the functional/ and regression/ tests share the same makefile: it sets
up a lot of make variables for each .c file in the directory, and the
variables and rules can be overridden by a similarly named .mk file
(this seems to be more reliable than target-specific variables). right
now it builds both static and dynamic linked binaries; this can be
changed.

i use the srcdir variable so it is possible to build the binaries into
a different directory (so a single source tree can be used for glibc
and musl test binaries). i'm not sure how useful that is (one could
use several git repos as well).

another approach would be one central makefile that collects all the
sources, but then you have to build the tests from the central place.
(i thought that sometimes you just want to run a subset of the tests,
and that's easier with the makefile-per-directory approach; another
issue is that the dlopen and ldso tests need the .so binary at the
right path at runtime, so you cannot run those tests from an arbitrary
directory.)

yet another approach would be a simple makefile with explicit rules and
no fancy gnu make tricks, but then the makefile needs to be edited
whenever a new test is added.

i'm not sure what's the best way to handle the common/ code in the
case of decentralized makefiles. for now i collected it into a
separate directory that is built into a 'libtest.a' which is linked
into all tests, so common/ has to be built first.

i haven't yet done proper collection of the reports, and i'll need
some tool to run the test cases: i don't know how to report the signal
name or number (portably) from sh when a test is killed by a signal
(the shell prints segfault etc. to its stderr, which may be used), and
i don't know how to kill a test reliably after a timeout.

i hope this makes sense.
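to make the variable-per-.c-file scheme concrete, here is a rough
sketch (made-up variable and target names, gnu make) of what the
shared functional/ and regression/ makefile does:

```make
# sketch of the shared functional/regression makefile idea: one binary
# (dynamic and static) per .c file, with optional per-test overrides
# coming from a similarly named .mk fragment
srcdir = .
SRCS = $(sort $(wildcard $(srcdir)/*.c))
NAMES = $(SRCS:$(srcdir)/%.c=%)

all: $(NAMES:=.exe) $(NAMES:=-static.exe)

# a foo.mk next to foo.c may set foo.CFLAGS / foo.LDLIBS, add
# prerequisites, or replace the default rules entirely
-include $(SRCS:.c=.mk)

%.exe: $(srcdir)/%.c
	$(CC) $(CFLAGS) $($*.CFLAGS) -o $@ $< $(LDLIBS) $($*.LDLIBS)

%-static.exe: $(srcdir)/%.c
	$(CC) $(CFLAGS) $($*.CFLAGS) -static -o $@ $< $(LDLIBS) $($*.LDLIBS)
```

the -include line is also where the generated-fragment idea quoted
above could plug in: a rule that generates foo.mk from comments in
foo.c would give make full dependency tracking while keeping the tests
self-contained.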