On Thu, Jul 31, 2014 at 05:47:16PM -0400, Rich Felker wrote:
> I'd like to figure out how to set up the openadk test framework, or
> adapt things from it, for automated testing of all musl ports. The repo

I like this idea. I'll probably start looking into putting this
together soon if nobody gets to it first.

> is here:
>
> http://www.openadk.org/cgi-bin/gitweb.cgi?p=adk-test-framework.git
>
> There's a lot of stuff hard-coded for the openadk toolchains, whereas
> I'd like to be able to use it with the musl-cross toolchains, which
> are more canonical. The scripts also seem to be incompatible with
> busybox (using GNU features in something for making the initramfs,
> probably cpio?). And by default it tries to test musl 1.0.1 and
> doesn't have an obvious way to test from git.
>
> The idea I have in mind for using it as a basis for automated testing
> would be:
>
> 1. Have a Makefile and a subdirectory full of cross compiler trees.
>    The set of archs could be based purely on the set of cross
>    compilers, computed by the Makefile based on directory contents.
>
> 2. Have the Makefile clone/pull one musl tree from upstream git and
>    then clone/pull an additional copy of the tree per arch/subarch
>    (with the set of archs determined as in point 1).
>
> 3. Build each musl (via recursive make, i.e. make calling configure
>    and make in each musl clone) and install them into the cross
>    compiler include/lib dirs for their corresponding cross compilers.
>
> 4. Do like points 2 and 3, but for the libc-test repo.
>
> 5. Use the kernel, initrd, and qemu stuff from the adk-test-framework
>    to build initrd images and run them.

Last I looked, libc-test didn't support being built on one system and
run on another. Has this changed?

> The idea of using a Makefile (aside from declarative being The Right
> Way to do anything like this :) is to make it easy to make a single
> fix and re-run the test without rebuilding everything.
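As an aside, the arch discovery in point 1 is simple enough to sketch.
Assuming a hypothetical layout of one cross toolchain per subdirectory of
toolchains/, each named after its target triple (e.g. arm-linux-musleabi) --
none of this comes from openadk or musl-cross itself -- the arch set falls
out of directory contents alone:

```shell
# Sketch only: assumes toolchains/<target-triple>/ directories, which is
# an invented layout, not the actual framework's. list_archs prints one
# arch name per toolchain directory, so nothing has to be hand-maintained.
list_archs() {
    toolchains=${1:-toolchains}
    for tc in "$toolchains"/*/; do
        [ -d "$tc" ] || continue        # skip if the glob matched nothing
        tc=${tc%/}                      # strip trailing slash
        target=${tc##*/}                # basename: the target triple
        printf '%s\n' "${target%%-*}"   # arch = first triple component
    done
}
```

In the Makefile itself the same computation would presumably be something
like `$(notdir $(wildcard toolchains/*))` fed through a `$(subst ...)` to
peel off the arch, but a shell helper keeps the sketch self-contained.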
Of course, +1.

> Rebuilding (via a "make clean" or similar) should of course be an
> option and what's usually used.
>
> Does any of this make sense? Or is it simpler to get this kind of
> automated testing just by making a few tweaks to the existing
> scripts?
>
> It would be ideal if the whole thing would work with dumping the test
> repo on a system not necessarily set up for running it (e.g. some
> kind of build farm or 'cloud computing' resource) and just produce
> results that could be read offline.

+1

> Rich

-- 
Bobby Bingham
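The "results readable offline" part could stay trivially simple. Under an
invented layout (not anything the framework actually produces) where each
per-arch qemu run leaves results/<arch>/REPORT with one line per failing
test, a collector folds everything into one plain-text summary you can
copy off the build farm:

```shell
# Hypothetical layout: results/<arch>/REPORT, one line per failing test,
# empty file meaning a clean run. collect_results emits a single
# plain-text summary suitable for reading offline. Illustrative only;
# adk-test-framework's real output format may differ.
collect_results() {
    resdir=${1:-results}
    for rep in "$resdir"/*/REPORT; do
        [ -f "$rep" ] || continue
        arch=${rep%/REPORT}
        arch=${arch##*/}
        printf '== %s: %s failure(s)\n' "$arch" "$(wc -l < "$rep")"
        cat "$rep"
    done
}
```

Since it's plain text, the summary works equally well as a build-farm
artifact, a mail body, or a file scp'd home after the run.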