From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 18 Nov 2010 00:48:18 +0100
From: Enrico Weigelt
To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
Message-ID: <20101117234817.GA10365@nibiru.local>
References: <703b2539-027e-4f9f-a739-00b59f6d3d82@v28g2000vbb.googlegroups.com> <20101113192425.GC22589@nibiru.local> <79E9F966-3C4E-44D9-8B1F-D22C9548CE74@gnu.org> <20101115010237.GA2116@nibiru.local>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To:
User-Agent: Mutt/1.4.1i
Subject: Re: [9fans] Plan9 development
Topicbox-Message-UUID: 85c786c0-ead6-11e9-9d60-3106f5b1d025

* Gary V. Vaughan wrote:

> I have never used sysroot in all that time,

Maybe that's the problem. ;-p

I'm using sysroot all the time, since I don't want to tweak every
single package so that the right paths are found, at build time as
well as at runtime. With sysroot you build and install everything as
it would be on the target system (except that certain unneeded files
will be removed from the production target image). Most packages
won't need any special handling for that, as long as there's nobody
in the middle who messes up the paths.

> and no one else offered patches until quite recently.

Actually, I was trying to fix it several years ago, but the whole
codebase was so utterly complex that I decided to write a new
universal toolchain wrapper with a consistent and platform-agnostic
interface, and additionally a drop-in replacement for libtool
(ltmain.sh).

> Libtool has been immeasurably useful to me entirely without
> that particular feature.

For my projects it has been an absolute catastrophe, since I simply
*need* sysroot. And in the end it was much easier to replace it
completely than to try to fix it.
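The sysroot workflow described above can be sketched roughly like this
(a minimal sketch only; the target triplet, paths, and configure flags
are illustrative assumptions, not taken from any real setup):

```shell
# Hypothetical cross build against a sysroot; triplet and paths are
# made up for illustration.  The sysroot holds the target's headers
# and libraries, so packages build against it without per-package
# path tweaks.
SYSROOT=/opt/targets/armv7-linux
TRIPLET=arm-linux-gnueabihf

# The compiler resolves #include and -l lookups inside $SYSROOT:
export CC="$TRIPLET-gcc --sysroot=$SYSROOT"

./configure --host=$TRIPLET --prefix=/usr
make

# Install with the exact layout the files will have on the target;
# unneeded files can be stripped from the production image later.
make DESTDIR=$SYSROOT install
```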
> > The whole idea of libtool essentially being a command line filter
> > instead of defining its own coherent abstraction interface
>
> What is incoherent or unabstract about offering a static-library
> like interface to building shared libraries, or an ELF like
> library versioning scheme?

I'm talking about libtool, the big script, not the ltdl library.
The main design problem here is that it's called instead of the
direct toolchain commands, but with a derivative of their command
line interface, changing commands in unobvious ways. Its interface
varies between platforms/toolchains. By "coherent abstraction
interface" I mean a completely different interface that hides all
the platform/toolchain-specific details and stays the same
everywhere.

> > and one
> > implementation/configuration instance per target instead of an
> > autofooled instance per individual package build.
>
> How does that scale in the face of a dozen or more OSes,

For each platform you'll need a proper target configuration, but
only once per platform. And you can easily tweak it to your specific
needs, without touching all the individual packages.

> each with at least a handful of supported library releases

Libraries simply have to provide a coherent interface across
releases. Otherwise they're simply broken. Either fix them or don't
use them. Everything else leads to maintenance overhead of
exponential complexity.

> and compiler revisions each with a

Same as with OSes: configure the target config once and for all
(most likely the job of the distro maintainers).

> handful of vendor maintenance patches, each with several hundred API
> entry-points of which several dozen of each have non-conformances or
> outright bugs.

Fix the bugs instead of "supporting" broken stuff. If you can't fix
the system itself, the fixes can be put into the toolchain (e.g. use
fixed versions of broken/missing libc functions).
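One way such a toolchain-level fix can be wired up is a single probe
when the target configuration is created, rather than a probe in every
package build. A minimal sketch in plain shell (the helper name
`have_func` and the compat object path are my own illustration, not
from unitool or libtool):

```shell
#!/bin/sh
# Probe once, at target-configuration time, whether the target libc
# provides a given symbol; emit a config variable accordingly.
# Names and paths here are illustrative only.
have_func() {
    # Declare the symbol and reference it, so the link fails
    # if the libc does not provide it.
    printf 'extern int %s(); int main(void) { return %s != 0; }\n' \
        "$1" "$1" > conftest.c
    if ${CC:-cc} conftest.c -o conftest 2>/dev/null; then
        rm -f conftest.c conftest
        return 0
    else
        rm -f conftest.c conftest
        return 1
    fi
}

# Written once into the per-target config, then reused by every build:
if have_func strlcpy; then
    echo 'HAVE_STRLCPY=1'
else
    echo 'EXTRA_OBJS=compat/strlcpy.o'   # link a fixed version instead
fi
```

The point of the sketch is that the result lands in the target
configuration once, instead of being re-detected by every package.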
> Worse, many of my clients mix and match gcc with vendor ldd and
> runtime in various combinations or install and support 3 or more
> compilers per architecture.

They simply shouldn't do this. If the vendor's toolchain/libc is
broken, use a fixed one.

> Libtool figures out what to do in all of those thousands of combinations,
> by probing the environment at build time...

There are cases where these things cannot be guessed at build time,
or where libtool simply guesses wrong. And I don't see that libtool
offers a clean interface for _specifying_ those things once per
target type.

> I'd *much* rather wait 90 seconds for each build than try to maintain a giant
> tabulation with thousands of entries, which go out of date every time a new
> patch or revision of libc or the compiler or the os or the linker comes along.

You don't need either. Just have one exact configuration per target
(which itself *might* be created with the assistance of a separate
detection tool). Normally that's the job of the distro maintainers.

> > See git://pubgit.metux.de/projects/unitool.git
>
> Java? For a bootstrapping tool?

This is just a reference implementation, a proof of concept.

> Does Java even get worthwhile support outside of Windows and Solaris these days?

Yes, it works fine for example on GNU/Linux - gcc builds ELF
executables out of it easily.

> If it works for you, that's great, but I would have an extremely
> hard sell on my hands if I told my clients they would need to have
> a working Java runtime on Tru64 Unix before I could provide a zlib
> build recipe for them :-o

In my model, unitool is part of the toolchain. In a way, it *is* the
toolchain (or at least its front end). BTW: zlib uses neither
libtool nor autoconf.

> > That's an issue of individual platform backends, which should be
> > completely transparent to the calling package.
>
> Agreed, that's what libtool provides, but to do that it needs to be intimately
> familiar with how each combination works.
> It certainly shouldn't be trying
> to do that without calling the vendor compiler and linker.

Still, it's not completely transparent. It rewrites parts of the
command line and passes the rest through. Similar to m4, it's a
filter, not a real abstraction.

> >> 3. There's no use in fighting against GNU Autoconf and GNU Automake,
> >
> > Ah, resistance is futile ? ;-o
>
> Without user acceptance, the 2 man years of effort I could sink into a new
> all singing all dancing build system would be a waste of my time.

That's not necessary. Just clean up the code of the packages you
need. It doesn't take a new super-duper build system - in most
cases, clean Makefiles and a _properly_ written shell script (where
most of the functions can come from a separate library package,
which collects the knowledge) will suffice. All that's now done via
complex and error-prone m4 macros could easily be done using shell
functions.

> Yep. If I'm porting a cmake package (for example) to the 30 architectures
> we support, and shared libraries are required - calling the installed libtool
> from the cmake rules is an order of magnitude less work than trying to
> encode all the different link flags, install and post install rules or other
> system specific peculiarities into the compile and link rules in every build
> file...

Hmm, doesn't cmake support including a common file that contains the
relevant variables and is generated by some well-written shell
script, or tweaked manually when required?

cu
--
----------------------------------------------------------------------
 Enrico Weigelt, metux IT service -- http://www.metux.de/

 phone: +49 36207 519931 email: weigelt@metux.de
 mobile: +49 151 27565287 icq: 210169427 skype: nekrad666
----------------------------------------------------------------------
 Embedded Linux / Porting / Open-Source QA / Distributed Systems
----------------------------------------------------------------------