From: Dave Eckhardt
Subject: Re: [9fans] impressive
To: Fans of the OS Plan 9 from Bell Labs <9fans@cse.psu.edu>
Date: Tue, 9 May 2006 17:56:03 -0400

The "old way" was to create an explicit portability layer, with the application running on top of that layer and one version of the layer for each platform. So each application would need one expert to maintain the SunOS blob, one to maintain the HP-UX blob, etc.

The autoconf approach (following perl's pioneering configuration infrastructure) was based on the observation that the 1995 version of the SunOS blob and the 1995 version of the Irix blob looked somewhat like each other, especially in the sense that if you looked at the 1998 version of each, the underlying platforms had added many of the same features. Also, if you looked at the SunOS blob for one application and the SunOS blob for another application, they would of course share many similarities.

So the theory was to ignore platforms and start testing for *features* (a sketch of such a test appears at the end of this message), and the hope was that once a *feature* had been coded for, platforms would add the feature over time and all software using autoconf would automatically start using it.

Great! But there are problems:

1. The language chosen for the configuration infrastructure, sh, is an issue, because it is REALLY hard to debug 5,000-line shell scripts which were generated from sh and m4. In the old days, if you tried to build a package on a too-new version of Solaris, you might run into trouble with the Solaris blob and need to contact the Solaris expert who maintained it. But *nobody* is an expert at debugging 5,000-line machine-generated sh scripts, so you just lose.

2. Feature-detection code ages very fast. If the foo() function used to be provided in -lmumble but suddenly is moved to -lfrobotz, or if /etc/foo.cfg is moved to /var/etc/sbin/foo.cfg, then the test code for the foo feature (cleverly written in m4 and turned into 237 lines of unreadable sh) is broken. Maybe you can figure out how to fix it for your system, but the bad news is that you need to fix it the same way for multiple applications as you try to build them.

3. The interface between applications and the auto* infrastructure changes pretty rapidly. So if version 3 of your application is designed to work with version 47 of auto*, and version 49 of auto* has better foo-detection code, that doesn't help you at all, because if you ask version 49 of auto* to rebuild the 5,000-line shell script that configures the application, you'll find yourself debugging amusing m4/sh errors (e.g., AC_META_FIZZBANG has been replaced by AC_HYPER_FIZZBANG, which has 7 more args and uses a different quoting style).

The end result is that an autoconf-maintained package will build pretty much on exactly the (platform, version) tuples that one of the package maintainers built it on as part of the release process, and likely not anywhere else. This is uncannily like the bad old days. Furthermore, when something is broken, you're not trying to fix C code in the application's Solaris blob, but trying to debug a broken 5,000-line machine-generated sh script.
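To make the flavor of this concrete, here is a minimal sketch of the input side of one of these feature tests, reusing the hypothetical foo()/-lmumble names from above (the package name "frob" is equally made up):

    # configure.ac -- the m4-with-sh input that autoconf expands
    # into the generated ./configure script
    AC_INIT([frob], [1.0])
    AC_PROG_CC
    # Does foo() live in -lmumble on this system?  If so, define
    # HAVE_LIBMUMBLE and add -lmumble to LIBS.
    AC_CHECK_LIB([mumble], [foo])
    # Define HAVE_FOO if foo() can be linked at all.
    AC_CHECK_FUNCS([foo])
    AC_CONFIG_HEADERS([config.h])
    AC_OUTPUT

Running autoconf over those few lines expands each macro into dozens or hundreds of lines of generated sh in ./configure; when foo() moves to -lfrobotz, it is that generated output, not this tidy input, that you wind up reading.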
Sometimes some of the application-supplied feature tests depend on gawk as opposed to any other dialect of awk, so despite the huge tracts of portability infrastructure, in practice all of the (platform, version) tuples which work are some flavor of Linux. Resistance is futile! You will be assimilated!

Dave Eckhardt