From: Dave Eckhardt
Subject: Re: [9fans] First-timer help
To: 9fans@cse.psu.edu
Date: Wed, 20 Jul 2005 12:38:49 -0400

> it might be `ironic' if `autoconf' had ever met
> its notional specification.  `autoconf' is not
> even an `oxymoron', just a `moron'.

The old (pre-autoconf) idea was that each package would try to encapsulate platform dependencies into a couple of files, maybe SysV.c, BSD.c, etc.  When a new release of one of those platforms came out, somebody on that platform would have to hack that file.

Often the code for SysV.c and BSD.c would have lots of similar chunks in it.  If you had N platforms, some chunks might appear in N/3 of the platform files.  More likely, *almost*-identical chunks would be gratuitously different just because they were maintained by different people.  And while the package would build on M releases of N platforms, it would build on *only* them--anything new was guaranteed to break something.

The vision of autoconf was to focus on the chunks, not the platforms, with probe code (in "portable /bin/sh") choosing which chunks to use on each release of each platform.  This way, when a package finds itself on a new platform, it can automatically choose a locking method from column A, a networking API from column B, etc., thus *theoretically* working with zero porting effort by anybody.

But this approach has a fatal flaw: it turns out that the *probe* code is non-portable.  Just one example: I saw a piece of code designed to check whether to link against Heimdal or MIT Kerberos libraries.  It "worked" by testing for the existence of a particular function, which at one moment in history was present in MIT Kerberos but not in Heimdal.  Six months later, of course, the Heimdal guys caught up and the probe code broke.

The result of auto* is that a package will build on M releases of N platforms, but will build on *only* them--anything new is guaranteed to break something.  In other words, pretty much the same as before.

The bad news is that a large, complex infrastructure has been deployed against a problem and the problem is still there.  The *awful* news is that now, when something goes wrong, it isn't a matter of fixing a snippet of C code inside BSD.c, but of finding, decoding, and increasing the brittleness of some probe code written in a punitively complex mixture of sh and m4.  For example, maybe

    AC_MUMBLE(foo,[bletch blobble])

works on Linux, but on Solaris it has to be

    AC_MUMBLE(foo,[[bletch blobble]])

Good luck figuring that out when the error you get is

    sh: configure.sh: 7225: syntax error

(yes, that's a seven-thousand-line shell script, and of course it's machine-generated code for maximal readability).  Maybe the problem was GNU m4 versus m4, maybe it was bash versus sh; more likely you'll never know.

Dave Eckhardt
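
To make the probe mechanism concrete: a configure-style "does this function exist?" check usually amounts to generating a throwaway C file that references the symbol and seeing whether it links.  The sketch below is illustrative only--the function and library names are invented, not the ones the broken Kerberos probe actually tested.

    # Rough sketch of a function-existence probe (names are invented).
    # Declare the symbol, call it, and see whether the link succeeds.
    cat > conftest.c <<'EOF'
    char krb5_some_function();
    int main() { krb5_some_function(); return 0; }
    EOF
    if cc conftest.c -lkrb5 -o conftest >/dev/null 2>&1; then
        echo "symbol present: assume MIT Kerberos"
    else
        echo "symbol absent: assume Heimdal"
    fi
    rm -f conftest conftest.c

The fragility is right there in the sketch: the test keys on whether a symbol happens to exist today, not on which library it actually belongs to.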
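
As for the brackets, that is m4 quoting: [ and ] are m4's quote characters, and each macro expansion strips one level, so the number of brackets an argument needs depends on how many times it passes through another macro before landing in the generated script.  AC_MUMBLE is the same stand-in name as in the example above; this is the general m4 rule, not a specific documented macro.

    dnl m4 strips one level of [ ] quoting per expansion.
    AC_MUMBLE(foo,[bletch blobble])    dnl macro's argument: bletch blobble
    AC_MUMBLE(foo,[[bletch blobble]])  dnl macro's argument: [bletch blobble]
    dnl If the macro hands its argument to another macro, that inner
    dnl expansion strips the next level--so the "right" bracket count
    dnl depends on the macro's internals, and can shift between m4s.

Which is why the extra pair of brackets can be needed on one system and harmless (or harmful) on another.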