To: 9fans@cse.psu.edu
Date: Wed, 2 Aug 2000 08:32:55 +0000
From: "Bruce G. Stewart"
Message-ID: <39878BDB.BFD4CEEC@worldnet.att.net>
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
References: <200008011707.NAA25570@cse.psu.edu>
Subject: Re: [9fans] Installing the updates

Russ Cox wrote:
>
> Where's the horror here? Computers are fast. Pushing extra work on
> programmers and creating an unnecessary portability issue is a high
> cost. Reading a header file five or more times during compilation is
> a low cost (and one which can be optimized away for ifdef-protected
> headers; I'm told gcc does so).
>
> While it may be simpler when it works, when it fails it does so in
> mysterious ways. I don't know how many times I've tried to figure out
> why some header file I wanted wasn't getting included, only to find
> that it had _already_ been included, by someone else, with different
> things #defined, so the definition or prototype _I_ wanted wasn't
> there.
>
> I'd much rather have the compiler barf on a #define or something like
> that, than hop over the whole file as though it weren't there.
>
> Russ

I think nested includes also lead to excessive factoring - there is a
folklore that says small include files are better because fewer things
need to be recompiled when a header is changed. More includes make for
more interdependencies, so one resorts to nesting to reduce the amount
of typing. Then one needs a tool to find all the dependencies to create
the mkfile. One wants to run it whenever a header changes, in case some
new nested dependency has been introduced. Perhaps a mkfile to mk your
mkfile is the answer... and on it goes.

I also find the one library / one .h principle attractive. I'd rather
have the compiler waste its time scanning one behemoth text than
scanning a lot of little files looking for #endif.
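
For concreteness, here is a minimal sketch of the failure Russ describes.
The file names and the USE_LOCKS macro are invented for illustration; the
point is only that an #ifdef guard silently turns a second include into a
no-op, so a conditional declaration never shows up.

	/* util.h -- hypothetical header whose contents depend on USE_LOCKS */
	#ifndef UTIL_H_
	#define UTIL_H_

	int max(int, int);

	#ifdef USE_LOCKS
	void lockinit(void);	/* declared only when USE_LOCKS is set */
	#endif

	#endif /* UTIL_H_ */

	/* a.h -- a nested include; pulls in util.h with USE_LOCKS unset */
	#include "util.h"

	/* prog.c */
	#include "a.h"		/* first read of util.h: UTIL_H_ gets defined,
				   lockinit() is not declared */
	#define USE_LOCKS
	#include "util.h"	/* guard skips the whole file: still no lockinit() */

	int
	main(void)
	{
		lockinit();	/* no prototype in scope; the declaration you
				   wanted simply isn't there */
		return 0;
	}

Without the guard, the second include would instead trip over immediate
redeclaration errors (for struct tags, typedefs, conflicting #defines and
the like), which is the noisy, obvious failure Russ says he prefers.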