From mboxrd@z Thu Jan 1 00:00:00 1970
Message-Id: <200008011632.MAA24052@cse.psu.edu>
From: "James A. Robinson"
To: 9fans@cse.psu.edu
Subject: Re: [9fans] Installing the updates
In-reply-to: Message from Greg Hudson of "Tue, 01 Aug 2000 12:03:38 EDT."
References: <200008011603.MAA05857@egyptian-gods.MIT.EDU>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-ID: <21635.965147555.1@aubrey.stanford.edu>
Date: Tue, 1 Aug 2000 09:32:35 -0700
Topicbox-Message-UUID: f0b65226-eac8-11e9-9e20-41e7f4b1d025

> It would be nice if Pike could present a compelling argument.
> [...]
> Where's the horror here?  Computers are fast.  Pushing extra work on
> programmers and creating an unnecessary portability issue is a high
> cost.  Reading a header file five or more times during compilation is
> a low cost (and one which can be optimized away for ifdef-protected
> headers; I'm told gcc does so).

The horror is that many header files don't get 'the dance' right.
Even OS header files are screwed up sometimes.  Before I read Notes
on Programming in C I had put all my #include directives into my
program's local header file, because that was how I had always seen
it done.  When I started writing libraries, I ran into all kinds of
problems with the compiler complaining about multiple definitions
(in both my own headers and the system headers).

I think the rule even makes good sense from an interface perspective.
The header file shows the interface for the program file.  The
program file should hide all the implementation details, which means
you shouldn't be able to tell which system calls it makes to get the
job done.  Realizing that you keep including the same files may also
force an eye toward keeping solid boundaries across the different
files (one does x, and only x; the next handles only y).  Of course,
if the file hides a bad job (say, calls to strtok(2)) then you're
screwed anyway.

Why do people object to this rule?  It's not hard to follow, right?
It's really not that much extra work, in my opinion.  Of course there
are a few headers that always get included, stdlib for example, but
I've found only a few overlaps in a library I've just written.  It's
spread over 9 files, and you can see that there isn't *that* much
overlap:

; grep '#include' *.c|cut -d: -f2|sort|uniq -c|sort -nr
      8 #include
      5 #include
      5 #include
      5 #include
      4 #include
      4 #include
      4 #include
      4 #include
      4 #include
      2 #include
      2 #include
      2 #include
      1 #include
      1 #include
      1 #include
      1 #include
      1 #include

Jim
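
P.S.  For the curious, here's a rough sketch of the style in question.
The file and function names are just made up for illustration; the
point is that the header declares the interface and includes nothing
else, while the C file pulls in exactly the system headers its
implementation happens to use:

	/* date.h -- interface only, no nested #includes */
	typedef struct Date Date;
	struct Date {
		int	year;
		int	mon;
		int	day;
	};
	Date	*parsedate(char *s);	/* caller frees the result */

	/* date.c -- the implementation includes what it actually uses */
	#include <stdio.h>
	#include <stdlib.h>
	#include "date.h"

	Date*
	parsedate(char *s)
	{
		Date *d;

		d = malloc(sizeof *d);
		if(d == NULL)
			return NULL;
		if(sscanf(s, "%d-%d-%d", &d->year, &d->mon, &d->day) != 3){
			free(d);
			return NULL;
		}
		return d;
	}

A caller can't tell from date.h whether the implementation uses
sscanf, strtol, or something else, which is exactly how I'd want it.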