From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <56fcb87a07be1d1f41475d5d43b09485@hamnavoe.com>
To: 9fans@9fans.net
From: Richard Miller <9fans@hamnavoe.com>
Date: Thu, 19 Dec 2013 10:01:33 +0000
In-Reply-To:
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: 7bit
Subject: Re: [9fans] mk time-check/slice issue
Topicbox-Message-UUID: a0ad7808-ead8-11e9-9d60-3106f5b1d025

> So, I think you are saying that for pieces in a mkfile that take less
> than 1s to build, it is possible for them to be built again,
> unnecessarily, when mk is run again. This is normal and just the way
> it is. Is that correct?

Correct, except for "just the way it is". There is a principle involved
which is so pervasive in Plan 9 that we often forget to make it
explicit. To quote Ken Thompson: "Throughout, simplicity has been
substituted for efficiency. Complex algorithms are used only if their
complexity can be localized." He was writing in 1978 about UNIX, but
Plan 9 follows firmly in this tradition. (Linux, not so much.)

Using the existing file time stamps costs some efficiency when targets
are built more often than necessary. The question is: how significant
is this cost compared to the complexity of adding higher time
resolution?

Note that it's not necessary to run mk repeatedly until it converges --
the algorithm is conservative in the sense that it will never build
less than is required. So, how many seconds does the unnecessary
building of targets actually cost?
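
As an illustration only -- a back-of-envelope sketch in POSIX C, not
mk's source, and outofdate() is my own naming -- here is the
conservative one-second comparison under discussion. Equal time stamps
are ambiguous at this resolution, so they count as "needs building": a
prerequisite touched in the same second as its target triggers a
rebuild. Possibly wasted work, never a missed build.

#include <stdio.h>
#include <sys/stat.h>

/* A target must be (re)built if it is missing or if its prerequisite's
 * mtime is not strictly older.  With 1-second time stamps an equal
 * mtime could mean either order, so we rebuild -- the conservative
 * choice: never build less than is required. */
int
outofdate(char *target, char *prereq)
{
	struct stat t, p;

	if(stat(target, &t) < 0)
		return 1;	/* no target yet: must build */
	if(stat(prereq, &p) < 0)
		return 1;	/* missing prereq: let the build fail loudly */
	return p.st_mtime >= t.st_mtime;
}

int
main(int argc, char **argv)
{
	if(argc != 3){
		fprintf(stderr, "usage: outofdate target prereq\n");
		return 2;
	}
	printf("%s\n", outofdate(argv[1], argv[2]) ? "build" : "up to date");
	return 0;
}

Run it twice against files created in the same second and you will see
"build" both times; once the clock ticks over, the second run reports
"up to date". That is the whole cost being weighed against sub-second
time stamp machinery.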