From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 15 Apr 2009 15:42:14 +0100
From: Eris Discordia
To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
Message-ID: <76AC46BF74173E9B2C3BD019@[192.168.1.2]>
In-Reply-To: <13426df10904150822n129b78dyf3b298f8b56e45ba@mail.gmail.com>
References: <138575260904150753j74048448q2ba2f33a1b06f2f@mail.gmail.com>
 <13426df10904150807g582ed320n6ad687656797243@mail.gmail.com>
 <138575260904150818r66f8ed2ck1bbcb962ca14b060@mail.gmail.com>
 <13426df10904150822n129b78dyf3b298f8b56e45ba@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii; format=flowed
Content-Transfer-Encoding: 7bit
Content-Disposition: inline
Subject: Re: [9fans] some measurements in plan 9
Topicbox-Message-UUID: dc088f36-ead4-11e9-9d60-3106f5b1d025

>> OK, they just run for a few seconds, so, any suggestions are welcome
>> (in fact needed)
>
> /dev/bintime

Wouldn't many rounds of running with different data sets (to minimize
cache hits), timed with time(1), also serve the same purpose? Or does
time introduce some unavoidable minimum margin of error for short runs
that gets multiplied by the number of runs and drowns the actual
measurement? If time only introduces random fluctuations, they will
cancel each other out over many runs, but if there is an unavoidable
minimum error then multiple runs won't help.

--On Wednesday, April 15, 2009 8:22 AM -0700 ron minnich wrote:

> On Wed, Apr 15, 2009 at 8:18 AM, hugo rivera wrote:
>>> seems reasonable to me, I assume you are looking at data consumption
>>> only?
>>
>> well, I am not really sure what you mean. Data consumption? ;-)
>
> sorry. Memory data footprint. Not code + memory. This all depends on
> lots of factors, but for code remember that text is shared.
>
>
>> OK, they just run for a few seconds, so, any suggestions are welcome
>> (in fact needed)
>
> /dev/bintime
>
> ron
>
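
To make the /dev/bintime suggestion concrete for myself, here is a rough
sketch of the sort of wrapper I imagine, assuming a read of /dev/bintime
returns eight bytes of nanoseconds since the epoch, most significant byte
first (the same source libc's nsec() draws on); the program itself and the
detail that the command must be given as a path are only illustrative:

#include <u.h>
#include <libc.h>

/*
 * Read /dev/bintime: eight bytes, most significant first,
 * giving nanoseconds since the epoch.
 */
static uvlong
bintime(int fd)
{
	uchar b[8];
	uvlong t;
	int i;

	if(pread(fd, b, sizeof b, 0) != sizeof b)
		sysfatal("read /dev/bintime: %r");
	t = 0;
	for(i = 0; i < 8; i++)
		t = t<<8 | b[i];
	return t;
}

void
main(int argc, char *argv[])
{
	int fd;
	uvlong t0, t1;
	Waitmsg *w;

	if(argc < 2){
		fprint(2, "usage: %s cmd [arg ...]\n", argv[0]);
		exits("usage");
	}
	fd = open("/dev/bintime", OREAD);
	if(fd < 0)
		sysfatal("open /dev/bintime: %r");

	t0 = bintime(fd);
	switch(fork()){
	case -1:
		sysfatal("fork: %r");
	case 0:
		/* child: command must be a path, e.g. /bin/ls */
		exec(argv[1], argv+1);
		sysfatal("exec %s: %r", argv[1]);
	default:
		w = wait();
		free(w);
	}
	t1 = bintime(fd);

	print("%llud ns\n", t1 - t0);
	exits(nil);
}

Compiled with 8c/8l (or the compiler for your architecture) and run many
times over different data sets, the averages should separate the two cases
I was asking about: zero-mean jitter shrinks roughly as 1/sqrt(N) over N
runs, while a fixed per-run overhead survives averaging untouched and has
to be measured and subtracted, e.g. by timing a command that does nothing.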