Date: Fri, 16 Sep 2011 00:56:10 -0500
From: ron minnich
To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
Subject: Re: [9fans] gar nix!

For the 2M pages -- I'm willing to see more measurement, but let's get
the #s. I've done some simple measurements and it's not the hit one
would expect.

These new machines have about 10 GB/s of memory bandwidth (well, the
ones we are targeting do), and that translates to sub-millisecond times
to zero a 2M page: 2 MiB at 10 GB/s is roughly 0.2 ms. (There's a rough
sketch of that timing at the end of this message.)

Further, the text page is in the image cache, so after the first exec
of a program the only cost for text is locating the page. It's not
simply a case of having to write 6M each time you exec.

I note that starting a proc and allocating and zeroing 2 GiB takes
*less* time with 2M pages than with 4K pages -- this was measured in
May, when we were still supporting 4K pages. The page faults are far
more expensive than the time to write the memory. (A second sketch at
the end shows one way to see that.)

Again, YMMV, esp. on an Atom, but the cost of taking (say) 6 page
faults -- one per 4K page -- for a 24k text segment that's already in
memory may not be what you want.

There are plenty of games to be played to reduce the cost of
zero-filled pages, but at least from what I measured, 2M pages are not
a real hit.

ron
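
P.S. not the NIX measurement code, just a portable C sketch of the
zeroing arithmetic above. It zeroes the buffer once to take the
first-touch faults, then times a second pass so the number reflects
write bandwidth alone:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

enum { Pgsz = 2*1024*1024 };		/* one 2M page */

int
main(void)
{
	struct timespec t0, t1;
	double ms;
	char *p;

	p = malloc(Pgsz);
	if(p == NULL){
		perror("malloc");
		return 1;
	}
	memset(p, 0, Pgsz);		/* fault the pages in first */
	clock_gettime(CLOCK_MONOTONIC, &t0);
	memset(p, 0, Pgsz);		/* the write the kernel would do */
	clock_gettime(CLOCK_MONOTONIC, &t1);
	ms = (t1.tv_sec - t0.tv_sec)*1e3 + (t1.tv_nsec - t0.tv_nsec)/1e6;
	printf("zeroed %d bytes in %.3f ms\n", Pgsz, ms);
	free(p);
	return 0;
}

At 10 GB/s you should see something near 0.2 ms.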
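
And a Linux analogue of the May measurement -- again, not what we ran
on NIX, and MAP_HUGETLB needs 2M pages reserved first (e.g. via
/proc/sys/vm/nr_hugepages) or the mmap fails. It touches one byte per
page across 2 GiB, so what you're timing is fault cost, not write
bandwidth:

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <time.h>

#define SIZE (2UL*1024*1024*1024)	/* 2 GiB */

static double
touch(size_t step, int flags)
{
	struct timespec t0, t1;
	unsigned char *p;
	size_t i;

	p = mmap(NULL, SIZE, PROT_READ|PROT_WRITE,
		MAP_PRIVATE|MAP_ANONYMOUS|flags, -1, 0);
	if(p == MAP_FAILED){
		perror("mmap");
		return -1;
	}
	clock_gettime(CLOCK_MONOTONIC, &t0);
	for(i = 0; i < SIZE; i += step)
		p[i] = 0;		/* one fault per page */
	clock_gettime(CLOCK_MONOTONIC, &t1);
	munmap(p, SIZE);
	return (t1.tv_sec - t0.tv_sec)*1e3 + (t1.tv_nsec - t0.tv_nsec)/1e6;
}

int
main(void)
{
	printf("4K pages: %.1f ms\n", touch(4096, 0));
	printf("2M pages: %.1f ms\n", touch(2*1024*1024, MAP_HUGETLB));
	return 0;
}

The 4K run takes 524288 faults; the 2M run takes 1024.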