9fans - fans of the OS Plan 9 from Bell Labs
* [9fans] lguest fixed
@ 2008-07-03 17:07 ron minnich
  2008-07-04 16:16 ` sqweek
From: ron minnich @ 2008-07-03 17:07 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

I got annoyed with the idea of lguest being broken, and since I was
stuck on a long flight, I redid l.s and fixed it. The new l.s has the
improvements from sources, mainly the 8M pte map instead of the
earlier 4M pte map.

It's also seriously cleaned up to take into account the new lguest
improvements (dare I say "churn"?), which are, after all, a good
thing, since you now start in low memory and can set up the
high-memory page tables correctly for the MACH structure.

OK, could some of you with the loopback 9vx device time something for me?

On 9vx, I have a venti and fossil in the file system; hence:

index main
isect /usr/rminnich/disks/index
arenas /usr/rminnich/disks/arenas
bloom /usr/rminnich/disks/bloom
httpaddr tcp!*!8008
mem 10M
bcmem 20M
icmem 30M

fsys main config /usr/rminnich/disks/fossil
fsys main open -AWP
fsys main
create /active/adm adm sys d775
create /active/adm/users adm sys 664
users -w
srv -p fscons
srv fossil
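
Roughly, that pair gets started like this inside 9vx; a sketch, with
the venti.conf path assumed (the fossil config above would already
have been written to the fossil file with fossil/conf -w):

# start venti with the config above (path is an assumption)
venti/venti -c /usr/rminnich/disks/venti.conf
# then point fossil at that venti and start it
venti=127.1 fossil/fossil -f /usr/rminnich/disks/fossil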


mk clean takes 3 minutes on 9vx, 20 seconds on lguest.

mk 'CONF=pccpuf' takes 7 minutes on 9vx, 2:40 on lguest.
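
For the curious, nothing fancy is being timed; a sketch of the
commands, assuming the stock pc kernel tree at /sys/src/9/pc:

cd /sys/src/9/pc
time mk clean
time mk 'CONF=pccpuf'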

Of course, xen on a slower box beats them all, at 12 seconds, so it's
not like I'm saying lguest is fast. I need to try kvm next. But I'd
be curious to see times with the 9vx /dev/sd device. I will be
surprised if it makes a huge difference, but I'm often surprised.

Thanks

ron




* Re: [9fans] lguest fixed
  2008-07-03 17:07 [9fans] lguest fixed ron minnich
@ 2008-07-04 16:16 ` sqweek
  2008-07-04 16:37   ` ron minnich
From: sqweek @ 2008-07-04 16:16 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs; +Cc: Fans of the OS Plan 9 from Bell Labs

On Fri, Jul 4, 2008 at 1:07 AM, ron minnich <rminnich@gmail.com> wrote:
> OK, could some of you with the loopback 9vx device time something for me?

 I assume you mean plain #Zplan9 here instead of fossil/venti.

> mk clean is 3 minutes on 9vx, 20 seconds on lguest.

0.00u 0.00s 14.61r 	 mk clean

> mk 'CONF=pccpuf' is 7 minutes on 9vx, 2:40 on lguest

0.00u 0.00s 83.09r 	 mk CONF=pccpuf

 Athlon XP 2500+, Linux 2.6.25, 9vx 0.11. The "time mk clean" result
within 9vx matches wall clock time, so I expect the pccpuf one does
too, but I'm going to go to bed instead of trying it again. :P

-sqweek




* Re: [9fans] lguest fixed
  2008-07-04 16:16 ` sqweek
@ 2008-07-04 16:37   ` ron minnich
From: ron minnich @ 2008-07-04 16:37 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs; +Cc: Fans of the OS Plan 9 from Bell Labs

On Fri, Jul 4, 2008 at 9:16 AM, sqweek <sqweek@gmail.com> wrote:
> On Fri, Jul 4, 2008 at 1:07 AM, ron minnich <rminnich@gmail.com> wrote:
>> OK, could some of you with the loopback 9vx device time something for me?
>
>  I assume you mean plain #Zplan9 here instead of fossil/venti.

Yes, that 7-minute build for me was with fossil/venti on 9vx.

I think using #Z is not going to yield as useful a comparison,
since there's no way to do that on real Plan 9 until we put file
systems in the kernel :-)

I really meant with fossil and venti partitions on Linux disk
partitions instead of in the Linux file system. That will be more
work to set up, and I don't expect a huge delta in time, since Linux
does a lot of in-kernel caching of data.
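
Something like this is what I have in mind for the venti side, with
the index and arenas on raw partitions rather than plain files (the
partition paths below are invented; from inside 9vx they'd be
whatever the loopback sd device presents):

# as above, but on raw partitions; paths are hypothetical
index main
isect /dev/sdC0/isect
arenas /dev/sdC0/arenas
bloom /dev/sdC0/bloom
httpaddr tcp!*!8008
mem 10M
bcmem 20M
icmem 30M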

ron



