9fans - fans of the OS Plan 9 from Bell Labs
* [9fans] gar nix!
@ 2011-09-16  1:19 erik quanstrom
  2011-09-16  5:44 ` Noah Evans
                   ` (2 more replies)
  0 siblings, 3 replies; 27+ messages in thread
From: erik quanstrom @ 2011-09-16  1:19 UTC (permalink / raw)
  To: 9fans

well, after a bit of time resembling /sys/src/cmd/aux/vga/adventure,
i now have my atom box running nix.  unfortunately, only one
core is recognized, but i'll just have to leave one or two things till
tomorrow.

here is a list of a few things i tripped on:

1.  unfortunately, ppxeload isn't ready to load a .gz.  i added a bit of
code to at least not jump unconditionally to an x86 binary, but that's
just not enough.  need to revisit this.

2.  ppxeload accepts only the old-and-weird serial baud setting and
not the kernel-standard, e.g. "0 b115200".  i did fix this.

3.  the 8169 driver wasn't working.  i just dropped the one from 9load
on top.  inelegant, but effective.

(i did have an 82563 chip going for a while that wasn't recognized.  also
needed to drop in a new driver for this, but since then the hardware has
died.)

4.  panic on memory init.  this was caused because the only page color was
6; when starting the search in the array of page colors at 0, you won't find
any.  i worked around it with this bit of code.

; diffy -c physalloc.c
/n/dump/2011/0915/sys/src/nix/k10/physalloc.c:236,242 - physalloc.c:236,244
  	uintmem m;

  	DBG("physalloc\n");
- 	assert(b->size > 0);
+ 	if(b->size == 0)
+ 		return 0;
+ //	assert(b->size > 0);

  	avail = b->avail;
  	blocks = b->blocks;

i need to go back and investigate if this is a problem with memory recognition
or what.
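
for illustration only, here is the sort of fallback that avoids the problem:
pick a populated color rather than assuming color 0 has memory.  Bank,
Ncolor, and pickcolor are invented names for this sketch, not the nix ones.

	typedef struct Bank Bank;
	struct Bank {
		unsigned long long size;	/* bytes available in this color */
	};

	enum { Ncolor = 8 };

	/* first populated color at or after the preferred one; -1 if none */
	int
	pickcolor(Bank *banks, int pref)
	{
		int i, c;

		for(i = 0; i < Ncolor; i++){
			c = (pref + i) % Ncolor;
			if(banks[c].size > 0)
				return c;	/* e.g. 6 when that is the only populated color */
		}
		return -1;	/* no color has any memory */
	}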

5.  needed to update 8169 and 82563 in the kernel.

6.  had an old version of 6l that expected to make 4k pages.  (by the way,
won't we pay a heavy price for forking and/or execing small programs
with 2mb pages?  this means that each fork/exec is going to be at least
6mb worth of messing around.)

it looks like my processors aren't recognized, and i'm pretty sure that the
atom supports 1gb pages, but they aren't recognized either.

- erik




* Re: [9fans] gar nix!
  2011-09-16  1:19 [9fans] gar nix! erik quanstrom
@ 2011-09-16  5:44 ` Noah Evans
  2011-09-17 19:41   ` erik quanstrom
  2011-09-16  5:56 ` ron minnich
       [not found] ` <CAP6exYLuD6ZeOwCfY9TcgSWEYp-2QH-7qUO713GL0cravJ2FzA@mail.gmail.c>
  2 siblings, 1 reply; 27+ messages in thread
From: Noah Evans @ 2011-09-16  5:44 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

Submit these changes as code reviews. See /PROCESS if anything is unclear about how to do that.

Sent from my iPad

On Sep 16, 2011, at 3:19 AM, erik quanstrom <quanstro@quanstro.net> wrote:

> well, after a bit of a time resembling /sys/src/cmd/aux/vga/adventure
> i currently have my atom box running nix.  unfortunately, only one
> core is recognized.  but i'll just have to leave one or two things till
> tomorrow.
> 
> here is a list of a few things i tripped on:
> 
> 1.  unfortunately, ppxeload isn't ready to load a .gz.  i added a bit of
> code to at least not jump unconditionally to an x86 binary, but that's
> just not enough.  need to revisit this.
> 
> 2.  ppxeload accepts only the old-and-weird serial baud setting and
> not the kernel-standard, e.g. "0 b115200".  i did fix this.
> 
> 3.  the 8169 driver wasn't working.  i just dropped the one from 9load
> on top.  inelegant, but effective.
> 
> (i did have a 82563 chip going for a while that wasn't recognized.  also
> needed to drop in a new driver for this, but since then the hardware has
> died.)
> 
> 4.  panic on memory init.  this was caused because the only page color was
> 6; when starting the search in the array of page colors at 0, you won't find
> any.  i worked around it with this bit of code.
> 
> ; diffy -c physalloc.c
> /n/dump/2011/0915/sys/src/nix/k10/physalloc.c:236,242 - physalloc.c:236,244
>      uintmem m;
> 
>      DBG("physalloc\n");
> -    assert(b->size > 0);
> +    if(b->size == 0)
> +        return 0;
> + //    assert(b->size > 0);
> 
>      avail = b->avail;
>      blocks = b->blocks;
> 
> i need to go back and investigate if this is a problem with memory recognition
> or what.
> 
> 5.  needed to update 8169 and 82563 in the kernel.
> 
> 6.  had an old version of 6l that expected to make 4k pages.  by the way,
> (won't we pay a heavy price for forking and/or execing small programs
> with 2mb pages?  this means that each fork/exec is going to be at least
> 6mb worth of messing around.)
> 
> it looks like my processors aren't recognized, and i'm pretty sure that the
> atom supports 1gb pages, but they aren't recognized either.
> 
> - erik
> 




* Re: [9fans] gar nix!
  2011-09-16  1:19 [9fans] gar nix! erik quanstrom
  2011-09-16  5:44 ` Noah Evans
@ 2011-09-16  5:56 ` ron minnich
       [not found] ` <CAP6exYLuD6ZeOwCfY9TcgSWEYp-2QH-7qUO713GL0cravJ2FzA@mail.gmail.c>
  2 siblings, 0 replies; 27+ messages in thread
From: ron minnich @ 2011-09-16  5:56 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

for the 2M pages -- I'm willing to see some measurement but let's get
the #s -- I've done some simple measurements and it's not the hit one
would expect. These new machines have about 10 GB/s bandwidth (well,
the ones we are targeting do) and that translates to sub-millisecond
times to zero a 2M page. Further, the text page is in the image cache.
So after first exec of a program, the only text issue is locating the
page. It's not simply a case of having to write 6M each time you exec.
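
As a rough check on that figure, at the quoted 10 GB/s:

	2 MB / 10 GB/s  ~  0.2 ms   (one 2M page)
	6 MB / 10 GB/s  ~  0.6 ms   (the 6M case mentioned above)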

I note that starting a proc, allocating and zeroing 2 GiB, takes
*less* time with 2M pages than with 4K pages -- this was measured in May,
when we were still supporting 4K pages -- because the page faults are far more
expensive than the time to write the memory. Again, YMMV, esp. on an
Atom, but the cost of taking (say) 6 page faults for a 24k text
segment that's already in memory may not be what you want.
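
For scale, the fault counts behind that comparison for a 2 GiB region:

	2 GiB / 4 KiB pages  =  524,288 faults
	2 GiB / 2 MiB pages  =    1,024 faults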

There are plenty of games to be played to reduce the cost of zero
filled pages but at least from what I measured the 2M pages are not a
real hit.

ron




* Re: [9fans] gar nix!
       [not found] ` <CAP6exYLuD6ZeOwCfY9TcgSWEYp-2QH-7qUO713GL0cravJ2FzA@mail.gmail.c>
@ 2011-09-16  6:46   ` erik quanstrom
  2011-09-16  7:02     ` Nemo
                       ` (6 more replies)
  0 siblings, 7 replies; 27+ messages in thread
From: erik quanstrom @ 2011-09-16  6:46 UTC (permalink / raw)
  To: 9fans

hey, ron.

On Fri Sep 16 01:57:04 EDT 2011, rminnich@gmail.com wrote:
> for the 2M pages -- I'm willing to see some measurement but let's get
> the #s -- I've done some simple measurements and it's not the hit one
> would expect. These new machines have about 10 GB/s bandwidth (well,
> the ones we are targeting do) and that translates to sub-millisecond
> times to zero a 2M page.   Further, the text page is in the image cache.
> So after first exec of a program, the only text issue is locating the
> page. It's not simply a case of having to write 6M each time you exec.

however, neither the stack nor the heap is.  that's 4MB that needs to be
cleared.  that sounds like an operation that could take on the order of a
millisecond, and is well worth measuring.

it might make sense to use 4k pages for the stack, and sometimes for
the heap.

> I note that starting a proc, allocating and zeroing 2 GiB, takes
> *less* time with 2M pages than 4K pages -- this was measured in May
> when we still were supporting 4K pages -- the page faults are far more

i'll try to get some numbers soon, but i think a higher priority is to get
an smp setup.  is anyone testing with smp right now?

are you sure this isn't the difference between throughput and latency?
did you try a small-executable test like
	for(i in `{seq 1 1000000})dd -if /dev/zero -of /dev/null -quiet 1
?  now that the code has been removed, it's a little harder to replicate your
numbers.

> expensive than the time to write the memory. Again, YMMV, esp. on an
> Atom, but the cost of taking (say) 6 page faults for a 24k text
> segment that's already in memory may not be what you want.
>
> There are plenty of games to be played to reduce the cost of zero
> filled pages but at least from what I measured the 2M pages are not a
> real hit.

i'm okay with the atom suffering a little bit (odd how far down the food
chain one can get 64 bits!); i'm actually more concerned about
being punished severely for forking on a beefy but busy machine.
the atom is just my test mule.

(can't one just preemptively map the whole text on first-fault?)

- erik

p.s.  i guess the claim i thought i saw that you need 1gb pages
isn't correct.  that's good, but i need to track down why i don't see 'em on the atom.




* Re: [9fans] gar nix!
  2011-09-16  6:46   ` erik quanstrom
@ 2011-09-16  7:02     ` Nemo
  2011-09-16  7:52     ` dexen deVries
                       ` (5 subsequent siblings)
  6 siblings, 0 replies; 27+ messages in thread
From: Nemo @ 2011-09-16  7:02 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs; +Cc: 9fans

i'm working on smp.

On Sep 16, 2011, at 8:46 AM, erik quanstrom <quanstro@quanstro.net> wrote:

> hey, ron.
> 
> On Fri Sep 16 01:57:04 EDT 2011, rminnich@gmail.com wrote:
>> for the 2M pages -- I'm willing to see some measurement but let's get
>> the #s -- I've done some simple measurements and it's not the hit one
>> would expect. These new machines have about 10 GB/s bandwidth (well,
>> the ones we are targeting do) and that translates to sub-millisecond
>> times to zero a 2M page.   Further, the text page is in the image cache.
>> So after first exec of a program, the only text issue is locating the
>> page. It's not simply a case of having to write 6M each time you exec.
> 
> however, neither the stack nor the heap are.  that's 4MB that need to be
> cleared.  that sounds like an operation that could take on the order of
> ms, and well-worth measuring.
> 
> maybe it might make sense to use 4k pages for stack, and sometime for
> the heap.
> 
>> I note that starting a proc, allocating and zeroing 2 GiB, takes
>> *less* time with 2M pages than 4K pages -- this was measured in May
>> when we still were supporting 4K pages -- the page faults are far more
> 
> i'll try to get some numbers soon, but i think a higher priority is to get
> a smp setup.  is anyone testing with smp right now?
> 
> are you sure this isn't the difference between throughput and latency?
> did you try a small-executable test like
>    for(i in `{seq 1 1000000})dd -if /dev/zero -of /dev/null -quiet 1
> ?  now that the code has been removed, it's a little harder to replicate your
> numbers.
> 
>> expensive than the time to write the memory. Again, YMMV, esp. on an
>> Atom, but the cost of taking (say) 6 page faults for a 24k text
>> segment that's already in memory may not be what you want.
>> 
>> There are plenty of games to be played to reduce the cost of zero
>> filled pages but at least from what I measured the 2M pages are not a
>> real hit.
> 
> i'm okay with the atom suffering a little bit (odd how far down the food
> chain one can get 64 bits!), i'm actually more concerned about
> being punished severely for forking on a beefy but busy machine.
> the atom is just my test mule.
> 
> (can't one just preemptively map the whole text on first-fault?)
> 
> - erik
> 
> p.s.  i guess the claim i thought i saw that you need 1gb pages
> isn't correct.  that's good, but i need to track down why i don't see 'em on the atom.
> 




* Re: [9fans] gar nix!
  2011-09-16  6:46   ` erik quanstrom
  2011-09-16  7:02     ` Nemo
@ 2011-09-16  7:52     ` dexen deVries
  2011-09-16 13:36     ` Charles Forsyth
                       ` (4 subsequent siblings)
  6 siblings, 0 replies; 27+ messages in thread
From: dexen deVries @ 2011-09-16  7:52 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

On Friday 16 of September 2011 08:46:51 erik quanstrom wrote:
> On Fri Sep 16 01:57:04 EDT 2011, rminnich@gmail.com wrote:
> > for the 2M pages -- I'm willing to see some measurement but let's get
> > the #s -- I've done some simple measurements and it's not the hit one
> > would expect. These new machines have about 10 GB/s bandwidth (well,
> > the ones we are targeting do) and that translates to sub-millisecond
> > times to zero a 2M page.   Further, the text page is in the image cache.
> > So after first exec of a program, the only text issue is locating the
> > page. It's not simply a case of having to write 6M each time you exec.
> 
> however, neither the stack nor the heap are.  that's 4MB that need to be
> cleared.  that sounds like an operation that could take on the order of
> ms, and well-worth measuring.

keep a pool of clean pages in the kernel. upon exec, use fresh ones for the
stack and heap, and clean the old ones in the background. employ the DMA
controller when possible.
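
a minimal sketch of that idea in portable C; the pthread scaffolding and
all the names here are mine (the thread stands in for a kernel kproc or a
DMA engine), so read it as an illustration rather than nix code:

	#include <string.h>
	#include <pthread.h>

	enum { Pgsize = 2*1024*1024 };

	typedef struct Page Page;
	struct Page {
		Page	*next;
		char	*buf;		/* Pgsize bytes of backing memory */
	};

	static Page *clean;		/* already zeroed, ready for exec */
	static Page *dirty;		/* recycled, waiting to be scrubbed */
	static pthread_mutex_t poollk = PTHREAD_MUTEX_INITIALIZER;
	static pthread_cond_t havedirty = PTHREAD_COND_INITIALIZER;

	/* take a pre-zeroed page; returns 0 when the pool is empty */
	Page*
	getzeroed(void)
	{
		Page *p;

		pthread_mutex_lock(&poollk);
		p = clean;
		if(p != 0)
			clean = p->next;
		pthread_mutex_unlock(&poollk);
		return p;
	}

	/* hand back a used page; the scrubber cleans it later */
	void
	putdirty(Page *p)
	{
		pthread_mutex_lock(&poollk);
		p->next = dirty;
		dirty = p;
		pthread_cond_signal(&havedirty);
		pthread_mutex_unlock(&poollk);
	}

	/* background scrubber: zero dirty pages, move them to the clean list */
	void*
	scrubber(void *arg)
	{
		Page *p;

		for(;;){
			pthread_mutex_lock(&poollk);
			while(dirty == 0)
				pthread_cond_wait(&havedirty, &poollk);
			p = dirty;
			dirty = p->next;
			pthread_mutex_unlock(&poollk);

			memset(p->buf, 0, Pgsize);	/* a DMA engine could do this instead */

			pthread_mutex_lock(&poollk);
			p->next = clean;
			clean = p;
			pthread_mutex_unlock(&poollk);
		}
		return arg;
	}

an exec path would call getzeroed and fall back to a synchronous memset
when the pool comes up empty.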

-- 
dexen deVries

[[[↓][→]]]

For example, if the first thing in the file is:
   <?kzy irefvba="1.0" rapbqvat="ebg13"?>
an XML parser will recognize that the document is stored in the traditional 
ROT13 encoding.

(( Joe English, http://www.flightlab.com/~joe/sgml/faq-not.txt ))




* Re: [9fans] gar nix!
  2011-09-16  6:46   ` erik quanstrom
  2011-09-16  7:02     ` Nemo
  2011-09-16  7:52     ` dexen deVries
@ 2011-09-16 13:36     ` Charles Forsyth
  2011-09-16 14:39     ` ron minnich
                       ` (3 subsequent siblings)
  6 siblings, 0 replies; 27+ messages in thread
From: Charles Forsyth @ 2011-09-16 13:36 UTC (permalink / raw)
  To: 9fans

>i'm okay with the atom suffering a little bit (odd how far down the food
>chain one can get 64 bits!), i'm actually more concerned about
>being punished severely for forking on a beefy but busy machine.
>the atom is just my test mule.

don't worry about it too much at this point.
i need allocation units smaller than 2MB myself, and 32 bits, come to that.
it's easy to think of ways to address that, but one thing at a time.
it might be best to look at the things that are not well done first, before
considering things that we (probably) know how to do from experience.

also, the applications that ron wants to run
are typically fat and greedy, aren't compositions of processes (in the usual sense),
and are long-running and relatively static once they start (at the moment).
some other potential applications on similar machines,
for services such as file serving, will typically build up a
service pool and recycle that.




* Re: [9fans] gar nix!
  2011-09-16  6:46   ` erik quanstrom
                       ` (2 preceding siblings ...)
  2011-09-16 13:36     ` Charles Forsyth
@ 2011-09-16 14:39     ` ron minnich
       [not found]     ` <CAP6exYLy6sicg42A+7D=bmpWtv=p_an3c5=OQw9YB2oxPXhR2A@mail.gmail.c>
                       ` (2 subsequent siblings)
  6 siblings, 0 replies; 27+ messages in thread
From: ron minnich @ 2011-09-16 14:39 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

On Thu, Sep 15, 2011 at 11:46 PM, erik quanstrom <quanstro@quanstro.net> wrote:

> p.s.  i guess the claim i thought i saw that you need 1gb pages
> isn't correct.  that's good, but i need to track down why i don't see 'em on the atom.

My code that mapped physical memory with 1 GiB pages in the kernel got
lost in the shuffle. I expect to bring it back.

ron




* Re: [9fans] gar nix!
       [not found]     ` <CAP6exYLy6sicg42A+7D=bmpWtv=p_an3c5=OQw9YB2oxPXhR2A@mail.gmail.c>
@ 2011-09-16 14:57       ` erik quanstrom
  2011-09-16 15:28         ` ron minnich
  0 siblings, 1 reply; 27+ messages in thread
From: erik quanstrom @ 2011-09-16 14:57 UTC (permalink / raw)
  To: rminnich, 9fans

On Fri Sep 16 10:41:24 EDT 2011, rminnich@gmail.com wrote:
> On Thu, Sep 15, 2011 at 11:46 PM, erik quanstrom <quanstro@quanstro.net> wrote:
>
> > p.s.  i guess the claim i thought i saw that you need 1gb pages
> > isn't correct.  that's good, but i need to track down why i don't see 'em on the atom.
>
> My code that mapped physical memory with 1 GiB pages in the kernel got
> lost in the shuffle. I expect to bring it back.

please don't make it required.  there's already an array of page sizes per mach.

- erik




* Re: [9fans] gar nix!
  2011-09-16 14:57       ` erik quanstrom
@ 2011-09-16 15:28         ` ron minnich
  2011-09-16 15:31           ` Francisco J Ballesteros
       [not found]           ` <CA+N-5bbt92zqNpt0KW4prykyTxjYNHJJ7LuUCj4n3EACyC5GvQ@mail.gmail.c>
  0 siblings, 2 replies; 27+ messages in thread
From: ron minnich @ 2011-09-16 15:28 UTC (permalink / raw)
  To: erik quanstrom; +Cc: 9fans

On Fri, Sep 16, 2011 at 7:57 AM, erik quanstrom
<quanstro@labs.coraid.com> wrote:

> please don't make it required.  there's already an array of page sizes per mach.

sorry, but time moves forward. We need the 1G pages for user mode
anyway -- so the fact that you can run on older machines is a happy
coincidence, but it will die the first time you run a process that uses
more than 1G. Cutting the PTE count by a factor of 512 is not an
insignificant saving. I expect to put the 1G pages for the kernel map
back in over the next while.
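
For scale, the arithmetic behind that factor:

	1 GiB mapped with 2M pages:  512 entries
	1 GiB mapped with a 1G page:   1 entry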

ron




* Re: [9fans] gar nix!
  2011-09-16 15:28         ` ron minnich
@ 2011-09-16 15:31           ` Francisco J Ballesteros
       [not found]           ` <CA+N-5bbt92zqNpt0KW4prykyTxjYNHJJ7LuUCj4n3EACyC5GvQ@mail.gmail.c>
  1 sibling, 0 replies; 27+ messages in thread
From: Francisco J Ballesteros @ 2011-09-16 15:31 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

I agree, but I think it's ok if we use them only if we have them, and rely on
just 2M otherwise.

On Fri, Sep 16, 2011 at 5:28 PM, ron minnich <rminnich@gmail.com> wrote:
> On Fri, Sep 16, 2011 at 7:57 AM, erik quanstrom
> <quanstro@labs.coraid.com> wrote:
>
>> please don't make it required.  there's already an array of page sizes per mach.
>
> sorry, but time moves forward. We need the 1G pages for user mode
> anyway -- so the fact that you can run on older machines is a happy
> coincidence but it will die the first time you run a process that uses
> more than 1G. Reducing PTE count by 512 is a not insignificant cost. I
> expect to put the 1G pages for the kernel map back in in the next
> while.
>
> ron
>
>




* Re: [9fans] gar nix!
       [not found]           ` <CA+N-5bbt92zqNpt0KW4prykyTxjYNHJJ7LuUCj4n3EACyC5GvQ@mail.gmail.c>
@ 2011-09-16 15:33             ` erik quanstrom
  2011-09-16 15:36               ` Francisco J Ballesteros
       [not found]               ` <CA+N-5bYJ1VBpqQENUy5fZirE_Joa9E6RE8rJbvaFwh065zd2TA@mail.gmail.c>
  0 siblings, 2 replies; 27+ messages in thread
From: erik quanstrom @ 2011-09-16 15:33 UTC (permalink / raw)
  To: nemo, 9fans, quanstro

On Fri Sep 16 11:31:58 EDT 2011, nemo@lsub.org wrote:
> I agree, but I think it's ok if we use them only if we have them, and rely on
> just 2M otherwise.
>
> On Fri, Sep 16, 2011 at 5:28 PM, ron minnich <rminnich@gmail.com> wrote:
> > On Fri, Sep 16, 2011 at 7:57 AM, erik quanstrom
> > <quanstro@labs.coraid.com> wrote:
> >
> >> please don't make it required.  there's already an array of page sizes per mach.
> >
> > sorry, but time moves forward. We need the 1G pages for user mode
> > anyway -- so the fact that you can run on older machines is a happy
> > coincidence but it will die the first time you run a process that uses
> > more than 1G. Reducing PTE count by 512 is a not insignificant cost. I
> > expect to put the 1G pages for the kernel map back in in the next
> > while.

i would hate to have dogmatic rules like this limit cooperation.  i'm really
excited about nix.  please don't kill the buzz so quickly.

- erik




* Re: [9fans] gar nix!
  2011-09-16 15:33             ` erik quanstrom
@ 2011-09-16 15:36               ` Francisco J Ballesteros
  2011-09-16 15:44                 ` ron minnich
       [not found]               ` <CA+N-5bYJ1VBpqQENUy5fZirE_Joa9E6RE8rJbvaFwh065zd2TA@mail.gmail.c>
  1 sibling, 1 reply; 27+ messages in thread
From: Francisco J Ballesteros @ 2011-09-16 15:36 UTC (permalink / raw)
  To: erik quanstrom; +Cc: 9fans

What's the problem if you let the kernel work with 2M/1G
entries when you have them, and with just 2M entries when it doesn't?

Or perhaps your reply was not to my mail... or I'm confused :)

On Fri, Sep 16, 2011 at 5:33 PM, erik quanstrom
<quanstro@labs.coraid.com> wrote:
>
> i would hate to have dogmatic rules like this limit cooperation.  i'm really
> excited about nix.  please don't kill the buzz off so quickly.
>




* Re: [9fans] gar nix!
       [not found]               ` <CA+N-5bYJ1VBpqQENUy5fZirE_Joa9E6RE8rJbvaFwh065zd2TA@mail.gmail.c>
@ 2011-09-16 15:37                 ` erik quanstrom
  0 siblings, 0 replies; 27+ messages in thread
From: erik quanstrom @ 2011-09-16 15:37 UTC (permalink / raw)
  To: nemo, quanstro, 9fans

On Fri Sep 16 11:36:14 EDT 2011, nemo@lsub.org wrote:
> What's the problem if you let the kernel work with 2m/1G
> entries when you have them, and with just 2M entries when it doesn't?
>
> Or perhaps your reply was not to my mail... or I'm confused :)
>
> On Fri, Sep 16, 2011 at 5:33 PM, erik quanstrom
> <quanstro@labs.coraid.com> wrote:
> >
> > i would hate to have dogmatic rules like this limit cooperation.  i'm really
> > excited about nix.  please don't kill the buzz off so quickly.

that would be great if we could use all the page sizes.  i don't
want to have to use 1gb pages for the kernel all the time.

- erik




* Re: [9fans] gar nix!
  2011-09-16 15:36               ` Francisco J Ballesteros
@ 2011-09-16 15:44                 ` ron minnich
  2011-09-16 16:07                   ` Francisco J Ballesteros
  0 siblings, 1 reply; 27+ messages in thread
From: ron minnich @ 2011-09-16 15:44 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

OK, ok, I'm being unreasonable.

If somebody will please write the cpuid function that returns 1 if GiB
pages are supported and 0 if not, I'll do the rest next week.
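
A minimal sketch of that check, using gcc/clang's <cpuid.h> rather than
whatever cpuid helper the kernel already has (whose interface I'm not
assuming): on amd64, extended leaf 0x80000001 reports 1-GByte page
support in EDX bit 26.

	#include <cpuid.h>

	/* 1 if the processor supports 1 GiB pages, 0 if not */
	int
	have1gpages(void)
	{
		unsigned int ax, bx, cx, dx;

		if(__get_cpuid(0x80000001, &ax, &bx, &cx, &dx) == 0)
			return 0;	/* extended leaf not available */
		return (dx >> 26) & 1;	/* EDX bit 26: 1-GByte pages */
	}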

ron




* Re: [9fans] gar nix!
  2011-09-16 15:44                 ` ron minnich
@ 2011-09-16 16:07                   ` Francisco J Ballesteros
  0 siblings, 0 replies; 27+ messages in thread
From: Francisco J Ballesteros @ 2011-09-16 16:07 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

It's already there.
We can discuss this off-list

On Fri, Sep 16, 2011 at 5:44 PM, ron minnich <rminnich@gmail.com> wrote:
> OK, ok, I'm being unreasonable.
>
> If somebody will please write the cpuid function that returns 1 if GiB
> pages are supported and 0 if not, I'll do the rest next week.
>
> ron
>
>




* Re: [9fans] gar nix!
  2011-09-16  6:46   ` erik quanstrom
                       ` (4 preceding siblings ...)
       [not found]     ` <CAP6exYLy6sicg42A+7D=bmpWtv=p_an3c5=OQw9YB2oxPXhR2A@mail.gmail.c>
@ 2011-09-16 18:24     ` ron minnich
  2011-09-16 22:23       ` Charles Forsyth
       [not found]     ` <CAP6exYK+aoYcaC+LLo4XdPmZ1jvPQ0dAgZuUkmX7_xZQ+h3+gQ@mail.gmail.c>
  6 siblings, 1 reply; 27+ messages in thread
From: ron minnich @ 2011-09-16 18:24 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

On Thu, Sep 15, 2011 at 11:46 PM, erik quanstrom <quanstro@quanstro.net> wrote:
>
>
> (can't one just preemptively map the whole text on first-fault?)

we actually do if it's 2M pages :-)
btw, if you want to see some good code, check out mmuput and physinit.
Very elegant (to me) code that properly uses the pages available.
Quite cool. It had changed since I last looked and I did not realize
what a nice job jmk and nemo had done.

ron




* Re: [9fans] gar nix!
       [not found]     ` <CAP6exYK+aoYcaC+LLo4XdPmZ1jvPQ0dAgZuUkmX7_xZQ+h3+gQ@mail.gmail.c>
@ 2011-09-16 18:32       ` erik quanstrom
  2011-09-16 18:44         ` Francisco J Ballesteros
  0 siblings, 1 reply; 27+ messages in thread
From: erik quanstrom @ 2011-09-16 18:32 UTC (permalink / raw)
  To: 9fans

On Fri Sep 16 14:26:53 EDT 2011, rminnich@gmail.com wrote:
> On Thu, Sep 15, 2011 at 11:46 PM, erik quanstrom <quanstro@quanstro.net> wrote:
> >
> >
> > (can't one just preemptively map the whole text on first-fault?)
>
> we actually do if it's 2M pages :-)

ghostscript, too?

:-)

- erik




* Re: [9fans] gar nix!
  2011-09-16 18:32       ` erik quanstrom
@ 2011-09-16 18:44         ` Francisco J Ballesteros
  0 siblings, 0 replies; 27+ messages in thread
From: Francisco J Ballesteros @ 2011-09-16 18:44 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

That's what 1G pages are for.

On Fri, Sep 16, 2011 at 8:32 PM, erik quanstrom
<quanstro@labs.coraid.com> wrote:
> On Fri Sep 16 14:26:53 EDT 2011, rminnich@gmail.com wrote:
>> On Thu, Sep 15, 2011 at 11:46 PM, erik quanstrom <quanstro@quanstro.net> wrote:
>> >
>> >
>> > (can't one just preemptively map the whole text on first-fault?)
>>
>> we actually do if it's 2M pages :-)
>
> ghostscript, too?
>
> :-)
>
> - erik
>
>




* Re: [9fans] gar nix!
  2011-09-16 18:24     ` ron minnich
@ 2011-09-16 22:23       ` Charles Forsyth
  2011-09-16 22:27         ` erik quanstrom
  0 siblings, 1 reply; 27+ messages in thread
From: Charles Forsyth @ 2011-09-16 22:23 UTC (permalink / raw)
  To: 9fans

> (can't one just preemptively map the whole text on first-fault?)

that doesn't make sense on many architectures.
if there isn't a page table system, for instance, but a moderately sized set of software-managed tlbs,
perhaps with hardware assistance, that won't work terribly well.
the faults tell you what you're using right now. you need a page size, however,
that's appropriate for your system (including your application set), or
faulting lots of tiny chunks of text that your processor can execute quickly
will give excessive overhead.




* Re: [9fans] gar nix!
  2011-09-16 22:23       ` Charles Forsyth
@ 2011-09-16 22:27         ` erik quanstrom
  2011-09-17  3:39           ` Bruce Ellis
  2011-09-17  7:44           ` Charles Forsyth
  0 siblings, 2 replies; 27+ messages in thread
From: erik quanstrom @ 2011-09-16 22:27 UTC (permalink / raw)
  To: 9fans

On Fri Sep 16 18:18:25 EDT 2011, forsyth@terzarima.net wrote:
> > (can't one just preemptively map the whole text on first-fault?)
>
> that doesn't make sense on many architectures.
> if there isn't a page table system, for instance, but a set of moderate size of software-managed tlbs,
> perhaps with hardware assistance, that won't work terribly well.
> the faults tell you what you're using right now. you need a page size, however,
> that's appropriate for your system (including your application set), or
> faulting lots of tiny chunks of text that your processor can execute quickly
> will give excessive overhead.

sure, but we are talking about x86_64, which has a limited number of
possible page sizes and hardware-managed tlbs.

it would seem that one could give the architecture code the info and
let it decide?

- erik




* Re: [9fans] gar nix!
  2011-09-16 22:27         ` erik quanstrom
@ 2011-09-17  3:39           ` Bruce Ellis
  2011-09-17  7:35             ` Charles Forsyth
  2011-09-17  7:44           ` Charles Forsyth
  1 sibling, 1 reply; 27+ messages in thread
From: Bruce Ellis @ 2011-09-17  3:39 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

i like the idea of asynchronously dma-cleared pages kept on a free list.
very much an "if you take the last beer, stock the fridge" kind of laziness.

brucee

On 17 September 2011 08:27, erik quanstrom <quanstro@labs.coraid.com> wrote:
> On Fri Sep 16 18:18:25 EDT 2011, forsyth@terzarima.net wrote:
>> > (can't one just preemptively map the whole text on first-fault?)
>>
>> that doesn't make sense on many architectures.
>> if there isn't a page table system, for instance, but a set of moderate size of software-managed tlbs,
>> perhaps with hardware assistance, that won't work terribly well.
>> the faults tell you what you're using right now. you need a page size, however,
>> that's appropriate for your system (including your application set), or
>> faulting lots of tiny chunks of text that your processor can execute quickly
>> will give excessive overhead.
>
> sure but we are talking about x86_64, which has a limited number of
> possible page sizes and hardware managed tlbs.
>
> it would seem that one could give the architecture code the info and
> let it decide?
>
> - erik
>
>



--
Don't meddle in the mouth -- MVS (0416935147, +1-513-3BRUCEE)




* Re: [9fans] gar nix!
  2011-09-17  3:39           ` Bruce Ellis
@ 2011-09-17  7:35             ` Charles Forsyth
  0 siblings, 0 replies; 27+ messages in thread
From: Charles Forsyth @ 2011-09-17  7:35 UTC (permalink / raw)
  To: 9fans

the trouble with the zeroing pages scheme, which has been done before
(eg, having the process with least priority top up the memory), is that
it was great when the system wasn't under load, just fine under increased
load, but eventually, when you most want your cycles, you're back
in the original situation. if you were able to hold usage below that
level, it still worked against keeping more useful data cached in that memory.
(in effect, you had a cache of zeroes.) in a similar way, there might be more useful things
to do with your DMA cycles, which cost something too, and there are implications for power.

the maths behind Working Set hasn't gone away just because the parameters got bigger.
otherwise, it would never have been true.




* Re: [9fans] gar nix!
  2011-09-16 22:27         ` erik quanstrom
  2011-09-17  3:39           ` Bruce Ellis
@ 2011-09-17  7:44           ` Charles Forsyth
  2011-09-17  8:45             ` Charles Forsyth
  1 sibling, 1 reply; 27+ messages in thread
From: Charles Forsyth @ 2011-09-17  7:44 UTC (permalink / raw)
  To: 9fans

>sure but we are talking about x86_64, which has a limited number of
>possible page sizes

no, it doesn't.




* Re: [9fans] gar nix!
  2011-09-17  7:44           ` Charles Forsyth
@ 2011-09-17  8:45             ` Charles Forsyth
  0 siblings, 0 replies; 27+ messages in thread
From: Charles Forsyth @ 2011-09-17  8:45 UTC (permalink / raw)
  To: 9fans

>>sure but we are talking about x86_64, which has a limited number of
>>possible page sizes

>no, it doesn't.

i cut off that message: what do we usually do when the hardware doesn't do quite what we'd like?
admittedly, if it had only Gbyte and 2 Mbyte pages, then indeed, there wouldn't be much to be done about it. 4k is better in that regard, because it's too small.




* Re: [9fans] gar nix!
  2011-09-16  5:44 ` Noah Evans
@ 2011-09-17 19:41   ` erik quanstrom
  2011-09-17 20:03     ` Noah Evans
  0 siblings, 1 reply; 27+ messages in thread
From: erik quanstrom @ 2011-09-17 19:41 UTC (permalink / raw)
  To: 9fans

hey noah, what's the mailing list?

- erik




* Re: [9fans] gar nix!
  2011-09-17 19:41   ` erik quanstrom
@ 2011-09-17 20:03     ` Noah Evans
  0 siblings, 0 replies; 27+ messages in thread
From: Noah Evans @ 2011-09-17 20:03 UTC (permalink / raw)
  To: Fans of the OS Plan 9 from Bell Labs

nix-dev@googlegroups.com

Noah



On Sat, Sep 17, 2011 at 9:41 PM, erik quanstrom <quanstro@quanstro.net> wrote:
> hey noah, what's the mailing list?
>
> - erik
>
>




