* [9fans] 9vx, kproc and *double sleep*
@ 2010-06-11 14:06 Philippe Anel
  2010-06-11 14:40 ` ron minnich
  2010-06-11 15:02 ` ron minnich
  0 siblings, 2 replies; 39+ messages in thread

From: Philippe Anel @ 2010-06-11 14:06 UTC (permalink / raw)
To: 9fans

Dear 9fans,

I think I may have found the reason why some of us see a *double sleep* error while running 9vx. I think this could even explain some of the segmentation faults seen in kprocs. Please forgive me for the length of this post; I am trying to be as explicit as possible.

I have no certainty about this, mostly because the part of the code I believe is involved has been checked and read several times by several programmers. But I instrumented my own version of 9vx with a circular log buffer, and I get the same results each time I encounter this bug.

While reading the a/proc.c source code, I wondered what would happen if two cpus (Machs) tried to call the function ready() on the same process at almost the same time. The function ready() queues a proc onto the run queue so the scheduler (or schedulers, as one runs per Mach) can execute it. Because of this, if a process could be readied twice, two Machs would execute the same code with the same stack, and the whole kernel would crash. I then immediately wondered whether this could be the reason why we see sleep called twice on the same Rendez address (typically a "*double sleep*").

The function queueproc(), called by the function ready(), does not check whether the process about to be added to the run queue has already been added. The reason is not only that the check takes time, but above all that this case is not supposed to happen. If a process A is running (i.e. in the Running state), it is out of the run queue. The same holds if it is waiting (i.e. in the Wakeme state), in which case only one other process is expected to have a ref on A (through the Rendez.p member). The thing is, I think this last point is not entirely true, because process A also holds a ref to itself.
Let's think about the following thought experiment (here again I apologize, because I suspect I could have made this simpler), which requires 3 cpus (or Machs):

Step 1, on Mach 0: A proc X asks a worker process (kproc/kserver in 9vx) to execute some task. In order to do so, it calls wakeup on a Rendez pointer held by a proc A, which is typically sleeping on it (we will see how a few lines below).

Step 2, on Mach 1: Proc A is awakened. It serves the call (Kcall in 9vx) and then signals to the client (proc X) that the work is done. In order to do so, it calls wakeup on the Rendez pointer Proc.rsleep of X. Then proc A waits for another request: it sleeps again on the Rendez pointer assigned to the server. Let's see how that works. It first locks the Rendez, then locks itself. After this it checks whether the condition has happened or a note is pending. Let's assume neither has happened. It therefore changes its own state to Wakeme and initializes the resumption label that the scheduler will jump to once the Rendez has been awakened. Then, before going to sleep (i.e. giving control to the scheduler), it unlocks itself, then unlocks the Rendez. Sleeping means giving control to the scheduler so that another proc (or coroutine in the kernel) can execute; in order to do that, the sleep() function just calls "gotolabel(&m->sched)". At this moment, no one holds a lock on either the Rendez or proc A.

Step 3, on Mach 0: Proc X, which has been awakened by proc A, asks proc A for another request. It calls wakeup on the Rendez on which A is trying to sleep. I insist on the fact that proc A is only 'trying', because the scheduler has still not switched Mach.up, and thus Mach 1 still holds a ref on proc A. In the function wakeup(), proc X locks r (the Rendez), then locks r->p, which is a pointer to proc A, and then calls ready() on it before unlocking p and r. At this point, we have proc A in the run queue, and still in the scheduler of Mach 1.
We can then imagine that a third cpu (Mach 2) schedules this proc (i.e. dequeues it from the run queue, changes its state to Running, and calls gotolabel(&p->sched)) ... executing proc A's code on proc A's stack ...

This is the bug, I think. Because in the meantime, Mach 1 can continue its execution in the schedinit() function, now with proc A's state set to Running. Mach 1 would thus call ready() on proc A, and could even schedule and execute it itself.

I know this is unlikely to happen, because sleep() goes to the scheduler with splhi, and there is no reason why it should not get through schedinit()'s "if (up) {...}" statement before another cpu/Mach calls the function wakeup, which itself calls ready() on proc A. But the problem is that 9vx's splhi() is empty ... and the cpus/Machs, which are really pthreads, are scheduled by the host operating system (Linux, BSD, ...). In fact, there is perhaps a 0.000001% (I cannot calculate this, to be honest) chance that this can happen on real hardware.

I updated the functions sleep() and schedinit() so that schedinit() unlocks both p and p->r in the "if (up) { ... }" statement. Since this change, I no longer see the *double sleep* error, nor the segmentation faults.

What do you think ?

Phil;

/*
 * sleep if a condition is not true.  Another process will
 * awaken us after it sets the condition.  When we awaken
 * the condition may no longer be true.
 *
 * we lock both the process and the rendezvous to keep r->p
 * and p->r synchronized.
 */
void
sleep(Rendez *r, int (*f)(void*), void *arg)
{
	int s;

	s = splhi();

	lock(r);
	lock(&up->rlock);
	if(r->p){
		print("double sleep called from %#p, %lud %lud\n",
			getcallerpc(&r), r->p->pid, up->pid);
		dumpstack();
	}

	/*
	 * Wakeup only knows there may be something to do by testing
	 * r->p in order to get something to lock on.
	 * Flush that information out to memory in case the sleep is
	 * committed.
	 */
	r->p = up;

	if((*f)(arg) || up->notepending){
		/*
		 * if condition happened or a note is pending
		 * never mind
		 */
		r->p = nil;
		unlock(&up->rlock);
		unlock(r);
	} else {
		/*
		 * now we are committed to
		 * change state and call scheduler
		 */
		up->state = Wakeme;
		up->r = r;

		procsave(up);
		if(setlabel(&up->sched)) {
			/*
			 * here when the process is awakened
			 */
			procrestore(up);
			spllo();
		} else {
			/*
			 * here to go to sleep (i.e. stop Running)
			 */
			// xigh: move unlocking to schedinit()
			// unlock(&up->rlock);
			// unlock(r);
			gotolabel(&m->sched);
		}
	}

	if(up->notepending) {
		up->notepending = 0;
		splx(s);
		if(up->procctl == Proc_exitme && up->closingfgrp)
			forceclosefgrp();
		error(Eintr);
	}

	splx(s);
}

/*
 *  Always splhi()'ed.
 */
void
schedinit(void)		/* never returns */
{
	Edf *e;

	setlabel(&m->sched);
	if(up) {
		if((e = up->edf) && (e->flags & Admitted))
			edfrecord(up);
		m->proc = 0;
		switch(up->state) {
		case Running:
			ready(up);
			break;
		case Wakeme:
			unlock(&up->rlock);
			unlock(up->r);
			break;
		case Moribund:
			// ...
			break;
		}
		up->mach = nil;
		updatecpu(up);
		up = nil;
	}
	sched();
}

^ permalink raw reply	[flat|nested] 39+ messages in thread
* Re: [9fans] 9vx, kproc and *double sleep* 2010-06-11 14:06 [9fans] 9vx, kproc and *double sleep* Philippe Anel @ 2010-06-11 14:40 ` ron minnich 2010-06-11 14:49 ` erik quanstrom 2010-06-11 14:49 ` Philippe Anel 2010-06-11 15:02 ` ron minnich 1 sibling, 2 replies; 39+ messages in thread From: ron minnich @ 2010-06-11 14:40 UTC (permalink / raw) To: Fans of the OS Plan 9 from Bell Labs I'll look but one thing doesn't make sense to me: On Fri, Jun 11, 2010 at 2:06 PM, Philippe Anel <xigh@bouyapop.org> wrote: > // xigh: move unlocking to schedinit() schedinit only runs once and sleep runs all the time. That's the part I don't get. But you might have found something, I sure wish I understood it all better :-) ron ^ permalink raw reply [flat|nested] 39+ messages in thread
* Re: [9fans] 9vx, kproc and *double sleep*
  2010-06-11 14:40 ` ron minnich
@ 2010-06-11 14:49   ` erik quanstrom
  2010-06-11 14:54     ` Philippe Anel
                        ` (3 more replies)
  2010-06-11 14:49   ` Philippe Anel
  1 sibling, 4 replies; 39+ messages in thread

From: erik quanstrom @ 2010-06-11 14:49 UTC (permalink / raw)
To: 9fans

> schedinit only runs once and sleep runs all the time. That's the part
> I don't get.

gotolabel in sleep sends you back to the setlabel at the top of schedinit.

> But you might have found something, I sure wish I understood it all better :-)

i'm not entirely convinced that the problem isn't the fact that splhi() doesn't do anything.

here's what i wonder:
- does richard miller's alternate implementation of wakeup solve this problem?
- does changing spl* to manipulation of a per-cpu lock solve the problem? sometimes preventing anything else from running on your mach is exactly what you want.

in any event, given the long history with sleep/wakeup, changes should be justified with a promela model. the current model omits the spl* and the second lock. (http://swtch.com/spin/sleep_wakeup.txt).

- erik

^ permalink raw reply	[flat|nested] 39+ messages in thread
* Re: [9fans] 9vx, kproc and *double sleep*
  2010-06-11 14:49 ` erik quanstrom
@ 2010-06-11 14:54   ` Philippe Anel
  2010-06-11 15:03     ` erik quanstrom
  2010-06-11 14:59   ` Philippe Anel
                      ` (2 subsequent siblings)
  3 siblings, 1 reply; 39+ messages in thread

From: Philippe Anel @ 2010-06-11 14:54 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs

I don't think splhi fixes the problem either ... it only hides it in 99.999999999% of cases.

Phil;

erik quanstrom wrote:
>> schedinit only runs once and sleep runs all the time. That's the part
>> I don't get.
>>
>
> gotolabel in sleep sends you back to the
> setlabel at the top of schedinit.
>
>> But you might have found something, I sure wish I understood it all better :-)
>>
>
> i'm not entirely convinced that the problem isn't the fact that splhi()
> doesn't do anything.
>
> here's what i wonder:
> - does richard miller's alternate implementation of wakeup
> solve this problem.
> - does changing spl* to manipulation of a per-cpu lock solve the problem?
> sometimes preventing anything else from running on your mach is
> exactly what you want.
>
> in any event, given the long history with sleep/wakeup, changes should
> be justified with a promela model. the current model omits the spl*
> and the second lock. (http://swtch.com/spin/sleep_wakeup.txt).
>
> - erik

^ permalink raw reply	[flat|nested] 39+ messages in thread
* Re: [9fans] 9vx, kproc and *double sleep* 2010-06-11 14:54 ` Philippe Anel @ 2010-06-11 15:03 ` erik quanstrom 2010-06-11 15:22 ` Philippe Anel 0 siblings, 1 reply; 39+ messages in thread From: erik quanstrom @ 2010-06-11 15:03 UTC (permalink / raw) To: 9fans On Fri Jun 11 10:54:40 EDT 2010, xigh@bouyapop.org wrote: > I don't think either splhi fixes the problem ... it only hides it for > the 99.999999999% cases. on a casual reading, i agree. unfortunately, the current simplified promela model disagrees, and coraid has run millions of cpu-hrs on quad processor machines running near 100% load with up to 1500 procs, and never seen this. unless you have a good reason why we've never seen such a deadlock, i'm inclined to believe we're missing something. we need better reasons for sticking locks in than guesswork. multiple locks can easily lead to deadlock. have you tried your solution with a single Mach? > No ... I don't think so. I think the problem comes from the fact the > process is no longer exclusively tied to the current Mach when going > (back) to schedinit() ... hence the change I did. have you tried? worst case is you'll have more information on the problem. - erik ^ permalink raw reply [flat|nested] 39+ messages in thread
* Re: [9fans] 9vx, kproc and *double sleep*
  2010-06-11 15:03 ` erik quanstrom
@ 2010-06-11 15:22   ` Philippe Anel
  2010-06-11 15:25     ` Philippe Anel
  0 siblings, 1 reply; 39+ messages in thread

From: Philippe Anel @ 2010-06-11 15:22 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs

I have never seen it on real hardware, but I think that does not mean it cannot happen. The problem in 9vx comes from the fact that the 9vx Machs are simulated by pthreads, which can be scheduled out just before calling gotolabel in sleep(). This gives another Mach (or pthread) the time to ready proc A.

I think it does not happen on real hardware because the cpu just doesn't stop between calling gotolabel() and executing the scheduler. It does not happen because the cpu is not interrupted (thanks to splhi). But still, I feel the problem is here, and we can imagine ... why not ... the cpu running proc A blocking on a bus request or something else.

I don't know if the model is good or not ... and I wrote this is only a thought experiment ... with my poor brain :)

Phil;

erik quanstrom wrote:
> On Fri Jun 11 10:54:40 EDT 2010, xigh@bouyapop.org wrote:
>
>> I don't think either splhi fixes the problem ... it only hides it for
>> the 99.999999999% cases.
>>
>
> on a casual reading, i agree. unfortunately,
> the current simplified promela model disagrees,
> and coraid has run millions of cpu-hrs on quad
> processor machines running near 100% load
> with up to 1500 procs, and never seen this.
>
> unless you have a good reason why we've never
> seen such a deadlock, i'm inclined to believe
> we're missing something. we need better reasons
> for sticking locks in than guesswork.
> multiple locks can easily lead to deadlock.
>
> have you tried your solution with a single Mach?
>
>> No ... I don't think so. I think the problem comes from the fact the
>> process is no longer exclusively tied to the current Mach when going
>> (back) to schedinit() ... hence the change I did.
>>
>
> have you tried? worst case is you'll have more
> information on the problem.
>
> - erik

^ permalink raw reply	[flat|nested] 39+ messages in thread
* Re: [9fans] 9vx, kproc and *double sleep*
  2010-06-11 15:22 ` Philippe Anel
@ 2010-06-11 15:25   ` Philippe Anel
  0 siblings, 0 replies; 39+ messages in thread

From: Philippe Anel @ 2010-06-11 15:25 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs

Oooops ... sorry for the double copy :) The post was supposed to be :

I have never seen it on real hardware, but I think that does not mean it cannot happen. The problem in 9vx comes from the fact that the 9vx Machs are simulated by pthreads, which can be scheduled out just before calling gotolabel in sleep(). This gives another Mach (or pthread) the time to ready proc A.

I think it does not happen on real hardware because the cpu just doesn't stop between calling gotolabel() and executing the scheduler. It does not happen because the cpu is not interrupted (thanks to splhi). But still, I feel the problem is here, and we can imagine ... why not ... the cpu running proc A blocking on a bus request or something else.

I don't know if the model is good or not ... and as I wrote, this is only a thought experiment ... with my poor brain :)

Phil;

Philippe Anel wrote:
>
> I never seen it on real hardware but I think it does not mean it
> cannot happen. The problem in 9vx comes from the fact 9vx Mach are
> simulated by pthreads which can be scheduled just before calling
> gotolabel in sleep(). This gives the time to another Mach (or pthread)
> to 'readies' the proc A.
>
> I never seen it on real hardware but I think it does not mean it
> cannot happen. The problem in 9vx comes from the fact 9vx Mach are
> simulated by pthreads which can be scheduled just before calling
> gotolabel in sleep(). This gives the time to another Mach (or pthread)
> to 'readies' the proc A.
>
> I think it does not happen on real hardware because the cpu just don't
> stop while calling gotolabel() and executes the scheduler. It does not
> happen because the cpu is not interupted (thanks to splhi). But still,
> I feel the problem is here, and we can imagine ... why not, the cpu
> running proc A blocking on a bus request or something else.
>
> I don't know if the model is good or not ... and I wrote this is only
> a thougth experiment ... with my poor brain :)
>
> I think it does not happen on real hardware because the cpu just don't
> stop while calling gotolabel() and executes the scheduler. It does not
> happen because the cpu is not interupted (thanks to splhi). But still,
> I feel the problem is here, and we can imagine ... why not, the cpu
> running proc A blocking on a bus request or something else.
>
> I don't know if the model is good or not ... and I wrote this is only
> a thougth experiment ... with my poor brain :)
>
> Phil;
>
> erik quanstrom wrote:
>> On Fri Jun 11 10:54:40 EDT 2010, xigh@bouyapop.org wrote:
>>
>>> I don't think either splhi fixes the problem ... it only hides it
>>> for the 99.999999999% cases.
>>>
>>
>> on a casual reading, i agree. unfortunately,
>> the current simplified promela model disagrees,
>> and coraid has run millions of cpu-hrs on quad
>> processor machines running near 100% load
>> with up to 1500 procs, and never seen this.
>>
>> unless you have a good reason why we've never
>> seen such a deadlock, i'm inclined to believe
>> we're missing something. we need better reasons
>> for sticking locks in than guesswork.
>> multiple locks can easily lead to deadlock.
>>
>> have you tried your solution with a single Mach?
>>
>>> No ... I don't think so. I think the problem comes from the fact the
>>> process is no longer exclusively tied to the current Mach when going
>>> (back) to schedinit() ... hence the change I did.
>>>
>>
>> have you tried? worst case is you'll have more
>> information on the problem.
>>
>> - erik

^ permalink raw reply	[flat|nested] 39+ messages in thread
* Re: [9fans] 9vx, kproc and *double sleep* 2010-06-11 14:49 ` erik quanstrom 2010-06-11 14:54 ` Philippe Anel @ 2010-06-11 14:59 ` Philippe Anel 2010-06-11 17:11 ` Bakul Shah 2010-06-12 7:02 ` Philippe Anel 2010-06-12 20:15 ` Richard Miller 3 siblings, 1 reply; 39+ messages in thread From: Philippe Anel @ 2010-06-11 14:59 UTC (permalink / raw) To: Fans of the OS Plan 9 from Bell Labs Ooops I forgot to answer this : > - does changing spl* to manipulation of a per-cpu lock solve the problem? > sometimes preventing anything else from running on your mach is > exactly what you want. > No ... I don't think so. I think the problem comes from the fact the process is no longer exclusively tied to the current Mach when going (back) to schedinit() ... hence the change I did. Phil; ^ permalink raw reply [flat|nested] 39+ messages in thread
* Re: [9fans] 9vx, kproc and *double sleep* 2010-06-11 14:59 ` Philippe Anel @ 2010-06-11 17:11 ` Bakul Shah 2010-06-11 17:31 ` Philippe Anel 0 siblings, 1 reply; 39+ messages in thread From: Bakul Shah @ 2010-06-11 17:11 UTC (permalink / raw) To: Fans of the OS Plan 9 from Bell Labs On Fri, 11 Jun 2010 16:59:42 +0200 Philippe Anel <xigh@bouyapop.org> wrote: > Ooops I forgot to answer this : > > - does changing spl* to manipulation of a per-cpu lock solve the problem? > > sometimes preventing anything else from running on your mach is > > exactly what you want. > > > No ... I don't think so. I think the problem comes from the fact the > process is no longer exclusively tied to the current Mach when going > (back) to schedinit() ... hence the change I did. Were you able to verify your hypothesis by adding a bit of trapping code + assertion(s) in the original sources? At the point of double sleep one can check state to see if the expected preconditions are true. Alternatively one can check when the expected conditions become true, set a variable and test it where the double sleep print occurs. Then one can sort of walk back to the earliest point where things go wrong. ^ permalink raw reply [flat|nested] 39+ messages in thread
* Re: [9fans] 9vx, kproc and *double sleep*
  2010-06-11 17:11 ` Bakul Shah
@ 2010-06-11 17:31   ` Philippe Anel
  2010-06-11 18:34     ` Bakul Shah
  0 siblings, 1 reply; 39+ messages in thread

From: Philippe Anel @ 2010-06-11 17:31 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs

I only did my tests on 9vx. I have a version that I instrumented with a circular log buffer, and I have some gdb macros which dump the buffer.

I can put the whole source somewhere, and even a log with my comments on the bug, if you want to see it. But please note that I made several changes (because I had to understand how it works), and I would rather port my changes to the latest 9vx source tree so that everyone can read them. What do you think ? At the end of this post I added some parts of the instrumentation.

Please, I would like to insist on the fact that I'm not saying the promela model is wrong. And I realize that the fix I propose might not be the right one. Maybe the problem is even elsewhere. All this is just feelings, logs and headache.
Phil;

The logs look like this :

(gdb) k9log 10
- kernel history size: 524288 element size: 64 element count: 8192
History stack: 56300 elems:
 dbeb: m= 3 pc=44e235 sp= 51f02020 up=0 :0 xp=0 :0 r= 0 a= 0 # kproc: runproc: psleep
 dbea: m= 3 pc=44e235 sp= 51f02020 up=0 :0 xp=0 :0 r= 0 a= 0 # kproc: runproc: search
 dbe9: m= 3 pc=44e235 sp= 51f02020 up=0 :0 xp=0 :0 r= 0 a= 0 # kproc: runproc
 dbe8: m= 3 pc=44dfb2 sp= 51f02070 up=0 :0 xp=4 :8 r= 0 a= 0 # proc: sched: calling runproc
 dbe7: m= 3 pc=44dfb2 sp= 51f02070 up=0 :0 xp=4 :8 r= 0 a= 0 # proc: sched
 dbe6: m= 3 pc=40e41b sp= 51f020b0 up=4 :8 xp=0 :0 r= 0 a= 0 # proc: schedinit: disable up
 dbe5: m= 3 pc=40e41b sp= 51f020b0 up=4 :8 xp=0 :0 r= 983360 a= 0 # proc: sleep: unlock r
 dbe4: m= 3 pc=40e41b sp= 51f020b0 up=4 :8 xp=0 :0 r= 983360 a= 0 # proc: sleep: unlock up
 dbe3: m= 3 pc=40e41b sp= 51f020b0 up=4 :8 xp=0 :0 r= 0 a= 0 # proc: schedinit: up still active
 dbe2: m= 3 pc=40e41b sp= 51f020b0 up=4 :8 xp=0 :0 r= 0 a= 0 # proc: schedinit: start

where the first column is the serial of the logged operation, and 'a' is just an int.

-------------------------------------------------------------------------------------

void
sleep(Rendez *r, int (*ftest)(void*), void *arg)
{
	int s;
	void (*pt)(Proc*, int, vlong);
	void * pc;

	pc = getcallerpc(&r);
	k9log("proc: sleep", pc, 0, r, 0);
	s = splhi();
	if (up == nil)
		dbgbreak();

	k9log("proc: sleep: lock r", pc, 0, r, 0);
	lock(&r->xlk);
	if(r->xp){
		k9log("proc: sleep: ** double sleep **", pc, r->xp, r, 0);
		panic("cpu%d: up=%d *double sleep* r=%p r->p=%d caller=%p\n",
			m->machno, up ? up->pid : 0, r, r->xp ? r->xp->pid : 0, pc);
	}

	/*
	 * Wakeup only knows there may be something to do by testing
	 * r->p in order to get something to lock on.
	 * Flush that information out to memory in case the sleep is
	 * committed.
	 */
	r->xp = up;
	r->xm = m;
	r->xpc = pc;

	k9log("proc: sleep: lock up", pc, 0, r, 0);
	lock(&up->rlock);
//	if (up->state != Running)
//		dbgbreak();
	k9log("proc: sleep: condition", pc, 0, r, up->nlocks.ref);
	if ((*ftest)(arg)) {
		k9log("proc: sleep: happened", pc, 0, r, 0);
done:
		/*
		 * if condition happened or a note is pending
		 * never mind
		 */
		r->xp = nil;
		k9log("proc: sleep: unlock up", pc, 0, r, 0);
		unlock(&up->rlock);
		k9log("proc: sleep: unlock r", pc, 0, r, 0);
		unlock(&r->xlk);
	} else if (up->notepending) {
		k9log("proc: sleep: note pending", pc, 0, r, 0);
		goto done;
	} else {
		/*
		 * now we are committed to
		 * change state and call scheduler
		 */
		pt = proctrace;
		if(pt)
			pt(up, SSleep, 0);
		up->state = Wakeme;
		up->rx = r;

		/* statistics */
		m->cs++;

		k9log("proc: sleep: sleeping", pc, 0, r, 0);
		procsave(up);
		if(setlabel(&up->sched)) {
			/*
			 * here when the process is awakened
			 */
			k9log("proc: sleep: awakened", pc, r->xp, r, 0);
			procrestore(up);
			spllo();
		} else {
			/*
			 * here to go to sleep (i.e. stop Running)
			 */
//			up->rmu = 1;
//			k9log("proc: sleep: unlock up", pc, 0, r, 0);
//			unlock(&up->rlock);
//			k9log("proc: sleep: unlock r", pc, 0, r, 0);
//			unlock(&r->xlk);
			k9log("proc: sleep: going sched", pc, 0, r, 0);
			gotolabel(&m->sched);
		}
	}

	k9log("proc: sleep: done", pc, 0, r, 0);
	if(up->notepending) {
		k9log("proc: sleep: forward note", pc, 0, r, 0);
		up->notepending = 0;
		splx(s);
		if(up->procctl == Proc_exitme && up->closingfgrp)
			forceclosefgrp();
		error(Eintr);
	}

	splx(s);
}

struct {
	int32 count;
	struct {
		char *	op;
		void *	pc;
		void *	sp;
		int	p0;
		int	s0;
		int	p1;
		int	s1;
		int	xm;	// machno
		Rendez *r;
		int32	a;
		int32	n;
	} hist[8192];
} k9logs;

void
k9log(char * op, void * pc, Proc * p, void * r, int32 a)
{
	static int32 serial = 0;
	int32 n, i;

	n = __sync_fetch_and_add_4(&serial, 1);
	i = __sync_fetch_and_add_4(&k9logs.count, 1) % nelem(k9logs.hist);
	k9logs.hist[i].n = n;
	k9logs.hist[i].r = r;
	k9logs.hist[i].a = a;
	k9logs.hist[i].xm = getmach()->machno;
	k9logs.hist[i].op = op;
	k9logs.hist[i].pc = pc;
	k9logs.hist[i].sp = &p;
	if (up) {
		k9logs.hist[i].p0 = up->pid;
		k9logs.hist[i].s0 = up->state;
	} else {
		k9logs.hist[i].p0 = 0;
		k9logs.hist[i].s0 = 0;
	}
	if (p) {
		k9logs.hist[i].p1 = p->pid;
		k9logs.hist[i].s1 = p->state;
	} else {
		k9logs.hist[i].p1 = 0;
		k9logs.hist[i].s1 = 0;
	}
}

Bakul Shah wrote:
> On Fri, 11 Jun 2010 16:59:42 +0200 Philippe Anel <xigh@bouyapop.org> wrote:
>
>> Ooops I forgot to answer this :
>>
>>> - does changing spl* to manipulation of a per-cpu lock solve the problem?
>>> sometimes preventing anything else from running on your mach is
>>> exactly what you want.
>>>
>>
>> No ... I don't think so. I think the problem comes from the fact the
>> process is no longer exclusively tied to the current Mach when going
>> (back) to schedinit() ... hence the change I did.
>>
>
> Were you able to verify your hypothesis by adding a bit of
> trapping code + assertion(s) in the original sources? At the
> point of double sleep one can check state to see if the
> expected preconditions are true. Alternatively one can check
> when the expected conditions become true, set a variable and
> test it where the double sleep print occurs. Then one can sort
> of walk back to the earliest point where things go wrong.

^ permalink raw reply	[flat|nested] 39+ messages in thread
* Re: [9fans] 9vx, kproc and *double sleep*
  2010-06-11 17:31 ` Philippe Anel
@ 2010-06-11 18:34   ` Bakul Shah
  2010-06-11 18:36     ` erik quanstrom
  2010-06-11 18:51     ` Philippe Anel
  0 siblings, 2 replies; 39+ messages in thread

From: Bakul Shah @ 2010-06-11 18:34 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs

On Fri, 11 Jun 2010 19:31:58 +0200 Philippe Anel <xigh@bouyapop.org> wrote:
>
> I only did my tests on 9vx. I have a version that I instrumented with
> a circular log buffer, and I have some gdb macros which dumps the
> buffer.
>
> I can put the whole source somewhere and even a log with my comments
> of the bug if you want to see it. But please note that I made several

Yes, please. Thanks!

> changes (because I had to understand how it works) and I would rather
> copy my changes to the latest 9vx source tree so that everyone can
> read it. What do you think ?

Agreed. Best to check this in on a separate branch though. Branching/merging is cheap in hg.

> Please, I would like to insist on the fact I'm not saying the promela
> model is wrong. And I realize that the fix I propose might not be the
> good one. Maybe the problem is even elsewhere. All these is just
> feelings, logs and headache.

I haven't used promela so can't say anything about it. sleep() is pretty complicated, so figuring it out will take some time and effort, but I first have to understand the cause, and from past experience I know that code to check a cause hypothesis can be quite valuable (hence my earlier question). An unambiguous proof of what went wrong somehow frees my mind to better focus on the solution!

Thanks for your thought experiments & code!

^ permalink raw reply	[flat|nested] 39+ messages in thread
* Re: [9fans] 9vx, kproc and *double sleep* 2010-06-11 18:34 ` Bakul Shah @ 2010-06-11 18:36 ` erik quanstrom 2010-06-11 18:51 ` Philippe Anel 1 sibling, 0 replies; 39+ messages in thread From: erik quanstrom @ 2010-06-11 18:36 UTC (permalink / raw) To: 9fans > I haven't used promela so can't say anything about it. http://spinroot.com/spin/Man/index.html - erik ^ permalink raw reply [flat|nested] 39+ messages in thread
* Re: [9fans] 9vx, kproc and *double sleep*
  2010-06-11 18:34 ` Bakul Shah
  2010-06-11 18:36   ` erik quanstrom
@ 2010-06-11 18:51   ` Philippe Anel
  1 sibling, 0 replies; 39+ messages in thread

From: Philippe Anel @ 2010-06-11 18:51 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs

You can download my own (ugly) 9vx source code here :

http://www.bouyapop.org/9vxigh.tar.bz2

In 9vx you'll find .gdbinit and crash.c. Just copy it to vx32 and replace the 9vx folder, compile it, and execute it under gdb with your own 9vx env :

(gdb) r -F -r <your folder>

then compile and execute crash.c with 8c/8l. When it crashes, you can watch the latest logs with the gdb command k9logs 100 (it will show you the last 100 ops).

Phil;

Bakul Shah wrote:
> On Fri, 11 Jun 2010 19:31:58 +0200 Philippe Anel <xigh@bouyapop.org> wrote:
>
>> I only did my tests on 9vx. I have a version that I instrumented with
>> a circular log buffer, and I have some gdb macros which dumps the
>> buffer.
>>
>> I can put the whole source somewhere and even a log with my comments
>> of the bug if you want to see it. But please note that I made several
>>
>
> Yes, please. Thanks!
>
>> changes (because I had to understand how it works) and I would rather
>> copy my changes to the latest 9vx source tree so that everyone can
>> read it. What do you think ?
>>
>
> Agreed. Best to check this in on a separate branch though.
> Branching/merging is cheap in hg.
>
>> Please, I would like to insist on the fact I'm not saying the promela
>> model is wrong. And I realize that the fix I propose might not be the
>> good one. Maybe the problem is even elsewhere. All these is just
>> feelings, logs and headache.
>>
>
> I haven't used promela so can't say anything about it.
> sleep() is pretty complicated so figuring it out will take
> some time and effort but I first have to understand the cause
> and from past experience I know that code to check a cause
> hypothesis can be quite valuable (hence my earlier question).
> An unambiguous proof of what went wrong somehow frees my mind > to better focus on the solution! > > Thanks for your thought experiements & code! > > > ^ permalink raw reply [flat|nested] 39+ messages in thread
* Re: [9fans] 9vx, kproc and *double sleep*
  2010-06-11 14:49 ` erik quanstrom
  2010-06-11 14:54   ` Philippe Anel
  2010-06-11 14:59   ` Philippe Anel
@ 2010-06-12  7:02   ` Philippe Anel
  2010-06-12  9:22     ` Philippe Anel
  2010-06-13 13:01     ` Richard Miller
  2010-06-12 20:15   ` Richard Miller
  3 siblings, 2 replies; 39+ messages in thread

From: Philippe Anel @ 2010-06-12 7:02 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs

Hi,

I really think the spin model is good. And in fact, I really think the current sleep/wakeup/postnote code is good. However, this model makes the assumption that plan9 processes are really Machs, and not coroutines.

I think we need a larger model, one which includes the scheduler. I mean a model that describes a set of processes (in the spin sense), each picking coroutine objects from a run queue (shared by all spin processes) and then calling sleep/wakeup/postnote a few times before putting the coroutine object back on the run queue. These spin processes would represent the cpus (or Machs), while the coroutine objects would represent the plan9 processes. I even think we don't have to simulate the fact that these processes can be interrupted.

Again, the change I proposed is not about sleep/wakeup/postnote itself. The problem is that wakeup() is ready()'ing the awakened process while the Mach on which sleep() runs still holds a pointer (up) to the awakened process, and later (in schedinit()) assumes it is safe to access (up)->state. Because of this, schedinit() can try to call ready() on (up) even though (up)->state may already have been changed to Running by a third Mach. My change updates only schedinit(), and tries to make the (up)->state access safe when it happens after a sleep() has been awakened.

Phil;

> in any event, given the long history with sleep/wakeup, changes should
> be justified with a promela model. the current model omits the spl*
> and the second lock. (http://swtch.com/spin/sleep_wakeup.txt).
> - erik
* Re: [9fans] 9vx, kproc and *double sleep* 2010-06-12 7:02 ` Philippe Anel @ 2010-06-12 9:22 ` Philippe Anel 2010-06-12 11:51 ` erik quanstrom 2010-06-13 13:01 ` Richard Miller 1 sibling, 1 reply; 39+ messages in thread From: Philippe Anel @ 2010-06-12 9:22 UTC (permalink / raw) To: Fans of the OS Plan 9 from Bell Labs

9fans,

FYI, I've wondered whether they had the same problem in the Go runtime, because I suspected the code to be quite similar. And I think the Go team fixed the problem in the ready() equivalent of the Go runtime, by adding a flag to the Proc equivalent so that the proc (G in Go) is put back on the run queue ...

Phil;

In go/src/pkg/runtime/proc.c:

>---------------------------------------------------------------------<
// Mark g ready to run.  Sched is already locked.
// G might be running already and about to stop.
// The sched lock protects g->status from changing underfoot.
static void
readylocked(G *g)
{
	if(g->m){
		// Running on another machine.
		// Ready it when it stops.
		g->readyonstop = 1;
		return;
	}
	...
>---------------------------------------------------------------------<
// Scheduler loop: find g to run, run it, repeat.
static void
scheduler(void)
{
	lock(&sched);
	...
	if(gp->readyonstop){
		gp->readyonstop = 0;
		readylocked(gp);
	}
	...
* Re: [9fans] 9vx, kproc and *double sleep* 2010-06-12 9:22 ` Philippe Anel @ 2010-06-12 11:51 ` erik quanstrom 0 siblings, 0 replies; 39+ messages in thread From: erik quanstrom @ 2010-06-12 11:51 UTC (permalink / raw) To: 9fans

> FYI, I've wondered whether they had the same problem in the Go runtime,
> because I suspected the code to be quite similar. And I think the Go team
> fixed the problem in the ready() equivalent of the Go runtime, by adding
> a flag to the Proc equivalent so that the proc (G in Go) is put back on
> the run queue ...

are you sure? that big scheduler lock looks like it's doing the job of splhi() to me.

- erik
* Re: [9fans] 9vx, kproc and *double sleep* 2010-06-12 7:02 ` Philippe Anel 2010-06-12 9:22 ` Philippe Anel @ 2010-06-13 13:01 ` Richard Miller 2010-06-13 13:43 ` Philippe Anel 2010-06-13 14:26 ` Philippe Anel 1 sibling, 2 replies; 39+ messages in thread From: Richard Miller @ 2010-06-13 13:01 UTC (permalink / raw) To: 9fans

Philippe said:

> Again, the change I proposed is not about sleep/wakeup/postnote, but
> because wakeup() is ready()'ing the awakened process while the mach on
> which sleep() runs is still holding a pointer (up) to the awakened
> process and can later (in schedinit()) assume it is safe to access
> (up)->state. Because of this, schedinit() can try to call ready() on
> (up), because (up)->state may have been changed to Running by
> a third mach entity.

and I tried to summarize:

> It's a race between a process in sleep() returning to the scheduler on
> cpu A, and the same process being readied and rescheduled on cpu B
> after the wakeup.

But after further study of proc.c, I now believe we were both wrong. A process on the ready queue can only be taken off the queue and scheduled by calling dequeueproc(), which contains this:

	/*
	 * p->mach==0 only when process state is saved
	 */
	if(p == 0 || p->mach){
		unlock(runq);
		return nil;
	}

So the process p can only be scheduled (i.e. p->state set to Running) if p->mach==nil. The only place p->mach gets set to nil is in schedinit(), *after* the test for p->state==Running. This seems to mean there isn't a race after all, and Philippe's thought experiment is impossible. Am I missing something?
* Re: [9fans] 9vx, kproc and *double sleep* 2010-06-13 13:01 ` Richard Miller @ 2010-06-13 13:43 ` Philippe Anel 2010-06-13 14:26 ` Philippe Anel 1 sibling, 0 replies; 39+ messages in thread From: Philippe Anel @ 2010-06-13 13:43 UTC (permalink / raw) To: Fans of the OS Plan 9 from Bell Labs How do you understand the following logs ? At 14302, we see that proc 49 is running on Mach 6 and has the *double sleep* bug. (gdb) proc 49 proc text=*io* proc=0x2aaabb4d5b40 state=8 kstack=0x2aaaed968010-0x2aaaeda68010 Proc 49 is a kproc *io* in state 8: Wakeme. At 14268, you'll see that proc 49 was about to go sched. Between 14268 and 142ff, proc 49 has been scheduled by Mach 1 at 142bf. 14302: m= 6 pc=40be22 sp=2aaaeda67f08 up=49:8 xp=49:8 r= 9c40f0 a= 0 # proc: sleep: ** double sleep ** 14301: m= 6 pc=40be22 sp=2aaaeda67f08 up=49:8 xp=0 :0 r= 9c40f0 a= 0 # proc: sleep: lock r 14300: m= 6 pc=40be22 sp=2aaaeda67f08 up=49:8 xp=0 :0 r= 9c40f0 a= 0 # proc: sleep 142ff: m= 6 pc=40be22 sp=2aaaeda67f08 up=49:8 xp=0 :0 r= 9c40f0 a= 0 # proc: sleep: done 142fe: m= 5 pc=44e1a9 sp= 53c04020 up=0 :0 xp=0 :0 r= 0 a= 0 # kproc: runproc 142fd: m= 5 pc=44df26 sp= 53c04070 up=0 :0 xp=8 :2 r= 0 a= 0 # proc: sched: calling runproc 142fc: m= 5 pc=44df26 sp= 53c04070 up=0 :0 xp=8 :2 r= 0 a= 0 # proc: sched 142fb: m= 5 pc=40e41b sp= 53c040b0 up=8 :2 xp=0 :0 r= 0 a= 0 # proc: schedinit: disable up 142fa: m= 5 pc=40e41b sp= 53c040b0 up=8 :2 xp=0 :0 r= 0 a= 0 # proc: schedinit: up still active 142f9: m= 1 pc=44e1a9 sp= 50b00020 up=0 :0 xp=0 :0 r= 0 a= 0 # kproc: runproc 142f8: m= 1 pc=44df26 sp= 50b00070 up=0 :0 xp=49:8 r= 0 a= 0 # proc: sched: calling runproc 142f7: m= 1 pc=44df26 sp= 50b00070 up=0 :0 xp=49:8 r= 0 a= 0 # proc: sched 142f6: m= 1 pc=40e41b sp= 50b000b0 up=49:8 xp=0 :0 r= 0 a= 0 # proc: schedinit: disable up 142f5: m= 1 pc=40e41b sp= 50b000b0 up=49:8 xp=0 :0 r= 0 a= 0 # proc: schedinit: up still active 142f4: m= 1 pc=40e41b sp= 50b000b0 up=49:8 xp=0 :0 r= 0 a= 0 # proc: schedinit: 
start 142f3: m= 1 pc=40be22 sp=2aaaeda67f08 up=49:8 xp=0 :0 r= 9c40f0 a= 0 # proc: sleep: going sched 142f2: m= 1 pc=40be22 sp=2aaaeda67f08 up=49:8 xp=0 :0 r= 9c40f0 a= 0 # proc: sleep: unlock r 142f1: m= 1 pc=40be22 sp=2aaaeda67f08 up=49:8 xp=0 :0 r= 9c40f0 a= 0 # proc: sleep: unlock up 142f0: m= 1 pc=40be22 sp=2aaaeda67f08 up=49:8 xp=0 :0 r= 9c40f0 a= 0 # proc: sleep: sleeping 142ef: m= 1 pc=40be22 sp=2aaaeda67f08 up=49:4 xp=0 :0 r= 9c40f0 a= 2 # proc: sleep: condition 142ee: m= 1 pc=40be22 sp=2aaaeda67f08 up=49:4 xp=0 :0 r= 9c40f0 a= 0 # proc: sleep: lock up 142ed: m= 0 pc=44fd4b sp=2aaaeddeba78 up=66:4 xp=8 :2 r= 0 a= 0 # kproc: ready: newmach 142ec: m= 1 pc=40be22 sp=2aaaeda67f08 up=49:4 xp=0 :0 r= 9c40f0 a= 0 # proc: sleep: lock r 142eb: m= 0 pc=44fd4b sp=2aaaeddeba78 up=66:4 xp=8 :8 r= 0 a= 0 # kproc: ready 142ea: m= 1 pc=40be22 sp=2aaaeda67f08 up=49:4 xp=0 :0 r= 9c40f0 a= 0 # proc: sleep 142e9: m= 1 pc= 0 sp=2aaaeda67f68 up=49:4 xp=0 :0 r= 9c40f0 a= 205 # kserver: wait 142e8: m= 0 pc=40c101 sp=2aaaeddebaa8 up=66:4 xp=8 :8 r= 9a8340 a= 0 # proc: wakeup: ready p 142e7: m= 1 pc=40bfbe sp=2aaaeda67f28 up=49:4 xp=68:2 r=2aaabb7afd08 a= 0 # proc: wakeup: done 142e6: m= 0 pc=40c101 sp=2aaaeddebaa8 up=66:4 xp=8 :8 r= 9a8340 a= 0 # proc: wakeup: lock p 142e5: m= 1 pc=40bfbe sp=2aaaeda67f28 up=49:4 xp=68:2 r=2aaabb7afd08 a= 0 # proc: wakeup: unlock r 142e4: m= 0 pc=40c101 sp=2aaaeddebaa8 up=66:4 xp=0 :0 r= 9a8340 a= 0 # proc: wakeup: lock r 142e3: m= 1 pc=40bfbe sp=2aaaeda67f28 up=49:4 xp=68:2 r=2aaabb7afd08 a= 0 # proc: wakeup: unlock p 142e2: m= 0 pc=40c101 sp=2aaaeddebaa8 up=66:4 xp=0 :0 r= 9a8340 a= 0 # proc: wakeup 142e1: m= 1 pc=40f22e sp=2aaaeda67ed8 up=49:4 xp=0 :0 r= 0 a= 0 # kproc: done 142e0: m= 0 pc= 0 sp=2aaaeddebae8 up=66:4 xp=8 :8 r= 9a8340 a= 1097 # kcall: wakeup 142df: m= 1 pc=40f22e sp=2aaaeda67ed8 up=49:4 xp=0 :0 r= 0 a= 0 # kproc: kick 142de: m= 0 pc=40c88b sp=2aaaeddebae8 up=66:4 xp=0 :0 r= 0 a= 1097 # kcall: call 142dd: m= 1 pc=44fd4b 
sp=2aaaeda67eb8 up=49:4 xp=68:2 r= 0 a= 0 # proc: ready: done 142dc: m= 1 pc=44e9da sp=2aaaeda67e78 up=49:4 xp=68:2 r= 0 a= 0 # proc: queue: done 142db: m= 1 pc=44e9da sp=2aaaeda67e78 up=49:4 xp=68:2 r= 0 a= 0 # proc: queue 142da: m= 0 pc=40c39d sp=2aaaeddeb878 up=66:4 xp=0 :0 r= 0 a= 1095 # kcall: done 142d9: m= 1 pc=44fd4b sp=2aaaeda67eb8 up=49:4 xp=68:8 r= 0 a= 0 # proc: ready 142d8: m= 0 pc=40c39d sp=2aaaeddeb878 up=66:4 xp=0 :0 r= 0 a= 1095 # kcall: release 142d7: m= 1 pc=40bfbe sp=2aaaeda67f28 up=49:4 xp=68:8 r=2aaabb7afd08 a= 0 # proc: wakeup: ready p 142d6: m= 0 pc=40c269 sp=2aaaeddeb818 up=66:4 xp=0 :0 r=2aaabb68bdb8 a= 0 # proc: sleep: done 142d5: m= 1 pc=40bfbe sp=2aaaeda67f28 up=49:4 xp=68:8 r=2aaabb7afd08 a= 0 # proc: wakeup: lock p 142d4: m= 1 pc=40bfbe sp=2aaaeda67f28 up=49:4 xp=0 :0 r=2aaabb7afd08 a= 0 # proc: wakeup: lock r 142d3: m= 1 pc=40bfbe sp=2aaaeda67f28 up=49:4 xp=0 :0 r=2aaabb7afd08 a= 0 # proc: wakeup 142d2: m= 0 pc=40c269 sp=2aaaeddeb818 up=66:4 xp=0 :0 r=2aaabb68bdb8 a= 0 # proc: sleep: awakened 142d1: m= 1 pc= 0 sp=2aaaeda67f68 up=49:4 xp=68:8 r= 9c40f0 a= 1096 # kserver: done 142d0: m= 0 pc=44df26 sp=7fffffffe1d0 up=66:4 xp=68:8 r= 0 a= 0 # proc: sched: switch 142cf: m= 0 pc=44df26 sp=7fffffffe1d0 up=0 :0 xp=66:3 r= 0 a= 0 # proc: sched: found 142ce: m= 0 pc=40f3d2 sp=7fffffffe120 up=0 :0 xp=66:3 r= 0 a= 0 # proc: runproc: done 142cd: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=66:2 r= 0 a= 0 # proc: dequeue: done 142cc: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=66:2 r= 0 a= 0 # proc: dequeue 142cb: m= 0 pc=40f3d2 sp=7fffffffe120 up=0 :0 xp=0 :0 r= 0 a= 0 # proc: runproc 142ca: m= 0 pc=44df26 sp=7fffffffe1d0 up=0 :0 xp=68:8 r= 0 a= 0 # proc: sched: calling runproc 142c9: m= 0 pc=44df26 sp=7fffffffe1d0 up=0 :0 xp=68:8 r= 0 a= 0 # proc: sched 142c8: m= 0 pc=40d2a0 sp=7fffffffe210 up=68:8 xp=0 :0 r= 0 a= 0 # proc: schedinit: disable up 142c7: m= 0 pc=40d2a0 sp=7fffffffe210 up=68:8 xp=0 :0 r= 0 a= 0 # proc: schedinit: up still active 142c6: 
m= 0 pc=40d2a0 sp=7fffffffe210 up=68:8 xp=0 :0 r= 0 a= 0 # proc: schedinit: start 142c5: m= 0 pc=40c269 sp=2aaaedfed808 up=68:8 xp=0 :0 r=2aaabb7afd08 a= 0 # proc: sleep: going sched 142c4: m= 1 pc= 0 sp=2aaaeda67f68 up=49:4 xp=68:8 r= 9c40f0 a= 1096 # kserver: serve 142c3: m= 0 pc=40c269 sp=2aaaedfed808 up=68:8 xp=0 :0 r=2aaabb7afd08 a= 0 # proc: sleep: unlock r 142c2: m= 1 pc=40be22 sp=2aaaeda67f08 up=49:4 xp=0 :0 r= 9c40f0 a= 0 # proc: sleep: done 142c1: m= 0 pc=40c269 sp=2aaaedfed808 up=68:8 xp=0 :0 r=2aaabb7afd08 a= 0 # proc: sleep: unlock up 142c0: m= 1 pc=40be22 sp=2aaaeda67f08 up=49:4 xp=0 :0 r= 9c40f0 a= 0 # proc: sleep: awakened 142bf: m= 1 pc=44df26 sp= 50b00070 up=49:4 xp=8 :8 r= 0 a= 0 # proc: sched: switch 142be: m= 0 pc=40c269 sp=2aaaedfed808 up=68:8 xp=0 :0 r=2aaabb7afd08 a= 0 # proc: sleep: sleeping 142bd: m= 0 pc=40c269 sp=2aaaedfed808 up=68:4 xp=0 :0 r=2aaabb7afd08 a= 2 # proc: sleep: condition 142bc: m= 0 pc=40c269 sp=2aaaedfed808 up=68:4 xp=0 :0 r=2aaabb7afd08 a= 0 # proc: sleep: lock up 142bb: m= 0 pc=40c269 sp=2aaaedfed808 up=68:4 xp=0 :0 r=2aaabb7afd08 a= 0 # proc: sleep: lock r 142ba: m= 1 pc=44df26 sp= 50b00070 up=0 :0 xp=49:2 r= 0 a= 0 # proc: sched: found 142b9: m= 0 pc=40c269 sp=2aaaedfed808 up=68:4 xp=0 :0 r=2aaabb7afd08 a= 0 # proc: sleep 142b8: m= 1 pc=44e1a9 sp= 50b00020 up=0 :0 xp=49:2 r= 0 a= 0 # kproc: runproc: found 142b7: m= 0 pc=40c39d sp=2aaaedfed868 up=68:4 xp=0 :0 r= 0 a= 1096 # kcall: sleep 142b6: m= 0 pc=40c101 sp=2aaaedfed828 up=68:4 xp=49:2 r= 9c40f0 a= 0 # proc: wakeup: done 142b5: m= 1 pc=44e1a9 sp= 50b00020 up=0 :0 xp=49:2 r= 0 a= 0 # kproc: runproc: p removed from kqueue 142b4: m= 1 pc=44e1a9 sp= 50b00020 up=0 :0 xp=49:2 r= 0 a= 0 # kproc: runproc: remove p from kqueue 142b3: m= 0 pc=40c101 sp=2aaaedfed828 up=68:4 xp=49:2 r= 9c40f0 a= 0 # proc: wakeup: unlock r 142b2: m= 1 pc=44e1a9 sp= 50b00020 up=0 :0 xp=0 :0 r= 0 a= 0 # kproc: runproc: psleep: done 142b1: m= 0 pc=40c101 sp=2aaaedfed828 up=68:4 xp=49:2 r= 9c40f0 
a= 0 # proc: wakeup: unlock p 142b0: m= 0 pc=44fd4b sp=2aaaedfed7f8 up=68:4 xp=49:2 r= 0 a= 0 # kproc: ready: done 142af: m= 0 pc=44fd4b sp=2aaaedfed7f8 up=68:4 xp=49:2 r= 0 a= 0 # kproc: ready: pwakeup: done 142ae: m= 5 pc=40e41b sp= 53c040b0 up=8 :8 xp=0 :0 r= 0 a= 0 # proc: schedinit: start 142ad: m= 5 pc=40be22 sp=2aaaed2dff08 up=8 :8 xp=0 :0 r= 9a8340 a= 0 # proc: sleep: going sched 142ac: m= 5 pc=40be22 sp=2aaaed2dff08 up=8 :8 xp=0 :0 r= 9a8340 a= 0 # proc: sleep: unlock r 142ab: m= 5 pc=40be22 sp=2aaaed2dff08 up=8 :8 xp=0 :0 r= 9a8340 a= 0 # proc: sleep: unlock up 142aa: m= 0 pc=44fd4b sp=2aaaedfed7f8 up=68:4 xp=49:2 r= 0 a= 0 # kproc: ready: pwakeup 142a9: m= 0 pc=44fd4b sp=2aaaedfed7f8 up=68:4 xp=49:8 r= 0 a= 0 # kproc: ready 142a8: m= 0 pc=40c101 sp=2aaaedfed828 up=68:4 xp=49:8 r= 9c40f0 a= 0 # proc: wakeup: ready p 142a7: m= 5 pc=40be22 sp=2aaaed2dff08 up=8 :8 xp=0 :0 r= 9a8340 a= 0 # proc: sleep: sleeping 142a6: m= 0 pc=40c101 sp=2aaaedfed828 up=68:4 xp=49:8 r= 9c40f0 a= 0 # proc: wakeup: lock p 142a5: m= 5 pc=40be22 sp=2aaaed2dff08 up=8 :4 xp=0 :0 r= 9a8340 a= 2 # proc: sleep: condition 142a4: m= 0 pc=40c101 sp=2aaaedfed828 up=68:4 xp=0 :0 r= 9c40f0 a= 0 # proc: wakeup: lock r 142a3: m= 5 pc=40be22 sp=2aaaed2dff08 up=8 :4 xp=0 :0 r= 9a8340 a= 0 # proc: sleep: lock up 142a2: m= 0 pc=40c101 sp=2aaaedfed828 up=68:4 xp=0 :0 r= 9c40f0 a= 0 # proc: wakeup 142a1: m= 5 pc=40be22 sp=2aaaed2dff08 up=8 :4 xp=0 :0 r= 9a8340 a= 0 # proc: sleep: lock r 142a0: m= 0 pc= 0 sp=2aaaedfed868 up=68:4 xp=49:8 r= 9c40f0 a= 1096 # kcall: wakeup 1429f: m= 5 pc=40be22 sp=2aaaed2dff08 up=8 :4 xp=0 :0 r= 9a8340 a= 0 # proc: sleep 1429e: m= 5 pc= 0 sp=2aaaed2dff68 up=8 :4 xp=0 :0 r= 9a8340 a= 446 # kserver: wait 1429d: m= 0 pc=40c39d sp=2aaaedfed868 up=68:4 xp=0 :0 r= 0 a= 1096 # kcall: call 1429c: m= 5 pc=40bfbe sp=2aaaed2dff28 up=8 :4 xp=66:2 r=2aaabb68bdb8 a= 0 # proc: wakeup: done 1429b: m= 5 pc=40bfbe sp=2aaaed2dff28 up=8 :4 xp=66:2 r=2aaabb68bdb8 a= 0 # proc: wakeup: unlock 
r 1429a: m= 5 pc=40bfbe sp=2aaaed2dff28 up=8 :4 xp=66:2 r=2aaabb68bdb8 a= 0 # proc: wakeup: unlock p 14299: m= 5 pc=40f22e sp=2aaaed2dfed8 up=8 :4 xp=0 :0 r= 0 a= 0 # kproc: done 14298: m= 5 pc=40f22e sp=2aaaed2dfed8 up=8 :4 xp=0 :0 r= 0 a= 0 # kproc: kick 14297: m= 5 pc=44fd4b sp=2aaaed2dfeb8 up=8 :4 xp=66:2 r= 0 a= 0 # proc: ready: done 14296: m= 5 pc=44e9da sp=2aaaed2dfe78 up=8 :4 xp=66:2 r= 0 a= 0 # proc: queue: done 14295: m= 5 pc=44e9da sp=2aaaed2dfe78 up=8 :4 xp=66:2 r= 0 a= 0 # proc: queue 14294: m= 0 pc=40c39d sp=2aaaedfed868 up=68:4 xp=0 :0 r= 0 a= 1094 # kcall: done 14293: m= 5 pc=44fd4b sp=2aaaed2dfeb8 up=8 :4 xp=66:8 r= 0 a= 0 # proc: ready 14292: m= 5 pc=40bfbe sp=2aaaed2dff28 up=8 :4 xp=66:8 r=2aaabb68bdb8 a= 0 # proc: wakeup: ready p 14291: m= 0 pc=40c39d sp=2aaaedfed868 up=68:4 xp=0 :0 r= 0 a= 1094 # kcall: release 14290: m= 5 pc=40bfbe sp=2aaaed2dff28 up=8 :4 xp=66:8 r=2aaabb68bdb8 a= 0 # proc: wakeup: lock p 1428f: m= 0 pc=40c269 sp=2aaaedfed808 up=68:4 xp=0 :0 r=2aaabb7afd08 a= 0 # proc: sleep: done 1428e: m= 5 pc=40bfbe sp=2aaaed2dff28 up=8 :4 xp=0 :0 r=2aaabb68bdb8 a= 0 # proc: wakeup: lock r 1428d: m= 5 pc=40bfbe sp=2aaaed2dff28 up=8 :4 xp=0 :0 r=2aaabb68bdb8 a= 0 # proc: wakeup 1428c: m= 5 pc= 0 sp=2aaaed2dff68 up=8 :4 xp=66:8 r= 9a8340 a= 1095 # kserver: done 1428b: m= 0 pc=40c269 sp=2aaaedfed808 up=68:4 xp=0 :0 r=2aaabb7afd08 a= 0 # proc: sleep: awakened 1428a: m= 0 pc=44df26 sp=7fffffffe1d0 up=68:4 xp=66:8 r= 0 a= 0 # proc: sched: switch 14289: m= 0 pc=44df26 sp=7fffffffe1d0 up=0 :0 xp=68:3 r= 0 a= 0 # proc: sched: found 14288: m= 0 pc=40f3d2 sp=7fffffffe120 up=0 :0 xp=68:3 r= 0 a= 0 # proc: runproc: done 14287: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=68:2 r= 0 a= 0 # proc: dequeue: done 14286: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=68:2 r= 0 a= 0 # proc: dequeue 14285: m= 0 pc=40f3d2 sp=7fffffffe120 up=0 :0 xp=0 :0 r= 0 a= 0 # proc: runproc 14284: m= 0 pc=44df26 sp=7fffffffe1d0 up=0 :0 xp=66:8 r= 0 a= 0 # proc: sched: calling 
runproc 14283: m= 0 pc=44df26 sp=7fffffffe1d0 up=0 :0 xp=66:8 r= 0 a= 0 # proc: sched 14282: m= 0 pc=40d2a0 sp=7fffffffe210 up=66:8 xp=0 :0 r= 0 a= 0 # proc: schedinit: disable up 14281: m= 0 pc=40d2a0 sp=7fffffffe210 up=66:8 xp=0 :0 r= 0 a= 0 # proc: schedinit: up still active 14280: m= 0 pc=40d2a0 sp=7fffffffe210 up=66:8 xp=0 :0 r= 0 a= 0 # proc: schedinit: start 1427f: m= 5 pc= 0 sp=2aaaed2dff68 up=8 :4 xp=66:8 r= 9a8340 a= 1095 # kserver: serve 1427e: m= 0 pc=40c269 sp=2aaaeddeb818 up=66:8 xp=0 :0 r=2aaabb68bdb8 a= 0 # proc: sleep: going sched 1427d: m= 5 pc=40be22 sp=2aaaed2dff08 up=8 :4 xp=0 :0 r= 9a8340 a= 0 # proc: sleep: done 1427c: m= 0 pc=40c269 sp=2aaaeddeb818 up=66:8 xp=0 :0 r=2aaabb68bdb8 a= 0 # proc: sleep: unlock r 1427b: m= 5 pc=40be22 sp=2aaaed2dff08 up=8 :4 xp=0 :0 r= 9a8340 a= 0 # proc: sleep: awakened 1427a: m= 0 pc=40c269 sp=2aaaeddeb818 up=66:8 xp=0 :0 r=2aaabb68bdb8 a= 0 # proc: sleep: unlock up 14279: m= 5 pc=44df26 sp= 53c04070 up=8 :4 xp=4 :4 r= 0 a= 0 # proc: sched: switch 14278: m= 0 pc=40c269 sp=2aaaeddeb818 up=66:8 xp=0 :0 r=2aaabb68bdb8 a= 0 # proc: sleep: sleeping 14277: m= 0 pc=40c269 sp=2aaaeddeb818 up=66:4 xp=0 :0 r=2aaabb68bdb8 a= 2 # proc: sleep: condition 14276: m= 0 pc=40c269 sp=2aaaeddeb818 up=66:4 xp=0 :0 r=2aaabb68bdb8 a= 0 # proc: sleep: lock up 14275: m= 5 pc=44df26 sp= 53c04070 up=0 :0 xp=8 :2 r= 0 a= 0 # proc: sched: found 14274: m= 0 pc=40c269 sp=2aaaeddeb818 up=66:4 xp=0 :0 r=2aaabb68bdb8 a= 0 # proc: sleep: lock r 14273: m= 5 pc=44e1a9 sp= 53c04020 up=0 :0 xp=8 :2 r= 0 a= 0 # kproc: runproc: found 14272: m= 0 pc=40c269 sp=2aaaeddeb818 up=66:4 xp=0 :0 r=2aaabb68bdb8 a= 0 # proc: sleep 14271: m= 0 pc=40c39d sp=2aaaeddeb878 up=66:4 xp=0 :0 r= 0 a= 1095 # kcall: sleep 14270: m= 5 pc=44e1a9 sp= 53c04020 up=0 :0 xp=8 :2 r= 0 a= 0 # kproc: runproc: p removed from kqueue 1426f: m= 0 pc=40c101 sp=2aaaeddeb838 up=66:4 xp=8 :2 r= 9a8340 a= 0 # proc: wakeup: done 1426e: m= 5 pc=44e1a9 sp= 53c04020 up=0 :0 xp=8 :2 r= 0 a= 0 # 
kproc: runproc: remove p from kqueue 1426d: m= 5 pc=44e1a9 sp= 53c04020 up=0 :0 xp=0 :0 r= 0 a= 0 # kproc: runproc: psleep: done 1426c: m= 0 pc=40c101 sp=2aaaeddeb838 up=66:4 xp=8 :2 r= 9a8340 a= 0 # proc: wakeup: unlock r 1426b: m= 0 pc=40c101 sp=2aaaeddeb838 up=66:4 xp=8 :2 r= 9a8340 a= 0 # proc: wakeup: unlock p 1426a: m= 0 pc=44fd4b sp=2aaaeddeb808 up=66:4 xp=8 :2 r= 0 a= 0 # kproc: ready: done 14269: m= 0 pc=44fd4b sp=2aaaeddeb808 up=66:4 xp=8 :2 r= 0 a= 0 # kproc: ready: pwakeup: done 14268: m= 6 pc=40be22 sp=2aaaeda67f08 up=49:8 xp=0 :0 r= 9c40f0 a= 0 # proc: sleep: going sched 14267: m= 6 pc=40be22 sp=2aaaeda67f08 up=49:8 xp=0 :0 r= 9c40f0 a= 0 # proc: sleep: unlock r 14266: m= 6 pc=40be22 sp=2aaaeda67f08 up=49:8 xp=0 :0 r= 9c40f0 a= 0 # proc: sleep: unlock up 14265: m= 0 pc=44fd4b sp=2aaaeddeb808 up=66:4 xp=8 :2 r= 0 a= 0 # kproc: ready: pwakeup 14264: m= 0 pc=44fd4b sp=2aaaeddeb808 up=66:4 xp=8 :8 r= 0 a= 0 # kproc: ready 14263: m= 6 pc=40be22 sp=2aaaeda67f08 up=49:8 xp=0 :0 r= 9c40f0 a= 0 # proc: sleep: sleeping 14262: m= 0 pc=40c101 sp=2aaaeddeb838 up=66:4 xp=8 :8 r= 9a8340 a= 0 # proc: wakeup: ready p 14261: m= 6 pc=40be22 sp=2aaaeda67f08 up=49:4 xp=0 :0 r= 9c40f0 a= 2 # proc: sleep: condition 14260: m= 0 pc=40c101 sp=2aaaeddeb838 up=66:4 xp=8 :8 r= 9a8340 a= 0 # proc: wakeup: lock p 1425f: m= 6 pc=40be22 sp=2aaaeda67f08 up=49:4 xp=0 :0 r= 9c40f0 a= 0 # proc: sleep: lock up 1425e: m= 0 pc=40c101 sp=2aaaeddeb838 up=66:4 xp=0 :0 r= 9a8340 a= 0 # proc: wakeup: lock r 1425d: m= 6 pc=40be22 sp=2aaaeda67f08 up=49:4 xp=0 :0 r= 9c40f0 a= 0 # proc: sleep: lock r 1425c: m= 0 pc=40c101 sp=2aaaeddeb838 up=66:4 xp=0 :0 r= 9a8340 a= 0 # proc: wakeup 1425b: m= 6 pc=40be22 sp=2aaaeda67f08 up=49:4 xp=0 :0 r= 9c40f0 a= 0 # proc: sleep 1425a: m= 0 pc= 0 sp=2aaaeddeb878 up=66:4 xp=8 :8 r= 9a8340 a= 1095 # kcall: wakeup 14259: m= 6 pc= 0 sp=2aaaeda67f68 up=49:4 xp=0 :0 r= 9c40f0 a= 204 # kserver: wait 14258: m= 0 pc=40c39d sp=2aaaeddeb878 up=66:4 xp=0 :0 r= 0 a= 1095 # 
kcall: call 14257: m= 6 pc=40bfbe sp=2aaaeda67f28 up=49:4 xp=68:2 r=2aaabb7afd08 a= 0 # proc: wakeup: done 14256: m= 6 pc=40bfbe sp=2aaaeda67f28 up=49:4 xp=68:2 r=2aaabb7afd08 a= 0 # proc: wakeup: unlock r 14255: m= 6 pc=40bfbe sp=2aaaeda67f28 up=49:4 xp=68:2 r=2aaabb7afd08 a= 0 # proc: wakeup: unlock p 14254: m= 6 pc=40f22e sp=2aaaeda67ed8 up=49:4 xp=0 :0 r= 0 a= 0 # kproc: done 14253: m= 6 pc=40f22e sp=2aaaeda67ed8 up=49:4 xp=0 :0 r= 0 a= 0 # kproc: kick 14252: m= 6 pc=44fd4b sp=2aaaeda67eb8 up=49:4 xp=68:2 r= 0 a= 0 # proc: ready: done 14251: m= 6 pc=44e9da sp=2aaaeda67e78 up=49:4 xp=68:2 r= 0 a= 0 # proc: queue: done 14250: m= 0 pc=40c39d sp=2aaaeddeb878 up=66:4 xp=0 :0 r= 0 a= 1092 # kcall: done 1424f: m= 6 pc=44e9da sp=2aaaeda67e78 up=49:4 xp=68:2 r= 0 a= 0 # proc: queue 1424e: m= 0 pc=40c39d sp=2aaaeddeb878 up=66:4 xp=0 :0 r= 0 a= 1092 # kcall: release 1424d: m= 0 pc=40c269 sp=2aaaeddeb818 up=66:4 xp=0 :0 r=2aaabb68bdb8 a= 0 # proc: sleep: done 1424c: m= 6 pc=44fd4b sp=2aaaeda67eb8 up=49:4 xp=68:8 r= 0 a= 0 # proc: ready 1424b: m= 6 pc=40bfbe sp=2aaaeda67f28 up=49:4 xp=68:8 r=2aaabb7afd08 a= 0 # proc: wakeup: ready p 1424a: m= 0 pc=40c269 sp=2aaaeddeb818 up=66:4 xp=0 :0 r=2aaabb68bdb8 a= 0 # proc: sleep: awakened 14249: m= 6 pc=40bfbe sp=2aaaeda67f28 up=49:4 xp=68:8 r=2aaabb7afd08 a= 0 # proc: wakeup: lock p 14248: m= 6 pc=40bfbe sp=2aaaeda67f28 up=49:4 xp=0 :0 r=2aaabb7afd08 a= 0 # proc: wakeup: lock r 14247: m= 6 pc=40bfbe sp=2aaaeda67f28 up=49:4 xp=0 :0 r=2aaabb7afd08 a= 0 # proc: wakeup 14246: m= 6 pc= 0 sp=2aaaeda67f68 up=49:4 xp=68:8 r= 9c40f0 a= 1094 # kserver: done 14245: m= 0 pc=44df26 sp=7fffffffe1d0 up=66:4 xp=68:8 r= 0 a= 0 # proc: sched: switch 14244: m= 0 pc=44df26 sp=7fffffffe1d0 up=0 :0 xp=66:3 r= 0 a= 0 # proc: sched: found 14243: m= 0 pc=40f3d2 sp=7fffffffe120 up=0 :0 xp=66:3 r= 0 a= 0 # proc: runproc: done 14242: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=66:2 r= 0 a= 0 # proc: dequeue: done 14241: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 
xp=66:2 r= 0 a= 0 # proc: dequeue 14240: m= 0 pc=40f3d2 sp=7fffffffe120 up=0 :0 xp=0 :0 r= 0 a= 0 # proc: runproc 1423f: m= 6 pc= 0 sp=2aaaeda67f68 up=49:4 xp=68:8 r= 9c40f0 a= 1094 # kserver: serve 1423e: m= 0 pc=44df26 sp=7fffffffe1d0 up=0 :0 xp=68:8 r= 0 a= 0 # proc: sched: calling runproc 1423d: m= 6 pc=40be22 sp=2aaaeda67f08 up=49:4 xp=0 :0 r= 9c40f0 a= 0 # proc: sleep: done 1423c: m= 0 pc=44df26 sp=7fffffffe1d0 up=0 :0 xp=68:8 r= 0 a= 0 # proc: sched 1423b: m= 6 pc=40be22 sp=2aaaeda67f08 up=49:4 xp=0 :0 r= 9c40f0 a= 0 # proc: sleep: awakened 1423a: m= 0 pc=40d2a0 sp=7fffffffe210 up=68:8 xp=0 :0 r= 0 a= 0 # proc: schedinit: disable up 14239: m= 6 pc=44df26 sp= 54b05070 up=49:4 xp=8 :8 r= 0 a= 0 # proc: sched: switch 14238: m= 0 pc=40d2a0 sp=7fffffffe210 up=68:8 xp=0 :0 r= 0 a= 0 # proc: schedinit: up still active 14237: m= 0 pc=40d2a0 sp=7fffffffe210 up=68:8 xp=0 :0 r= 0 a= 0 # proc: schedinit: start 14236: m= 0 pc=40c269 sp=2aaaedfed808 up=68:8 xp=0 :0 r=2aaabb7afd08 a= 0 # proc: sleep: going sched 14235: m= 0 pc=40c269 sp=2aaaedfed808 up=68:8 xp=0 :0 r=2aaabb7afd08 a= 0 # proc: sleep: unlock r 14234: m= 6 pc=44df26 sp= 54b05070 up=0 :0 xp=49:2 r= 0 a= 0 # proc: sched: found 14233: m= 6 pc=44e1a9 sp= 54b05020 up=0 :0 xp=49:2 r= 0 a= 0 # kproc: runproc: found 14232: m= 0 pc=40c269 sp=2aaaedfed808 up=68:8 xp=0 :0 r=2aaabb7afd08 a= 0 # proc: sleep: unlock up 14231: m= 6 pc=44e1a9 sp= 54b05020 up=0 :0 xp=49:2 r= 0 a= 0 # kproc: runproc: p removed from kqueue 14230: m= 6 pc=44e1a9 sp= 54b05020 up=0 :0 xp=49:2 r= 0 a= 0 # kproc: runproc: remove p from kqueue 1422f: m= 6 pc=44e1a9 sp= 54b05020 up=0 :0 xp=0 :0 r= 0 a= 0 # kproc: runproc: psleep: done 1422e: m= 0 pc=40c269 sp=2aaaedfed808 up=68:8 xp=0 :0 r=2aaabb7afd08 a= 0 # proc: sleep: sleeping 1422d: m= 0 pc=40c269 sp=2aaaedfed808 up=68:4 xp=0 :0 r=2aaabb7afd08 a= 2 # proc: sleep: condition 1422c: m= 0 pc=40c269 sp=2aaaedfed808 up=68:4 xp=0 :0 r=2aaabb7afd08 a= 0 # proc: sleep: lock up 1422b: m= 0 pc=40c269 
sp=2aaaedfed808 up=68:4 xp=0 :0 r=2aaabb7afd08 a= 0 # proc: sleep: lock r 1422a: m= 0 pc=40c269 sp=2aaaedfed808 up=68:4 xp=0 :0 r=2aaabb7afd08 a= 0 # proc: sleep 14229: m= 0 pc=40c39d sp=2aaaedfed868 up=68:4 xp=0 :0 r= 0 a= 1094 # kcall: sleep 14228: m= 0 pc=40c101 sp=2aaaedfed828 up=68:4 xp=49:2 r= 9c40f0 a= 0 # proc: wakeup: done 14227: m= 0 pc=40c101 sp=2aaaedfed828 up=68:4 xp=49:2 r= 9c40f0 a= 0 # proc: wakeup: unlock r 14226: m= 0 pc=40c101 sp=2aaaedfed828 up=68:4 xp=49:2 r= 9c40f0 a= 0 # proc: wakeup: unlock p 14225: m= 0 pc=44fd4b sp=2aaaedfed7f8 up=68:4 xp=49:2 r= 0 a= 0 # kproc: ready: done 14224: m= 0 pc=44fd4b sp=2aaaedfed7f8 up=68:4 xp=49:2 r= 0 a= 0 # kproc: ready: pwakeup: done 14223: m= 0 pc=44fd4b sp=2aaaedfed7f8 up=68:4 xp=49:2 r= 0 a= 0 # kproc: ready: pwakeup 14222: m= 0 pc=44fd4b sp=2aaaedfed7f8 up=68:4 xp=49:8 r= 0 a= 0 # kproc: ready 14221: m= 0 pc=40c101 sp=2aaaedfed828 up=68:4 xp=49:8 r= 9c40f0 a= 0 # proc: wakeup: ready p 14220: m= 0 pc=40c101 sp=2aaaedfed828 up=68:4 xp=49:8 r= 9c40f0 a= 0 # proc: wakeup: lock p 1421f: m= 0 pc=40c101 sp=2aaaedfed828 up=68:4 xp=0 :0 r= 9c40f0 a= 0 # proc: wakeup: lock r 1421e: m= 0 pc=40c101 sp=2aaaedfed828 up=68:4 xp=0 :0 r= 9c40f0 a= 0 # proc: wakeup 1421d: m= 0 pc= 0 sp=2aaaedfed868 up=68:4 xp=49:8 r= 9c40f0 a= 1094 # kcall: wakeup 1421c: m= 0 pc=40c39d sp=2aaaedfed868 up=68:4 xp=0 :0 r= 0 a= 1094 # kcall: call 1421b: m= 0 pc=41ea29 sp=2aaaedfed928 up=68:4 xp=0 :0 r= 0 a= 0 # proc: error 1421a: m= 0 pc=41ea29 sp=2aaaedfed928 up=68:4 xp=0 :0 r= 0 a= 0 # proc: error 14219: m= 6 pc=44e1a9 sp= 54b05020 up=0 :0 xp=0 :0 r= 0 a= 0 # kproc: runproc: psleep 14218: m= 6 pc=44e1a9 sp= 54b05020 up=0 :0 xp=0 :0 r= 0 a= 0 # kproc: runproc: search 14217: m= 6 pc=44e1a9 sp= 54b05020 up=0 :0 xp=0 :0 r= 0 a= 0 # kproc: runproc 14216: m= 6 pc=44df26 sp= 54b05070 up=0 :0 xp=8 :8 r= 0 a= 0 # proc: sched: calling runproc 14215: m= 6 pc=44df26 sp= 54b05070 up=0 :0 xp=8 :8 r= 0 a= 0 # proc: sched 14214: m= 6 pc=40e41b sp= 
54b050b0 up=8 :8 xp=0 :0 r= 0 a= 0 # proc: schedinit: disable up 14213: m= 6 pc=40e41b sp= 54b050b0 up=8 :8 xp=0 :0 r= 0 a= 0 # proc: schedinit: up still active 14212: m= 6 pc=40e41b sp= 54b050b0 up=8 :8 xp=0 :0 r= 0 a= 0 # proc: schedinit: start 14211: m= 6 pc=40be22 sp=2aaaed2dff08 up=8 :8 xp=0 :0 r= 9a8340 a= 0 # proc: sleep: going sched 14210: m= 6 pc=40be22 sp=2aaaed2dff08 up=8 :8 xp=0 :0 r= 9a8340 a= 0 # proc: sleep: unlock r 1420f: m= 6 pc=40be22 sp=2aaaed2dff08 up=8 :8 xp=0 :0 r= 9a8340 a= 0 # proc: sleep: unlock up 1420e: m= 6 pc=40be22 sp=2aaaed2dff08 up=8 :8 xp=0 :0 r= 9a8340 a= 0 # proc: sleep: sleeping 1420d: m= 6 pc=40be22 sp=2aaaed2dff08 up=8 :4 xp=0 :0 r= 9a8340 a= 2 # proc: sleep: condition 1420c: m= 6 pc=40be22 sp=2aaaed2dff08 up=8 :4 xp=0 :0 r= 9a8340 a= 0 # proc: sleep: lock up 1420b: m= 6 pc=40be22 sp=2aaaed2dff08 up=8 :4 xp=0 :0 r= 9a8340 a= 0 # proc: sleep: lock r 1420a: m= 6 pc=40be22 sp=2aaaed2dff08 up=8 :4 xp=0 :0 r= 9a8340 a= 0 # proc: sleep 14209: m= 6 pc= 0 sp=2aaaed2dff68 up=8 :4 xp=0 :0 r= 9a8340 a= 445 # kserver: wait 14208: m= 6 pc=40bfbe sp=2aaaed2dff28 up=8 :4 xp=66:2 r=2aaabb68bdb8 a= 0 # proc: wakeup: done 14207: m= 6 pc=40bfbe sp=2aaaed2dff28 up=8 :4 xp=66:2 r=2aaabb68bdb8 a= 0 # proc: wakeup: unlock r 14206: m= 6 pc=40bfbe sp=2aaaed2dff28 up=8 :4 xp=66:2 r=2aaabb68bdb8 a= 0 # proc: wakeup: unlock p 14205: m= 6 pc=40f22e sp=2aaaed2dfed8 up=8 :4 xp=0 :0 r= 0 a= 0 # kproc: done 14204: m= 6 pc=40f22e sp=2aaaed2dfed8 up=8 :4 xp=0 :0 r= 0 a= 0 # kproc: kick 14203: m= 6 pc=44fd4b sp=2aaaed2dfeb8 up=8 :4 xp=66:2 r= 0 a= 0 # proc: ready: done 14202: m= 6 pc=44e9da sp=2aaaed2dfe78 up=8 :4 xp=66:2 r= 0 a= 0 # proc: queue: done 14201: m= 6 pc=44e9da sp=2aaaed2dfe78 up=8 :4 xp=66:2 r= 0 a= 0 # proc: queue 14200: m= 6 pc=44fd4b sp=2aaaed2dfeb8 up=8 :4 xp=66:8 r= 0 a= 0 # proc: ready 141ff: m= 6 pc=40bfbe sp=2aaaed2dff28 up=8 :4 xp=66:8 r=2aaabb68bdb8 a= 0 # proc: wakeup: ready p 141fe: m= 6 pc=40bfbe sp=2aaaed2dff28 up=8 :4 xp=66:8 
r=2aaabb68bdb8 a= 0 # proc: wakeup: lock p 141fd: m= 6 pc=40bfbe sp=2aaaed2dff28 up=8 :4 xp=0 :0 r=2aaabb68bdb8 a= 0 # proc: wakeup: lock r 141fc: m= 6 pc=40bfbe sp=2aaaed2dff28 up=8 :4 xp=0 :0 r=2aaabb68bdb8 a= 0 # proc: wakeup 141fb: m= 6 pc= 0 sp=2aaaed2dff68 up=8 :4 xp=66:8 r= 9a8340 a= 1092 # kserver: done 141fa: m= 6 pc=40c5e2 sp=2aaaed2dfcd8 up=8 :4 xp=0 :0 r= 0 a= 1093 # kcall: serve 141f9: m= 0 pc=40c5e2 sp=2aaaedfeda98 up=68:4 xp=0 :0 r= 0 a= 1089 # kcall: done 141f8: m= 0 pc=40c5e2 sp=2aaaedfeda98 up=68:4 xp=0 :0 r= 0 a= 1089 # kcall: release 141f7: m= 0 pc=40c269 sp=2aaaedfeda38 up=68:4 xp=0 :0 r=2aaabb7afd08 a= 0 # proc: sleep: done 141f6: m= 6 pc= 0 sp=2aaaed2dff68 up=8 :4 xp=66:8 r= 9a8340 a= 1092 # kserver: serve 141f5: m= 6 pc=40be22 sp=2aaaed2dff08 up=8 :4 xp=0 :0 r= 9a8340 a= 0 # proc: sleep: done 141f4: m= 6 pc=40be22 sp=2aaaed2dff08 up=8 :4 xp=0 :0 r= 9a8340 a= 0 # proc: sleep: awakened 141f3: m= 0 pc=40c269 sp=2aaaedfeda38 up=68:4 xp=0 :0 r=2aaabb7afd08 a= 0 # proc: sleep: awakened 141f2: m= 6 pc=44df26 sp= 54b05070 up=8 :4 xp=49:8 r= 0 a= 0 # proc: sched: switch 141f1: m= 6 pc=44df26 sp= 54b05070 up=0 :0 xp=8 :2 r= 0 a= 0 # proc: sched: found 141f0: m= 6 pc=44e1a9 sp= 54b05020 up=0 :0 xp=8 :2 r= 0 a= 0 # kproc: runproc: found 141ef: m= 6 pc=44e1a9 sp= 54b05020 up=0 :0 xp=8 :2 r= 0 a= 0 # kproc: runproc: p removed from kqueue 141ee: m= 6 pc=44e1a9 sp= 54b05020 up=0 :0 xp=8 :2 r= 0 a= 0 # kproc: runproc: remove p from kqueue 141ed: m= 6 pc=44e1a9 sp= 54b05020 up=0 :0 xp=0 :0 r= 0 a= 0 # kproc: runproc: psleep: done 141ec: m= 0 pc=44df26 sp=7fffffffe1d0 up=68:4 xp=66:8 r= 0 a= 0 # proc: sched: switch 141eb: m= 0 pc=44df26 sp=7fffffffe1d0 up=0 :0 xp=68:3 r= 0 a= 0 # proc: sched: found 141ea: m= 0 pc=40f3d2 sp=7fffffffe120 up=0 :0 xp=68:3 r= 0 a= 0 # proc: runproc: done 141e9: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=68:2 r= 0 a= 0 # proc: dequeue: done 141e8: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=68:2 r= 0 a= 0 # proc: dequeue 141e7: m= 0 
pc=40f3d2 sp=7fffffffe120 up=0 :0 xp=0 :0 r= 0 a= 0 # proc: runproc 141e6: m= 0 pc=44df26 sp=7fffffffe1d0 up=0 :0 xp=66:8 r= 0 a= 0 # proc: sched: calling runproc 141e5: m= 0 pc=44df26 sp=7fffffffe1d0 up=0 :0 xp=66:8 r= 0 a= 0 # proc: sched 141e4: m= 0 pc=40d2a0 sp=7fffffffe210 up=66:8 xp=0 :0 r= 0 a= 0 # proc: schedinit: disable up 141e3: m= 0 pc=40d2a0 sp=7fffffffe210 up=66:8 xp=0 :0 r= 0 a= 0 # proc: schedinit: up still active 141e2: m= 0 pc=40d2a0 sp=7fffffffe210 up=66:8 xp=0 :0 r= 0 a= 0 # proc: schedinit: start 141e1: m= 0 pc=40c269 sp=2aaaeddeb818 up=66:8 xp=0 :0 r=2aaabb68bdb8 a= 0 # proc: sleep: going sched 141e0: m= 0 pc=40c269 sp=2aaaeddeb818 up=66:8 xp=0 :0 r=2aaabb68bdb8 a= 0 # proc: sleep: unlock r 141df: m= 0 pc=40c269 sp=2aaaeddeb818 up=66:8 xp=0 :0 r=2aaabb68bdb8 a= 0 # proc: sleep: unlock up 141de: m= 0 pc=40c269 sp=2aaaeddeb818 up=66:8 xp=0 :0 r=2aaabb68bdb8 a= 0 # proc: sleep: sleeping 141dd: m= 0 pc=40c269 sp=2aaaeddeb818 up=66:4 xp=0 :0 r=2aaabb68bdb8 a= 2 # proc: sleep: condition 141dc: m= 0 pc=40c269 sp=2aaaeddeb818 up=66:4 xp=0 :0 r=2aaabb68bdb8 a= 0 # proc: sleep: lock up 141db: m= 0 pc=40c269 sp=2aaaeddeb818 up=66:4 xp=0 :0 r=2aaabb68bdb8 a= 0 # proc: sleep: lock r 141da: m= 0 pc=40c269 sp=2aaaeddeb818 up=66:4 xp=0 :0 r=2aaabb68bdb8 a= 0 # proc: sleep 141d9: m= 0 pc=40c39d sp=2aaaeddeb878 up=66:4 xp=0 :0 r= 0 a= 1092 # kcall: sleep 141d8: m= 0 pc=40c101 sp=2aaaeddeb838 up=66:4 xp=8 :2 r= 9a8340 a= 0 # proc: wakeup: done 141d7: m= 0 pc=40c101 sp=2aaaeddeb838 up=66:4 xp=8 :2 r= 9a8340 a= 0 # proc: wakeup: unlock r 141d6: m= 0 pc=40c101 sp=2aaaeddeb838 up=66:4 xp=8 :2 r= 9a8340 a= 0 # proc: wakeup: unlock p 141d5: m= 0 pc=44fd4b sp=2aaaeddeb808 up=66:4 xp=8 :2 r= 0 a= 0 # kproc: ready: done 141d4: m= 0 pc=44fd4b sp=2aaaeddeb808 up=66:4 xp=8 :2 r= 0 a= 0 # kproc: ready: pwakeup: done 141d3: m= 0 pc=44fd4b sp=2aaaeddeb808 up=66:4 xp=8 :2 r= 0 a= 0 # kproc: ready: pwakeup 141d2: m= 0 pc=44fd4b sp=2aaaeddeb808 up=66:4 xp=8 :8 r= 0 a= 0 # kproc: 
ready 141d1: m= 0 pc=40c101 sp=2aaaeddeb838 up=66:4 xp=8 :8 r= 9a8340 a= 0 # proc: wakeup: ready p 141d0: m= 0 pc=40c101 sp=2aaaeddeb838 up=66:4 xp=8 :8 r= 9a8340 a= 0 # proc: wakeup: lock p 141cf: m= 0 pc=40c101 sp=2aaaeddeb838 up=66:4 xp=0 :0 r= 9a8340 a= 0 # proc: wakeup: lock r 141ce: m= 0 pc=40c101 sp=2aaaeddeb838 up=66:4 xp=0 :0 r= 9a8340 a= 0 # proc: wakeup 141cd: m= 0 pc= 0 sp=2aaaeddeb878 up=66:4 xp=8 :8 r= 9a8340 a= 1092 # kcall: wakeup 141cc: m= 0 pc=40c39d sp=2aaaeddeb878 up=66:4 xp=0 :0 r= 0 a= 1092 # kcall: call 141cb: m= 0 pc=41ea29 sp=2aaaeddeb938 up=66:4 xp=0 :0 r= 0 a= 0 # proc: error 141ca: m= 0 pc=41ea29 sp=2aaaeddeb938 up=66:4 xp=0 :0 r= 0 a= 0 # proc: error 141c9: m= 0 pc=40c88b sp=2aaaeddebae8 up=66:4 xp=0 :0 r= 0 a= 1088 # kcall: done 141c8: m= 0 pc=40c88b sp=2aaaeddebae8 up=66:4 xp=0 :0 r= 0 a= 1088 # kcall: release 141c7: m= 0 pc=40c269 sp=2aaaeddeba88 up=66:4 xp=0 :0 r=2aaabb68bdb8 a= 0 # proc: sleep: done 141c6: m= 0 pc=40c269 sp=2aaaeddeba88 up=66:4 xp=0 :0 r=2aaabb68bdb8 a= 0 # proc: sleep: awakened 141c5: m= 0 pc=44df26 sp=7fffffffe1d0 up=66:4 xp=67:8 r= 0 a= 0 # proc: sched: switch 141c4: m= 0 pc=44df26 sp=7fffffffe1d0 up=0 :0 xp=66:3 r= 0 a= 0 # proc: sched: found 141c3: m= 0 pc=40f3d2 sp=7fffffffe120 up=0 :0 xp=66:3 r= 0 a= 0 # proc: runproc: done 141c2: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=66:2 r= 0 a= 0 # proc: dequeue: done 141c1: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=66:2 r= 0 a= 0 # proc: dequeue 141c0: m= 0 pc=40f3d2 sp=7fffffffe120 up=0 :0 xp=0 :0 r= 0 a= 0 # proc: runproc 141bf: m= 0 pc=44df26 sp=7fffffffe1d0 up=0 :0 xp=67:8 r= 0 a= 0 # proc: sched: calling runproc 141be: m= 0 pc=44df26 sp=7fffffffe1d0 up=0 :0 xp=67:8 r= 0 a= 0 # proc: sched 141bd: m= 0 pc=40d2a0 sp=7fffffffe210 up=67:8 xp=0 :0 r= 0 a= 0 # proc: schedinit: disable up 141bc: m= 0 pc=40d2a0 sp=7fffffffe210 up=67:8 xp=0 :0 r= 0 a= 0 # proc: schedinit: up still active 141bb: m= 0 pc=40d2a0 sp=7fffffffe210 up=67:8 xp=0 :0 r= 0 a= 0 # proc: schedinit: 
start 141ba: m= 0 pc=40c269 sp=2aaaedeec918 up=67:8 xp=0 :0 r=2aaabb71dd60 a= 0 # proc: sleep: going sched 141b9: m= 0 pc=40c269 sp=2aaaedeec918 up=67:8 xp=0 :0 r=2aaabb71dd60 a= 0 # proc: sleep: unlock r 141b8: m= 0 pc=40c269 sp=2aaaedeec918 up=67:8 xp=0 :0 r=2aaabb71dd60 a= 0 # proc: sleep: unlock up 141b7: m= 0 pc=40c269 sp=2aaaedeec918 up=67:8 xp=0 :0 r=2aaabb71dd60 a= 0 # proc: sleep: sleeping 141b6: m= 0 pc=40c269 sp=2aaaedeec918 up=67:4 xp=0 :0 r=2aaabb71dd60 a= 2 # proc: sleep: condition 141b5: m= 0 pc=40c269 sp=2aaaedeec918 up=67:4 xp=0 :0 r=2aaabb71dd60 a= 0 # proc: sleep: lock up 141b4: m= 0 pc=40c269 sp=2aaaedeec918 up=67:4 xp=0 :0 r=2aaabb71dd60 a= 0 # proc: sleep: lock r 141b3: m= 0 pc=40c269 sp=2aaaedeec918 up=67:4 xp=0 :0 r=2aaabb71dd60 a= 0 # proc: sleep 141b2: m= 0 pc=40c519 sp=2aaaedeec978 up=67:4 xp=0 :0 r= 0 a= 1091 # kcall: sleep 141b1: m= 0 pc=40c101 sp=2aaaedeec938 up=67:4 xp=0 :0 r= 983360 a= 0 # proc: wakeup: done 141b0: m= 0 pc=40c101 sp=2aaaedeec938 up=67:4 xp=0 :0 r= 983360 a= 0 # proc: wakeup: unlock r 141af: m= 0 pc=40c101 sp=2aaaedeec938 up=67:4 xp=0 :0 r= 983360 a= 0 # proc: wakeup: lock r 141ae: m= 0 pc=40c101 sp=2aaaedeec938 up=67:4 xp=0 :0 r= 983360 a= 0 # proc: wakeup 141ad: m= 0 pc= 0 sp=2aaaedeec978 up=67:4 xp=4 :4 r= 983360 a= 1091 # kcall: wakeup 141ac: m= 0 pc=40c519 sp=2aaaedeec978 up=67:4 xp=0 :0 r= 0 a= 1091 # kcall: call 141ab: m= 0 pc=40c5e2 sp=2aaaedeec8a8 up=67:4 xp=0 :0 r= 0 a= 1090 # kcall: done 141aa: m= 0 pc=40c5e2 sp=2aaaedeec8a8 up=67:4 xp=0 :0 r= 0 a= 1090 # kcall: release 141a9: m= 0 pc=40c101 sp=2aaaedeec868 up=67:4 xp=4 :4 r= 983360 a= 0 # proc: wakeup: done 141a8: m= 0 pc=40c101 sp=2aaaedeec868 up=67:4 xp=4 :4 r= 983360 a= 0 # proc: wakeup: unlock r 141a7: m= 0 pc=40c101 sp=2aaaedeec868 up=67:4 xp=4 :4 r= 983360 a= 0 # proc: wakeup: unlock p 141a6: m= 0 pc=44fd4b sp=2aaaedeec838 up=67:4 xp=4 :4 r= 0 a= 0 # kproc: ready: done 141a5: m= 6 pc=44e1a9 sp= 54b05020 up=0 :0 xp=0 :0 r= 0 a= 0 # kproc: runproc: 
psleep 141a4: m= 4 pc=40be22 sp=2aaaece5af08 up=4 :4 xp=0 :0 r= 983360 a= 0 # proc: sleep: lock r 141a3: m= 6 pc=44e1a9 sp= 54b05020 up=0 :0 xp=0 :0 r= 0 a= 0 # kproc: runproc: search 141a2: m= 4 pc=40be22 sp=2aaaece5af08 up=4 :4 xp=0 :0 r= 983360 a= 0 # proc: sleep 141a1: m= 4 pc= 0 sp=2aaaece5af68 up=4 :4 xp=0 :0 r= 983360 a= 380 # kserver: wait 141a0: m= 4 pc=40bfbe sp=2aaaece5af28 up=4 :4 xp=0 :0 r=2aaabb71dd60 a= 0 # proc: wakeup: done 1419f: m= 4 pc=40bfbe sp=2aaaece5af28 up=4 :4 xp=0 :0 r=2aaabb71dd60 a= 0 # proc: wakeup: unlock r 1419e: m= 4 pc=40bfbe sp=2aaaece5af28 up=4 :4 xp=0 :0 r=2aaabb71dd60 a= 0 # proc: wakeup: lock r 1419d: m= 4 pc=40bfbe sp=2aaaece5af28 up=4 :4 xp=0 :0 r=2aaabb71dd60 a= 0 # proc: wakeup 1419c: m= 4 pc= 0 sp=2aaaece5af68 up=4 :4 xp=67:4 r= 983360 a= 1090 # kserver: done 1419b: m= 4 pc= 0 sp=2aaaece5af68 up=4 :4 xp=67:4 r= 983360 a= 1090 # kserver: serve 1419a: m= 4 pc=40be22 sp=2aaaece5af08 up=4 :4 xp=0 :0 r= 983360 a= 0 # proc: sleep: done 14199: m= 4 pc=40be22 sp=2aaaece5af08 up=4 :4 xp=0 :0 r= 983360 a= 0 # proc: sleep: awakened 14198: m= 4 pc=44df26 sp= 53003070 up=4 :4 xp=4 :4 r= 0 a= 0 # proc: sched: switch 14197: m= 4 pc=44df26 sp= 53003070 up=0 :0 xp=4 :2 r= 0 a= 0 # proc: sched: found 14196: m= 4 pc=44e1a9 sp= 53003020 up=0 :0 xp=4 :2 r= 0 a= 0 # kproc: runproc: found 14195: m= 4 pc=44e1a9 sp= 53003020 up=0 :0 xp=4 :2 r= 0 a= 0 # kproc: runproc: p removed from kqueue 14194: m= 4 pc=44e1a9 sp= 53003020 up=0 :0 xp=4 :2 r= 0 a= 0 # kproc: runproc: remove p from kqueue 14193: m= 4 pc=44e1a9 sp= 53003020 up=0 :0 xp=0 :0 r= 0 a= 0 # kproc: runproc: psleep: done 14192: m= 0 pc=44fd4b sp=2aaaedeec838 up=67:4 xp=4 :2 r= 0 a= 0 # kproc: ready: pwakeup: done 14191: m= 6 pc=44e1a9 sp= 54b05020 up=0 :0 xp=0 :0 r= 0 a= 0 # kproc: runproc 14190: m= 6 pc=44df26 sp= 54b05070 up=0 :0 xp=49:8 r= 0 a= 0 # proc: sched: calling runproc 1418f: m= 6 pc=44df26 sp= 54b05070 up=0 :0 xp=49:8 r= 0 a= 0 # proc: sched 1418e: m= 6 pc=40e41b sp= 54b050b0 
up=49:8 xp=0 :0 r= 0 a= 0 # proc: schedinit: disable up 1418d: m= 6 pc=40e41b sp= 54b050b0 up=49:8 xp=0 :0 r= 0 a= 0 # proc: schedinit: up still active 1418c: m= 6 pc=40e41b sp= 54b050b0 up=49:8 xp=0 :0 r= 0 a= 0 # proc: schedinit: start 1418b: m= 0 pc=44fd4b sp=2aaaedeec838 up=67:4 xp=4 :2 r= 0 a= 0 # kproc: ready: pwakeup 1418a: m= 6 pc=40be22 sp=2aaaeda67f08 up=49:8 xp=0 :0 r= 9c40f0 a= 0 # proc: sleep: going sched 14189: m= 0 pc=44fd4b sp=2aaaedeec838 up=67:4 xp=4 :8 r= 0 a= 0 # kproc: ready 14188: m= 6 pc=40be22 sp=2aaaeda67f08 up=49:8 xp=0 :0 r= 9c40f0 a= 0 # proc: sleep: unlock r 14187: m= 0 pc=40c101 sp=2aaaedeec868 up=67:4 xp=4 :8 r= 983360 a= 0 # proc: wakeup: ready p 14186: m= 6 pc=40be22 sp=2aaaeda67f08 up=49:8 xp=0 :0 r= 9c40f0 a= 0 # proc: sleep: unlock up 14185: m= 0 pc=40c101 sp=2aaaedeec868 up=67:4 xp=4 :8 r= 983360 a= 0 # proc: wakeup: lock p 14184: m= 0 pc=40c101 sp=2aaaedeec868 up=67:4 xp=0 :0 r= 983360 a= 0 # proc: wakeup: lock r 14183: m= 0 pc=40c101 sp=2aaaedeec868 up=67:4 xp=0 :0 r= 983360 a= 0 # proc: wakeup 14182: m= 6 pc=40be22 sp=2aaaeda67f08 up=49:8 xp=0 :0 r= 9c40f0 a= 0 # proc: sleep: sleeping 14181: m= 0 pc= 0 sp=2aaaedeec8a8 up=67:4 xp=4 :8 r= 983360 a= 1090 # kcall: wakeup 14180: m= 6 pc=40be22 sp=2aaaeda67f08 up=49:4 xp=0 :0 r= 9c40f0 a= 2 # proc: sleep: condition 1417f: m= 0 pc=40c5e2 sp=2aaaedeec8a8 up=67:4 xp=0 :0 r= 0 a= 1090 # kcall: call 1417e: m= 6 pc=40be22 sp=2aaaeda67f08 up=49:4 xp=0 :0 r= 9c40f0 a= 0 # proc: sleep: lock up 1417d: m= 6 pc=40be22 sp=2aaaeda67f08 up=49:4 xp=0 :0 r= 9c40f0 a= 0 # proc: sleep: lock r 1417c: m= 6 pc=40be22 sp=2aaaeda67f08 up=49:4 xp=0 :0 r= 9c40f0 a= 0 # proc: sleep 1417b: m= 6 pc= 0 sp=2aaaeda67f68 up=49:4 xp=0 :0 r= 9c40f0 a= 203 # kserver: wait 1417a: m= 6 pc=40bfbe sp=2aaaeda67f28 up=49:4 xp=68:2 r=2aaabb7afd08 a= 0 # proc: wakeup: done 14179: m= 0 pc=40c39d sp=2aaaedeec868 up=67:4 xp=0 :0 r= 0 a= 1087 # kcall: done 14178: m= 6 pc=40bfbe sp=2aaaeda67f28 up=49:4 xp=68:2 r=2aaabb7afd08 a= 0 
# proc: wakeup: unlock r 14177: m= 0 pc=40c39d sp=2aaaedeec868 up=67:4 xp=0 :0 r= 0 a= 1087 # kcall: release 14176: m= 6 pc=40bfbe sp=2aaaeda67f28 up=49:4 xp=68:2 r=2aaabb7afd08 a= 0 # proc: wakeup: unlock p 14175: m= 0 pc=40c269 sp=2aaaedeec808 up=67:4 xp=0 :0 r=2aaabb71dd60 a= 0 # proc: sleep: done 14174: m= 6 pc=40f22e sp=2aaaeda67ed8 up=49:4 xp=0 :0 r= 0 a= 0 # kproc: done 14173: m= 6 pc=40f22e sp=2aaaeda67ed8 up=49:4 xp=0 :0 r= 0 a= 0 # kproc: kick 14172: m= 6 pc=44fd4b sp=2aaaeda67eb8 up=49:4 xp=68:2 r= 0 a= 0 # proc: ready: done 14171: m= 0 pc=40c269 sp=2aaaedeec808 up=67:4 xp=0 :0 r=2aaabb71dd60 a= 0 # proc: sleep: awakened 14170: m= 6 pc=44e9da sp=2aaaeda67e78 up=49:4 xp=68:2 r= 0 a= 0 # proc: queue: done 1416f: m= 6 pc=44e9da sp=2aaaeda67e78 up=49:4 xp=68:2 r= 0 a= 0 # proc: queue 1416e: m= 6 pc=44fd4b sp=2aaaeda67eb8 up=49:4 xp=68:8 r= 0 a= 0 # proc: ready 1416d: m= 6 pc=40bfbe sp=2aaaeda67f28 up=49:4 xp=68:8 r=2aaabb7afd08 a= 0 # proc: wakeup: ready p 1416c: m= 6 pc=40bfbe sp=2aaaeda67f28 up=49:4 xp=68:8 r=2aaabb7afd08 a= 0 # proc: wakeup: lock p 1416b: m= 6 pc=40bfbe sp=2aaaeda67f28 up=49:4 xp=0 :0 r=2aaabb7afd08 a= 0 # proc: wakeup: lock r 1416a: m= 0 pc=44df26 sp=7fffffffe1d0 up=67:4 xp=68:8 r= 0 a= 0 # proc: sched: switch 14169: m= 6 pc=40bfbe sp=2aaaeda67f28 up=49:4 xp=0 :0 r=2aaabb7afd08 a= 0 # proc: wakeup 14168: m= 6 pc= 0 sp=2aaaeda67f68 up=49:4 xp=68:8 r= 9c40f0 a= 1089 # kserver: done 14167: m= 0 pc=44df26 sp=7fffffffe1d0 up=0 :0 xp=67:3 r= 0 a= 0 # proc: sched: found 14166: m= 0 pc=40f3d2 sp=7fffffffe120 up=0 :0 xp=67:3 r= 0 a= 0 # proc: runproc: done 14165: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=67:2 r= 0 a= 0 # proc: dequeue: done 14164: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=67:2 r= 0 a= 0 # proc: dequeue 14163: m= 6 pc= 0 sp=2aaaeda67f68 up=49:4 xp=68:8 r= 9c40f0 a= 1089 # kserver: serve 14162: m= 6 pc=40be22 sp=2aaaeda67f08 up=49:4 xp=0 :0 r= 9c40f0 a= 0 # proc: sleep: done 14161: m= 6 pc=40be22 sp=2aaaeda67f08 up=49:4 xp=0 :0 r= 
9c40f0 a= 0 # proc: sleep: awakened 14160: m= 0 pc=40f3d2 sp=7fffffffe120 up=0 :0 xp=0 :0 r= 0 a= 0 # proc: runproc 1415f: m= 6 pc=44df26 sp= 54b05070 up=49:4 xp=0 :0 r= 0 a= 0 # proc: sched: switch 1415e: m= 0 pc=44df26 sp=7fffffffe1d0 up=0 :0 xp=68:8 r= 0 a= 0 # proc: sched: calling runproc 1415d: m= 0 pc=44df26 sp=7fffffffe1d0 up=0 :0 xp=68:8 r= 0 a= 0 # proc: sched 1415c: m= 0 pc=40d2a0 sp=7fffffffe210 up=68:8 xp=0 :0 r= 0 a= 0 # proc: schedinit: disable up 1415b: m= 0 pc=40d2a0 sp=7fffffffe210 up=68:8 xp=0 :0 r= 0 a= 0 # proc: schedinit: up still active 1415a: m= 0 pc=40d2a0 sp=7fffffffe210 up=68:8 xp=0 :0 r= 0 a= 0 # proc: schedinit: start 14159: m= 0 pc=40c269 sp=2aaaedfeda38 up=68:8 xp=0 :0 r=2aaabb7afd08 a= 0 # proc: sleep: going sched 14158: m= 6 pc=44df26 sp= 54b05070 up=0 :0 xp=49:2 r= 0 a= 0 # proc: sched: found 14157: m= 0 pc=40c269 sp=2aaaedfeda38 up=68:8 xp=0 :0 r=2aaabb7afd08 a= 0 # proc: sleep: unlock r 14156: m= 6 pc=44e1a9 sp= 54b05020 up=0 :0 xp=49:2 r= 0 a= 0 # kproc: runproc: found 14155: m= 0 pc=40c269 sp=2aaaedfeda38 up=68:8 xp=0 :0 r=2aaabb7afd08 a= 0 # proc: sleep: unlock up 14154: m= 6 pc=44e1a9 sp= 54b05020 up=0 :0 xp=49:2 r= 0 a= 0 # kproc: runproc: p removed from kqueue 14153: m= 6 pc=44e1a9 sp= 54b05020 up=0 :0 xp=49:2 r= 0 a= 0 # kproc: runproc: remove p from kqueue 14152: m= 6 pc=44e1a9 sp= 54b05020 up=0 :0 xp=0 :0 r= 0 a= 0 # kproc: runproc: psleep: done 14151: m= 0 pc=40c269 sp=2aaaedfeda38 up=68:8 xp=0 :0 r=2aaabb7afd08 a= 0 # proc: sleep: sleeping 14150: m= 0 pc=40c269 sp=2aaaedfeda38 up=68:4 xp=0 :0 r=2aaabb7afd08 a= 2 # proc: sleep: condition 1414f: m= 0 pc=40c269 sp=2aaaedfeda38 up=68:4 xp=0 :0 r=2aaabb7afd08 a= 0 # proc: sleep: lock up 1414e: m= 0 pc=40c269 sp=2aaaedfeda38 up=68:4 xp=0 :0 r=2aaabb7afd08 a= 0 # proc: sleep: lock r 1414d: m= 0 pc=40c269 sp=2aaaedfeda38 up=68:4 xp=0 :0 r=2aaabb7afd08 a= 0 # proc: sleep 1414c: m= 0 pc=40c5e2 sp=2aaaedfeda98 up=68:4 xp=0 :0 r= 0 a= 1089 # kcall: sleep 1414b: m= 0 pc=40c101 
sp=2aaaedfeda58 up=68:4 xp=49:2 r= 9c40f0 a= 0 # proc: wakeup: done 1414a: m= 0 pc=40c101 sp=2aaaedfeda58 up=68:4 xp=49:2 r= 9c40f0 a= 0 # proc: wakeup: unlock r 14149: m= 0 pc=40c101 sp=2aaaedfeda58 up=68:4 xp=49:2 r= 9c40f0 a= 0 # proc: wakeup: unlock p 14148: m= 0 pc=44fd4b sp=2aaaedfeda28 up=68:4 xp=49:2 r= 0 a= 0 # kproc: ready: done 14147: m= 0 pc=44fd4b sp=2aaaedfeda28 up=68:4 xp=49:2 r= 0 a= 0 # kproc: ready: pwakeup: done 14146: m= 0 pc=44fd4b sp=2aaaedfeda28 up=68:4 xp=49:2 r= 0 a= 0 # kproc: ready: pwakeup 14145: m= 0 pc=44fd4b sp=2aaaedfeda28 up=68:4 xp=49:8 r= 0 a= 0 # kproc: ready 14144: m= 0 pc=40c101 sp=2aaaedfeda58 up=68:4 xp=49:8 r= 9c40f0 a= 0 # proc: wakeup: ready p 14143: m= 0 pc=40c101 sp=2aaaedfeda58 up=68:4 xp=49:8 r= 9c40f0 a= 0 # proc: wakeup: lock p 14142: m= 0 pc=40c101 sp=2aaaedfeda58 up=68:4 xp=0 :0 r= 9c40f0 a= 0 # proc: wakeup: lock r 14141: m= 0 pc=40c101 sp=2aaaedfeda58 up=68:4 xp=0 :0 r= 9c40f0 a= 0 # proc: wakeup 14140: m= 0 pc= 0 sp=2aaaedfeda98 up=68:4 xp=49:8 r= 9c40f0 a= 1089 # kcall: wakeup 1413f: m= 0 pc=40c5e2 sp=2aaaedfeda98 up=68:4 xp=0 :0 r= 0 a= 1089 # kcall: call 1413e: m= 6 pc=44e1a9 sp= 54b05020 up=0 :0 xp=0 :0 r= 0 a= 0 # kproc: runproc: psleep 1413d: m= 6 pc=44e1a9 sp= 54b05020 up=0 :0 xp=0 :0 r= 0 a= 0 # kproc: runproc: psleep: done 1413c: m= 4 pc=44e1a9 sp= 53003020 up=0 :0 xp=0 :0 r= 0 a= 0 # kproc: runproc: psleep 1413b: m= 4 pc=44e1a9 sp= 53003020 up=0 :0 xp=0 :0 r= 0 a= 0 # kproc: runproc: psleep: done 1413a: m= 5 pc=44e1a9 sp= 53c04020 up=0 :0 xp=0 :0 r= 0 a= 0 # kproc: runproc: psleep 14139: m= 5 pc=44e1a9 sp= 53c04020 up=0 :0 xp=0 :0 r= 0 a= 0 # kproc: runproc: search 14138: m= 1 pc=44e1a9 sp= 50b00020 up=0 :0 xp=0 :0 r= 0 a= 0 # kproc: runproc: psleep 14137: m= 1 pc=44e1a9 sp= 50b00020 up=0 :0 xp=0 :0 r= 0 a= 0 # kproc: runproc: search 14136: m= 1 pc=44e1a9 sp= 50b00020 up=0 :0 xp=0 :0 r= 0 a= 0 # kproc: runproc 14135: m= 1 pc=44df26 sp= 50b00070 up=0 :0 xp=8 :8 r= 0 a= 0 # proc: sched: calling runproc 
14134: m= 1 pc=44df26 sp= 50b00070 up=0 :0 xp=8 :8 r= 0 a= 0 # proc: sched 14133: m= 1 pc=40e41b sp= 50b000b0 up=8 :8 xp=0 :0 r= 0 a= 0 # proc: schedinit: disable up 14132: m= 1 pc=40e41b sp= 50b000b0 up=8 :8 xp=0 :0 r= 0 a= 0 # proc: schedinit: up still active 14131: m= 1 pc=40e41b sp= 50b000b0 up=8 :8 xp=0 :0 r= 0 a= 0 # proc: schedinit: start 14130: m= 1 pc=40be22 sp=2aaaed2dff08 up=8 :8 xp=0 :0 r= 9a8340 a= 0 # proc: sleep: going sched 1412f: m= 1 pc=40be22 sp=2aaaed2dff08 up=8 :8 xp=0 :0 r= 9a8340 a= 0 # proc: sleep: unlock r 1412e: m= 1 pc=40be22 sp=2aaaed2dff08 up=8 :8 xp=0 :0 r= 9a8340 a= 0 # proc: sleep: unlock up 1412d: m= 1 pc=40be22 sp=2aaaed2dff08 up=8 :8 xp=0 :0 r= 9a8340 a= 0 # proc: sleep: sleeping 1412c: m= 1 pc=40be22 sp=2aaaed2dff08 up=8 :4 xp=0 :0 r= 9a8340 a= 2 # proc: sleep: condition 1412b: m= 1 pc=40be22 sp=2aaaed2dff08 up=8 :4 xp=0 :0 r= 9a8340 a= 0 # proc: sleep: lock up 1412a: m= 1 pc=40be22 sp=2aaaed2dff08 up=8 :4 xp=0 :0 r= 9a8340 a= 0 # proc: sleep: lock r 14129: m= 1 pc=40be22 sp=2aaaed2dff08 up=8 :4 xp=0 :0 r= 9a8340 a= 0 # proc: sleep 14128: m= 1 pc= 0 sp=2aaaed2dff68 up=8 :4 xp=0 :0 r= 9a8340 a= 444 # kserver: wait 14127: m= 1 pc=40bfbe sp=2aaaed2dff28 up=8 :4 xp=66:2 r=2aaabb68bdb8 a= 0 # proc: wakeup: done 14126: m= 1 pc=40bfbe sp=2aaaed2dff28 up=8 :4 xp=66:2 r=2aaabb68bdb8 a= 0 # proc: wakeup: unlock r 14125: m= 1 pc=40bfbe sp=2aaaed2dff28 up=8 :4 xp=66:2 r=2aaabb68bdb8 a= 0 # proc: wakeup: unlock p 14124: m= 1 pc=40f22e sp=2aaaed2dfed8 up=8 :4 xp=0 :0 r= 0 a= 0 # kproc: done 14123: m= 1 pc=40f22e sp=2aaaed2dfed8 up=8 :4 xp=0 :0 r= 0 a= 0 # kproc: kick 14122: m= 1 pc=44fd4b sp=2aaaed2dfeb8 up=8 :4 xp=66:2 r= 0 a= 0 # proc: ready: done 14121: m= 1 pc=44e9da sp=2aaaed2dfe78 up=8 :4 xp=66:2 r= 0 a= 0 # proc: queue: done 14120: m= 1 pc=44e9da sp=2aaaed2dfe78 up=8 :4 xp=66:2 r= 0 a= 0 # proc: queue 1411f: m= 1 pc=44fd4b sp=2aaaed2dfeb8 up=8 :4 xp=66:8 r= 0 a= 0 # proc: ready 1411e: m= 1 pc=40bfbe sp=2aaaed2dff28 up=8 :4 xp=66:8 
r=2aaabb68bdb8 a= 0 # proc: wakeup: ready p 1411d: m= 1 pc=40bfbe sp=2aaaed2dff28 up=8 :4 xp=66:8 r=2aaabb68bdb8 a= 0 # proc: wakeup: lock p 1411c: m= 1 pc=40bfbe sp=2aaaed2dff28 up=8 :4 xp=0 :0 r=2aaabb68bdb8 a= 0 # proc: wakeup: lock r 1411b: m= 1 pc=40bfbe sp=2aaaed2dff28 up=8 :4 xp=0 :0 r=2aaabb68bdb8 a= 0 # proc: wakeup 1411a: m= 1 pc= 0 sp=2aaaed2dff68 up=8 :4 xp=66:8 r= 9a8340 a= 1088 # kserver: done 14119: m= 1 pc= 0 sp=2aaaed2dff68 up=8 :4 xp=66:8 r= 9a8340 a= 1088 # kserver: serve 14118: m= 1 pc=40be22 sp=2aaaed2dff08 up=8 :4 xp=0 :0 r= 9a8340 a= 0 # proc: sleep: done 14117: m= 1 pc=40be22 sp=2aaaed2dff08 up=8 :4 xp=0 :0 r= 9a8340 a= 0 # proc: sleep: awakened 14116: m= 0 pc=40c519 sp=2aaaedfed978 up=68:4 xp=0 :0 r= 0 a= 1086 # kcall: done 14115: m= 1 pc=44df26 sp= 50b00070 up=8 :4 xp=4 :8 r= 0 a= 0 # proc: sched: switch 14114: m= 0 pc=40c519 sp=2aaaedfed978 up=68:4 xp=0 :0 r= 0 a= 1086 # kcall: release 14113: m= 0 pc=40c269 sp=2aaaedfed918 up=68:4 xp=0 :0 r=2aaabb7afd08 a= 0 # proc: sleep: done 14112: m= 0 pc=40c269 sp=2aaaedfed918 up=68:4 xp=0 :0 r=2aaabb7afd08 a= 0 # proc: sleep: awakened 14111: m= 1 pc=44df26 sp= 50b00070 up=0 :0 xp=8 :2 r= 0 a= 0 # proc: sched: found 14110: m= 1 pc=44e1a9 sp= 50b00020 up=0 :0 xp=8 :2 r= 0 a= 0 # kproc: runproc: found 1410f: m= 0 pc=44df26 sp=7fffffffe1d0 up=68:4 xp=66:8 r= 0 a= 0 # proc: sched: switch 1410e: m= 1 pc=44e1a9 sp= 50b00020 up=0 :0 xp=8 :2 r= 0 a= 0 # kproc: runproc: p removed from kqueue 1410d: m= 1 pc=44e1a9 sp= 50b00020 up=0 :0 xp=8 :2 r= 0 a= 0 # kproc: runproc: remove p from kqueue 1410c: m= 1 pc=44e1a9 sp= 50b00020 up=0 :0 xp=0 :0 r= 0 a= 0 # kproc: runproc: search 1410b: m= 1 pc=44e1a9 sp= 50b00020 up=0 :0 xp=0 :0 r= 0 a= 0 # kproc: runproc 1410a: m= 1 pc=44df26 sp= 50b00070 up=0 :0 xp=4 :8 r= 0 a= 0 # proc: sched: calling runproc 14109: m= 1 pc=44df26 sp= 50b00070 up=0 :0 xp=4 :8 r= 0 a= 0 # proc: sched 14108: m= 1 pc=40e41b sp= 50b000b0 up=4 :8 xp=0 :0 r= 0 a= 0 # proc: schedinit: disable up 
14107: m= 0 pc=44df26 sp=7fffffffe1d0 up=0 :0 xp=68:3 r= 0 a= 0 # proc: sched: found 14106: m= 1 pc=40e41b sp= 50b000b0 up=4 :8 xp=0 :0 r= 0 a= 0 # proc: schedinit: up still active 14105: m= 0 pc=40f3d2 sp=7fffffffe120 up=0 :0 xp=68:3 r= 0 a= 0 # proc: runproc: done 14104: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=68:2 r= 0 a= 0 # proc: dequeue: done 14103: m= 1 pc=40e41b sp= 50b000b0 up=4 :8 xp=0 :0 r= 0 a= 0 # proc: schedinit: start 14102: m= 1 pc=40be22 sp=2aaaece5af08 up=4 :8 xp=0 :0 r= 983360 a= 0 # proc: sleep: going sched 14101: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=68:2 r= 0 a= 0 # proc: dequeue 14100: m= 1 pc=40be22 sp=2aaaece5af08 up=4 :8 xp=0 :0 r= 983360 a= 0 # proc: sleep: unlock r 140ff: m= 1 pc=40be22 sp=2aaaece5af08 up=4 :8 xp=0 :0 r= 983360 a= 0 # proc: sleep: unlock up 140fe: m= 0 pc=40f3d2 sp=7fffffffe120 up=0 :0 xp=0 :0 r= 0 a= 0 # proc: runproc 140fd: m= 0 pc=44df26 sp=7fffffffe1d0 up=0 :0 xp=66:8 r= 0 a= 0 # proc: sched: calling runproc 140fc: m= 1 pc=40be22 sp=2aaaece5af08 up=4 :8 xp=0 :0 r= 983360 a= 0 # proc: sleep: sleeping 140fb: m= 0 pc=44df26 sp=7fffffffe1d0 up=0 :0 xp=66:8 r= 0 a= 0 # proc: sched 140fa: m= 1 pc=40be22 sp=2aaaece5af08 up=4 :4 xp=0 :0 r= 983360 a= 2 # proc: sleep: condition 140f9: m= 0 pc=40d2a0 sp=7fffffffe210 up=66:8 xp=0 :0 r= 0 a= 0 # proc: schedinit: disable up 140f8: m= 1 pc=40be22 sp=2aaaece5af08 up=4 :4 xp=0 :0 r= 983360 a= 0 # proc: sleep: lock up 140f7: m= 1 pc=40be22 sp=2aaaece5af08 up=4 :4 xp=0 :0 r= 983360 a= 0 # proc: sleep: lock r 140f6: m= 1 pc=40be22 sp=2aaaece5af08 up=4 :4 xp=0 :0 r= 983360 a= 0 # proc: sleep 140f5: m= 0 pc=40d2a0 sp=7fffffffe210 up=66:8 xp=0 :0 r= 0 a= 0 # proc: schedinit: up still active 140f4: m= 1 pc= 0 sp=2aaaece5af68 up=4 :4 xp=0 :0 r= 983360 a= 379 # kserver: wait 140f3: m= 0 pc=40d2a0 sp=7fffffffe210 up=66:8 xp=0 :0 r= 0 a= 0 # proc: schedinit: start 140f2: m= 1 pc=40bfbe sp=2aaaece5af28 up=4 :4 xp=67:2 r=2aaabb71dd60 a= 0 # proc: wakeup: done 140f1: m= 0 pc=40c269 
sp=2aaaeddeba88 up=66:8 xp=0 :0 r=2aaabb68bdb8 a= 0 # proc: sleep: going sched 140f0: m= 0 pc=40c269 sp=2aaaeddeba88 up=66:8 xp=0 :0 r=2aaabb68bdb8 a= 0 # proc: sleep: unlock r 140ef: m= 1 pc=40bfbe sp=2aaaece5af28 up=4 :4 xp=67:2 r=2aaabb71dd60 a= 0 # proc: wakeup: unlock r 140ee: m= 0 pc=40c269 sp=2aaaeddeba88 up=66:8 xp=0 :0 r=2aaabb68bdb8 a= 0 # proc: sleep: unlock up 140ed: m= 1 pc=40bfbe sp=2aaaece5af28 up=4 :4 xp=67:2 r=2aaabb71dd60 a= 0 # proc: wakeup: unlock p 140ec: m= 1 pc=40f22e sp=2aaaece5aed8 up=4 :4 xp=0 :0 r= 0 a= 0 # kproc: done 140eb: m= 1 pc=40f22e sp=2aaaece5aed8 up=4 :4 xp=0 :0 r= 0 a= 0 # kproc: kick 140ea: m= 0 pc=40c269 sp=2aaaeddeba88 up=66:8 xp=0 :0 r=2aaabb68bdb8 a= 0 # proc: sleep: sleeping 140e9: m= 1 pc=44fd4b sp=2aaaece5aeb8 up=4 :4 xp=67:2 r= 0 a= 0 # proc: ready: done 140e8: m= 0 pc=40c269 sp=2aaaeddeba88 up=66:4 xp=0 :0 r=2aaabb68bdb8 a= 2 # proc: sleep: condition 140e7: m= 1 pc=44e9da sp=2aaaece5ae78 up=4 :4 xp=67:2 r= 0 a= 0 # proc: queue: done 140e6: m= 0 pc=40c269 sp=2aaaeddeba88 up=66:4 xp=0 :0 r=2aaabb68bdb8 a= 0 # proc: sleep: lock up 140e5: m= 1 pc=44e9da sp=2aaaece5ae78 up=4 :4 xp=67:2 r= 0 a= 0 # proc: queue 140e4: m= 0 pc=40c269 sp=2aaaeddeba88 up=66:4 xp=0 :0 r=2aaabb68bdb8 a= 0 # proc: sleep: lock r 140e3: m= 0 pc=40c269 sp=2aaaeddeba88 up=66:4 xp=0 :0 r=2aaabb68bdb8 a= 0 # proc: sleep 140e2: m= 0 pc=40c88b sp=2aaaeddebae8 up=66:4 xp=0 :0 r= 0 a= 1088 # kcall: sleep 140e1: m= 0 pc=40c101 sp=2aaaeddebaa8 up=66:4 xp=8 :2 r= 9a8340 a= 0 # proc: wakeup: done 140e0: m= 1 pc=44fd4b sp=2aaaece5aeb8 up=4 :4 xp=67:8 r= 0 a= 0 # proc: ready 140df: m= 1 pc=40bfbe sp=2aaaece5af28 up=4 :4 xp=67:8 r=2aaabb71dd60 a= 0 # proc: wakeup: ready p 140de: m= 0 pc=40c101 sp=2aaaeddebaa8 up=66:4 xp=8 :2 r= 9a8340 a= 0 # proc: wakeup: unlock r 140dd: m= 0 pc=40c101 sp=2aaaeddebaa8 up=66:4 xp=8 :2 r= 9a8340 a= 0 # proc: wakeup: unlock p 140dc: m= 1 pc=40bfbe sp=2aaaece5af28 up=4 :4 xp=67:8 r=2aaabb71dd60 a= 0 # proc: wakeup: lock p 140db: m= 0 
pc=44fd4b sp=2aaaeddeba78 up=66:4 xp=8 :2 r= 0 a= 0 # kproc: ready: done 140da: m= 1 pc=40bfbe sp=2aaaece5af28 up=4 :4 xp=0 :0 r=2aaabb71dd60 a= 0 # proc: wakeup: lock r 140d9: m= 1 pc=40bfbe sp=2aaaece5af28 up=4 :4 xp=0 :0 r=2aaabb71dd60 a= 0 # proc: wakeup 140d8: m= 0 pc=44fd4b sp=2aaaeddeba78 up=66:4 xp=8 :2 r= 0 a= 0 # kproc: ready: pwakeup: done 140d7: m= 1 pc= 0 sp=2aaaece5af68 up=4 :4 xp=67:8 r= 983360 a= 1087 # kserver: done 140d6: m= 0 pc=44fd4b sp=2aaaeddeba78 up=66:4 xp=8 :2 r= 0 a= 0 # kproc: ready: pwakeup 140d5: m= 0 pc=44fd4b sp=2aaaeddeba78 up=66:4 xp=8 :8 r= 0 a= 0 # kproc: ready 140d4: m= 0 pc=40c101 sp=2aaaeddebaa8 up=66:4 xp=8 :8 r= 9a8340 a= 0 # proc: wakeup: ready p 140d3: m= 0 pc=40c101 sp=2aaaeddebaa8 up=66:4 xp=8 :8 r= 9a8340 a= 0 # proc: wakeup: lock p 140d2: m= 0 pc=40c101 sp=2aaaeddebaa8 up=66:4 xp=0 :0 r= 9a8340 a= 0 # proc: wakeup: lock r 140d1: m= 0 pc=40c101 sp=2aaaeddebaa8 up=66:4 xp=0 :0 r= 9a8340 a= 0 # proc: wakeup 140d0: m= 0 pc= 0 sp=2aaaeddebae8 up=66:4 xp=8 :8 r= 9a8340 a= 1088 # kcall: wakeup 140cf: m= 0 pc=40c88b sp=2aaaeddebae8 up=66:4 xp=0 :0 r= 0 a= 1088 # kcall: call 140ce: m= 0 pc=40c39d sp=2aaaeddeb878 up=66:4 xp=0 :0 r= 0 a= 1085 # kcall: done 140cd: m= 0 pc=40c39d sp=2aaaeddeb878 up=66:4 xp=0 :0 r= 0 a= 1085 # kcall: release 140cc: m= 0 pc=40c269 sp=2aaaeddeb818 up=66:4 xp=0 :0 r=2aaabb68bdb8 a= 0 # proc: sleep: done 140cb: m= 0 pc=40c269 sp=2aaaeddeb818 up=66:4 xp=0 :0 r=2aaabb68bdb8 a= 0 # proc: sleep: awakened 140ca: m= 0 pc=44df26 sp=7fffffffe1d0 up=66:4 xp=67:8 r= 0 a= 0 # proc: sched: switch 140c9: m= 1 pc= 0 sp=2aaaece5af68 up=4 :4 xp=67:8 r= 983360 a= 1087 # kserver: serve 140c8: m= 1 pc=40be22 sp=2aaaece5af08 up=4 :4 xp=0 :0 r= 983360 a= 0 # proc: sleep: done 140c7: m= 1 pc=40be22 sp=2aaaece5af08 up=4 :4 xp=0 :0 r= 983360 a= 0 # proc: sleep: awakened 140c6: m= 1 pc=44df26 sp= 50b00070 up=4 :4 xp=49:8 r= 0 a= 0 # proc: sched: switch 140c5: m= 0 pc=44df26 sp=7fffffffe1d0 up=0 :0 xp=66:3 r= 0 a= 0 # proc: 
sched: found 140c4: m= 0 pc=40f3d2 sp=7fffffffe120 up=0 :0 xp=66:3 r= 0 a= 0 # proc: runproc: done 140c3: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=66:2 r= 0 a= 0 # proc: dequeue: done 140c2: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=66:2 r= 0 a= 0 # proc: dequeue 140c1: m= 1 pc=44df26 sp= 50b00070 up=0 :0 xp=4 :2 r= 0 a= 0 # proc: sched: found 140c0: m= 1 pc=44e1a9 sp= 50b00020 up=0 :0 xp=4 :2 r= 0 a= 0 # kproc: runproc: found 140bf: m= 1 pc=44e1a9 sp= 50b00020 up=0 :0 xp=4 :2 r= 0 a= 0 # kproc: runproc: p removed from kqueue 140be: m= 0 pc=40f3d2 sp=7fffffffe120 up=0 :0 xp=0 :0 r= 0 a= 0 # proc: runproc 140bd: m= 1 pc=44e1a9 sp= 50b00020 up=0 :0 xp=4 :2 r= 0 a= 0 # kproc: runproc: remove p from kqueue 140bc: m= 0 pc=44df26 sp=7fffffffe1d0 up=0 :0 xp=67:8 r= 0 a= 0 # proc: sched: calling runproc 140bb: m= 1 pc=44e1a9 sp= 50b00020 up=0 :0 xp=0 :0 r= 0 a= 0 # kproc: runproc: search 140ba: m= 0 pc=44df26 sp=7fffffffe1d0 up=0 :0 xp=67:8 r= 0 a= 0 # proc: sched 140b9: m= 1 pc=44e1a9 sp= 50b00020 up=0 :0 xp=0 :0 r= 0 a= 0 # kproc: runproc 140b8: m= 1 pc=44df26 sp= 50b00070 up=0 :0 xp=49:8 r= 0 a= 0 # proc: sched: calling runproc 140b7: m= 0 pc=40d2a0 sp=7fffffffe210 up=67:8 xp=0 :0 r= 0 a= 0 # proc: schedinit: disable up 140b6: m= 1 pc=44df26 sp= 50b00070 up=0 :0 xp=49:8 r= 0 a= 0 # proc: sched 140b5: m= 1 pc=40e41b sp= 50b000b0 up=49:8 xp=0 :0 r= 0 a= 0 # proc: schedinit: disable up 140b4: m= 0 pc=40d2a0 sp=7fffffffe210 up=67:8 xp=0 :0 r= 0 a= 0 # proc: schedinit: up still active 140b3: m= 0 pc=40d2a0 sp=7fffffffe210 up=67:8 xp=0 :0 r= 0 a= 0 # proc: schedinit: start 140b2: m= 1 pc=40e41b sp= 50b000b0 up=49:8 xp=0 :0 r= 0 a= 0 # proc: schedinit: up still active 140b1: m= 0 pc=40c269 sp=2aaaedeec808 up=67:8 xp=0 :0 r=2aaabb71dd60 a= 0 # proc: sleep: going sched 140b0: m= 1 pc=40e41b sp= 50b000b0 up=49:8 xp=0 :0 r= 0 a= 0 # proc: schedinit: start 140af: m= 0 pc=40c269 sp=2aaaedeec808 up=67:8 xp=0 :0 r=2aaabb71dd60 a= 0 # proc: sleep: unlock r 140ae: m= 1 pc=40be22 
sp=2aaaeda67f08 up=49:8 xp=0 :0 r= 9c40f0 a= 0 # proc: sleep: going sched 140ad: m= 0 pc=40c269 sp=2aaaedeec808 up=67:8 xp=0 :0 r=2aaabb71dd60 a= 0 # proc: sleep: unlock up 140ac: m= 1 pc=40be22 sp=2aaaeda67f08 up=49:8 xp=0 :0 r= 9c40f0 a= 0 # proc: sleep: unlock r 140ab: m= 1 pc=40be22 sp=2aaaeda67f08 up=49:8 xp=0 :0 r= 9c40f0 a= 0 # proc: sleep: unlock up 140aa: m= 0 pc=40c269 sp=2aaaedeec808 up=67:8 xp=0 :0 r=2aaabb71dd60 a= 0 # proc: sleep: sleeping 140a9: m= 0 pc=40c269 sp=2aaaedeec808 up=67:4 xp=0 :0 r=2aaabb71dd60 a= 2 # proc: sleep: condition 140a8: m= 1 pc=40be22 sp=2aaaeda67f08 up=49:8 xp=0 :0 r= 9c40f0 a= 0 # proc: sleep: sleeping 140a7: m= 0 pc=40c269 sp=2aaaedeec808 up=67:4 xp=0 :0 r=2aaabb71dd60 a= 0 # proc: sleep: lock up 140a6: m= 1 pc=40be22 sp=2aaaeda67f08 up=49:4 xp=0 :0 r= 9c40f0 a= 2 # proc: sleep: condition 140a5: m= 0 pc=40c269 sp=2aaaedeec808 up=67:4 xp=0 :0 r=2aaabb71dd60 a= 0 # proc: sleep: lock r 140a4: m= 1 pc=40be22 sp=2aaaeda67f08 up=49:4 xp=0 :0 r= 9c40f0 a= 0 # proc: sleep: lock up 140a3: m= 0 pc=40c269 sp=2aaaedeec808 up=67:4 xp=0 :0 r=2aaabb71dd60 a= 0 # proc: sleep 140a2: m= 1 pc=40be22 sp=2aaaeda67f08 up=49:4 xp=0 :0 r= 9c40f0 a= 0 # proc: sleep: lock r 140a1: m= 0 pc=40c39d sp=2aaaedeec868 up=67:4 xp=0 :0 r= 0 a= 1087 # kcall: sleep 140a0: m= 1 pc=40be22 sp=2aaaeda67f08 up=49:4 xp=0 :0 r= 9c40f0 a= 0 # proc: sleep 1409f: m= 1 pc= 0 sp=2aaaeda67f68 up=49:4 xp=0 :0 r= 9c40f0 a= 202 # kserver: wait 1409e: m= 0 pc=40c101 sp=2aaaedeec828 up=67:4 xp=4 :2 r= 983360 a= 0 # proc: wakeup: done 1409d: m= 1 pc=40bfbe sp=2aaaeda67f28 up=49:4 xp=68:2 r=2aaabb7afd08 a= 0 # proc: wakeup: done 1409c: m= 0 pc=40c101 sp=2aaaedeec828 up=67:4 xp=4 :2 r= 983360 a= 0 # proc: wakeup: unlock r 1409b: m= 0 pc=40c101 sp=2aaaedeec828 up=67:4 xp=4 :2 r= 983360 a= 0 # proc: wakeup: unlock p 1409a: m= 1 pc=40bfbe sp=2aaaeda67f28 up=49:4 xp=68:2 r=2aaabb7afd08 a= 0 # proc: wakeup: unlock r 14099: m= 0 pc=44fd4b sp=2aaaedeec7f8 up=67:4 xp=4 :2 r= 0 a= 0 # 
kproc: ready: done 14098: m= 1 pc=40bfbe sp=2aaaeda67f28 up=49:4 xp=68:2 r=2aaabb7afd08 a= 0 # proc: wakeup: unlock p 14097: m= 1 pc=40f22e sp=2aaaeda67ed8 up=49:4 xp=0 :0 r= 0 a= 0 # kproc: done 14096: m= 0 pc=44fd4b sp=2aaaedeec7f8 up=67:4 xp=4 :2 r= 0 a= 0 # kproc: ready: pwakeup: done 14095: m= 1 pc=40f22e sp=2aaaeda67ed8 up=49:4 xp=0 :0 r= 0 a= 0 # kproc: kick 14094: m= 1 pc=44fd4b sp=2aaaeda67eb8 up=49:4 xp=68:2 r= 0 a= 0 # proc: ready: done 14093: m= 1 pc=44e9da sp=2aaaeda67e78 up=49:4 xp=68:2 r= 0 a= 0 # proc: queue: done 14092: m= 0 pc=44fd4b sp=2aaaedeec7f8 up=67:4 xp=4 :2 r= 0 a= 0 # kproc: ready: pwakeup 14091: m= 1 pc=44e9da sp=2aaaeda67e78 up=49:4 xp=68:2 r= 0 a= 0 # proc: queue 14090: m= 0 pc=44fd4b sp=2aaaedeec7f8 up=67:4 xp=4 :8 r= 0 a= 0 # kproc: ready 1408f: m= 0 pc=40c101 sp=2aaaedeec828 up=67:4 xp=4 :8 r= 983360 a= 0 # proc: wakeup: ready p 1408e: m= 0 pc=40c101 sp=2aaaedeec828 up=67:4 xp=4 :8 r= 983360 a= 0 # proc: wakeup: lock p 1408d: m= 0 pc=40c101 sp=2aaaedeec828 up=67:4 xp=0 :0 r= 983360 a= 0 # proc: wakeup: lock r 1408c: m= 1 pc=44fd4b sp=2aaaeda67eb8 up=49:4 xp=68:8 r= 0 a= 0 # proc: ready 1408b: m= 0 pc=40c101 sp=2aaaedeec828 up=67:4 xp=0 :0 r= 983360 a= 0 # proc: wakeup 1408a: m= 1 pc=40bfbe sp=2aaaeda67f28 up=49:4 xp=68:8 r=2aaabb7afd08 a= 0 # proc: wakeup: ready p 14089: m= 0 pc= 0 sp=2aaaedeec868 up=67:4 xp=4 :8 r= 983360 a= 1087 # kcall: wakeup 14088: m= 1 pc=40bfbe sp=2aaaeda67f28 up=49:4 xp=68:8 r=2aaabb7afd08 a= 0 # proc: wakeup: lock p 14087: m= 0 pc=40c39d sp=2aaaedeec868 up=67:4 xp=0 :0 r= 0 a= 1087 # kcall: call 14086: m= 1 pc=40bfbe sp=2aaaeda67f28 up=49:4 xp=0 :0 r=2aaabb7afd08 a= 0 # proc: wakeup: lock r 14085: m= 1 pc=40bfbe sp=2aaaeda67f28 up=49:4 xp=0 :0 r=2aaabb7afd08 a= 0 # proc: wakeup 14084: m= 1 pc= 0 sp=2aaaeda67f68 up=49:4 xp=68:8 r= 9c40f0 a= 1086 # kserver: done 14083: m= 0 pc=40c39d sp=2aaaedeec868 up=67:4 xp=0 :0 r= 0 a= 1084 # kcall: done 14082: m= 0 pc=40c39d sp=2aaaedeec868 up=67:4 xp=0 :0 r= 0 a= 1084 # 
kcall: release 14081: m= 0 pc=40c269 sp=2aaaedeec808 up=67:4 xp=0 :0 r=2aaabb71dd60 a= 0 # proc: sleep: done 14080: m= 0 pc=40c269 sp=2aaaedeec808 up=67:4 xp=0 :0 r=2aaabb71dd60 a= 0 # proc: sleep: awakened 1407f: m= 0 pc=44df26 sp=7fffffffe1d0 up=67:4 xp=68:8 r= 0 a= 0 # proc: sched: switch 1407e: m= 1 pc= 0 sp=2aaaeda67f68 up=49:4 xp=68:8 r= 9c40f0 a= 1086 # kserver: serve 1407d: m= 1 pc=40be22 sp=2aaaeda67f08 up=49:4 xp=0 :0 r= 9c40f0 a= 0 # proc: sleep: done 1407c: m= 0 pc=44df26 sp=7fffffffe1d0 up=0 :0 xp=67:3 r= 0 a= 0 # proc: sched: found 1407b: m= 1 pc=40be22 sp=2aaaeda67f08 up=49:4 xp=0 :0 r= 9c40f0 a= 0 # proc: sleep: awakened 1407a: m= 0 pc=40f3d2 sp=7fffffffe120 up=0 :0 xp=67:3 r= 0 a= 0 # proc: runproc: done 14079: m= 1 pc=44df26 sp= 50b00070 up=49:4 xp=8 :8 r= 0 a= 0 # proc: sched: switch 14078: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=67:2 r= 0 a= 0 # proc: dequeue: done 14077: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=67:2 r= 0 a= 0 # proc: dequeue 14076: m= 0 pc=40f3d2 sp=7fffffffe120 up=0 :0 xp=0 :0 r= 0 a= 0 # proc: runproc 14075: m= 0 pc=44df26 sp=7fffffffe1d0 up=0 :0 xp=68:8 r= 0 a= 0 # proc: sched: calling runproc 14074: m= 1 pc=44df26 sp= 50b00070 up=0 :0 xp=49:2 r= 0 a= 0 # proc: sched: found 14073: m= 0 pc=44df26 sp=7fffffffe1d0 up=0 :0 xp=68:8 r= 0 a= 0 # proc: sched 14072: m= 1 pc=44e1a9 sp= 50b00020 up=0 :0 xp=49:2 r= 0 a= 0 # kproc: runproc: found 14071: m= 0 pc=40d2a0 sp=7fffffffe210 up=68:8 xp=0 :0 r= 0 a= 0 # proc: schedinit: disable up 14070: m= 1 pc=44e1a9 sp= 50b00020 up=0 :0 xp=49:2 r= 0 a= 0 # kproc: runproc: p removed from kqueue 1406f: m= 1 pc=44e1a9 sp= 50b00020 up=0 :0 xp=49:2 r= 0 a= 0 # kproc: runproc: remove p from kqueue 1406e: m= 1 pc=44e1a9 sp= 50b00020 up=0 :0 xp=0 :0 r= 0 a= 0 # kproc: runproc: search 1406d: m= 1 pc=44e1a9 sp= 50b00020 up=0 :0 xp=0 :0 r= 0 a= 0 # kproc: runproc 1406c: m= 1 pc=44df26 sp= 50b00070 up=0 :0 xp=8 :8 r= 0 a= 0 # proc: sched: calling runproc 1406b: m= 1 pc=44df26 sp= 50b00070 up=0 :0 xp=8 
:8 r= 0 a= 0 # proc: sched 1406a: m= 0 pc=40d2a0 sp=7fffffffe210 up=68:8 xp=0 :0 r= 0 a= 0 # proc: schedinit: up still active 14069: m= 1 pc=40e41b sp= 50b000b0 up=8 :8 xp=0 :0 r= 0 a= 0 # proc: schedinit: disable up 14068: m= 0 pc=40d2a0 sp=7fffffffe210 up=68:8 xp=0 :0 r= 0 a= 0 # proc: schedinit: start 14067: m= 0 pc=40c269 sp=2aaaedfed918 up=68:8 xp=0 :0 r=2aaabb7afd08 a= 0 # proc: sleep: going sched 14066: m= 0 pc=40c269 sp=2aaaedfed918 up=68:8 xp=0 :0 r=2aaabb7afd08 a= 0 # proc: sleep: unlock r 14065: m= 0 pc=40c269 sp=2aaaedfed918 up=68:8 xp=0 :0 r=2aaabb7afd08 a= 0 # proc: sleep: unlock up 14064: m= 1 pc=40e41b sp= 50b000b0 up=8 :8 xp=0 :0 r= 0 a= 0 # proc: schedinit: up still active 14063: m= 1 pc=40e41b sp= 50b000b0 up=8 :8 xp=0 :0 r= 0 a= 0 # proc: schedinit: start 14062: m= 1 pc=40be22 sp=2aaaed2dff08 up=8 :8 xp=0 :0 r= 9a8340 a= 0 # proc: sleep: going sched 14061: m= 1 pc=40be22 sp=2aaaed2dff08 up=8 :8 xp=0 :0 r= 9a8340 a= 0 # proc: sleep: unlock r 14060: m= 0 pc=40c269 sp=2aaaedfed918 up=68:8 xp=0 :0 r=2aaabb7afd08 a= 0 # proc: sleep: sleeping 1405f: m= 1 pc=40be22 sp=2aaaed2dff08 up=8 :8 xp=0 :0 r= 9a8340 a= 0 # proc: sleep: unlock up 1405e: m= 0 pc=40c269 sp=2aaaedfed918 up=68:4 xp=0 :0 r=2aaabb7afd08 a= 2 # proc: sleep: condition 1405d: m= 0 pc=40c269 sp=2aaaedfed918 up=68:4 xp=0 :0 r=2aaabb7afd08 a= 0 # proc: sleep: lock up 1405c: m= 0 pc=40c269 sp=2aaaedfed918 up=68:4 xp=0 :0 r=2aaabb7afd08 a= 0 # proc: sleep: lock r 1405b: m= 0 pc=40c269 sp=2aaaedfed918 up=68:4 xp=0 :0 r=2aaabb7afd08 a= 0 # proc: sleep 1405a: m= 1 pc=40be22 sp=2aaaed2dff08 up=8 :8 xp=0 :0 r= 9a8340 a= 0 # proc: sleep: sleeping 14059: m= 0 pc=40c519 sp=2aaaedfed978 up=68:4 xp=0 :0 r= 0 a= 1086 # kcall: sleep 14058: m= 1 pc=40be22 sp=2aaaed2dff08 up=8 :4 xp=0 :0 r= 9a8340 a= 2 # proc: sleep: condition 14057: m= 0 pc=40c101 sp=2aaaedfed938 up=68:4 xp=49:2 r= 9c40f0 a= 0 # proc: wakeup: done 14056: m= 1 pc=40be22 sp=2aaaed2dff08 up=8 :4 xp=0 :0 r= 9a8340 a= 0 # proc: sleep: lock up 
14055: m= 0 pc=40c101 sp=2aaaedfed938 up=68:4 xp=49:2 r= 9c40f0 a= 0 # proc: wakeup: unlock r
14054: m= 1 pc=40be22 sp=2aaaed2dff08 up=8 :4 xp=0 :0 r= 9a8340 a= 0 # proc: sleep: lock r
14053: m= 0 pc=40c101 sp=2aaaedfed938 up=68:4 xp=49:2 r= 9c40f0 a= 0 # proc: wakeup: unlock p
14052: m= 1 pc=40be22 sp=2aaaed2dff08 up=8 :4 xp=0 :0 r= 9a8340 a= 0 # proc: sleep
14051: m= 1 pc= 0 sp=2aaaed2dff68 up=8 :4 xp=0 :0 r= 9a8340 a= 443 # kserver: wait
14050: m= 0 pc=44fd4b sp=2aaaedfed908 up=68:4 xp=49:2 r= 0 a= 0 # kproc: ready: done
1404f: m= 1 pc=40bfbe sp=2aaaed2dff28 up=8 :4 xp=66:2 r=2aaabb68bdb8 a= 0 # proc: wakeup: done
1404e: m= 1 pc=40bfbe sp=2aaaed2dff28 up=8 :4 xp=66:2 r=2aaabb68bdb8 a= 0 # proc: wakeup: unlock r
1404d: m= 1 pc=40bfbe sp=2aaaed2dff28 up=8 :4 xp=66:2 r=2aaabb68bdb8 a= 0 # proc: wakeup: unlock p
1404c: m= 0 pc=44fd4b sp=2aaaedfed908 up=68:4 xp=49:2 r= 0 a= 0 # kproc: ready: pwakeup: done
1404b: m= 1 pc=40f22e sp=2aaaed2dfed8 up=8 :4 xp=0 :0 r= 0 a= 0 # kproc: done
1404a: m= 1 pc=40f22e sp=2aaaed2dfed8 up=8 :4 xp=0 :0 r= 0 a= 0 # kproc: kick
14049: m= 1 pc=44fd4b sp=2aaaed2dfeb8 up=8 :4 xp=66:2 r= 0 a= 0 # proc: ready: done
14048: m= 1 pc=44e9da sp=2aaaed2dfe78 up=8 :4 xp=66:2 r= 0 a= 0 # proc: queue: done
14047: m= 5 pc=44e1a9 sp= 53c04020 up=0 :0 xp=0 :0 r= 0 a= 0 # kproc: runproc
14046: m= 5 pc=44df26 sp= 53c04070 up=0 :0 xp=4 :8 r= 0 a= 0 # proc: sched: calling runproc
14045: m= 0 pc=44fd4b sp=2aaaedfed908 up=68:4 xp=49:2 r= 0 a= 0 # kproc: ready: pwakeup
14044: m= 5 pc=44df26 sp= 53c04070 up=0 :0 xp=4 :8 r= 0 a= 0 # proc: sched
14043: m= 5 pc=40e41b sp= 53c040b0 up=4 :8 xp=0 :0 r= 0 a= 0 # proc: schedinit: disable up
14042: m= 0 pc=44fd4b sp=2aaaedfed908 up=68:4 xp=49:8 r= 0 a= 0 # kproc: ready
14041: m= 0 pc=40c101 sp=2aaaedfed938 up=68:4 xp=49:8 r= 9c40f0 a= 0 # proc: wakeup: ready p
14040: m= 0 pc=40c101 sp=2aaaedfed938 up=68:4 xp=49:8 r= 9c40f0 a= 0 # proc: wakeup: lock p
1403f: m= 5 pc=40e41b sp= 53c040b0 up=4 :8 xp=0 :0 r= 0 a= 0 # proc: schedinit: up still active
1403e: m= 0 pc=40c101 sp=2aaaedfed938 up=68:4 xp=0 :0 r= 9c40f0 a= 0 # proc: wakeup: lock r
1403d: m= 5 pc=40e41b sp= 53c040b0 up=4 :8 xp=0 :0 r= 0 a= 0 # proc: schedinit: start
1403c: m= 0 pc=40c101 sp=2aaaedfed938 up=68:4 xp=0 :0 r= 9c40f0 a= 0 # proc: wakeup
1403b: m= 5 pc=40be22 sp=2aaaece5af08 up=4 :8 xp=0 :0 r= 983360 a= 0 # proc: sleep: going sched
1403a: m= 0 pc= 0 sp=2aaaedfed978 up=68:4 xp=49:8 r= 9c40f0 a= 1086 # kcall: wakeup
14039: m= 5 pc=40be22 sp=2aaaece5af08 up=4 :8 xp=0 :0 r= 983360 a= 0 # proc: sleep: unlock r
14038: m= 0 pc=40c519 sp=2aaaedfed978 up=68:4 xp=0 :0 r= 0 a= 1086 # kcall: call
14037: m= 5 pc=40be22 sp=2aaaece5af08 up=4 :8 xp=0 :0 r= 983360 a= 0 # proc: sleep: unlock up
14036: m= 5 pc=40be22 sp=2aaaece5af08 up=4 :8 xp=0 :0 r= 983360 a= 0 # proc: sleep: sleeping
14035: m= 5 pc=40be22 sp=2aaaece5af08 up=4 :4 xp=0 :0 r= 983360 a= 2 # proc: sleep: condition
14034: m= 5 pc=40be22 sp=2aaaece5af08 up=4 :4 xp=0 :0 r= 983360 a= 0 # proc: sleep: lock up
14033: m= 0 pc=40c5e2 sp=2aaaedfed8a8 up=68:4 xp=0 :0 r= 0 a= 1082 # kcall: done
14032: m= 5 pc=40be22 sp=2aaaece5af08 up=4 :4 xp=0 :0 r= 983360 a= 0 # proc: sleep: lock r
14031: m= 0 pc=40c5e2 sp=2aaaedfed8a8 up=68:4 xp=0 :0 r= 0 a= 1082 # kcall: release
14030: m= 5 pc=40be22 sp=2aaaece5af08 up=4 :4 xp=0 :0 r= 983360 a= 0 # proc: sleep
1402f: m= 0 pc=40c269 sp=2aaaedfed848 up=68:4 xp=0 :0 r=2aaabb7afd08 a= 0 # proc: sleep: done
1402e: m= 5 pc= 0 sp=2aaaece5af68 up=4 :4 xp=0 :0 r= 983360 a= 378 # kserver: wait
1402d: m= 5 pc=40bfbe sp=2aaaece5af28 up=4 :4 xp=67:2 r=2aaabb71dd60 a= 0 # proc: wakeup: done
1402c: m= 5 pc=40bfbe sp=2aaaece5af28 up=4 :4 xp=67:2 r=2aaabb71dd60 a= 0 # proc: wakeup: unlock r
1402b: m= 0 pc=40c269 sp=2aaaedfed848 up=68:4 xp=0 :0 r=2aaabb7afd08 a= 0 # proc: sleep: awakened
1402a: m= 5 pc=40bfbe sp=2aaaece5af28 up=4 :4 xp=67:2 r=2aaabb71dd60 a= 0 # proc: wakeup: unlock p
14029: m= 0 pc=44df26 sp=7fffffffe1d0 up=68:4 xp=66:2 r= 0 a= 0 # proc: sched: switch
14028: m= 0 pc=44df26 sp=7fffffffe1d0 up=0 :0 xp=68:3 r= 0 a= 0 # proc: sched: found
14027: m= 5 pc=40f22e sp=2aaaece5aed8 up=4 :4 xp=0 :0 r= 0 a= 0 # kproc: done
14026: m= 0 pc=40f3d2 sp=7fffffffe120 up=0 :0 xp=68:3 r= 0 a= 0 # proc: runproc: done
14025: m= 5 pc=40f22e sp=2aaaece5aed8 up=4 :4 xp=0 :0 r= 0 a= 0 # kproc: kick
14024: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=68:2 r= 0 a= 0 # proc: dequeue: done
14023: m= 5 pc=44fd4b sp=2aaaece5aeb8 up=4 :4 xp=67:2 r= 0 a= 0 # proc: ready: done
14022: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=68:2 r= 0 a= 0 # proc: dequeue
14021: m= 5 pc=44e9da sp=2aaaece5ae78 up=4 :4 xp=67:2 r= 0 a= 0 # proc: queue: done
14020: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=68:2 r= 0 a= 0 # proc: dequeue: failed
1401f: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=68:2 r= 0 a= 0 # proc: dequeue
1401e: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=68:2 r= 0 a= 0 # proc: dequeue: failed
1401d: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=68:2 r= 0 a= 0 # proc: dequeue
1401c: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=68:2 r= 0 a= 0 # proc: dequeue: failed
1401b: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=68:2 r= 0 a= 0 # proc: dequeue
1401a: m= 1 pc=44e9da sp=2aaaed2dfe78 up=8 :4 xp=66:2 r= 0 a= 0 # proc: queue
14019: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=68:2 r= 0 a= 0 # proc: dequeue: failed
14018: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=68:2 r= 0 a= 0 # proc: dequeue
14017: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=68:2 r= 0 a= 0 # proc: dequeue: failed
14016: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=68:2 r= 0 a= 0 # proc: dequeue
14015: m= 1 pc=44fd4b sp=2aaaed2dfeb8 up=8 :4 xp=66:8 r= 0 a= 0 # proc: ready
14014: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=68:2 r= 0 a= 0 # proc: dequeue: failed
14013: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=68:2 r= 0 a= 0 # proc: dequeue
14012: m= 1 pc=40bfbe sp=2aaaed2dff28 up=8 :4 xp=66:8 r=2aaabb68bdb8 a= 0 # proc: wakeup: ready p
14011: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=68:2 r= 0 a= 0 # proc: dequeue: failed
14010: m= 1 pc=40bfbe sp=2aaaed2dff28 up=8 :4 xp=66:8 r=2aaabb68bdb8 a= 0 # proc: wakeup: lock p
1400f: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=68:2 r= 0 a= 0 # proc: dequeue
1400e: m= 1 pc=40bfbe sp=2aaaed2dff28 up=8 :4 xp=0 :0 r=2aaabb68bdb8 a= 0 # proc: wakeup: lock r
1400d: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=68:2 r= 0 a= 0 # proc: dequeue: failed
1400c: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=68:2 r= 0 a= 0 # proc: dequeue
1400b: m= 1 pc=40bfbe sp=2aaaed2dff28 up=8 :4 xp=0 :0 r=2aaabb68bdb8 a= 0 # proc: wakeup
1400a: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=68:2 r= 0 a= 0 # proc: dequeue: failed
14009: m= 1 pc= 0 sp=2aaaed2dff68 up=8 :4 xp=66:8 r= 9a8340 a= 1085 # kserver: done
14008: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=68:2 r= 0 a= 0 # proc: dequeue
14007: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=68:2 r= 0 a= 0 # proc: dequeue: failed
14006: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=68:2 r= 0 a= 0 # proc: dequeue
14005: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=68:2 r= 0 a= 0 # proc: dequeue: failed
14004: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=68:2 r= 0 a= 0 # proc: dequeue
14003: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=68:2 r= 0 a= 0 # proc: dequeue: failed
14002: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=68:2 r= 0 a= 0 # proc: dequeue
14001: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=68:2 r= 0 a= 0 # proc: dequeue: failed
14000: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=68:2 r= 0 a= 0 # proc: dequeue
13fff: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=68:2 r= 0 a= 0 # proc: dequeue: failed
13ffe: m= 0 pc=44ed36 sp=7fffffffe0d0 up=0 :0 xp=68:2 r= 0 a= 0 # proc: dequeue
13ffd: m= 0 pc=40f3d2 sp=7fffffffe120 up=0 :0 xp=0 :0 r= 0 a= 0 # proc: runproc
13ffc: m= 0 pc=44df26 sp=7fffffffe1d0 up=0 :0 xp=66:8 r= 0 a= 0 # proc: sched: calling runproc
13ffb: m= 0 pc=44df26 sp=7fffffffe1d0 up=0 :0 xp=66:8 r= 0 a= 0 # proc: sched
13ffa: m= 0 pc=40d2a0 sp=7fffffffe210 up=66:8 xp=0 :0 r= 0 a= 0 # proc: schedinit: disable up
13ff9: m= 0 pc=40d2a0 sp=7fffffffe210 up=66:8 xp=0 :0 r= 0 a= 0 # proc: schedinit: up still active
13ff8: m= 0 pc=40d2a0 sp=7fffffffe210 up=66:8 xp=0 :0 r= 0 a= 0 # proc: schedinit: start
13ff7: m= 0 pc=40c269 sp=2aaaeddeb818 up=66:8 xp=0 :0 r=2aaabb68bdb8 a= 0 # proc: sleep: going sched
13ff6: m= 1 pc= 0 sp=2aaaed2dff68 up=8 :4 xp=66:8 r= 9a8340 a= 1085 # kserver: serve
13ff5: m= 0 pc=40c269 sp=2aaaeddeb818 up=66:8 xp=0 :0 r=2aaabb68bdb8 a= 0 # proc: sleep: unlock r
13ff4: m= 1 pc=40be22 sp=2aaaed2dff08 up=8 :4 xp=0 :0 r= 9a8340 a= 0 # proc: sleep: done
13ff3: m= 1 pc=40be22 sp=2aaaed2dff08 up=8 :4 xp=0 :0 r= 9a8340 a= 0 # proc: sleep: awakened
13ff2: m= 0 pc=40c269 sp=2aaaeddeb818 up=66:8 xp=0 :0 r=2aaabb68bdb8 a= 0 # proc: sleep: unlock up
13ff1: m= 1 pc=44df26 sp= 50b00070 up=8 :4 xp=49:8 r= 0 a= 0 # proc: sched: switch
13ff0: m= 0 pc=40c269 sp=2aaaeddeb818 up=66:8 xp=0 :0 r=2aaabb68bdb8 a= 0 # proc: sleep: sleeping
13fef: m= 0 pc=40c269 sp=2aaaeddeb818 up=66:4 xp=0 :0 r=2aaabb68bdb8 a= 2 # proc: sleep: condition
13fee: m= 0 pc=40c269 sp=2aaaeddeb818 up=66:4 xp=0 :0 r=2aaabb68bdb8 a= 0 # proc: sleep: lock up
13fed: m= 1 pc=44df26 sp= 50b00070 up=0 :0 xp=8 :2 r= 0 a= 0 # proc: sched: found
13fec: m= 0 pc=40c269 sp=2aaaeddeb818 up=66:4 xp=0 :0 r=2aaabb68bdb8 a= 0 # proc: sleep: lock r
13feb: m= 1 pc=44e1a9 sp= 50b00020 up=0 :0 xp=8 :2 r= 0 a= 0 # kproc: runproc: found

Richard Miller wrote:
> Philippe said:
>
>> Again, the change I proposed is not about sleep/wakeup/postnote, but
>> because wakeup() is ready()'ing the awakened process while the mach on
>> which sleep() runs is still holding a pointer (up) to the awakened
>> process and can later (in schedinit()) assume it is safe to access
>> (up)->state. Because of this, schedinit() can try to call ready() on
>> (up), because (up)->state may have been changed to Running by
>> a third mach entity.
>
> and I tried to summarize:
>
>> It's a race between a process in sleep() returning to the scheduler on
>> cpu A, and the same process being readied and rescheduled on cpu B
>> after the wakeup.
>
> But after further study of proc.c, I now believe we were both wrong.
>
> A process on the ready queue can only be taken off the queue and
> scheduled by calling dequeueproc(), which contains this:
> 	/*
> 	 * p->mach==0 only when process state is saved
> 	 */
> 	if(p == 0 || p->mach){
> 		unlock(runq);
> 		return nil;
> 	}
> So the process p can only be scheduled (i.e. p->state set to Running)
> if p->mach==nil.
>
> The only place p->mach gets set to nil is in schedinit(), *after*
> the test for p->state==Running.
>
> This seems to mean there isn't a race after all, and Philippe's
> thought experiment is impossible.
>
> Am I missing something?

^ permalink raw reply	[flat|nested] 39+ messages in thread
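Richard's dequeueproc() argument can be replayed in a small standalone sketch. This is not the 9vx source: Proc and Runq below are stripped-down stand-ins, locking is omitted, and dequeue_head() is a hypothetical helper that keeps only the p->mach gate quoted above.

```c
#include <assert.h>
#include <stddef.h>

/* Minimal stand-ins for the kernel types; not the real 9vx declarations. */
typedef struct Proc Proc;
struct Proc {
	Proc	*rnext;
	void	*mach;		/* non-nil while a Mach still holds the proc */
	int	state;
};

typedef struct {
	Proc	*head;
} Runq;

enum { Ready, Running };

/*
 * The gate quoted from dequeueproc(): a proc whose state is not yet
 * fully saved (p->mach != nil) is left on the queue and nil is
 * returned, so it cannot be set Running while another Mach still
 * references it.  (Real code holds the runq lock around this.)
 */
Proc*
dequeue_head(Runq *q)
{
	Proc *p;

	p = q->head;
	if(p == NULL || p->mach)
		return NULL;
	q->head = p->rnext;
	p->state = Running;
	return p;
}
```

With p->mach still set the call returns nil; once the old Mach clears it, the same call hands the proc out and marks it Running.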
* Re: [9fans] 9vx, kproc and *double sleep*
  2010-06-13 13:01 ` Richard Miller
  2010-06-13 13:43 ` Philippe Anel
@ 2010-06-13 14:26 ` Philippe Anel
  2010-06-13 16:20 ` ron minnich
  1 sibling, 1 reply; 39+ messages in thread
From: Philippe Anel @ 2010-06-13 14:26 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs

In fact you're right, and this shows that this can only happen in 9vx.
Indeed, the proc is a kproc and thus is not scheduled by the 9vx/a/proc.c
scheduler, but by the one in 9vx/sched.c ... where dequeueproc() is not
called and p->mach is not checked.

Thank you!

Phil;

Richard Miller wrote:
> Philippe said:
>
>> Again, the change I proposed is not about sleep/wakeup/postnote, but
>> because wakeup() is ready()'ing the awakened process while the mach on
>> which sleep() runs is still holding a pointer (up) to the awakened
>> process and can later (in schedinit()) assume it is safe to access
>> (up)->state. Because of this, schedinit() can try to call ready() on
>> (up), because (up)->state may have been changed to Running by
>> a third mach entity.
>
> and I tried to summarize:
>
>> It's a race between a process in sleep() returning to the scheduler on
>> cpu A, and the same process being readied and rescheduled on cpu B
>> after the wakeup.
>
> But after further study of proc.c, I now believe we were both wrong.
>
> A process on the ready queue can only be taken off the queue and
> scheduled by calling dequeueproc(), which contains this:
> 	/*
> 	 * p->mach==0 only when process state is saved
> 	 */
> 	if(p == 0 || p->mach){
> 		unlock(runq);
> 		return nil;
> 	}
> So the process p can only be scheduled (i.e. p->state set to Running)
> if p->mach==nil.
>
> The only place p->mach gets set to nil is in schedinit(), *after*
> the test for p->state==Running.
>
> This seems to mean there isn't a race after all, and Philippe's
> thought experiment is impossible.
>
> Am I missing something?
> > > > ^ permalink raw reply [flat|nested] 39+ messages in thread
* Re: [9fans] 9vx, kproc and *double sleep*
  2010-06-13 14:26 ` Philippe Anel
@ 2010-06-13 16:20 ` ron minnich
  2010-06-13 16:34 ` Philippe Anel
  2010-06-13 18:03 ` Philippe Anel
  0 siblings, 2 replies; 39+ messages in thread
From: ron minnich @ 2010-06-13 16:20 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs

On Sun, Jun 13, 2010 at 7:26 AM, Philippe Anel <xigh@bouyapop.org> wrote:
> In fact you're right, and this shows that this can only happen in 9vx.
> Indeed, the proc is a kproc and thus is not scheduled by the 9vx/a/proc.c
> scheduler, but by the one in 9vx/sched.c ... where dequeueproc() is not
> called and p->mach is not checked.

So is changing 9vx/sched.c to do these two steps the real fix?

ron

^ permalink raw reply	[flat|nested] 39+ messages in thread
* Re: [9fans] 9vx, kproc and *double sleep*
  2010-06-13 16:20 ` ron minnich
@ 2010-06-13 16:34 ` Philippe Anel
  2010-06-13 17:23 ` Philippe Anel
  2010-06-13 18:03 ` Philippe Anel
  1 sibling, 1 reply; 39+ messages in thread
From: Philippe Anel @ 2010-06-13 16:34 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs

Should be, at least I think so. But I don't even know yet how we can do
this. :)

I think we can use the go trick and postpone the call to ready() while
p->mach != 0. But I don't know where yet.

Phil;

ron minnich wrote:
> On Sun, Jun 13, 2010 at 7:26 AM, Philippe Anel <xigh@bouyapop.org> wrote:
>
>> In fact you're right, and this shows that this can only happen in 9vx.
>> Indeed, the proc is a kproc and thus is not scheduled by the 9vx/a/proc.c
>> scheduler, but by the one in 9vx/sched.c ... where dequeueproc() is not
>> called and p->mach is not checked.
>
> So is changing 9vx/sched.c to do these two steps the real fix?
>
> ron

^ permalink raw reply	[flat|nested] 39+ messages in thread
* Re: [9fans] 9vx, kproc and *double sleep*
  2010-06-13 16:34 ` Philippe Anel
@ 2010-06-13 17:23 ` Philippe Anel
  0 siblings, 0 replies; 39+ messages in thread
From: Philippe Anel @ 2010-06-13 17:23 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs

Hi,

The solution is not that simple. When kprocs go to sleep through a call
to psleep(), a pwakeup() is required. We cannot simply change the
following part of sched.c:^runproc():

	while((p = kprocq.head) == nil){

to:

	while(((p = kprocq.head) == nil) || p->mach){

The a/proc.c scheduler is different, as it goes idle and can be awakened
by an interrupt (or a working kproc in 9vx).

Phil;

>> So is changing 9vx/sched.c to do these two steps the real fix?
>>
>> ron

^ permalink raw reply	[flat|nested] 39+ messages in thread
* Re: [9fans] 9vx, kproc and *double sleep*
  2010-06-13 16:20 ` ron minnich
  2010-06-13 16:34 ` Philippe Anel
@ 2010-06-13 18:03 ` Philippe Anel
  2010-06-14 19:15 ` Charles Forsyth
  2010-06-15  3:36 ` ron minnich
  1 sibling, 2 replies; 39+ messages in thread
From: Philippe Anel @ 2010-06-13 18:03 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs

I tried adding:

	while (p->mach)
		sched_yield();

at the end of sched.c:^runproc(), before the return.

It seems to work well. What do you think?

Phil;

^ permalink raw reply	[flat|nested] 39+ messages in thread
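Philippe's loop makes runproc() wait until the Mach that last ran the proc has let go of it. The idea can be exercised outside the kernel with plain pthreads. This is a sketch, not 9vx code: Proc is a one-field stand-in, old_mach() plays the Mach that is still saving the proc's state, and only the `while(p->mach) sched_yield();` loop itself comes from the post.

```c
#include <assert.h>
#include <pthread.h>
#include <sched.h>
#include <stddef.h>
#include <unistd.h>

/* Hypothetical stand-in for the proc being handed from one Mach to another. */
typedef struct {
	void *volatile mach;	/* non-nil while the old Mach still runs on this proc's stack */
} Proc;

static Proc proc;
static int oldmach;		/* dummy Mach identity */

/* The old Mach: still busy in the scheduler, then releases the proc. */
static void*
old_mach(void *arg)
{
	(void)arg;
	usleep(10000);		/* simulate the window before the state is saved */
	proc.mach = NULL;	/* state saved: the proc may now run elsewhere */
	return NULL;
}

/*
 * The tail of runproc() with Philippe's fix: do not hand the proc
 * out until the Mach that last ran it has cleared p->mach.
 */
int
runproc_wait(void)
{
	pthread_t t;

	proc.mach = &oldmach;
	pthread_create(&t, NULL, old_mach, NULL);
	while(proc.mach)
		sched_yield();	/* spin politely until p->mach == 0 */
	pthread_join(t, NULL);
	return proc.mach == NULL;
}
```

sched_yield() just gives the host scheduler a chance to run the other thread, so the spin is cheap even when the old Mach's host thread has been descheduled at an awkward moment.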
* Re: [9fans] 9vx, kproc and *double sleep*
  2010-06-13 18:03 ` Philippe Anel
@ 2010-06-14 19:15 ` Charles Forsyth
  2010-06-14 19:36 ` Philippe Anel
  2010-06-15  2:57 ` ron minnich
  1 sibling, 2 replies; 39+ messages in thread
From: Charles Forsyth @ 2010-06-14 19:15 UTC (permalink / raw)
To: 9fans

it's interesting that neither of philippe's changes,
however justified, make any visible difference
to 9vx on my ubuntu 10.04LTS system: 9vx still
fails almost immediately. that's consistent with
9vx behaving itself as well as on any other platform
until i changed the linux and/or ubuntu version.
i'll see if i can brave gdb some more to find out.

^ permalink raw reply	[flat|nested] 39+ messages in thread
* Re: [9fans] 9vx, kproc and *double sleep*
  2010-06-14 19:15 ` Charles Forsyth
@ 2010-06-14 19:36 ` Philippe Anel
  2010-06-15  2:57 ` ron minnich
  1 sibling, 0 replies; 39+ messages in thread
From: Philippe Anel @ 2010-06-14 19:36 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs

Charles,

Can you please give us stack information with gdb?

Phil;

On Mon, 2010-06-14 at 20:15 +0100, Charles Forsyth wrote:
> it's interesting that neither of philippe's changes,
> however justified, make any visible difference
> to 9vx on my ubuntu 10.04LTS system: 9vx still
> fails almost immediately. that's consistent with
> 9vx behaving itself as well as on any other platform
> until i changed the linux and/or ubuntu version.
> i'll see if i can brave gdb some more to find out.

^ permalink raw reply	[flat|nested] 39+ messages in thread
* Re: [9fans] 9vx, kproc and *double sleep*
  2010-06-14 19:15 ` Charles Forsyth
  2010-06-14 19:36 ` Philippe Anel
@ 2010-06-15  2:57 ` ron minnich
  1 sibling, 0 replies; 39+ messages in thread
From: ron minnich @ 2010-06-15  2:57 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs

If anyone can help me with some valgrind patches we can see if valgrind
can be useful.

Charles, I am really puzzled about your ubuntu experience. Oh, wait, can
you set LANG=C and try again? Or is it?

BTW when you get the immediate explosion does a window even ever come up
or does it die before that?

ron

^ permalink raw reply	[flat|nested] 39+ messages in thread
* Re: [9fans] 9vx, kproc and *double sleep*
  2010-06-13 18:03 ` Philippe Anel
  2010-06-14 19:15 ` Charles Forsyth
@ 2010-06-15  3:36 ` ron minnich
  1 sibling, 0 replies; 39+ messages in thread
From: ron minnich @ 2010-06-15  3:36 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs

On Sun, Jun 13, 2010 at 11:03 AM, Philippe Anel <xigh@bouyapop.org> wrote:
> I tried adding:
>
> 	while (p->mach)
> 		sched_yield();
>
> at the end of sched.c:^runproc(), before the return.
>
> It seems to work well.
>
> What do you think?

Not sure I understand all the implications but I'll try anything at
this point :-)

I'm trying now with -O3 back on. The -O3 was a red herring.

ron

^ permalink raw reply	[flat|nested] 39+ messages in thread
* Re: [9fans] 9vx, kproc and *double sleep*
  2010-06-11 14:49 ` erik quanstrom
  ` (2 preceding siblings ...)
  2010-06-12  7:02 ` Philippe Anel
@ 2010-06-12 20:15 ` Richard Miller
  2010-06-12 20:30 ` ron minnich
  3 siblings, 1 reply; 39+ messages in thread
From: Richard Miller @ 2010-06-12 20:15 UTC (permalink / raw)
To: 9fans

> - does richard miller's alternate implementation of wakeup
> solve this problem.

No, it doesn't.

^ permalink raw reply	[flat|nested] 39+ messages in thread
* Re: [9fans] 9vx, kproc and *double sleep*
  2010-06-12 20:15 ` Richard Miller
@ 2010-06-12 20:30 ` ron minnich
  2010-06-12 22:15 ` Charles Forsyth
  2010-06-13  9:00 ` Richard Miller
  0 siblings, 2 replies; 39+ messages in thread
From: ron minnich @ 2010-06-12 20:30 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs

There's kind of an interesting similarity here to what I had to deal
with on the Xen port. So, a few random thoughts, probably useless, from
early problems of this sort I've had.

- in Linux parlance, Plan 9 is a "preemptible" kernel. Interrupt
handlers can be interrupted, so to speak. Except for the clock interrupt
handler: you have to check the interrupt number to make sure you are not
pre-empting the clock interrupt handler. Sorry if I'm not saying this
very well. On Xen and lguest I had to make sure of this (I mention this
in the lguest port talk slides)

- splhi -- it's not a true splhi in some sense; is it possible that
some code is sneaking in and running even when you splhi()? Could this
explain it?

- What other aspect of the transition from hardware to
software-in-sandbox might explain a non-preemptible bit of code getting
pre-empted?

OK, back to fixing my 1990 civic :-)

ron

^ permalink raw reply	[flat|nested] 39+ messages in thread
* Re: [9fans] 9vx, kproc and *double sleep*
  2010-06-12 20:30 ` ron minnich
@ 2010-06-12 22:15 ` Charles Forsyth
  2010-06-13  0:04 ` ron minnich
  2010-06-13 13:32 ` erik quanstrom
  1 sibling, 2 replies; 39+ messages in thread
From: Charles Forsyth @ 2010-06-12 22:15 UTC (permalink / raw)
To: 9fans

>in Linux parlance, Plan 9 is a "preemptible" kernel. Interrupt handlers can be interrupted, so to speak.

interrupt handlers are not normally interruptible during the interrupt
processing, but rather at the end (eg, when anyhigher, anyready or
preempted is called). processes running at non-interrupt level in the
kernel can be interrupted unless they are splhi or using ilock.

>Except for the clock interrupt handler

that's only because the clock interrupt handler directly or indirectly
(eg, via sched) calls spllo, and other trap or interrupt handlers could
do that.

^ permalink raw reply	[flat|nested] 39+ messages in thread
* Re: [9fans] 9vx, kproc and *double sleep*
  2010-06-12 22:15 ` Charles Forsyth
@ 2010-06-13  0:04 ` ron minnich
  0 siblings, 0 replies; 39+ messages in thread
From: ron minnich @ 2010-06-13  0:04 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs

On Sat, Jun 12, 2010 at 3:15 PM, Charles Forsyth <forsyth@terzarima.net> wrote:
>>in Linux parlance, Plan 9 is a "preemptible" kernel. Interrupt handlers can be interrupted, so to speak.
>
> interrupt handlers are not normally interruptible during the interrupt
> processing, but rather at the end (eg, when anyhigher, anyready or preempted
> is called).

Yes, I was not careful enough in how I said that. For those who wonder
what I was trying to say, see trap(); note what happens after the isr()
is called and look where preempted() is called.

But, all this said, the problems we're seeing on 9vx are strangely
similar to the ones I had on Xen when code that was not supposed to be
interrupted got interrupted. There may be no real connection at all
however.

ron

^ permalink raw reply	[flat|nested] 39+ messages in thread
* Re: [9fans] 9vx, kproc and *double sleep*
  2010-06-12 22:15 ` Charles Forsyth
  2010-06-13  0:04 ` ron minnich
@ 2010-06-13 13:32 ` erik quanstrom
  2010-06-13 22:34 ` Charles Forsyth
  1 sibling, 1 reply; 39+ messages in thread
From: erik quanstrom @ 2010-06-13 13:32 UTC (permalink / raw)
To: 9fans

> that's only because the clock interrupt handler directly or indirectly (eg,
> via sched) calls spllo, and other trap or interrupt handlers could do that.

wouldn't that be fatal with shared 8259 interrupts?

- erik

^ permalink raw reply	[flat|nested] 39+ messages in thread
* Re: [9fans] 9vx, kproc and *double sleep*
  2010-06-13 13:32 ` erik quanstrom
@ 2010-06-13 22:34 ` Charles Forsyth
  0 siblings, 0 replies; 39+ messages in thread
From: Charles Forsyth @ 2010-06-13 22:34 UTC (permalink / raw)
To: 9fans

[-- Attachment #1: Type: text/plain, Size: 406 bytes --]

i wasn't clear enough what i meant by `interrupt handler'. i didn't mean
the device-specific function, but its caller. in pc/trap.c you see

	if(up && !clockintr)
		preempted();

and on other platforms (perhaps not enough) there are calls to
preempted() in the interrupt handler(s) that despatch the
device-specific ones. preempted calls sched, which can switch to
another process before returning.

[-- Attachment #2: Type: message/rfc822, Size: 1841 bytes --]

From: erik quanstrom <quanstro@quanstro.net>
To: 9fans@9fans.net
Subject: Re: [9fans] 9vx, kproc and *double sleep*
Date: Sun, 13 Jun 2010 09:32:38 -0400
Message-ID: <125aa93ce1c20a7853557a368e77b0ca@kw.quanstro.net>

> that's only because the clock interrupt handler directly or indirectly (eg,
> via sched) calls spllo, and other trap or interrupt handlers could do that.

wouldn't that be fatal with shared 8259 interrupts?

- erik

^ permalink raw reply	[flat|nested] 39+ messages in thread
* Re: [9fans] 9vx, kproc and *double sleep*
  2010-06-12 20:30 ` ron minnich
  2010-06-12 22:15 ` Charles Forsyth
@ 2010-06-13  9:00 ` Richard Miller
  1 sibling, 0 replies; 39+ messages in thread
From: Richard Miller @ 2010-06-13  9:00 UTC (permalink / raw)
To: 9fans

> - splhi -- it's not a true splhi in some sense; is it possible that
> some code is sneaking in and running even when you splhi()? Could this
> explain it?

The error Philippe has found is only indirectly related to splhi().
It's a race between a process in sleep() returning to the scheduler on
cpu A, and the same process being readied and rescheduled on cpu B
after the wakeup.

On native plan 9, A always wins the race because it runs splhi() and
the code path from sleep to schedinit (where up->state==Running is
checked) is shorter than the code path from runproc to the point in
sched where up->state is set to Running. But the fact that this works
is timing-dependent: if cpu A for some reason ran slower than cpu B,
it could lose the race even without being interrupted.

As Philippe explained, in 9vx the cpus are being simulated by threads.
Because these threads are being scheduled by the host operating system,
the virtual cpus can appear to be running at different speeds or to
pause at awkward moments. Even without any "preemption" at the plan 9
level of abstraction, the timing assumption which prevents the
sleep - reschedule race is no longer guaranteed.

^ permalink raw reply	[flat|nested] 39+ messages in thread
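The race Richard describes can be replayed step by step. The sketch below is not kernel code: it serializes the interleaving by hand, in the order where cpu A loses, and it models the 9vx kproc path where nothing checks p->mach before running the proc. Only the state names and the two calls to ready() correspond to the discussion; everything else is a hypothetical miniature.

```c
#include <assert.h>

enum { Wakeme, Ready, Running };

static int state;	/* the shared proc's state word */
static int nqueued;	/* how many times the proc has been put on a run queue */

static void
ready(void)
{
	state = Ready;
	nqueued++;
}

/*
 * Replay the losing interleaving: cpu A is in sleep() and has dropped
 * its locks but not yet reached schedinit(); cpus B and C act inside
 * that window.
 */
int
race_replay(void)
{
	nqueued = 0;

	/* cpu A, sleep(): state set, locks released, about to gotolabel(&m->sched) */
	state = Wakeme;

	/* cpu B, wakeup(): finds r->p still Wakeme and readies it */
	ready();		/* first queueing */

	/* cpu C, 9vx kproc scheduler: runs the proc without checking p->mach */
	state = Running;

	/* cpu A, at last in schedinit(): sees up->state == Running, so it re-readies up */
	if(state == Running)
		ready();	/* second queueing: the double-ready */

	return nqueued;
}
```

The proc ends up on the run queue twice, so two Machs can pick up the same stack — exactly the crash mode described at the top of the thread. On native Plan 9 the dequeueproc() p->mach gate blocks the cpu C step until cpu A has finished saving state, which is why the same interleaving is harmless there.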
* Re: [9fans] 9vx, kproc and *double sleep*
  2010-06-11 14:40 ` ron minnich
  2010-06-11 14:49 ` erik quanstrom
@ 2010-06-11 14:49 ` Philippe Anel
  2010-06-11 14:59 ` ron minnich
  1 sibling, 1 reply; 39+ messages in thread
From: Philippe Anel @ 2010-06-11 14:49 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs

Schedinit() initializes the scheduler label ... to which sleep() goes
when calling gotolabel(&m->sched).

Phil;

ron minnich wrote:
> I'll look but one thing doesn't make sense to me:
>
> On Fri, Jun 11, 2010 at 2:06 PM, Philippe Anel <xigh@bouyapop.org> wrote:
>
>> // xigh: move unlocking to schedinit()
>
> schedinit only runs once and sleep runs all the time. That's the part
> I don't get.
>
> But you might have found something, I sure wish I understood it all better :-)
>
> ron

^ permalink raw reply	[flat|nested] 39+ messages in thread
* Re: [9fans] 9vx, kproc and *double sleep*
  2010-06-11 14:49 ` Philippe Anel
@ 2010-06-11 14:59 ` ron minnich
  0 siblings, 0 replies; 39+ messages in thread
From: ron minnich @ 2010-06-11 14:59 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs

On Fri, Jun 11, 2010 at 2:49 PM, Philippe Anel <xigh@bouyapop.org> wrote:
> Schedinit() initializes the scheduler label ... to which sleep() goes
> when calling gotolabel(&m->sched).
>
> Phil;

yep. Toldja I was not awake yet.

ron

^ permalink raw reply	[flat|nested] 39+ messages in thread
* Re: [9fans] 9vx, kproc and *double sleep*
  2010-06-11 14:06 [9fans] 9vx, kproc and *double sleep* Philippe Anel
  2010-06-11 14:40 ` ron minnich
@ 2010-06-11 15:02 ` ron minnich
  2010-06-11 15:04 ` erik quanstrom
  1 sibling, 1 reply; 39+ messages in thread
From: ron minnich @ 2010-06-11 15:02 UTC (permalink / raw)
To: Fans of the OS Plan 9 from Bell Labs

I'm going to put this change into my hg repo for 9vx and do some
testing; others are welcome to as well.

That's a pretty interesting catch.

ron

^ permalink raw reply	[flat|nested] 39+ messages in thread
* Re: [9fans] 9vx, kproc and *double sleep*
  2010-06-11 15:02 ` ron minnich
@ 2010-06-11 15:04 ` erik quanstrom
  2010-06-11 15:43 ` ron minnich
  0 siblings, 1 reply; 39+ messages in thread
From: erik quanstrom @ 2010-06-11 15:04 UTC (permalink / raw)
To: rminnich, 9fans

On Fri Jun 11 11:03:32 EDT 2010, rminnich@gmail.com wrote:
> I'm going to put this change into my hg repo for 9vx and do some
> testing; others are welcome to as well.
>
> That's a pretty interesting catch.

please wait. we still don't understand this problem
very well. (why does this work on real hardware?)
i'd hate to just swap buggy implementations of sleep/wakeup.

- erik

^ permalink raw reply	[flat|nested] 39+ messages in thread
* Re: [9fans] 9vx, kproc and *double sleep*
  2010-06-11 15:04 ` erik quanstrom
@ 2010-06-11 15:43 ` ron minnich
  0 siblings, 0 replies; 39+ messages in thread
From: ron minnich @ 2010-06-11 15:43 UTC (permalink / raw)
To: erik quanstrom; +Cc: 9fans

On Fri, Jun 11, 2010 at 8:04 AM, erik quanstrom <quanstro@quanstro.net> wrote:
> please wait. we still don't understand this problem
> very well. (why does this work on real hardware?)

all the 9vx failures I have seen are with the kexec threads. This is a
major 9vx change from Plan 9. I do think that there is something in
what Philippe is saying.

ron

^ permalink raw reply	[flat|nested] 39+ messages in thread
end of thread, other threads:[~2010-06-15 3:36 UTC | newest] Thread overview: 39+ messages (download: mbox.gz / follow: Atom feed) -- links below jump to the message on this page -- 2010-06-11 14:06 [9fans] 9vx, kproc and *double sleep* Philippe Anel 2010-06-11 14:40 ` ron minnich 2010-06-11 14:49 ` erik quanstrom 2010-06-11 14:54 ` Philippe Anel 2010-06-11 15:03 ` erik quanstrom 2010-06-11 15:22 ` Philippe Anel 2010-06-11 15:25 ` Philippe Anel 2010-06-11 14:59 ` Philippe Anel 2010-06-11 17:11 ` Bakul Shah 2010-06-11 17:31 ` Philippe Anel 2010-06-11 18:34 ` Bakul Shah 2010-06-11 18:36 ` erik quanstrom 2010-06-11 18:51 ` Philippe Anel 2010-06-12 7:02 ` Philippe Anel 2010-06-12 9:22 ` Philippe Anel 2010-06-12 11:51 ` erik quanstrom 2010-06-13 13:01 ` Richard Miller 2010-06-13 13:43 ` Philippe Anel 2010-06-13 14:26 ` Philippe Anel 2010-06-13 16:20 ` ron minnich 2010-06-13 16:34 ` Philippe Anel 2010-06-13 17:23 ` Philippe Anel 2010-06-13 18:03 ` Philippe Anel 2010-06-14 19:15 ` Charles Forsyth 2010-06-14 19:36 ` Philippe Anel 2010-06-15 2:57 ` ron minnich 2010-06-15 3:36 ` ron minnich 2010-06-12 20:15 ` Richard Miller 2010-06-12 20:30 ` ron minnich 2010-06-12 22:15 ` Charles Forsyth 2010-06-13 0:04 ` ron minnich 2010-06-13 13:32 ` erik quanstrom 2010-06-13 22:34 ` Charles Forsyth 2010-06-13 9:00 ` Richard Miller 2010-06-11 14:49 ` Philippe Anel 2010-06-11 14:59 ` ron minnich 2010-06-11 15:02 ` ron minnich 2010-06-11 15:04 ` erik quanstrom 2010-06-11 15:43 ` ron minnich
This is a public inbox, see mirroring instructions for how to clone and mirror all data and code used for this inbox; as well as URLs for NNTP newsgroup(s).