9front - general discussion about 9front
* [9front] qemu clock/timer issue
@ 2022-02-05  2:04 Michael Forney
  2022-02-05 17:04 ` hiro
  2022-02-05 18:14 ` Nick Owens
  0 siblings, 2 replies; 11+ messages in thread
From: Michael Forney @ 2022-02-05  2:04 UTC (permalink / raw)
  To: 9front

I was recently running 9front in a VM on linux (qemu, with
-enable-kvm), and noticed several weird timing/clock issues. The date
command gives wildly varying results +- 1 minute:

cpu% date
Fri Feb  4 17:39:23 PST 2022
cpu% date
Fri Feb  4 17:39:12 PST 2022
cpu% date
Fri Feb  4 17:40:19 PST 2022
cpu% date
Fri Feb  4 17:39:18 PST 2022

`sleep 1` exits pretty much instantly, stats moves through the entire
width in a second or two, and clicking in scrollbars starts repeating
immediately.

I notice the following messages in my dmesg:
kvm: vcpu 3: requested 38141 ns lapic timer period limited to 200000 ns
kvm: vcpu 1: requested 9449 ns lapic timer period limited to 200000 ns
kvm: vcpu 2: requested 51204 ns lapic timer period limited to 200000 ns
kvm: vcpu 0: requested 74421 ns lapic timer period limited to 200000 ns

I don't remember this problem last time I ran 9front in qemu (around a
year ago), so I looked through commit history and found one commit
that seemed like it might be related:
http://git.9front.org/plan9front/plan9front/a05bab362f66ddd6fa65f2e7cda9eaaa0217ec08/commit.html

To check, I tried setting *nohpet=1 in my plan9.ini, which *seems* to
have solved the problem, though I still see those same messages in
dmesg on the linux host (but now the requested lapic timer period is
always around 100000 ns or greater).
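
For reference, *nohpet=1 is the only change; the bootfile line below is just
an example of a typical plan9.ini entry, shown for context:

bootfile=9pc64
*nohpet=1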

Anyone know what's going on here?


* Re: [9front] qemu clock/timer issue
  2022-02-05  2:04 [9front] qemu clock/timer issue Michael Forney
@ 2022-02-05 17:04 ` hiro
  2022-02-05 18:14 ` Nick Owens
  1 sibling, 0 replies; 11+ messages in thread
From: hiro @ 2022-02-05 17:04 UTC (permalink / raw)
  To: 9front

do you have timesync running twice? check in ps output.

On 2/5/22, Michael Forney <mforney@mforney.org> wrote:
> I was recently running 9front in a VM on linux (qemu, with
> -enable-kvm), and noticed several weird timing/clock issues. The date
> command gives wildly varying results +- 1 minute:
>
> cpu% date
> Fri Feb  4 17:39:23 PST 2022
> cpu% date
> Fri Feb  4 17:39:12 PST 2022
> cpu% date
> Fri Feb  4 17:40:19 PST 2022
> cpu% date
> Fri Feb  4 17:39:18 PST 2022
>
> `sleep 1` exits pretty much instantly, stats moves through the entire
> width in a second or two, and clicking in scrollbars starts repeating
> immediately.
>
> I notice the following messages in my dmesg:
> kvm: vcpu 3: requested 38141 ns lapic timer period limited to 200000 ns
> kvm: vcpu 1: requested 9449 ns lapic timer period limited to 200000 ns
> kvm: vcpu 2: requested 51204 ns lapic timer period limited to 200000 ns
> kvm: vcpu 0: requested 74421 ns lapic timer period limited to 200000 ns
>
> I don't remember this problem last time I ran 9front in qemu (around a
> year ago), so I looked through commit history and found one commit
> that seemed like it might be related:
> http://git.9front.org/plan9front/plan9front/a05bab362f66ddd6fa65f2e7cda9eaaa0217ec08/commit.html
>
> To check, I tried setting *nohpet=1 in my plan9.ini, which *seems* to
> have solved the problem, though I still see those same messages in
> dmesg on the linux host (but now the requested lapic timer period is
> always around 100000 ns or greater).
>
> Anyone know what's going on here?
>


* Re: [9front] qemu clock/timer issue
  2022-02-05  2:04 [9front] qemu clock/timer issue Michael Forney
  2022-02-05 17:04 ` hiro
@ 2022-02-05 18:14 ` Nick Owens
  2022-02-18  3:25   ` Michael Forney
  1 sibling, 1 reply; 11+ messages in thread
From: Nick Owens @ 2022-02-05 18:14 UTC (permalink / raw)
  To: 9front

kvm emulates a local APIC timer which the 9front kernel uses. the
timer period is clamped to a minimum period, and my understanding is
that the purpose is to protect the host from CPU consumption by
guests, but i don't have a citation for that. 9front requests a timer
period shorter than the default minimum, and kvm warns about it.
it may need more investigation as to whether we really need
such short timers.
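
for illustration, the clamp those dmesg lines describe amounts to roughly the
following (a paraphrase of the behaviour, not the actual kvm source; the
200 us default is inferred from the "limited to 200000 ns" messages above):

#include <stdint.h>

/* raise any requested lapic timer period below the module's minimum */
static uint64_t
clamp_lapic_period(uint64_t requested_ns, uint64_t min_timer_period_us)
{
	uint64_t min_ns = min_timer_period_us * 1000;	/* 200 us -> 200000 ns by default */

	if (requested_ns < min_ns) {
		/* this is where kvm logs
		 * "requested X ns lapic timer period limited to Y ns" */
		return min_ns;
	}
	return requested_ns;
}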

in the meantime, you can lower the timer period minimum with

echo "options kvm min_timer_period_us=10" >> /etc/modprobe.d/kvm.conf

which is 10000ns. it won't kill all of the warnings since sometimes
the timer period is slightly shorter, but it will help with the dmesg
spam.

On Sat, Feb 5, 2022 at 7:30 AM Michael Forney <mforney@mforney.org> wrote:
>
> I was recently running 9front in a VM on linux (qemu, with
> -enable-kvm), and noticed several weird timing/clock issues. The date
> command gives wildly varying results +- 1 minute:
>
> cpu% date
> Fri Feb  4 17:39:23 PST 2022
> cpu% date
> Fri Feb  4 17:39:12 PST 2022
> cpu% date
> Fri Feb  4 17:40:19 PST 2022
> cpu% date
> Fri Feb  4 17:39:18 PST 2022
>
> `sleep 1` exits pretty much instantly, stats moves through the entire
> width in a second or two, and clicking in scrollbars starts repeating
> immediately.
>
> I notice the following messages in my dmesg:
> kvm: vcpu 3: requested 38141 ns lapic timer period limited to 200000 ns
> kvm: vcpu 1: requested 9449 ns lapic timer period limited to 200000 ns
> kvm: vcpu 2: requested 51204 ns lapic timer period limited to 200000 ns
> kvm: vcpu 0: requested 74421 ns lapic timer period limited to 200000 ns
>
> I don't remember this problem last time I ran 9front in qemu (around a
> year ago), so I looked through commit history and found one commit
> that seemed like it might be related:
> http://git.9front.org/plan9front/plan9front/a05bab362f66ddd6fa65f2e7cda9eaaa0217ec08/commit.html
>
> To check, I tried setting *nohpet=1 in my plan9.ini, which *seems* to
> have solved the problem, though I still see those same messages in
> dmesg on the linux host (but now the requested lapic timer period is
> always around 100000 ns or greater).
>
> Anyone know what's going on here?


* Re: [9front] qemu clock/timer issue
  2022-02-05 18:14 ` Nick Owens
@ 2022-02-18  3:25   ` Michael Forney
  2022-02-21 22:35     ` cinap_lenrek
  0 siblings, 1 reply; 11+ messages in thread
From: Michael Forney @ 2022-02-18  3:25 UTC (permalink / raw)
  To: 9front

On 2022-02-05, Nick Owens <mischief@offblast.org> wrote:
> kvm emulates a local APIC timer which the 9front kernel uses. the
> timer period is clamped to a minimum period, and my understanding is
> that the purpose is to protect the host from CPU consumption by
> guests, but i don't have a citation for that. 9front requests a timer
> period shorter than what the default minimum is and kvm warns about
> it. it maybe needs more investigation as to whether we really need
> such short timers.
>
> in the meantime, you can lower the timer period minimum with
>
> echo "options kvm min_timer_period_us=10" >> /etc/modprobe.d/kvm.conf
>
> which is 10000ns. it won't kill all of the warnings since sometimes
> the timer period is slightly shorter, but it will help with the dmesg
> spam.

Thanks for the info! I tried lowering to min_timer_period_us=5 and
even then it seems that 9front requests even shorter lapic timers:

kvm: vcpu 2: requested 828 ns lapic timer period limited to 5000 ns
kvm: vcpu 0: requested 828 ns lapic timer period limited to 5000 ns
kvm: vcpu 3: requested 3822 ns lapic timer period limited to 5000 ns
kvm: vcpu 1: requested 2526 ns lapic timer period limited to 5000 ns

I did some more debugging and I believe the issue is with CPU
frequency detection using hpetcpufreq(). I should have noticed this
earlier, but without *nohpet=1, 9front detects the CPUs as having a
very low clock rate, for example 27 MHz, when it should be 3.3 GHz.

Nothing in the code seemed wrong to me, so I added some debug prints
to try to figure out where detection was going wrong.

Here's a boot log with HPET, detecting the wrong frequency:

Plan 9
HPET: fed00000 8086a201 100 MHz
loops=1000
	cycles 17384 [13196648 13214032]
	hpet 502 [394 896]
loops=2000
	cycles 42652 [56658413 56701065]
	hpet 643 [1318776 1319419]
<snip>
loops=38000
	cycles 477510 [2494452748 2494930258]
	hpet 13729 [75232811 75246540]
loops=39000
	cycles 491293 [2781515810 2782007103]
	hpet 14149 [83936532 83950681]
loops=40000
	cycles 498652 [2935179496 2935678148]
	hpet 1986869 [88595551 90582420]
498652 cycles 1986869 hpet ticks
cpuhz=25097376 cpumhz=25 cyclefreq=25097376 loopconst=1930
125 holes free
0x00026000 0x0009f000 495616
0x00100000 0x00110000 65536
0x005a1000 0x7ffff000 2141577216
2142138368 bytes free
pcirouting: PCI.0.1.3 at pin 1 link 60 irq 9 -> 10
cpu0: 25MHz GenuineIntel P6 (AX 000306F2 CX F7FAB223 DX 1F8BFBFF)
LAPIC: fee00000 0xfffffe80fee00000
ELCR: 0C00
cpu0: lapic clock at 8MHz
cpu1: 25MHz GenuineIntel P6 (AX 000306F2 CX F7FAB223 DX 1F8BFBFF)
cpu2: 25MHz GenuineIntel P6 (AX 000306F2 CX F7FAB223 DX 1F8BFBFF)
cpu3: 25MHz GenuineIntel P6 (AX 000306F2 CX F7FAB223 DX 1F8BFBFF)

It seems that the hpetcpufreq loop is seeing a huge jump in hpet ticks
at some point, causing the loop to stop prematurely. Looking through
qemu source, I believe the HPET is emulated with its
"QEMU_CLOCK_VIRTUAL", which uses the host's
clock_gettime(CLOCK_MONOTONIC), taking care to save offsets so it
doesn't increase in between vm_stop and vm_start. I confirmed that
vm_stop is never called while reproducing the bug, which means that
reading the HPET counter is effectively doing
clock_gettime(CLOCK_MONOTONIC)/10 (100 MHz, so 10 ns period).

As I was debugging, I noticed that it doesn't happen all of the time,
and I can usually trigger it by resizing the qemu window. When I run
with -display none, I do not see the issue.

I am not quite sure how TSC works under VT-x (and my searches weren't
too helpful), but if it is only counting cycles of the virtual CPUs,
while HPET is counting real time on the host, that could explain this
result. Perhaps when the qemu window is resized, the virtual CPU stops
running for some period, but the HPET keeps ticking. Then, when it is
resumed, 9front sees a huge jump in HPET ticks, with a comparatively
small number of clock cycles, and deduces from this that the CPU clock
speed is very slow.
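
The numbers in the boot log fit that picture: at loops=39000 the hpet delta
is 14149 ticks (~141 us at 100 MHz), but at loops=40000 it jumps to 1986869
ticks (~19.9 ms) while the cycle delta barely grows. And
498652 / 1986869 * 100 MHz ~= 25.1 MHz, which matches the bogus
cpuhz=25097376 almost exactly, so that single stalled interval appears to
be what drags the estimate down to 25 MHz.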

This leaves me with a few questions:
- Is there a more robust method for determining the CPU clock rate? (one
rough idea is sketched at the end of this message)
- Why does i8253cpufreq() work with qemu? As far as I can tell, both
timers are implemented in qemu using the same clock, yet the i8253 path
seems immune to the resize trick.

Maybe this is something I need to ask qemu developers.
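
As a concrete version of the first question above, here is a rough sketch of
one pause-tolerant calibration idea (illustration only, not 9front code;
hpetread() and rdtsc() are hypothetical stand-ins for whatever primitives the
kernel provides): calibrate over several short windows and keep the largest
cycles-per-tick estimate, so any window during which the vCPU was paused
(huge hpet delta, tiny cycle delta) is discarded.

#include <stdint.h>

uint64_t hpetread(void);	/* hypothetical: read the free-running HPET counter */
uint64_t rdtsc(void);		/* hypothetical: read the TSC */

/* estimate the CPU frequency in Hz; hpethz is the HPET frequency (100 MHz here) */
uint64_t
calibrate(uint64_t hpethz, int windows, uint64_t ticksperwindow)
{
	uint64_t best = 0;

	for (int i = 0; i < windows; i++) {
		uint64_t h0 = hpetread();
		uint64_t c0 = rdtsc();
		while (hpetread() - h0 < ticksperwindow)
			;	/* busy-wait for a fixed amount of HPET time */
		uint64_t cycles = rdtsc() - c0;
		uint64_t ticks = hpetread() - h0;
		uint64_t hz = cycles * hpethz / ticks;
		if (hz > best)	/* a paused window yields a too-low estimate */
			best = hz;
	}
	return best;
}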


* Re: [9front] qemu clock/timer issue
  2022-02-18  3:25   ` Michael Forney
@ 2022-02-21 22:35     ` cinap_lenrek
  2024-04-16  5:40       ` Douglas Guptill
  0 siblings, 1 reply; 11+ messages in thread
From: cinap_lenrek @ 2022-02-21 22:35 UTC (permalink / raw)
  To: 9front

Nice writeup. Thank you for this investigation.

> Maybe this is something I need to ask qemu developers.

Yes, please try.

Maybe we are missing some instruction barriers when
reading the tsc counter?

The issue is hard to reproduce for me; i have no issues
with qemu or with real hardware. I guess it also depends on
the host cpu, linux scheduling, and the moon phase :(

Anyway, don't give up. It would be very nice to get to
the bottom of this.

--
cinap


* Re: [9front] qemu clock/timer issue
  2022-02-21 22:35     ` cinap_lenrek
@ 2024-04-16  5:40       ` Douglas Guptill
  2024-04-16 13:32         ` Jacob Moody
  0 siblings, 1 reply; 11+ messages in thread
From: Douglas Guptill @ 2024-04-16  5:40 UTC (permalink / raw)
  To: 9front; +Cc: cinap_lenrek

Hi:

On Mon, Feb 21, 2022 at 11:35:31PM +0100, cinap_lenrek@felloff.net wrote:
> Nice writeup. Thank you for this investigation.
> 
> > Maybe this is something I need to ask qemu developers.
> 
> Yes, please try.
> 
> Maybe we are missing some instruction barriers when
> reading the tsc counter?
> 
> The issue is hard to reproduce for me, i have no issues
> with qemu or real hardware. I guess it also depends on
> the host cpu and linux scheduling and the moon phase :(
> 
> Anyway, dont give up. It would be very nice to get to
> the bottom of this.
> 
> --
> cinap

I am having what looks like a similar problem.  Consistently.  It
shows up as soon as I boot the 9front iso file in qemu.

/var/log/messages and /var/log/syslog become terribly large.

Included below are the details that come to my mind.  I'll be glad to
send any others.

Suggestions appreciated.

Regards,
Douglas.


Details:
=======

toc
---
* 9front iso
* qemu:
* run command for qemu
* host cpu
* /var/log/syslog:
* /var/log/messages:

* 9front iso
------------

9front-10277.amd64.iso

* qemu:
-------

doug@blackscram $ qemu-system-x86_64 --version
QEMU emulator version 8.1.2
Copyright (c) 2003-2023 Fabrice Bellard and the QEMU Project developers

- my slackbuild


* run command for qemu
----------------------

com is <qemu-system-x86_64 -name frog  -cpu host -accel kvm -smp 1 -m 1024      -device e1000,netdev=mytap,mac=DE:AD:BE:EF:50:62 -netdev tap,id=mytap,ifname=tap2,script=no,downscript=no  -boot menu=on,order=d -device virtio-scsi-pci -device scsi-hd,drive=isob -blockdev driver=file,node-name=isoa,filename=/data/48f2/9front/9front-10277.amd64.iso -blockdev driver=raw,node-name=isob,file=isoa,read-only=on -device virtio-scsi-pci -device scsi-hd,drive=hd1b -blockdev driver=file,node-name=hd1a,filename=/data/48f2/9front/qemu-9front-frog-img.raw -blockdev driver=raw,node-name=hd1b,file=hd1a>


* host cpu
----------

host is slackware64_15.0 on a 4-core cpu (2009 vintage).
Details for 1 core are:

processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 26
model name      : Intel(R) Core(TM) i7 CPU         920  @ 2.67GHz
stepping        : 5
microcode       : 0xf
cpu MHz         : 2000.000
cache size      : 8192 KB
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 4
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 11
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt lahf_lm pti tpr_shadow vnmi flexpriority ept vpid dtherm ida
vmx flags       : vnmi preemption_timer invvpid ept_x_only flexpriority tsc_offset vtpr mtf vapic ept vpid
bugs            : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit mmio_unknown
bogomips        : 5345.77
clflush size    : 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:


* /var/log/syslog:
-----------------
Apr 15 12:05:17 blackscram kernel: limit_periodic_timer_frequency: 1 callbacks suppressed
Apr 15 14:17:52 blackscram kernel: limit_periodic_timer_frequency: 3 callbacks suppressed
Apr 15 14:17:57 blackscram kernel: limit_periodic_timer_frequency: 16 callbacks suppressed
Apr 15 14:18:02 blackscram kernel: limit_periodic_timer_frequency: 12 callbacks suppressed
Apr 15 14:18:07 blackscram kernel: limit_periodic_timer_frequency: 16 callbacks suppressed
Apr 15 14:18:13 blackscram kernel: limit_periodic_timer_frequency: 13 callbacks suppressed
Apr 15 14:18:18 blackscram kernel: limit_periodic_timer_frequency: 11 callbacks suppressed
Apr 15 14:18:23 blackscram kernel: limit_periodic_timer_frequency: 8 callbacks suppressed
Apr 15 14:18:29 blackscram kernel: limit_periodic_timer_frequency: 4 callbacks suppressed
Apr 15 14:27:46 blackscram kernel: limit_periodic_timer_frequency: 2 callbacks suppressed

* /var/log/messages:
--------------------
Apr 15 12:04:58 blackscram kernel: br0: port 3(tap2) entered blocking state
Apr 15 12:04:58 blackscram kernel: br0: port 3(tap2) entered forwarding state
Apr 15 12:05:10 blackscram kernel: kvm: vcpu 0: requested 99933 ns lapic timer period limited to 200000 ns
Apr 15 12:05:10 blackscram last message buffered 1 times
Apr 15 12:05:10 blackscram kernel: kvm: vcpu 0: requested 190362 ns lapic timer period limited to 200000 ns
Apr 15 12:05:11 blackscram kernel: kvm: vcpu 0: requested 99933 ns lapic timer period limited to 200000 ns
Apr 15 12:05:12 blackscram last message buffered 2 times
Apr 15 12:05:12 blackscram kernel: kvm: vcpu 0: requested 184651 ns lapic timer period limited to 200000 ns
Apr 15 12:05:12 blackscram kernel: kvm: vcpu 0: requested 99933 ns lapic timer period limited to 200000 ns
Apr 15 12:05:12 blackscram kernel: kvm: vcpu 0: requested 106347 ns lapic timer period limited to 200000 ns
Apr 15 12:05:13 blackscram kernel: kvm: vcpu 0: requested 99933 ns lapic timer period limited to 200000 ns
Apr 15 12:05:17 blackscram last message buffered 1 times
Apr 15 12:05:17 blackscram kernel: kvm: vcpu 0: requested 198200 ns lapic timer period limited to 200000 ns
Apr 15 12:05:17 blackscram kernel: kvm: vcpu 0: requested 99933 ns lapic timer period limited to 200000 ns
Apr 15 12:05:18 blackscram kernel: kvm: vcpu 0: requested 161606 ns lapic timer period limited to 200000 ns
Apr 15 12:05:18 blackscram kernel: kvm: vcpu 0: requested 165828 ns lapic timer period limited to 200000 ns
Apr 15 12:05:19 blackscram kernel: kvm: vcpu 0: requested 188536 ns lapic timer period limited to 200000 ns
Apr 15 12:05:19 blackscram kernel: kvm: vcpu 0: requested 99933 ns lapic timer period limited to 200000 ns
Apr 15 12:05:20 blackscram kernel: kvm: vcpu 0: requested 140594 ns lapic timer period limited to 200000 ns
Apr 15 12:05:21 blackscram kernel: kvm: vcpu 0: requested 190006 ns lapic timer period limited to 200000 ns
Apr 15 12:05:22 blackscram kernel: kvm: vcpu 0: requested 121065 ns lapic timer period limited to 200000 ns
Apr 15 12:05:23 blackscram kernel: kvm: vcpu 0: requested 193715 ns lapic timer period limited to 200000 ns
Apr 15 12:05:24 blackscram kernel: kvm: vcpu 0: requested 99933 ns lapic timer period limited to 200000 ns
Apr 15 12:05:25 blackscram kernel: kvm: vcpu 0: requested 155432 ns lapic timer period limited to 200000 ns
Apr 15 12:05:26 blackscram kernel: kvm: vcpu 0: requested 177594 ns lapic timer period limited to 200000 ns
Apr 15 12:05:28 blackscram kernel: kvm: vcpu 0: requested 112406 ns lapic timer period limited to 200000 ns
Apr 15 12:05:29 blackscram kernel: kvm: vcpu 0: requested 116480 ns lapic timer period limited to 200000 ns
Apr 15 12:05:30 blackscram kernel: kvm: vcpu 0: requested 99933 ns lapic timer period limited to 200000 ns
Apr 15 12:05:31 blackscram last message buffered 2 times
Apr 15 12:05:33 blackscram kernel: kvm: vcpu 0: requested 106635 ns lapic timer period limited to 200000 ns
Apr 15 12:05:33 blackscram kernel: kvm: vcpu 0: requested 175396 ns lapic timer period limited to 200000 ns
Apr 15 12:05:34 blackscram kernel: kvm: vcpu 0: requested 146901 ns lapic timer period limited to 200000 ns
Apr 15 12:05:34 blackscram kernel: kvm: vcpu 0: requested 168680 ns lapic timer period limited to 200000 ns
Apr 15 12:05:35 blackscram kernel: kvm: vcpu 0: requested 99933 ns lapic timer period limited to 200000 ns
Apr 15 12:05:36 blackscram kernel: kvm: vcpu 0: requested 151168 ns lapic timer period limited to 200000 ns
Apr 15 12:05:36 blackscram kernel: kvm: vcpu 0: requested 177654 ns lapic timer period limited to 200000 ns
Apr 15 12:05:38 blackscram kernel: kvm: vcpu 0: requested 142241 ns lapic timer period limited to 200000 ns
Apr 15 12:05:39 blackscram kernel: kvm: vcpu 0: requested 145284 ns lapic timer period limited to 200000 ns
Apr 15 12:05:39 blackscram kernel: kvm: vcpu 0: requested 161734 ns lapic timer period limited to 200000 ns
Apr 15 12:05:39 blackscram kernel: kvm: vcpu 0: requested 135094 ns lapic timer period limited to 200000 ns
Apr 15 12:05:41 blackscram kernel: kvm: vcpu 0: requested 99933 ns lapic timer period limited to 200000 ns
Apr 15 12:05:41 blackscram last message buffered 1 times
Apr 15 12:05:41 blackscram kernel: kvm: vcpu 0: requested 110602 ns lapic timer period limited to 200000 ns
Apr 15 12:05:43 blackscram kernel: kvm: vcpu 0: requested 99933 ns lapic timer period limited to 200000 ns
Apr 15 12:05:45 blackscram kernel: kvm: vcpu 0: requested 176470 ns lapic timer period limited to 200000 ns
Apr 15 12:05:45 blackscram kernel: kvm: vcpu 0: requested 144036 ns lapic timer period limited to 200000 ns
Apr 15 12:05:48 blackscram kernel: kvm: vcpu 0: requested 99933 ns lapic timer period limited to 200000 ns
Apr 15 12:05:49 blackscram kernel: kvm: vcpu 0: requested 112791 ns lapic timer period limited to 200000 ns
Apr 15 12:05:51 blackscram kernel: kvm: vcpu 0: requested 156264 ns lapic timer period limited to 200000 ns
Apr 15 12:05:51 blackscram kernel: kvm: vcpu 0: requested 111279 ns lapic timer period limited to 200000 ns
Apr 15 12:05:52 blackscram kernel: kvm: vcpu 0: requested 171748 ns lapic timer period limited to 200000 ns
Apr 15 12:05:53 blackscram kernel: kvm: vcpu 0: requested 99933 ns lapic timer period limited to 200000 ns
Apr 15 12:05:53 blackscram last message buffered 1 times
Apr 15 12:05:56 blackscram kernel: kvm: vcpu 0: requested 166632 ns lapic timer period limited to 200000 ns
Apr 15 12:05:57 blackscram kernel: kvm: vcpu 0: requested 115348 ns lapic timer period limited to 200000 ns
Apr 15 12:05:58 blackscram kernel: kvm: vcpu 0: requested 168569 ns lapic timer period limited to 200000 ns
Apr 15 12:05:58 blackscram kernel: kvm: vcpu 0: requested 198141 ns lapic timer period limited to 200000 ns
Apr 15 12:06:00 blackscram kernel: kvm: vcpu 0: requested 121266 ns lapic timer period limited to 200000 ns
Apr 15 12:06:02 blackscram kernel: kvm: vcpu 0: requested 99933 ns lapic timer period limited to 200000 ns
Apr 15 12:06:02 blackscram kernel: kvm: vcpu 0: requested 106288 ns lapic timer period limited to 200000 ns
Apr 15 12:06:03 blackscram kernel: kvm: vcpu 0: requested 99933 ns lapic timer period limited to 200000 ns
Apr 15 12:06:03 blackscram kernel: kvm: vcpu 0: requested 100004 ns lapic timer period limited to 200000 ns
Apr 15 12:24:21 blackscram kernel: br0: port 3(tap2) entered disabled state
Apr 15 12:25:25 blackscram kernel: br0: port 3(tap2) entered blocking state
Apr 15 12:25:25 blackscram kernel: br0: port 3(tap2) entered forwarding state
Apr 15 12:25:26 blackscram kernel: br0: port 3(tap2) entered disabled state
Apr 15 12:55:53 blackscram syslogd[979]: -- MARK --
Apr 15 13:15:53 blackscram syslogd[979]: -- MARK --
Apr 15 13:35:53 blackscram syslogd[979]: -- MARK --
Apr 15 13:39:32 blackscram kernel: br0: port 3(tap2) entered blocking state
Apr 15 13:39:32 blackscram kernel: br0: port 3(tap2) entered forwarding state
Apr 15 13:39:33 blackscram kernel: br0: port 3(tap2) entered disabled state
Apr 15 13:45:46 blackscram kernel: br0: port 3(tap2) entered blocking state
Apr 15 13:45:46 blackscram kernel: br0: port 3(tap2) entered forwarding state
Apr 15 13:45:47 blackscram kernel: br0: port 3(tap2) entered disabled state
Apr 15 13:54:22 blackscram kernel: br0: port 3(tap2) entered blocking state
Apr 15 13:54:22 blackscram kernel: br0: port 3(tap2) entered forwarding state
Apr 15 13:54:23 blackscram kernel: br0: port 3(tap2) entered disabled state
Apr 15 14:03:40 blackscram kernel: br0: port 3(tap2) entered blocking state
Apr 15 14:03:40 blackscram kernel: br0: port 3(tap2) entered forwarding state
Apr 15 14:03:41 blackscram kernel: br0: port 3(tap2) entered disabled state
Apr 15 14:05:10 blackscram kernel: br0: port 3(tap2) entered blocking state
Apr 15 14:05:10 blackscram kernel: br0: port 3(tap2) entered forwarding state
Apr 15 14:05:20 blackscram dnsmasq-dhcp[28872]: DHCPDISCOVER(br0) de:ad:be:ef:cf:f6 
Apr 15 14:05:20 blackscram dnsmasq-dhcp[28872]: DHCPOFFER(br0) 192.168.50.5 de:ad:be:ef:cf:f6 
Apr 15 14:05:20 blackscram dnsmasq-dhcp[28872]: DHCPDISCOVER(br0) de:ad:be:ef:cf:f6 
Apr 15 14:05:20 blackscram dnsmasq-dhcp[28872]: DHCPOFFER(br0) 192.168.50.5 de:ad:be:ef:cf:f6 
Apr 15 14:05:20 blackscram dnsmasq-dhcp[28872]: DHCPREQUEST(br0) 192.168.50.5 de:ad:be:ef:cf:f6 
Apr 15 14:05:20 blackscram dnsmasq-dhcp[28872]: DHCPACK(br0) 192.168.50.5 de:ad:be:ef:cf:f6 
Apr 15 14:10:33 blackscram kernel: br0: port 3(tap2) entered disabled state
Apr 15 14:17:33 blackscram kernel: br0: port 3(tap2) entered blocking state
Apr 15 14:17:33 blackscram kernel: br0: port 3(tap2) entered forwarding state
Apr 15 14:17:47 blackscram kernel: kvm: vcpu 0: requested 100335 ns lapic timer period limited to 200000 ns
Apr 15 14:17:47 blackscram last message buffered 1 times
Apr 15 14:17:47 blackscram kernel: kvm: vcpu 0: requested 111596 ns lapic timer period limited to 200000 ns
Apr 15 14:17:48 blackscram kernel: kvm: vcpu 0: requested 100335 ns lapic timer period limited to 200000 ns
Apr 15 14:17:49 blackscram last message buffered 3 times
Apr 15 14:17:49 blackscram kernel: kvm: vcpu 0: requested 113549 ns lapic timer period limited to 200000 ns
Apr 15 14:17:50 blackscram kernel: kvm: vcpu 0: requested 194804 ns lapic timer period limited to 200000 ns
Apr 15 14:17:51 blackscram kernel: kvm: vcpu 0: requested 100335 ns lapic timer period limited to 200000 ns
Apr 15 14:17:52 blackscram kernel: kvm: vcpu 0: requested 108682 ns lapic timer period limited to 200000 ns
Apr 15 14:17:52 blackscram kernel: kvm: vcpu 0: requested 194041 ns lapic timer period limited to 200000 ns
Apr 15 14:17:52 blackscram kernel: kvm: vcpu 0: requested 190017 ns lapic timer period limited to 200000 ns
Apr 15 14:17:53 blackscram kernel: kvm: vcpu 0: requested 150899 ns lapic timer period limited to 200000 ns
Apr 15 14:17:53 blackscram kernel: kvm: vcpu 0: requested 100335 ns lapic timer period limited to 200000 ns
Apr 15 14:17:54 blackscram last message buffered 4 times
Apr 15 14:17:54 blackscram kernel: kvm: vcpu 0: requested 160123 ns lapic timer period limited to 200000 ns
Apr 15 14:17:57 blackscram kernel: kvm: vcpu 0: requested 120837 ns lapic timer period limited to 200000 ns
Apr 15 14:17:57 blackscram kernel: kvm: vcpu 0: requested 100335 ns lapic timer period limited to 200000 ns
Apr 15 14:17:58 blackscram last message buffered 1 times
Apr 15 14:17:58 blackscram kernel: kvm: vcpu 0: requested 195711 ns lapic timer period limited to 200000 ns
Apr 15 14:17:58 blackscram kernel: kvm: vcpu 0: requested 100335 ns lapic timer period limited to 200000 ns
Apr 15 14:17:59 blackscram last message buffered 2 times
Apr 15 14:17:59 blackscram kernel: kvm: vcpu 0: requested 114063 ns lapic timer period limited to 200000 ns
Apr 15 14:17:59 blackscram kernel: kvm: vcpu 0: requested 100335 ns lapic timer period limited to 200000 ns
Apr 15 14:18:02 blackscram last message buffered 2 times
Apr 15 14:18:03 blackscram kernel: kvm: vcpu 0: requested 125424 ns lapic timer period limited to 200000 ns
Apr 15 14:18:03 blackscram kernel: kvm: vcpu 0: requested 100335 ns lapic timer period limited to 200000 ns
Apr 15 14:18:03 blackscram last message buffered 2 times
Apr 15 14:18:03 blackscram kernel: kvm: vcpu 0: requested 147094 ns lapic timer period limited to 200000 ns
Apr 15 14:18:03 blackscram kernel: kvm: vcpu 0: requested 100335 ns lapic timer period limited to 200000 ns
Apr 15 14:18:03 blackscram kernel: kvm: vcpu 0: requested 141200 ns lapic timer period limited to 200000 ns
Apr 15 14:18:03 blackscram kernel: kvm: vcpu 0: requested 100335 ns lapic timer period limited to 200000 ns
Apr 15 14:18:04 blackscram kernel: kvm: vcpu 0: requested 116465 ns lapic timer period limited to 200000 ns
Apr 15 14:18:07 blackscram kernel: kvm: vcpu 0: requested 177058 ns lapic timer period limited to 200000 ns
Apr 15 14:18:08 blackscram kernel: kvm: vcpu 0: requested 135125 ns lapic timer period limited to 200000 ns
Apr 15 14:18:08 blackscram kernel: kvm: vcpu 0: requested 100335 ns lapic timer period limited to 200000 ns
Apr 15 14:18:08 blackscram last message buffered 1 times
Apr 15 14:18:08 blackscram kernel: kvm: vcpu 0: requested 149235 ns lapic timer period limited to 200000 ns
Apr 15 14:18:09 blackscram kernel: kvm: vcpu 0: requested 128011 ns lapic timer period limited to 200000 ns
Apr 15 14:18:09 blackscram kernel: kvm: vcpu 0: requested 189997 ns lapic timer period limited to 200000 ns
Apr 15 14:18:09 blackscram kernel: kvm: vcpu 0: requested 100335 ns lapic timer period limited to 200000 ns
Apr 15 14:18:13 blackscram last message buffered 5 times
Apr 15 14:18:14 blackscram kernel: kvm: vcpu 0: requested 176660 ns lapic timer period limited to 200000 ns
Apr 15 14:18:14 blackscram kernel: kvm: vcpu 0: requested 100335 ns lapic timer period limited to 200000 ns
Apr 15 14:18:15 blackscram last message buffered 4 times
Apr 15 14:18:15 blackscram kernel: kvm: vcpu 0: requested 146119 ns lapic timer period limited to 200000 ns
Apr 15 14:18:18 blackscram kernel: kvm: vcpu 0: requested 181147 ns lapic timer period limited to 200000 ns
Apr 15 14:18:18 blackscram kernel: kvm: vcpu 0: requested 101907 ns lapic timer period limited to 200000 ns
Apr 15 14:18:19 blackscram kernel: kvm: vcpu 0: requested 100335 ns lapic timer period limited to 200000 ns
Apr 15 14:18:19 blackscram kernel: kvm: vcpu 0: requested 101555 ns lapic timer period limited to 200000 ns
Apr 15 14:18:20 blackscram kernel: kvm: vcpu 0: requested 100335 ns lapic timer period limited to 200000 ns
Apr 15 14:18:20 blackscram kernel: kvm: vcpu 0: requested 168584 ns lapic timer period limited to 200000 ns
Apr 15 14:18:20 blackscram kernel: kvm: vcpu 0: requested 189223 ns lapic timer period limited to 200000 ns
Apr 15 14:18:21 blackscram kernel: kvm: vcpu 0: requested 162485 ns lapic timer period limited to 200000 ns
Apr 15 14:18:21 blackscram kernel: kvm: vcpu 0: requested 178309 ns lapic timer period limited to 200000 ns
Apr 15 14:18:21 blackscram kernel: kvm: vcpu 0: requested 122322 ns lapic timer period limited to 200000 ns
Apr 15 14:18:23 blackscram kernel: kvm: vcpu 0: requested 100335 ns lapic timer period limited to 200000 ns
Apr 15 14:18:24 blackscram last message buffered 1 times
Apr 15 14:18:24 blackscram kernel: kvm: vcpu 0: requested 165024 ns lapic timer period limited to 200000 ns
Apr 15 14:18:24 blackscram kernel: kvm: vcpu 0: requested 113987 ns lapic timer period limited to 200000 ns
Apr 15 14:18:25 blackscram kernel: kvm: vcpu 0: requested 171776 ns lapic timer period limited to 200000 ns
Apr 15 14:18:25 blackscram kernel: kvm: vcpu 0: requested 135552 ns lapic timer period limited to 200000 ns
Apr 15 14:18:25 blackscram kernel: kvm: vcpu 0: requested 100335 ns lapic timer period limited to 200000 ns
Apr 15 14:18:25 blackscram kernel: kvm: vcpu 0: requested 126206 ns lapic timer period limited to 200000 ns
Apr 15 14:18:26 blackscram kernel: kvm: vcpu 0: requested 100335 ns lapic timer period limited to 200000 ns
Apr 15 14:18:27 blackscram kernel: kvm: vcpu 0: requested 153040 ns lapic timer period limited to 200000 ns
Apr 15 14:18:29 blackscram kernel: kvm: vcpu 0: requested 100335 ns lapic timer period limited to 200000 ns
Apr 15 14:18:29 blackscram kernel: kvm: vcpu 0: requested 106049 ns lapic timer period limited to 200000 ns
Apr 15 14:18:29 blackscram kernel: kvm: vcpu 0: requested 100335 ns lapic timer period limited to 200000 ns
Apr 15 14:18:29 blackscram kernel: kvm: vcpu 0: requested 123580 ns lapic timer period limited to 200000 ns
Apr 15 14:18:30 blackscram kernel: kvm: vcpu 0: requested 159240 ns lapic timer period limited to 200000 ns
Apr 15 14:18:30 blackscram kernel: kvm: vcpu 0: requested 100335 ns lapic timer period limited to 200000 ns
Apr 15 14:18:30 blackscram kernel: kvm: vcpu 0: requested 164069 ns lapic timer period limited to 200000 ns
Apr 15 14:18:30 blackscram kernel: kvm: vcpu 0: requested 100335 ns lapic timer period limited to 200000 ns
Apr 15 14:18:30 blackscram kernel: kvm: vcpu 0: requested 190019 ns lapic timer period limited to 200000 ns
Apr 15 14:18:31 blackscram kernel: kvm: vcpu 0: requested 122008 ns lapic timer period limited to 200000 ns
Apr 15 14:26:56 blackscram dnsmasq-dhcp[28872]: DHCPRELEASE(br0) 192.168.50.5 52:54:00:12:34:56 unknown lease
Apr 15 14:27:06 blackscram kernel: br0: port 1(tap0) entered disabled state
Apr 15 14:27:27 blackscram kernel: br0: port 2(tap1) entered disabled state
Apr 15 14:27:46 blackscram kernel: kvm: vcpu 0: requested 100335 ns lapic timer period limited to 200000 ns
Apr 15 14:27:46 blackscram kernel: kvm: vcpu 0: requested 136164 ns lapic timer period limited to 200000 ns
Apr 15 14:27:46 blackscram kernel: kvm: vcpu 0: requested 115297 ns lapic timer period limited to 200000 ns
Apr 15 14:27:46 blackscram kernel: kvm: vcpu 0: requested 178152 ns lapic timer period limited to 200000 ns
Apr 15 14:27:46 blackscram kernel: kvm: vcpu 0: requested 100335 ns lapic timer period limited to 200000 ns
Apr 15 14:27:46 blackscram last message buffered 1 times
Apr 15 14:27:47 blackscram kernel: kvm: vcpu 0: requested 106015 ns lapic timer period limited to 200000 ns
Apr 15 14:27:47 blackscram kernel: kvm: vcpu 0: requested 100335 ns lapic timer period limited to 200000 ns
Apr 15 14:27:47 blackscram kernel: kvm: vcpu 0: requested 147817 ns lapic timer period limited to 200000 ns
Apr 15 14:27:48 blackscram kernel: kvm: vcpu 0: requested 100335 ns lapic timer period limited to 200000 ns
Apr 15 14:27:48 blackscram kernel: br0: port 3(tap2) entered disabled state



-- 
Bold character derived from stubborn patience and adversity.
==
Douglas Guptill, B.Sc., CCP, M.Comp.Sci., THB


* Re: [9front] qemu clock/timer issue
  2024-04-16  5:40       ` Douglas Guptill
@ 2024-04-16 13:32         ` Jacob Moody
  2024-04-16 17:07           ` Douglas Guptill
  2024-04-16 18:46           ` Michael Forney
  0 siblings, 2 replies; 11+ messages in thread
From: Jacob Moody @ 2024-04-16 13:32 UTC (permalink / raw)
  To: 9front

As far as we understand, this bug only happens with intel kvm virtual machines,
and everything tested so far suggested it only affects newer intel machines.
Your machine, however, is a bit older (10+ years) and still triggers this.
I've certainly run 9front in kvm on intel machines before, so it is likely either
something that changed on our side or something that changed in QEMU. My money
is on some QEMU change.

If you do not feel comfortable debugging but would still be interested in helping,
perhaps you can help bisect 9front and qemu (one side at a time) to see if you can
narrow down what has changed and get more information to fix these things.
I would start with the nightly iso just to double check, then work back through
the releases. Give that a go for a couple of them, then maybe try some older qemu.


Thanks,
moody



* Re: [9front] qemu clock/timer issue
  2024-04-16 13:32         ` Jacob Moody
@ 2024-04-16 17:07           ` Douglas Guptill
  2024-04-16 17:34             ` Jacob Moody
  2024-04-16 18:46           ` Michael Forney
  1 sibling, 1 reply; 11+ messages in thread
From: Douglas Guptill @ 2024-04-16 17:07 UTC (permalink / raw)
  To: 9front; +Cc: moody

[-- Attachment #1: Type: text/plain, Size: 1714 bytes --]

Hello moody:

On Tue, Apr 16, 2024 at 08:32:42AM -0500, Jacob Moody wrote:

> It is understood that this bug only happens with intel kvm virtual machines.
>  From all that has been tested it was currently understood to affect newer intel machines.
> It seems however that your machine is a bit older(10+ years) and still triggering this.
> I've certainly run 9front in kvm on intel machines before so it is likely either
> something that changed with us or something that has changed with QEMU. My money
> is on some QEMU change.

Ahhh.. That is a big help.  

> If you do not feel comfortable debugging

I'm a programmer. :)

I can look at the code, build and insert write statements, in qemu and
the kvm module of linux.  Cannot yet do anything like that with
9front.

> but would still be interested in helping

For sure.

> perhaps you can help bisect 9front and qemu (one side at a time) to see if you can
> narrow down what has changed to get more information to fix these things.
> I would start with the nightly iso just to double check,

Assuming that you mean a nightly iso of 9front, I just did that with
9front-10501.amd64.iso.gz.  No noticeable change; but then I don't yet
understand 9front.  I stopped it quickly.  Attached is a jpg of what I
saw.

> then move back through the releases going back.

Sorry, I don't understand.  Releases of...qemu?  9front?

There is a newer qemu (9.0.0-rc3) out; I will make a build of that,
and then start looking at its code, and the kvm module code, to see if
I can find, and maybe understand, the offending code.

Thank you for the reply,
Douglas.
-- 
Bold character derived from stubborn patience and adversity.
==
Douglas Guptill, B.Sc., CCP, M.Comp.Sci., THB

[-- Attachment #2: 9front-10501.amd64.iso_boot-screen.jpg --]
[-- Type: image/jpeg, Size: 118357 bytes --]


* Re: [9front] qemu clock/timer issue
  2024-04-16 17:07           ` Douglas Guptill
@ 2024-04-16 17:34             ` Jacob Moody
  0 siblings, 0 replies; 11+ messages in thread
From: Jacob Moody @ 2024-04-16 17:34 UTC (permalink / raw)
  To: 9front

On 4/16/24 12:07, Douglas Guptill wrote:
> Hello moody:
> 
> On Tue, Apr 16, 2024 at 08:32:42AM -0500, Jacob Moody wrote:
> 
>> It is understood that this bug only happens with intel kvm virtual machines.
>>  From all that has been tested it was currently understood to affect newer intel machines.
>> It seems however that your machine is a bit older(10+ years) and still triggering this.
>> I've certainly run 9front in kvm on intel machines before so it is likely either
>> something that changed with us or something that has changed with QEMU. My money
>> is on some QEMU change.
> 
> Ahhh.. That is a big help.  
> 
>> If you do not feel comfortable debugging
> 
> I'm a programmer. :)

Excellent news. Thank you. :)

> 
> I can look at the code, build and insert write statements, in qemu and
> the kvm module of linux.  Cannot yet do anything like that with
> 9front.

We can help with understanding any of the 9front bits.

> 
>> but would still be interested in helping
> 
> For sure.
> 
>> perhaps you can help bisect 9front and qemu (one side at a time) to see if you can
>> narrow down what has changed to get more information to fix these things.
>> I would start with the nightly iso just to double check,
> 
> Assuming that you mean a nightly iso of 9front, I just did that with
> 9front-10501.amd64.iso.gz.  No noticable change; but then I don't yet
> understand 9front.  I stopped it quickly.  Attached is a jpg of what I
> saw.

Got it, thanks for checking that. Also uramekus on the grid had some
additional context about this. He said that he personally has experienced
this syslog spam but that there were otherwise no adverse effects. He mentioned
that putting "options kvm min_timer_period_us=10" in a modprobe.conf for kvm makes it shut up.

However, there was _also_ the issue that mcf originally reported regarding the
fast-running clock; just to clarify, are you also observing that behavior? It
seems this syslog spam may not be directly related to the fast-running clock,
so perhaps there are two different issues that we're observing.

In either case it would be nice to at least understand what is going on, and then perhaps
fix these things.

> 
>> then move back through the releases going back.
> 
> Sorry, I don't understand.  Releases of...qemu?  9front?
> 
> There is a newer qemu (9.0.0-rc3) out; I will make a build of that,
> and then start looking at its code, and the kvm module code, to see if
> I can find, and maybe understand, the offending code.

Yes, I initially meant going back through 9front releases; however, with
uramekus saying he has observed this for a long time, I think it might
be a better use of time to debug this with the latest code and just dig in.

I will say that your attached image does not have anything out of order
besides those mpintrassign lines; not entirely sure what that's about.


Thank you,
moody



* Re: [9front] qemu clock/timer issue
  2024-04-16 13:32         ` Jacob Moody
  2024-04-16 17:07           ` Douglas Guptill
@ 2024-04-16 18:46           ` Michael Forney
  2024-04-16 18:56             ` Jacob Moody
  1 sibling, 1 reply; 11+ messages in thread
From: Michael Forney @ 2024-04-16 18:46 UTC (permalink / raw)
  To: 9front

Jacob Moody <moody@posixcafe.org> wrote:
> It is understood that this bug only happens with intel kvm virtual machines.
>  From all that has been tested it was currently understood to affect newer intel machines.
> It seems however that your machine is a bit older(10+ years) and still triggering this.
> I've certainly run 9front in kvm on intel machines before so it is likely either
> something that changed with us or something that has changed with QEMU. My money
> is on some QEMU change.
> 
> If you do not feel comfortable debugging but would still be interested in helping
> perhaps you can help bisect 9front and qemu (one side at a time) to see if you can
> narrow down what has changed to get more information to fix these things.
> I would start with the nightly iso just to double check, then move back through the releases
> going back. Give that a go for a couple of them, then maybe try some older qemu.

If you look at my original mails, I had already bisected it to:
http://git.9front.org/plan9front/plan9front/a05bab362f66ddd6fa65f2e7cda9eaaa0217ec08/commit.html

This changes the mechanism by which CPU frequencies are detected
by default in 9front, and the timing/clock issues arise when the
frequency is detected incorrectly as 25MHz instead of 3.3GHz. The
issue went away when using *nohpet=1 to revert to i8253
clockinit/fastclock.

I wasn't able to explain why hpetcpufreq didn't work, but i8253cpufreq
did. They are both emulated with QEMU_CLOCK_VIRTUAL:
https://github.com/qemu/qemu/blob/62dbe54c24dbf77051bafe1039c31ddc8f37602d/hw/timer/hpet.c#L156
https://github.com/qemu/qemu/blob/62dbe54c24dbf77051bafe1039c31ddc8f37602d/hw/timer/i8254.c#L57-L58


* Re: [9front] qemu clock/timer issue
  2024-04-16 18:46           ` Michael Forney
@ 2024-04-16 18:56             ` Jacob Moody
  0 siblings, 0 replies; 11+ messages in thread
From: Jacob Moody @ 2024-04-16 18:56 UTC (permalink / raw)
  To: 9front

On 4/16/24 13:46, Michael Forney wrote:
> Jacob Moody <moody@posixcafe.org> wrote:
>> It is understood that this bug only happens with intel kvm virtual machines.
>>  From all that has been tested it was currently understood to affect newer intel machines.
>> It seems however that your machine is a bit older(10+ years) and still triggering this.
>> I've certainly run 9front in kvm on intel machines before so it is likely either
>> something that changed with us or something that has changed with QEMU. My money
>> is on some QEMU change.
>>
>> If you do not feel comfortable debugging but would still be interested in helping
>> perhaps you can help bisect 9front and qemu (one side at a time) to see if you can
>> narrow down what has changed to get more information to fix these things.
>> I would start with the nightly iso just to double check, then move back through the releases
>> going back. Give that a go for a couple of them, then maybe try some older qemu.
> 
> If you look at my original mails, I had already bisected it to:
> http://git.9front.org/plan9front/plan9front/a05bab362f66ddd6fa65f2e7cda9eaaa0217ec08/commit.html
> 
> This changes the mechanism by which CPU frequencies are detected
> by default in 9front, and the timing/clock issues arise when the
> frequency is detected incorrectly as 25MHz instead of 3.3GHz. The
> issue went away when using *nohpet=1 to revert to i8253
> clockinit/fastclock.
> 
> I wasn't able to explain why hpetcpufreq didn't work, but i8253cpufreq
> did. They are both emulated with QEMU_CLOCK_VIRTUAL:
> https://github.com/qemu/qemu/blob/62dbe54c24dbf77051bafe1039c31ddc8f37602d/hw/timer/hpet.c#L156
> https://github.com/qemu/qemu/blob/62dbe54c24dbf77051bafe1039c31ddc8f37602d/hw/timer/i8254.c#L57-L58

Ah my bad, I had forgotten about this. Thank you for the original bisecting work and for reiterating.

- moody


Thread overview: 11 messages
2022-02-05  2:04 [9front] qemu clock/timer issue Michael Forney
2022-02-05 17:04 ` hiro
2022-02-05 18:14 ` Nick Owens
2022-02-18  3:25   ` Michael Forney
2022-02-21 22:35     ` cinap_lenrek
2024-04-16  5:40       ` Douglas Guptill
2024-04-16 13:32         ` Jacob Moody
2024-04-16 17:07           ` Douglas Guptill
2024-04-16 17:34             ` Jacob Moody
2024-04-16 18:46           ` Michael Forney
2024-04-16 18:56             ` Jacob Moody
