9front - general discussion about 9front
* 9front on Xen 4.4 HVM
@ 2015-02-20 16:54 Giacomo Tesio
  2015-02-20 17:09 ` [9front] " cinap_lenrek
  0 siblings, 1 reply; 8+ messages in thread
From: Giacomo Tesio @ 2015-02-20 16:54 UTC (permalink / raw)
  To: 9front

Hi, last night I (almost) successfully installed 9front on a Xen 4.4
HVM guest (Debian dom0).

I have just one "small" problem: I can't obtain an IP address via DHCP.
The virtual machine should be using a bridged network interface.

I've tried different configurations, with both the rtl8139 and e1000
models, but without success [1].
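
For reference, the vif lines I've been testing look roughly like this
(one model at a time; the MAC and bridge names are from my setup, so
treat them as placeholders):

vif = [ 'mac=00:16:3E:0A:15:65, model=rtl8139, bridge=xenbr0, script=vif-bridge' ]
vif = [ 'mac=00:16:3E:0A:15:65, model=e1000, bridge=xenbr0, script=vif-bridge' ]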

On the same virtualization server several other PV guests had no
problem connecting, but this is the first HVM guest.

Any tips?
I'd also like to try cinap's virtio drivers, but I don't know how (I
suppose I need to enable them in the kernel via plan9.ini).


Tonight I'll try to investigate the problem further.


Giacomo

[1] http://xenbits.xen.org/docs/4.3-testing/misc/xl-network-configuration.html


* Re: [9front] 9front on Xen 4.4 HVM
  2015-02-20 16:54 9front on Xen 4.4 HVM Giacomo Tesio
@ 2015-02-20 17:09 ` cinap_lenrek
  2015-02-21 10:50   ` Giacomo Tesio
  0 siblings, 1 reply; 8+ messages in thread
From: cinap_lenrek @ 2015-02-20 17:09 UTC (permalink / raw)
  To: 9front

no, you dont need to put anything in plan9.ini. virtio devices appear as
pci devices to the guest with specific vid/did's. but you probably need
to configure that with your virtual machine host for them to appear.
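
judging from the xl docs, something like this in the guest config
should be enough to get a virtio nic (just a sketch, not tested here):

vif = [ 'model=virtio-net, bridge=xenbr0' ]

inside the guest it then shows up as a regular pci device with vendor
id 1af4 (device id 1000 for virtio-net), which is what the driver
matches on.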

check if you can at least send packets (using wireshark or tcpdump on
the other end). check if pci interrupts work. maybe try *acpi= in plan9.ini
if interrupts dont work.
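
for example, something like this on the dom0 side while the guest is
trying dhcp (just a sketch; adjust the bridge name, and <guest-mac> is
the vif mac you configured):

	tcpdump -ni xenbr0 port 67 or port 68

or, to see all traffic to and from the guest:

	tcpdump -nei xenbr0 ether host <guest-mac>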

--
cinap


* Re: [9front] 9front on Xen 4.4 HVM
  2015-02-20 17:09 ` [9front] " cinap_lenrek
@ 2015-02-21 10:50   ` Giacomo Tesio
  2015-02-21 11:51     ` cinap_lenrek
  0 siblings, 1 reply; 8+ messages in thread
From: Giacomo Tesio @ 2015-02-21 10:50 UTC (permalink / raw)
  To: 9front

On 20 Feb 2015 at 18:10, <cinap_lenrek@felloff.net> wrote:

> check if you can at least send packets (using wireshark or tcpdump on
> the other end). check if pci interrupts work. maybe try *acpi= in
> plan9.ini if interrupts dont work.

Tcpdump on the dom0 shows DHCP requests and responses going through the
bridged physical network. The same for ICMP packets: responses for the
9front guest go through eth0.

Still, 9front does not see any packets.

This happens with either virtio or rtl8139. I've also disabled acpi in plan9.ini.

How can I debug the IP stack on the 9front side?
Btw, /dev/kmesg shows a message about mpintrenable not finding bus type
12, but I can't see how my network issue could be related to mp.

Giacomo

* Re: [9front] 9front on Xen 4.4 HVM
  2015-02-21 10:50   ` Giacomo Tesio
@ 2015-02-21 11:51     ` cinap_lenrek
  2015-02-21 18:46       ` Giacomo Tesio
  0 siblings, 1 reply; 8+ messages in thread
From: cinap_lenrek @ 2015-02-21 11:51 UTC (permalink / raw)
  To: 9front

well, that sounds like an interrupt problem indeed. when you send
packets, all the driver does is put them in the transmit ring and
optionally kick the card through some register to tell it to start
transmitting.

but receiving works by the card raising an interrupt, after which the
driver looks in the receive ring for new packets from the card.

if the interrupt couldnt be enabled (because of broken mp tables), then
we wont receive anything. but sending might still work because it
doesnt need an interrupt.
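
a quick way to see this from inside the guest (a sketch; assumes the
usual pc kernel files and that your nic is ether0): check whether the
input counter ever moves while the output one does, and whether the
driver actually got an interrupt assigned:

	cat /net/ether0/stats	# the in: counter should grow when packets arrive
	cat /dev/irqalloc	# lists the allocated interrupt vectors and their owners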

the relation with mp is that mp systems use the apic interrupt
controller instead of the legacy pic controller (it is mandatory
as you need to use the apic to bootstrap the other processors).

enabling interrupts on the pic is easy. you read the irq register
from the pci config space (which the bios programmed for you)...
sometimes the pci interrupt router needs to be programmed.

apic is a bit more complicated than pic because there can be multiple
apic controllers (and multiple processors/lapics), and we require
tables (from the bios) to find the mapping from pci bus interrupt
lines to the apics (which can then be programmed to send interrupts
to the lapics/processors).

and then there are msi interrupts, which *some* devices support and
which do not require any tables. you program a register in pci config
space for the device and you're done. this also only works with
apic.

when you specify *acpi= in plan9.ini, then the kernel will use
the acpi tables instead of the mp tables.

when you use *nomp=, we will use the legacy pic interrupt controller
and only one cpu can be used (this is also what happens when we cannot
find the mp table).
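
in plan9.ini these are just extra lines next to your existing entries,
for example (a sketch; use one at a time):

*acpi=

or, to force the single cpu / pic path:

*nomp=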

--
cinap


* Re: [9front] 9front on Xen 4.4 HVM
  2015-02-21 11:51     ` cinap_lenrek
@ 2015-02-21 18:46       ` Giacomo Tesio
  2015-02-21 19:50         ` mischief
  2015-02-21 20:46         ` cinap_lenrek
  0 siblings, 2 replies; 8+ messages in thread
From: Giacomo Tesio @ 2015-02-21 18:46 UTC (permalink / raw)
  To: 9front

Thanks for your help, cinap. It's an interesting dive into the kernel internals.

Btw, I've tried various combinations of network drivers (virtio-net
or rtl8139) and plan9.ini options, but with no success.

Now I'm planning to write a plan9.ini boot menu with the following
configurations, trying them one after another until something works
(a rough sketch follows the list below).

As far as I have understood, I should test
1) *nomp=
2) *nomsi=
3) *acpi=
and combinations of them, right?
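
Something like this is what I have in mind, using the [menu] syntax
from plan9.ini(8) (just a sketch, untested, and omitting the usual
bootfile/bootargs lines):

[menu]
menuitem=acpi, try acpi tables
menuitem=nomp, disable mp (pic only)
menuitem=nomsi, disable msi
menudefault=acpi

[acpi]
*acpi=

[nomp]
*nomp=

[nomsi]
*nomsi=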

Is there anything else that I can do to identify and fix the issue?

I assigned only one CPU to the 9front guest, to reduce the variables;
but actually, if 9front must use a single CPU to run on Xen, then the
disadvantage of a PV guest would be smaller, and I could give that a
try too.

Strangely, it looks like I'm the first to try installing Plan 9 on Xen
in the last few years...


Giacomo

* Re: [9front] 9front on Xen 4.4 HVM
  2015-02-21 18:46       ` Giacomo Tesio
@ 2015-02-21 19:50         ` mischief
  2015-02-21 20:46         ` cinap_lenrek
  1 sibling, 0 replies; 8+ messages in thread
From: mischief @ 2015-02-21 19:50 UTC (permalink / raw)
  To: 9front, Giacomo Tesio

I am the one who imported the xen kernel from labs, and I ran 9front on
xen a few months ago. The xen kernel supports one cpu and, iirc, 512MB
of RAM. I can recover my xen configs and post them shortly.

* Re: [9front] 9front on Xen 4.4 HVM
  2015-02-21 18:46       ` Giacomo Tesio
  2015-02-21 19:50         ` mischief
@ 2015-02-21 20:46         ` cinap_lenrek
  2015-02-21 21:57           ` Giacomo Tesio
  1 sibling, 1 reply; 8+ messages in thread
From: cinap_lenrek @ 2015-02-21 20:46 UTC (permalink / raw)
  To: 9front

just try *acpi= first. if that doesnt work, remove it and
try the *nomp= option. that should basically cover all combinations.
the *nomsi= option only makes sense in mp mode (apic), but i never
found the need for it. msi's usually just work when the hardware
supports them, and can even work around wrong mp tables since they
bypass the whole interrupt routing madness.

--
cinap


* Re: [9front] 9front on Xen 4.4 HVM
  2015-02-21 20:46         ` cinap_lenrek
@ 2015-02-21 21:57           ` Giacomo Tesio
  0 siblings, 0 replies; 8+ messages in thread
From: Giacomo Tesio @ 2015-02-21 21:57 UTC (permalink / raw)
  To: 9front

It works! Finally! Thanks: this problem had already stolen a few hours
of sleep over the last few nights.

I managed to get it working with

vif = ['mac=00:16:3E:0A:15:65, model=virtio-net, bridge=xenbr0, script=vif-bridge' ]

in the domU.cfg and *acpi= in plan9.ini.

Now I can start with some more interesting tasks.


I'm wondering whether *acpi= somehow prevents the other CPUs from working.


But thank you all!


Giacomo
