
Re: [Xen-users] PCI Passthrough, Radeon 7950 and Windows 7 64-bit



I compiled Xen 4.1.2 and 4.1.3 during a variety of GPU tests and saw the same problems while using the xl toolstack. I never had success with passthrough using xm, but I read plenty of guides about it. I had no luck using the packaged versions of Xen for a variety of reasons, and since I went down the path of compiling from source I didn't bother looking back. I am running Debian Wheezy with Xen 4.2 on a UEFI-booted, GPT-partitioned OCZ Vertex 3 SSD, so essentially everything unstable, yet my system has been rock solid for two months. Just last weekend I upgraded Debian and rebuilt the latest Xen; there have been a lot of bugfixes to the Xen source since my last compile (the EFI memory problem I encountered was finally fixed).

Xen 4.2 gets updates every week, and the xl toolstack is constantly under improvement (http://blog.xen.org/index.php/2012/05/22/libxl-event-api-improvements/). I wouldn't say that xl is stable yet, though; once they give us a release date for Xen 4.2 stable you may have a better answer.

I can say that the xm toolstack was deprecated as of Xen 4.1 and is no longer being actively maintained (http://wiki.xen.org/wiki/Choice_of_Toolstacks). It doesn't compile by default with Xen 4.2, and I don't know whether it would succeed if you tried, given the changes to Xen's source.

I have PVHVM working for Windows and Linux, but I have not successfully passed the card to a Linux DomU yet (my own lack of Linux experience and time); passing it to Windows works fine. I know stubdom is compiled when I build Xen, but I don't know if it is put to use automatically; I have done no reading on this topic.
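For what it's worth, my understanding is that with xl a device-model stubdomain is requested per guest in the domU config rather than enabled automatically. A minimal sketch, untested on my end, with a hypothetical guest name:

```
# hypothetical xl domU config fragment -- untested sketch
name    = "win7-hvm"
builder = "hvm"
# ask xl to run the qemu device model in a stub domain instead of dom0
device_model_stubdomain_override = 1
```

If stubdom support wasn't built alongside Xen, I'd expect guest creation to fail with that option set, so that would be one way to check whether it is actually in use.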

The only other topic I can think of is qemu versions. Currently qemu-traditional is required for PCI passthrough, though from my testing the upstream qemu package has a lot of great features I would like to see.

My understanding is that PCI passthrough is not part of the upstream qemu package, and adding it at this stage would be a lot of work, so they will probably update the qemu package as or after Xen 4.2 stable is released. The emulated graphics and disk IO were noticeably better with the upstream qemu version, so I really hope to see it implemented sooner rather than later.
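For illustration, an xl guest config that pins the device model to qemu-traditional and passes a GPU through might look roughly like this; the BDF address and values are placeholders, not my actual config:

```
# hypothetical xl domU config fragment -- values are examples only
builder = "hvm"
memory  = 4096
# PCI passthrough currently needs the traditional device model
device_model_version = "qemu-xen-traditional"
# pass the GPU through by its PCI BDF address (placeholder address)
pci = [ '01:00.0' ]
```

The device has to be bound to pciback in dom0 first, of course, or xl will refuse to assign it.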

My conclusion agrees with yours: provided that the unstable qemu-traditional is the same version used by Xen 4.1.2, the toolstack is one of the most likely places for these bugs to exist.

On Tue, Jun 26, 2012 at 6:57 PM, Matthias <matthias.kannenberg@xxxxxxxxxxxxxx> wrote:
Well, then as I said before: I think xm is a lot more stable right now
than xl, and I don't really see any benefit of using xl right now.

Actually I tried using xl instead of xm last week but ran into
issues with creating vifs when using NAT. Interesting to see that
there are also issues with VGA passthrough.

Are there - right now - any benefits of using xl instead of xm? I
thought that it's easier to run stubdoms with xl, but after reading
that stubdoms don't bring you any significant performance increase
when using PVOPS HVMs, I gave up on trying to get stubdoms running.
So it would be nice to hear whether migrating to xl is actually
something to look into in the (near) future.

2012/6/27 Casey DeLorme <cdelorme@xxxxxxxxx>:
> I am using the xl toolstack, with Xen 4.2. That's probably the difference;
> I tried the pci flags (both inline and stand-alone versions of them) without
> any changes.
>
> On Tue, Jun 26, 2012 at 7:19 AM, Matthias
> <matthias.kannenberg@xxxxxxxxxxxxxx> wrote:
>>
>> Hi,
>>
>> did some testing in my lunch break:
>>
>> # dmesg -c && xm dmesg -c
>> # xm destroy WINDOMU
>> # dmesg
>> [257342.161103] xen1: port 1(work) entered disabled state
>> [257342.162777] xen1: port 1(work) entered disabled state
>> [257345.003372] irq 18: nobody cared (try booting with the "irqpoll"
>> option)
>> [257345.004847] Pid: 11048, comm: qemu-dm Not tainted 3.4.2-xen #2
>> [257345.006208] Call Trace:
>> [257345.006208] [<ffffffff800085d5>] dump_trace+0x85/0x1c0
>> [257345.006208] [<ffffffff8088c036>] dump_stack+0x69/0x6f
>> [257345.006208] [<ffffffff800c9936>] __report_bad_irq+0x36/0xe0
>> [257345.006208] [<ffffffff800c9c8b>] note_interrupt+0x1fb/0x240
>> [257345.006208] [<ffffffff800c72c4>] handle_irq_event_percpu+0x94/0x1d0
>> [257345.006208] [<ffffffff800c7464>] handle_irq_event+0x64/0x90
>> [257345.006208] [<ffffffff800ca7b4>] handle_fasteoi_irq+0x64/0x120
>> [257345.006208] [<ffffffff80008468>] handle_irq+0x18/0x30
>> [257345.006208] [<ffffffff805c1cc4>] evtchn_do_upcall+0x1c4/0x2e0
>> [257345.006208] [<ffffffff808a308e>] do_hypervisor_callback+0x1e/0x30
>> [257345.006208] [<ffffffff8014802a>] fget_light+0x3a/0xc0
>> [257345.006208] [<ffffffff80159c65>] do_select+0x315/0x660
>> [257345.006208] [<ffffffff8015a15a>] core_sys_select+0x1aa/0x2e0
>> [257345.006208] [<ffffffff8015a347>] sys_select+0xb7/0x110
>> [257345.006208] [<ffffffff808a29bb>] system_call_fastpath+0x1a/0x1f
>> [257345.006208] [<00007f1a6d0081d3>] 0x7f1a6d0081d2
>> [257345.006208] handlers:
>> [257345.006208] [<ffffffff80658a80>] usb_hcd_irq
>> [257345.006208] [<ffffffff80658a80>] usb_hcd_irq
>> [257345.006208] [<ffffffff80658a80>] usb_hcd_irq
>> [257345.006208] Disabling IRQ #18
>> # xm dmesg
>> <nothing here>
>> # xm create WINDOMU
>> booted without problems and I played some Diablo 3 for testing
>> purposes: no performance change, everything normal.
>>
>> Then I analysed the kernel output in dmesg from the kill and found out
>> that IRQ 18 is not in fact my graphics card, as I had always thought,
>> but my USB host controller, which I have forwarded to the domU too. So
>> apparently there is nothing vga-related in the dmesg. Then I
>> thought: okay, if the dom0 in fact doesn't care about the vga state, it
>> might actually be the domU, and the only thing I guess can be
>> responsible for that is this in my domU config:
>>
>> #######################
>> #  PCI Power Management:
>> #
>> #  If it's set, the guest OS will be able to program D0-D3hot states of
>> #  the PCI device for the purpose of low power consumption.
>> #
>> pci_power_mgmt=1
>> #######################
>>
>> My new theory is that this allows the domU (and therefore my Windows)
>> to reset the vga card on boot, so I don't have the problems you have.
>>
>> Question is: are you using this option in your domU, or might that be
>> the difference in our configs?
>>
>>
>>
>>
>> 2012/6/26 Radoslaw Szkodzinski <astralstorm@xxxxxxxxx>:
>> > On Tue, Jun 26, 2012 at 2:51 AM, Casey DeLorme <cdelorme@xxxxxxxxx>
>> > wrote:
>> >> I agree with you that Xen has an awareness, but what I read suggested
>> >> that
>> >> the DomU is supposed to be responsible for the reset.
>> >
>> > Quite silly; many OSes and drivers don't care about device shutdown on
>> > "poweroff".
>> > Why should they? The power will usually be off real soon anyway.
>> > I think currently only Linux cares enough - due to the need to support
>> > kexec.
>> >
>> > Suspend to RAM is a different matter. Perhaps that avenue would be
>> > useful to explore.
>> > (Attempting to S2R the VM instead of shutdown...)
>> >
>> > Still, it'd be nice for Xen to force reset the devices (FLR, then
>> > D0->D3->D0 ASPM, then finally PCI bus reset as a last resort) when the
>> > VM stops using the devices.
>> > Better safe than sorry.
>> >
>> >> In any event, please
>> >> do post your results. If you don't have the same performance
>> >> degradation
>> >> and you can help identify where our configurations differ, it could
>> >> help fix
>> >> the problem which would be awesome.
>> >
>> > --
>> > Radosław Szkodziński
>
>

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users
