Re: [Xen-devel] [PATCH 1/2] xen, libxc: init msix addr/data with value from qemu via hypercall
On 2013-05-10 03:05, Jan Beulich wrote:

This pattern is used only when hvm_pirq is enabled; the kernel needs to
check for XEN_PIRQ_MSI_DATA. It's not an issue if the data is 0 at the
first driver load - the kernel will then call __write_msi_msg() with the
pirq and XEN_PIRQ_MSI_DATA set.

> >>> Zhenzhong Duan <zhenzhong.duan@xxxxxxxxxx> 05/09/13 5:02 AM >>>
>> On 2013/5/8 20:03, Jan Beulich wrote:
>>> But of course I still don't really understand why all of a sudden
>>> this needs to be passed in rather than being under the full control
>>> of the hypervisor at all times. Perhaps this is related to me not
>>> understanding why the kernel would read these values at all: there's
>>> no other place in the kernel where the message would be read before
>>> first getting written (in fact, apart from the use of
>>> __read_msi_msg() by the Xen code, there's only one other user under
>>> arch/powerpc/, and there - according to the accompanying comment -
>>> this is just to save away the data for later use during resume).
>> There is a bug if msi_ad is not passed in.
>> When the driver is first loaded:
>> kernel.__read_msi_msg()       (gets all zero)
> But you don't even comment on the apparently bogus use of the
> function here.

Because the pirq and the related mapping and binding are not freed
between driver load-unload-load; they are only freed when the device
is detached.

>> kernel.__write_msi_msg(pirq)  (ioreq passed to qemu, as no
>>                                msixtbl_entry is established yet)
>> qemu.pt_msi_update_one()
>> xc_domain_update_msi_irq()    (msixtbl_entry dynamically allocated
>>                                with msi_ad all zero)
>> then driver unload, ...
>> driver load again,
>> kernel.__read_msi_msg()       (gets all zero from Xen, as the
>>                                accelerated entry was just established
>>                                with all zero)
> If all zeroes get returned, why would the flow here be different than
> above?

We should try to use the last pirq.

Regards
zduan
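
[Editor's note, for readers following the thread: the pattern referred to
above is that, with hvm_pirq enabled, the kernel stashes the pirq in the
MSI message itself and tags the data word with the XEN_PIRQ_MSI_DATA
magic, so that a later __read_msi_msg() can recover the last pirq instead
of allocating a new one (in kernels of that era this logic lives in
xen_msi_compose_msg()/xen_hvm_setup_msi_irqs() in arch/x86/pci/xen.c).
The sketch below is a minimal user-space model of that idea only, not the
actual kernel code; the constant and helper names are made up for
illustration.]

/* Minimal user-space model of the "stash the pirq in the MSI message"
 * pattern discussed above.  Constants and helpers are illustrative only;
 * they are NOT the real kernel definitions.
 */
#include <stdint.h>
#include <stdio.h>

struct msi_msg {
    uint32_t address_lo;
    uint32_t address_hi;
    uint32_t data;
};

/* Hypothetical magic tag standing in for the kernel's XEN_PIRQ_MSI_DATA. */
#define DEMO_PIRQ_MSI_DATA  0x4242u

/* Encode: remember the pirq in the address field and tag the data word. */
static void demo_compose_msg(unsigned int pirq, struct msi_msg *msg)
{
    msg->address_hi = 0;
    msg->address_lo = pirq;           /* real code packs it into dest-ID bits */
    msg->data       = DEMO_PIRQ_MSI_DATA;
}

/* Decode: if the tag is present, reuse the stashed pirq; otherwise signal
 * that a fresh pirq must be allocated (what happens when the message reads
 * back as all zeroes, e.g. from a freshly created, zero-filled msixtbl_entry).
 */
static int demo_recover_pirq(const struct msi_msg *msg)
{
    if (msg->data != DEMO_PIRQ_MSI_DATA)
        return -1;                    /* no stashed pirq - allocate a new one */
    return (int)msg->address_lo;
}

int main(void)
{
    struct msi_msg msg = { 0, 0, 0 }; /* what __read_msi_msg() sees first time */

    printf("first load:  recovered pirq = %d\n", demo_recover_pirq(&msg));

    demo_compose_msg(55, &msg);       /* kernel writes pirq + magic tag */
    printf("second load: recovered pirq = %d\n", demo_recover_pirq(&msg));
    return 0;
}

[The failure described in the thread is the first branch repeating on
reload: when the hypervisor's msixtbl_entry was created zero-filled, the
read-back message carries neither the tag nor the pirq, so the kernel
cannot reuse the still-allocated pirq.]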