Re: [Xen-devel] [PATCH] amd iommu: Dump flags of IO page faults
On Fri, Sep 07, 2012 at 12:01:33PM +0200, Sander Eikelenboom wrote:
> 
> Friday, September 7, 2012, 10:54:40 AM, you wrote:
> 
> > On 09/07/2012 09:32 AM, Sander Eikelenboom wrote:
> >>
> >> Thursday, September 6, 2012, 5:03:05 PM, you wrote:
> >>
> >>> On 09/06/2012 03:50 PM, Sander Eikelenboom wrote:
> >>>>
> >>>> Thursday, September 6, 2012, 3:32:51 PM, you wrote:
> >>>>
> >>>>> On 09/06/2012 12:59 AM, Sander Eikelenboom wrote:
> >>>>>>
> >>>>>> Wednesday, September 5, 2012, 4:42:42 PM, you wrote:
> >>>>>>
> >>>>>>> Hi Jan,
> >>>>>>> The attached patch dumps the IO page fault flags. The flags show the
> >>>>>>> reason for the fault and tell us whether this is an unmapped interrupt
> >>>>>>> fault or a DMA fault.
> >>>>>>
> >>>>>>> Thanks,
> >>>>>>> Wei
> >>>>>>
> >>>>>>> Signed-off-by: Wei Wang <wei.wang2@xxxxxxx>
> >>>>>>
> >>>>>>
> >>>>>> I have applied the patch and the flags seem to differ between the
> >>>>>> faults:
> >>>>>>
> >>>>>> AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0x0a06, fault address =
> >>>>>> 0xc2c2c2c0, flags = 0x000
> >>>>>> (XEN) [2012-09-05 20:54:16] AMD-Vi: IO_PAGE_FAULT: domain = 0, device
> >>>>>> id = 0x0a06, fault address = 0xc2c2c2c0, flags = 0x000
> >>>>>> (XEN) [2012-09-05 20:54:16] AMD-Vi: IO_PAGE_FAULT: domain = 14, device
> >>>>>> id = 0x0700, fault address = 0xa8d339e0, flags = 0x020
> >>>>>> (XEN) [2012-09-05 20:54:16] AMD-Vi: IO_PAGE_FAULT: domain = 14, device
> >>>>>> id = 0x0700, fault address = 0xa8d33a40, flags = 0x020
> >>>>
> >>>>> OK, so they are not interrupt requests. I guess further information from
> >>>>> your system would be helpful to debug this issue:
> >>>>> 1) xl info
> >>>>> 2) xl list
> >>>>> 3) lspci -vvv (NOTE: not in dom0 but in your guest)
> >>>>> 4) cat /proc/iomem (in both dom0 and your hvm guest)
> >>>>
> >>>> dom14 is not an HVM guest, it's a PV guest.
> >>
> >>> Ah, I see. A PV guest is quite different from HVM: it does not use p2m
> >>> tables as IO page tables, so the no-sharept option does not work in this
> >>> case. PV guests always use separate IO page tables. There might be some
> >>> incorrect mappings in those page tables. I will check this on my side.
> >>
> >> I have reverted the machine to xen-4.1.4-pre (changeset 23353) and kept
> >> everything else the same.
> >> I haven't seen any IO page faults after that.
> >>
> >> I did spot some differences in the output from lspci between Xen 4.1 and
> >> 4.2, related to whether MSI is enabled for the IOMMU device.
> >> I have attached the xl/xm dmesg and lspci output from booting with both
> >> versions.
> >>
> >> lspci:
> >>
> >> 00:00.2 Generic system peripheral [0806]: ATI Technologies Inc RD990 I/O
> >> Memory Management Unit (IOMMU) [1002:5a23]
> >>         Subsystem: ATI Technologies Inc RD990 I/O Memory Management Unit
> >> (IOMMU) [1002:5a23]
> >>         Control: I/O- Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop-
> >> ParErr- Stepping- SERR- FastB2B- DisINTx-
> >>         Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort-
> >> <TAbort- <MAbort- >SERR- <PERR- INTx-
> >>         Latency: 0
> >>         Interrupt: pin A routed to IRQ 10
> >>         Capabilities: [40] Secure device <?>
> >> 4.1:    Capabilities: [54] MSI: Enable- Count=1/1 Maskable- 64bit+
> 
> > Eh... That is interesting. So which dom0 are you using? There is a c/s
> > in 4.2 to prevent recent dom0 kernels from disabling the iommu interrupt
> > (changeset 25492:61844569a432). Otherwise, the iommu cannot send any
> > events, including IO page faults.
> > You could try to revert dom0 to an old version like 2.6 pv_ops to see if
> > you really have no IO page faults on 4.1.
> 
> Ok, I will give that a try; only dom0 will have to be a 2.6 pv_ops, I assume?

So the failure they are describing is due to:
http://lists.xen.org/archives/html/xen-devel/2012-06/msg00668.html

Or you can use the patch that Jan posted
http://lists.xen.org/archives/html/xen-devel/2012-06/msg01196.html
and use the existing kernel.

But more interesting - is this device (00:00.2) in the Xen-pciback.hide
arguments (if not, then don't worry)?
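
For reference, with the built-in xen-pciback the hide list is passed on the
dom0 kernel command line as a series of parenthesized BDFs, so a quick way to
check whether 00:00.2 ended up in there is something like the snippet below
(the example argument is purely illustrative, not a suggestion to actually
hide the IOMMU function from dom0):

# what a hide argument looks like on the dom0 kernel command line, e.g.:
#   xen-pciback.hide=(00:00.2)
# check what dom0 was actually booted with:
grep -o 'xen-pciback\.hide=[^ ]*' /proc/cmdline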