
Re: [Xen-devel] Dom0 linux 4.0 + devel/for-linus-4.1 branch: p2m.c:884:d0v0 gfn_to_mfn failed! gfn=ffffffff001ed type:4



Monday, April 13, 2015, 2:21:21 PM, you wrote:

> On 13/04/15 13:14, Sander Eikelenboom wrote:
>> 
>> Monday, April 13, 2015, 2:07:02 PM, you wrote:
>> 
>>> On 13/04/15 12:21, Sander Eikelenboom wrote:
>>>>
>>>> Monday, April 13, 2015, 11:50:51 AM, you wrote:
>>>>
>>>>> On 13/04/15 10:39, Sander Eikelenboom wrote:
>>>>>> Hi David,
>>>>>>
>>>>>> I seem to have spotted some trouble with a 4.0 dom0 kernel with the 
>>>>>> devel/for-linus-4.1 branch pulled on top.
>>>>>>
>>>>>> Does this remind you of any specific commits in the devel/for-linus-4.1 
>>>>>> branch that could
>>>>>> likely be involved that i could try to revert ?
>>>>
>>>>> Yes.  This will probably be 4e8c0c8c4bf3a (xen/privcmd: improve
>>>>> performance of MMAPBATCH_V2) which makes the kernel try harder to map
>>>>> all GFNs instead of failing on the first one.
>>>>
>>>>> I think this is qemu incorrectly trying to map GFNs.
>>>>
>>>>> David
>>>>
>>>> Reverted that specific one, but still get those messages.
>> 
>>> You'll have to bisect it then.  Because I don't see any other relevant
>>> commits in devel/for-linus-4.1
>> 
>>> David
>> 
>> Ok .. hmm first candidate of the bisect also looks interesting:
>> [628c28eefd6f2cef03b212081b466ae43fd093a3] xen: unify foreign GFN map/unmap 
>> for auto-xlated physmap guests

> Unless your dom0 is PVH, no.

> David

Bisection came back with:

22d8a8938407cb1342af763e937fdf9ee8daf24a is the first bad commit
commit 22d8a8938407cb1342af763e937fdf9ee8daf24a
Author: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Date:   Fri Apr 3 10:28:08 2015 -0400

    xen/pciback: Don't disable PCI_COMMAND on PCI device reset.

    There is no need for this at all. Worst it means that if
    the guest tries to write to BARs it could lead (on certain
    platforms) to PCI SERR errors.

    Please note that with af6fc858a35b90e89ea7a7ee58e66628c55c776b
    "xen-pciback: limit guest control of command register"
    a guest is still allowed to enable those control bits (safely), but
    is not allowed to disable them and that therefore a well behaved
    frontend which enables things before using them will still
    function correctly.

    This is done via a write to the configuration register 0x4, which
    triggers on the backend side:
    command_write
      \- pci_enable_device
         \- pci_enable_device_flags
            \- do_pci_enable_device
               \- pcibios_enable_device
                  \- pci_enable_resources
                    [which enables the PCI_COMMAND_MEMORY|PCI_COMMAND_IO]

    However guests (and drivers) which don't do this could cause
    problems, including the security issues which XSA-120 sought
    to address.

    CC: stable@xxxxxxxxxxxxxxx
    Reported-by: Jan Beulich <jbeulich@xxxxxxxx>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
    Signed-off-by: David Vrabel <david.vrabel@xxxxxxxxxx>
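
For reference, the backend side of the chain quoted above looks roughly like
this (a simplified sketch only, loosely based on command_write() in
drivers/xen/xen-pciback/conf_space_header.c, not the exact upstream code):

#include <linux/pci.h>

/* the IO/MEMORY decode bits are what count as "enabling" the device */
static bool is_enable_cmd(u16 value)
{
        return value & (PCI_COMMAND_MEMORY | PCI_COMMAND_IO);
}

/* reached when the guest writes config register 0x4 (PCI_COMMAND) */
static int command_write_sketch(struct pci_dev *dev, int offset, u16 value)
{
        int err = 0;

        if (!pci_is_enabled(dev) && is_enable_cmd(value)) {
                /* pci_enable_device -> ... -> pci_enable_resources */
                err = pci_enable_device(dev);
        } else if (pci_is_enabled(dev) && !is_enable_cmd(value)) {
                /* af6fc858a35b filters the written value so a guest
                   can't simply clear these bits again */
                pci_disable_device(dev);
        }

        return err;
}

So, if I read the commit right, with 22d8a89 the reset path no longer clears
PCI_COMMAND itself; enabling/disabling the decode bits is left entirely to
writes like the one sketched here.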


Could it be due to the kind of "split brain" relation pciback and qemu have?
qemu+libxl doesn't set things up the way pcifront does: conf space accesses
don't go through pciback (qemu does them by itself) and no event channel is
set up. That also means pciback doesn't install an interrupt handler in dom0
for the devices, since that part of pciback relies on:
  - proper state signaling via xenbus
  - setting up an event channel
  - doing conf space access via pciback
A rough sketch of that PV conf space path is below, for comparison.
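
This is roughly what the PV side does (a minimal sketch only, loosely modelled
on do_pci_op() in drivers/pci/xen-pcifront.c and the xen_pci_op interface from
include/xen/interface/io/pciif.h; the helper name pv_conf_read and the
busy-wait are mine, the real frontend sleeps on a waitqueue):

#include <linux/types.h>
#include <linux/bitops.h>
#include <linux/sched.h>            /* cpu_relax() */
#include <xen/events.h>
#include <xen/interface/io/pciif.h>

/*
 * PV-style conf space read: the frontend never touches the device itself,
 * it hands the request to pciback via the shared page plus event channel.
 */
static u32 pv_conf_read(struct xen_pci_sharedinfo *sh_info, int evtchn,
                        u32 domain, u32 bus, u32 devfn, int offset, int size)
{
        struct xen_pci_op op = {
                .cmd    = XEN_PCI_OP_conf_read,
                .domain = domain,
                .bus    = bus,
                .devfn  = devfn,
                .offset = offset,
                .size   = size,
        };

        sh_info->op = op;
        wmb();          /* publish the op before raising the flag */
        set_bit(_XEN_PCIF_active, (unsigned long *)&sh_info->flags);
        notify_remote_via_evtchn(evtchn);

        /* pcifront sleeps on a waitqueue here; busy-wait to keep it short */
        while (test_bit(_XEN_PCIF_active, (unsigned long *)&sh_info->flags))
                cpu_relax();

        return sh_info->op.value;
}

The point being that in the PV case pciback sees every conf space access and
the xenbus/event channel setup, which is exactly what the qemu path skips.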

Is there actually a technical reason why the Xen PCI passthrough parts in qemu
don't behave more like pcifront does for PV: setting things up properly via
xenbus, an event channel, etc., so the two paths would be more similar?

Or was this more of an oversight when HVM guests were introduced?

--
Sander


