
Re: [Xen-devel] [question] Does HVM domain support xen-pcifront/xen-pciback?



On Wed, Feb 8, 2012 at 1:14 AM, Konrad Rzeszutek Wilk
<konrad.wilk@xxxxxxxxxx> wrote:
> On Tue, Feb 07, 2012 at 09:36:31PM +0800, cody wrote:
>> On 02/07/2012 01:58 AM, Konrad Rzeszutek Wilk wrote:
>> >On Mon, Feb 06, 2012 at 04:32:05PM +0800, Kai Huang wrote:
>> >>Hi,
>> >>
>> >>I see in pcifront_init that if the domain is not a PV domain, pcifront_init just
>> >>returns an error. So it seems HVM domains do not support the
>> >>xen-pcifront/xen-pciback mechanism? If that is true, why? I think
>> >Yup. B/c the only thing that the PV PCI protocol does is enable
>> >PCI configuration emulation. And if you boot an HVM guest - QEMU does
>> >that already.
>> >
>> I heard that qemu does not support PCIe emulation, and that Xen
>> provides only the legacy IO port mechanism, not an MMIO mechanism, to
>> the guest for configuration space access. Is this true?
>
> The upstream version has an X58 north bridge implementation (ioh3420.c)
> to support this.
>
> In regards to the MMIO mechanism, are you talking about MSI-X and such? Then
> the answer is that it does. QEMU traps when a guest tries to write MSI vectors
> in the BAR space and translates those into the appropriate Xen calls to set up
> vectors for the guest.

The MMIO mechanism means software can access PCI configuration space
through memory space, using normal mov instructions, just like ordinary
memory accesses. I believe modern PCs all provide this mechanism.
Basically, a physical memory address range is reserved for PCI
configuration space, and its base address is reported in an ACPI table
(the MCFG table). If there is no such information in the ACPI tables,
we have to fall back to the legacy IO port mechanism.
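
To illustrate what I mean, here is a rough sketch of an ECAM-style
config space read, assuming the base address has already been found in
the MCFG table and mapped (ecam_base below is just a placeholder
parameter, not Xen code):

/* Sketch of a PCIe ECAM (MMCONFIG) config space read.  ecam_base is
 * assumed to be the already-mapped base taken from the ACPI MCFG table. */
#include <stdint.h>

static uint32_t ecam_read32(volatile uint8_t *ecam_base,
                            uint8_t bus, uint8_t dev,
                            uint8_t fn, uint16_t off)
{
    /* Each function gets a 4KB window: (bus << 20) | (dev << 15) | (fn << 12),
     * so offsets up to 4KB (PCIe extended config space) are reachable. */
    uintptr_t where = ((uintptr_t)bus << 20)
                    | ((uintptr_t)dev << 15)
                    | ((uintptr_t)fn  << 12)
                    | (off & 0xFFCu);
    return *(volatile uint32_t *)(ecam_base + where);  /* plain MMIO load */
}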

I am not specifically talking about MSI-X. Accessing PCI configuration
space through IO ports has the limitation that we can only reach the
first 256 bytes of configuration space, since that mechanism was
designed for the original PCI bus. For a PCIe device, configuration
space is extended to 4KB, which means we cannot access anything beyond
the first 256 bytes using the legacy IO port mechanism. This is why a
PCIe device requires the MMIO mechanism to access its configuration
space. If a PCIe device implements some capability, such as the MSI-X
capability, beyond the first 256 bytes of its configuration space, we
will never be able to enable MSI-X using the IO port mechanism.
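
For comparison, the legacy mechanism looks roughly like this (a sketch
for x86 Linux userspace, assuming iopl(3) has been granted; note the
register offset field is only 8 bits wide):

/* Sketch of a legacy 0xCF8/0xCFC config space read.  The register
 * offset is limited to 8 bits, so only the first 256 bytes of config
 * space are reachable -- PCIe extended space (0x100-0xFFF) is not. */
#include <stdint.h>
#include <sys/io.h>

static uint32_t pci_cf8_read32(uint8_t bus, uint8_t dev,
                               uint8_t fn, uint8_t off)
{
    uint32_t addr = (1u << 31)              /* enable bit */
                  | ((uint32_t)bus << 16)
                  | ((uint32_t)dev << 11)
                  | ((uint32_t)fn  << 8)
                  | (off & 0xFCu);          /* only 8 bits of offset */
    outl(addr, 0xCF8);                      /* address port */
    return inl(0xCFC);                      /* data port */
}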

I am not familiar with Xen and don't know whether Xen provides an MMIO
mechanism to the guest for PCI configuration space access. (To provide
it, I think we would need to report the base address of the
configuration space in the guest's ACPI tables, and mark the pages of
the configuration space as non-accessible, since the hypervisor needs
to trap the guest's configuration space accesses.)
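
For reference, the per-segment entry in the ACPI MCFG table (where that
base address would be reported to the guest) looks like this, as far as
I know from the PCI Firmware spec -- a sketch for illustration only:

/* Layout of one allocation entry in the ACPI MCFG table (per the
 * PCI Firmware spec).  This is what a guest would need to see in order
 * to locate the ECAM region for a PCI segment. */
#include <stdint.h>

struct acpi_mcfg_entry {
    uint64_t base_address;   /* ECAM base for this PCI segment */
    uint16_t pci_segment;    /* PCI segment group number */
    uint8_t  start_bus;      /* first bus number covered */
    uint8_t  end_bus;        /* last bus number covered */
    uint32_t reserved;
} __attribute__((packed));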

-cody

>
>>
>> If using the IO port mechanism, we can only access the first 256 bytes of
>> configuration space, but if using the PV PCI protocol, we do not have
>> such a limitation. I think this is an advantage of the PV PCI protocol. Of
>> course, if Xen provides an MMIO mechanism to the guest for configuration
>> space, that will not have this limitation either.
>
> What do you mean by MMIO mechanism? Setup of MSI-x vectors?

No. See above.

>>
>> >>technically there's nothing that blocks supporting pcifront/pciback
>> >>in HVM, and for performance reasons there would be benefits if HVM
>> >>supported PV PCI operations.
>> >Nope. The PCI operations are just for writing configuration data in the
>> >PCI space, which is done mostly when a driver starts/stops and no more.
>> >
>> >The performance issue is with interrupts and how to inject them into a guest - and
>> >there the PV path is much faster than HVM due to the emulation layer
>> >complexity.
>> >However, work by Stefano on making that path shorter has made the interrupt
>> >injection much, much faster.
>> I think the PV PCI protocol could be used for other purposes in the future,
>> such as PF/VF communication. In that case it would be better if HVM
>> domains could support the PV PCI protocol.
>
> What is 'PF/VF' communication? Is it just the negotiating in the guest of
> parameters for the PF to inject into the VF? That seems like a huge security risk.
>
> Or is it some other esoteric PCI configuration option that is not
> available in the VF's BARs?
>

I think with a real SR-IOV card there will be cases where the VF needs to
get data (registers, or memory data) from the PF (or send data to it) to
work correctly, not just configuration space data. It's highly card
specific, and we cannot assume what the dependencies between PF and VF
are for a specific card. I have no experience with SR-IOV card driver
development, so I can't tell exactly what kinds of communication will
be needed, but it does exist -- Intel's SR-IOV cards have their own
mailbox protocol for this, though that communication is implemented in
the SR-IOV card hardware.
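
Just to give a flavor of what I mean by a mailbox (this is purely
hypothetical -- the register offsets and names below are made up, since
every real card defines its own protocol):

/* Hypothetical VF->PF mailbox write.  Real SR-IOV NICs (e.g. Intel's)
 * define their own registers and handshake; nothing here matches a real
 * device.  It only illustrates PF/VF communication going through device
 * registers rather than PCI configuration space. */
#include <stdint.h>

#define VF_MBX_DATA 0x0800   /* made-up register offsets in the VF BAR */
#define VF_MBX_CTRL 0x0840
#define MBX_REQ     (1u << 0)

static void vf_mailbox_send(volatile uint8_t *vf_bar,
                            const uint32_t *msg, int words)
{
    for (int i = 0; i < words; i++)
        *(volatile uint32_t *)(vf_bar + VF_MBX_DATA + 4 * i) = msg[i];
    /* Ring the doorbell; the PF driver would pick the message up. */
    *(volatile uint32_t *)(vf_bar + VF_MBX_CTRL) = MBX_REQ;
}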

-cody

>>
>> -cody
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@xxxxxxxxxxxxxxxxxxx
>> http://lists.xensource.com/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

