
Re: [Xen-devel] [RFC] ARM PCI Passthrough design document

On 30/05/17 06:53, Manish Jaggi wrote:
> Hi Julien,
>
> On 5/29/2017 11:44 PM, Julien Grall wrote:
>>
>> On 05/29/2017 03:30 AM, Manish Jaggi wrote:
>>> Hi Julien,
>>
>> Hello Manish,
>>>
>>> On 5/26/2017 10:44 PM, Julien Grall wrote:
>>>> PCI pass-through allows the guest to receive full control of
>>>> physical PCI devices. This means the guest will have full and
>>>> direct access to the PCI device.
>>>>
>>>> ARM supports a kind of guest that exploits as much as possible
>>>> the virtualization support in hardware. The guest will rely on PV
>>>> drivers only for IO (e.g. block, network); interrupts will come
>>>> through the virtualized interrupt controller, so there are no big
>>>> changes required within the kernel.
>>>>
>>>> As a consequence, it would be possible to replace PV drivers by
>>>> assigning real devices to the guest for I/O access. Xen on ARM
>>>> would therefore be able to run unmodified operating systems.
>>>>
>>>> To achieve this goal, it looks more sensible to go towards
>>>> emulating the host bridge (there will be more details later).
>>> IIUC this means that domU would have an emulated host bridge and
>>> dom0 will see the actual host bridge?
>>
>> You don't want the hardware domain and Xen to access the
>> configuration space at the same time. So if Xen is in charge of
>> the host bridge, then an emulated host bridge should be exposed to
>> the hardware domain.
> I believe in the x86 case dom0 and Xen do both access the config
> space, in the context of the PCI device add hypercall. That's when
> the pci_config_XXX functions in Xen are called.

I don't understand how this is related to what I said... If DOM0 has
an emulated host bridge, it will not be possible to have both poking
the real hardware at the same time, as only Xen would do hardware
access.
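
To make the trapping idea concrete, here is a minimal sketch of what
handling a config-space read trapped from the hardware domain's
emulated ECAM window could look like. All the names below
(vpci_ecam_read, the pci_conf_read32 accessor) are illustrative
assumptions, not a final API:

    #include <stdint.h>

    #define ECAM_FN_SHIFT 12        /* 4K of config space per function */

    typedef uint64_t paddr_t;

    /* Assumed low-level config accessor, owned exclusively by Xen. */
    extern uint32_t pci_conf_read32(uint32_t sbdf, unsigned int reg);

    /* Called from the stage-2 fault handler for the emulated window. */
    static uint32_t vpci_ecam_read(paddr_t gpa, paddr_t ecam_base)
    {
        paddr_t off = gpa - ecam_base;
        uint32_t sbdf = off >> ECAM_FN_SHIFT;  /* segment/bus/dev/fn */
        unsigned int reg = off & 0xfff;        /* register offset */

        /*
         * Only Xen pokes the real host bridge; the hardware domain
         * sees whatever value Xen decides to expose.
         */
        return pci_conf_read32(sbdf, reg);
    }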

>> Although, this depends on who is in charge of the host bridge. As
>> you may have noticed, this design document is proposing two ways to
>> handle configuration space accesses. At the moment any generic host
>> bridge (see the definition in the design document) will be handled
>> in Xen, and the hardware domain will have an emulated host bridge.

> So in the case of a generic hb, Xen will manage the config space and
> provide an emulated interface to dom0, and accesses would be trapped
> by Xen. Essentially the goal is to scan all PCI devices and register
> them with Xen (which in turn will configure the SMMU). For a generic
> hb, this can be done either in dom0 or Xen. The only doubt here is
> what extra benefit the emulated hb gives in the case of dom0.

Because you don't have 2 entities accessing the hardware at the same
time. You don't know how it would behave. You may also want to trap
some registers for configuration. Note that this is what is already
done on x86.
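
As an illustration of the kind of filtering trapping enables (a
sketch only, not the x86 vpci code; all the helpers are assumed):

    #include <stdint.h>
    #include <stdbool.h>

    #define PCI_COMMAND 0x04
    #define PCI_BAR0    0x10
    #define PCI_BAR5    0x24

    /* Assumed helpers: real write vs. emulated handling in Xen. */
    extern void pci_conf_write32(uint32_t sbdf, unsigned int reg,
                                 uint32_t val);
    extern void vpci_emulated_write(uint32_t sbdf, unsigned int reg,
                                    uint32_t val);

    static bool reg_is_emulated(unsigned int reg)
    {
        /* BAR writes must be emulated so stage-2 mappings stay valid. */
        if ( reg >= PCI_BAR0 && reg <= PCI_BAR5 )
            return true;
        /* Command register: filter memory-decode/bus-master bits. */
        return reg == PCI_COMMAND;
    }

    static void vpci_ecam_write(uint32_t sbdf, unsigned int reg,
                                uint32_t val)
    {
        if ( reg_is_emulated(reg) )
            vpci_emulated_write(sbdf, reg, val);
        else
            pci_conf_write32(sbdf, reg, val);   /* forward untouched */
    }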

[...]

>>> For mapping the MMIO space of the device in stage-2, we need to
>>> add support in Xen / via a map hypercall in
>>> linux/drivers/xen/pci.c.
>>
>> Mapping MMIO space in stage-2 is not PCI specific and is already
>> addressed in Xen 4.9 (see commit 80f9c31 "xen/arm: acpi: Map MMIO
>> on fault in stage-2 page table for the hardware domain"). So I
>> don't understand why we should care about that here...

> This approach is ok, but IMHO we could have a more granular approach
> than trapping. For ACPI:
>   - Xen parses the MCFG and can map the PCI hb (emulated / original)
>     in stage-2 for dom0.
>   - Device MMIO can be mapped in stage-2 alongside the
>     pci_device_add call.
> What do you think?

There are plenty of ways to map MMIO today and, again, this is not
related to this design document. It does not matter how you are going
to map it (trapping, XENMEM_add_to_physmap, parsing the MCFG, reading
the BARs...) at this stage.
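
Whichever discovery method is used, the end result is the same
stage-2 operation. Roughly, sketched against the 4.9-era
map_mmio_regions() helper (treat the exact names and signatures as
assumptions):

    /* A 1:1 stage-2 MMIO mapping for the hardware domain. */
    static int hwdom_map_device_mmio(struct domain *d, paddr_t addr,
                                     paddr_t size)
    {
        gfn_t gfn = gaddr_to_gfn(addr);  /* 1:1 for the hardware domain */
        mfn_t mfn = maddr_to_mfn(addr);

        return map_mmio_regions(d, gfn, PFN_UP(size), mfn);
    }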

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel