Re: [Xen-devel] PCI Passthrough ARM Design : Draft1
- To: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
- From: Manish Jaggi <mjaggi@xxxxxxxxxxxxxxxxxx>
- Date: Tue, 16 Jun 2015 10:46:49 -0700
- Cc: Ian Campbell <ian.campbell@xxxxxxxxxx>, Vijay Kilari <vijay.kilari@xxxxxxxxx>, Prasun Kapoor <Prasun.kapoor@xxxxxxxxxxxxxxxxxx>, "Kumar, Vijaya" <Vijaya.Kumar@xxxxxxxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxx>, Julien Grall <julien.grall@xxxxxxxxxx>, Stefano Stabellini <stefano.stabellini@xxxxxxxxxx>, "Kulkarni, Ganapatrao" <Ganapatrao.Kulkarni@xxxxxxxxxxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
- Delivery-date: Tue, 16 Jun 2015 17:47:14 +0000
- List-id: Xen developer discussion <xen-devel.lists.xen.org>
On Tuesday 16 June 2015 10:28 AM, Stefano Stabellini wrote:
On Tue, 16 Jun 2015, Manish Jaggi wrote:
On Tuesday 16 June 2015 09:21 AM, Roger Pau Monné wrote:
On 16/06/15 at 18:13, Stefano Stabellini wrote:
On Thu, 11 Jun 2015, Ian Campbell wrote:
On Thu, 2015-06-11 at 07:25 -0400, Julien Grall wrote:
Hi Ian,
On 11/06/2015 04:56, Ian Campbell wrote:
On Wed, 2015-06-10 at 15:21 -0400, Julien Grall wrote:
Hi,
On 10/06/2015 08:45, Ian Campbell wrote:
4. DomU access / assignment PCI device
--------------------------------------
When a device is attached to a domU, provision has to be made such that it can access the MMIO space of the device and Xen is able to identify the mapping between the guest BDF and the system BDF. Two hypercalls are introduced.
I don't think we want/need new hypercalls here; the same existing hypercalls which are used on x86 should be suitable. That's XEN_DOMCTL_memory_mapping from the toolstack, I think.
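For reference, a minimal toolstack-side sketch of that path, using the existing libxc wrapper xc_domain_memory_mapping; the domid, frame numbers and region size are made-up illustration values:

    /* Sketch: map a device MMIO region into an auto-translated guest via
     * XEN_DOMCTL_memory_mapping. All values are illustrative only. */
    #include <xenctrl.h>

    int map_device_mmio(uint32_t domid)
    {
        xc_interface *xch = xc_interface_open(NULL, NULL, 0);
        if (!xch)
            return -1;

        /* Hypothetical 64KB BAR at machine address 0x50000000, mapped
         * 1:1 into the guest (gfn == mfn here purely for simplicity). */
        unsigned long mfn = 0x50000000UL >> XC_PAGE_SHIFT;
        unsigned long gfn = mfn;
        unsigned long nr  = 16;              /* 16 * 4KB pages = 64KB */

        int rc = xc_domain_memory_mapping(xch, domid, gfn, mfn, nr,
                                          DPCI_ADD_MAPPING);
        xc_interface_close(xch);
        return rc;
    }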
XEN_DOMCTL_memory_mapping is done by QEMU for x86 HVM when the guest (i.e. hvmloader?) is writing in the PCI BAR.
What about for x86 PV? I think it is done by the toolstack there; I don't know what pciback does with accesses to BAR registers.
XEN_DOMCTL_memory_mapping is only used to map memory in the stage-2 page table, so it is only used for auto-translated guests.
In the case of x86 PV, the page table is managed by the guest. The only thing to do is to give MMIO permission to the guest in order to let it use the regions. This is done at boot time in the toolstack.
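Again for reference, the boot-time grant goes through the existing libxc wrapper xc_domain_iomem_permission; the frame numbers below are illustrative assumptions:

    /* Sketch: grant a PV guest permission to map a device's MMIO frames.
     * The guest builds its own PTEs; Xen merely checks the permission.
     * Frame numbers are illustrative only. */
    #include <xenctrl.h>

    int grant_mmio_permission(xc_interface *xch, uint32_t domid)
    {
        unsigned long first_mfn = 0x50000; /* hypothetical BAR frame */
        unsigned long nr_mfns   = 16;      /* 64KB region */

        return xc_domain_iomem_permission(xch, domid, first_mfn, nr_mfns,
                                          1 /* allow access */);
    }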
Ah yes, makes sense.
Manish, this sort of thing and the constraints etc. should be discussed in the doc please.
I think that the toolstack (libxl) will need to call xc_domain_memory_mapping (XEN_DOMCTL_memory_mapping), in addition to xc_domain_iomem_permission, for auto-translated PV guests on x86 (PVH) and ARM guests.
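Combining those two libxc calls, the flow suggested here would look roughly like the sketch below; the ordering and error handling are my assumption:

    /* Sketch of the suggested toolstack flow for a PVH/ARM guest:
     * grant the iomem permission, then establish the stage-2 mapping. */
    #include <xenctrl.h>

    int assign_mmio(xc_interface *xch, uint32_t domid,
                    unsigned long gfn, unsigned long mfn, unsigned long nr)
    {
        int rc = xc_domain_iomem_permission(xch, domid, mfn, nr, 1);
        if (rc)
            return rc;
        return xc_domain_memory_mapping(xch, domid, gfn, mfn, nr,
                                        DPCI_ADD_MAPPING);
    }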
I'm not sure about this. AFAICT you are suggesting that the toolstack (or the domain builder for Dom0) should set up the MMIO regions on behalf of the guest using the XEN_DOMCTL_memory_mapping hypercall.
IMHO the toolstack should not set up MMIO regions; instead the guest should be in charge of setting them in the p2m by using a hypercall (or at least that was the plan on x86 PVH).
Roger.
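One concrete shape such a guest-side hypercall could take, purely as a hedged sketch: XENMEM_add_to_physmap with the device-MMIO map space (XENMAPSPACE_dev_mmio, ARM-only), letting the guest pick the gfn itself. Whether this is the right interface is exactly what is being discussed here; the frame numbers are illustrative:

    /* Guest-kernel sketch: ask Xen to map one device MMIO frame into this
     * domain's p2m at a guest-chosen gfn. Assumes an add-to-physmap style
     * interface (XENMAPSPACE_dev_mmio); values are illustrative only. */
    #include <xen/interface/memory.h>
    #include <asm/xen/hypercall.h>

    static int map_dev_mmio_frame(unsigned long mfn, unsigned long gfn)
    {
        struct xen_add_to_physmap xatp = {
            .domid = DOMID_SELF,
            .space = XENMAPSPACE_dev_mmio, /* ARM-only map space */
            .idx   = mfn,                  /* machine frame of the BAR */
            .gpfn  = gfn,                  /* where the guest wants it */
        };

        return HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp);
    }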
There were a couple of points discussed:
a) There needs to be a hypercall issued from some entity to map the device MMIO space to the domU. What should that entity be:
   i) the toolstack, or
   ii) the domU kernel?
b) Should the MMIO mapping be 1:1?
For (a) I have implemented this in the domU kernel, in the context of the notification received when a device is added on the pcifront bus. That seemed a logical point at which this hypercall should be called. Keep in mind that I am still not aware how this works on x86.
I think that is OK, but we would like to avoid a new hypercall. Roger's suggestion looks good.
Roger, as per your comment, if the guest is in charge of setting up the p2m, which existing hypercall should be used?
For (b): AFAIK the BARs are not updated by the PCI device driver running in domU. So once the BARs are set by firmware or the enumeration logic, they are not changed, certainly not in domU. Then the mapping is always 1:1.
Should the BAR region of the device be updated to make it not 1:1?
I think the point Ian and Julien were trying to make is that we should not rely on the mapping being 1:1. It is OK for the guest not to change the BARs. But given that the memory layout of the guest is different from the one on the host, it is possible that the BAR might have an address that overlaps with a valid memory range in DomU. In that case the guest should map the MMIO region elsewhere in the guest physical space. It might also want to update the virtual BAR accordingly.
(Alternatively the toolstack could come up with an appropriate placement of the virtual BARs and MMIO region mappings in the guest.
Should the pciback driver return a virtual BAR in response to a PCI config space read from the domU? I am not sure whether the pciback driver can query the guest memory map. Is there an existing hypercall?
This is what I was suggesting, but this solution is probably more complex than letting the guest choose.)
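To illustrate the virtual-BAR idea, a hypothetical pciback-style config-space handler; the structure and names below are made up for illustration and simplified from what pciback's real conf_space code would need:

    /* Hypothetical sketch of virtual-BAR handling: the backend returns a
     * guest-visible BAR value instead of the physical one. Names and
     * structure are illustrative, not pciback's actual code. */
    #include <stdint.h>

    struct vbar_state {
        uint32_t real_bar; /* value in the physical device's BAR */
        uint32_t virt_bar; /* value exposed to the guest */
    };

    /* Guest reads a BAR register through pcifront: return the virtual
     * value, never the physical one. */
    static uint32_t bar_read(struct vbar_state *v)
    {
        return v->virt_bar;
    }

    /* Guest writes a BAR register: remember the value but never touch
     * the physical BAR. A real implementation must also emulate the
     * all-ones sizing protocol. */
    static void bar_write(struct vbar_state *v, uint32_t val)
    {
        v->virt_bar = val;
    }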
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel