Re: [Xen-devel] Design doc of adding ACPI support for arm64 on Xen
Hi Shannon,
On 05/08/15 10:29, Shannon Zhao wrote:
>>
>>> 3)How DOM0 gets the grant table and event channel IRQ information
>>> As said above, we set the hypervisor_id to "XenVMM" to tell DOM0
>>> that it runs on the Xen hypervisor.
>>> Then we save the start address and size
>>> of the grant table in domain->grant_table->start_addr and
>>> domain->grant_table->size. DOM0 can call a new hypercall
>>> GNTTABOP_get_start_addr to get this information.
>>> Similarly for the event channel, we've already saved the interrupt
>>> number in d->arch.evtchn_irq, so DOM0 can call a new hypercall
>>> EVTCHNOP_get_irq to get the IRQ.
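FWIW, on the Linux side I would expect the detection to boil down to
comparing the FADT hypervisor identity field. A sketch, assuming ACPICA
exposes the ACPI 6.0 "Hypervisor Vendor Identity" field as
acpi_gbl_FADT.hypervisor_id:

#include <linux/acpi.h>
#include <linux/string.h>

/* Sketch only: detect Xen from the FADT "Hypervisor Vendor Identity"
 * field introduced in ACPI 6.0. */
static bool __init xen_detect_from_fadt(void)
{
    /* The field only exists from FADT revision 6 onwards. */
    if (acpi_gbl_FADT.header.revision < 6)
        return false;

    /* 8-byte ASCII field; Xen would fill it with "XenVMM". */
    return memcmp(&acpi_gbl_FADT.hypervisor_id, "XenVMM",
                  sizeof("XenVMM") - 1) == 0;
}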
>>
>> It would be nice to go down into more details and write the parameters
>> of the hypercalls in the doc as they will become a newly supported ABI.
>>
> The parameters of GNTTABOP_get_start_addr are as below:
The hypercall not only returns the start address but also the size, so
I would rename it to GNTTABOP_get_region.
> struct gnttab_get_start_addr {
>     /* IN parameters */
>     domid_t dom;
The domain ID is not necessary: we only want to retrieve the grant
table region of the current domain.
>     uint16_t pad;
>     /* OUT parameters */
>     uint64_t start_addr;
>     uint64_t size;
> };
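I.e. something along these lines (just a sketch of what I have in mind;
the name and exact layout are open for discussion):

/*
 * GNTTABOP_get_region (sketch): return the grant table region of the
 * current domain. No IN parameters: the region is always the caller's.
 */
struct gnttab_get_region {
    /* OUT parameters */
    uint64_t start_addr;    /* Guest physical address of the region */
    uint64_t size;          /* Size of the region, in bytes */
};
typedef struct gnttab_get_region gnttab_get_region_t;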
>
> The parameters of EVTCHNOP_get_irq are as below:
> struct evtchn_get_irq {
>     /* IN parameters. */
>     domid_t dom;
Same remark here.
>     uint16_t pad;
>     /* OUT parameters. */
>     uint32_t irq;
We also need to expose the type of the IRQ (i.e., level/edge, ...) as
ACPI and DT do.
This is to avoid being tied to an edge-triggered interrupt for the
event channel.
> };
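So the structure would become something like this (sketch only; the
trigger type encoding below is made up for illustration):

/* Trigger type of the event channel interrupt (illustrative values). */
#define EVTCHN_IRQ_TYPE_EDGE_RISING  0
#define EVTCHN_IRQ_TYPE_LEVEL_HIGH   1

struct evtchn_get_irq {
    /* OUT parameters. */
    uint32_t irq;       /* Interrupt number used for notifications */
    uint32_t irq_type;  /* EVTCHN_IRQ_TYPE_* */
};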
>
OOI, did you consider using an HVM_PARAM rather than introducing two
new hypercalls?
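With the existing HVMOP_get_param interface, DOM0 could retrieve all
three values without any new hypercall. A sketch on the Linux side,
where the three parameter indexes are made up for illustration and
would need to be allocated in the public headers:

#include <xen/hvm.h>    /* hvm_get_parameter() */

/* Hypothetical parameter indexes, for illustration only. */
#define HVM_PARAM_GNTTAB_START  40
#define HVM_PARAM_GNTTAB_SIZE   41
#define HVM_PARAM_EVTCHN_IRQ    42

static int __init xen_fetch_guest_params(void)
{
    uint64_t gnttab_start, gnttab_size, evtchn_irq;
    int rc;

    rc = hvm_get_parameter(HVM_PARAM_GNTTAB_START, &gnttab_start);
    if (rc)
        return rc;
    rc = hvm_get_parameter(HVM_PARAM_GNTTAB_SIZE, &gnttab_size);
    if (rc)
        return rc;
    return hvm_get_parameter(HVM_PARAM_EVTCHN_IRQ, &evtchn_irq);
}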
>> The evtchnop would need to be called something like
>> EVTCHNOP_get_notification_irq and would need to be ARM specific (on x86
>> things are different).
>>
>>
>>
>>> 4)How to map MMIO regions
>>> a)The current implementation maps MMIO regions in Dom0 on demand,
>>> when a data abort traps into Xen.
>>
>> I think this approach is prone to failures. A driver could program a
>> device for DMA involving regions not yet mapped. As a consequence the
>> DMA operation would fail because the SMMU would stop the transaction.
>>
>>
>>> b)Another way is to map all the non-RAM memory regions before boot.
>>> But as suggested by Stefano, this would use a lot of memory to store
>>> the page tables.
>>> c)Another suggested way is to use a hypercall from DOM0 to request
>>> MMIO region mappings after Linux completes parsing the DSDT. But I
>>> didn't find a proper place to issue this call. Does anyone have a
>>> suggestion?
>>
>> I suggested exploiting the bus_notifier callbacks and issuing a
>> hypercall there. In the case of the PCI bus, we are already handling
>> notifications in drivers/xen/pci.c:xen_pci_notifier.
>>
>> Once you have a struct pci_dev pointer in your hand, you can get the
>> MMIO regions from pdev->resource[bar].
>>
>> Does that work?
>>
>
> I investigated and tested this approach. With a bus notifier added for
> the platform bus, it could map the MMIO regions.
>
> Stefano, thanks for your suggestion. Does anyone else have other
> comments on this approach?
I think this approach would be the best one.
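FWIW, a rough sketch of what it could look like on the Linux side.
xen_map_device_mmio() stands in for whatever hypercall wrapper we end
up introducing; it is not an existing interface:

#include <linux/init.h>
#include <linux/ioport.h>
#include <linux/notifier.h>
#include <linux/platform_device.h>
#include <xen/xen.h>

static int xen_platform_notifier(struct notifier_block *nb,
                                 unsigned long action, void *data)
{
    struct device *dev = data;
    struct platform_device *pdev = to_platform_device(dev);
    int i;

    if (action != BUS_NOTIFY_ADD_DEVICE)
        return NOTIFY_DONE;

    for (i = 0; i < pdev->num_resources; i++) {
        struct resource *r = &pdev->resource[i];

        if (resource_type(r) != IORESOURCE_MEM)
            continue;

        /* Hypothetical hypercall wrapper asking Xen to map the
         * region 1:1 in the dom0 stage-2 page tables. */
        if (xen_map_device_mmio(r->start, resource_size(r)))
            return NOTIFY_BAD;
    }

    return NOTIFY_OK;
}

static struct notifier_block platform_device_nb = {
    .notifier_call = xen_platform_notifier,
};

static int __init register_xen_platform_notifier(void)
{
    /* Only dom0 owns the physical devices. */
    if (!xen_initial_domain())
        return 0;

    return bus_register_notifier(&platform_bus_type, &platform_device_nb);
}
arch_initcall(register_xen_platform_notifier);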
Regards,
--
Julien Grall