
Re: [Xen-devel] [PATCH for-4.6 0/5] xen: arm: Parse PCI DT nodes' ranges and interrupt-map



Hi Suravee,

On 18/02/2015 05:28, Suravee Suthikulanit wrote:

Actually, that seems to be more related to PCI pass-through devices.
Haven't the Cavium guys already done that work to support their PCI device
pass-through?

They were working on it, but so far there is no patch series on the ML. It would be nice to come up with a common solution for MSI (i.e. one shared between GICv2m and the GICv3 ITS).

Anyway, at this point I am able to generate the Dom0 device tree with a
correct v2m node, and I can see the Dom0 GICv2m driver probing and
initializing correctly, as it would on bare metal.

# Snippet from /sys/firmware/fdt showing dom0 GIC node
     interrupt-controller {
         compatible = "arm,gic-400", "arm,cortex-a15-gic";
         #interrupt-cells = <0x3>;
         interrupt-controller;
         reg = <0x0 0xe1110000 0x0 0x1000 0x0 0xe112f000 0x0 0x2000>;
         phandle = <0x1>;
         #address-cells = <0x2>;
         #size-cells = <0x2>;
         ranges = <0x0 0x0 0x0 0xe1100000 0x0 0x100000>;

         v2m {
             compatible = "arm,gic-v2m-frame";
             msi-controller;
             arm,msi-base-spi = <0x40>;
             arm,msi-num-spis = <0x100>;
             phandle = <0x5>;
             reg = <0x0 0x80000 0x0 0x1000>;
         };
     };

linux:~ # dmesg | grep v2m
[    0.000000] GICv2m: Overriding V2M MSI_TYPER (base:64, num:256)
[    0.000000] GICv2m: Node v2m: range[0xe1180000:0xe1180fff], SPI[64:320]
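
For reference, a minimal sketch of how such a v2m sub-node could be emitted while Xen builds the Dom0 device tree. This is only an illustration, not the actual patch: make_v2m_node() is an invented name, and the frame offset and SPI range are the Seattle values shown above. It follows the libfdt sequential-write API that domain_build.c already uses.

#include <stdint.h>
#include <libfdt.h>

/* Hypothetical helper: append a v2m sub-node under the GIC node.
 * reg is relative to the GIC node's ranges (0x80000 -> 0xe1180000). */
static int make_v2m_node(void *fdt)
{
    const uint32_t reg[] = {
        cpu_to_fdt32(0x0), cpu_to_fdt32(0x80000),  /* address */
        cpu_to_fdt32(0x0), cpu_to_fdt32(0x1000),   /* size */
    };
    int res;

    res = fdt_begin_node(fdt, "v2m");
    if ( res )
        return res;

    res = fdt_property_string(fdt, "compatible", "arm,gic-v2m-frame");
    if ( res )
        return res;

    res = fdt_property(fdt, "msi-controller", NULL, 0);
    if ( res )
        return res;

    res = fdt_property_cell(fdt, "arm,msi-base-spi", 64);    /* 0x40 */
    if ( res )
        return res;

    res = fdt_property_cell(fdt, "arm,msi-num-spis", 256);   /* 0x100 */
    if ( res )
        return res;

    res = fdt_property(fdt, "reg", reg, sizeof(reg));
    if ( res )
        return res;

    return fdt_end_node(fdt);
}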

So, while setting up the v2m in the hypervisor, I also call
route_irq_to_guest() for all the SPIs used for MSI (i.e. 64-320 on
Seattle), which forces all MSIs to Dom0. However, we will need to figure
out how to detach and re-route certain interrupts to a specific DomU when
passing through PCI devices in the future.
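
For illustration, routing the whole MSI SPI range to Dom0 during v2m setup could look roughly like the sketch below. The helper name gicv2m_route_spis_to_dom0() is invented, and the route_irq_to_guest() signature has changed across Xen releases (older trees take only (d, irq, devname)), so treat this as a sketch rather than the actual code.

/* Hypothetical helper: hand every SPI backing the v2m frame to Dom0.
 * 256 MSI SPIs starting at 64, per the v2m node quoted above. */
static int gicv2m_route_spis_to_dom0(struct domain *d)
{
    unsigned int spi;
    int res;

    for ( spi = 64; spi < 64 + 256; spi++ )
    {
        /* Identity-map the virtual SPI to the physical SPI for Dom0. */
        res = route_irq_to_guest(d, spi, spi, "gicv2m");
        if ( res )
            return res;
    }

    return 0;
}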

Who decides to assign MSI n to SPI x? Dom0 or Xen?

Wouldn't it be possible to route the SPI dynamically when the domain decides to use MSI n? We would need to implement PHYSDEVOP_map_pirq for MSI.
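
For comparison, the guest side of such a dynamic mapping already exists on x86: the domain asks Xen for a pirq with PHYSDEVOP_map_pirq and only then binds it. A rough sketch of that call, modelled on the x86 MSI path, is below; map_msi_pirq() is an invented name and nothing like this is wired up for ARM yet.

#include <xen/interface/xen.h>
#include <xen/interface/physdev.h>
#include <asm/xen/hypercall.h>

/* Hypothetical guest-side helper: ask Xen to allocate a pirq for one MSI
 * vector of the PCI device at bus/devfn. On ARM, Xen could back the pirq
 * with one of the v2m SPIs and route it to the calling domain. */
static int map_msi_pirq(int bus, int devfn, int entry_nr)
{
    struct physdev_map_pirq map = {
        .domid    = DOMID_SELF,
        .type     = MAP_PIRQ_TYPE_MSI,
        .index    = -1,            /* let Xen pick the MSI vector */
        .pirq     = -1,            /* let Xen pick the pirq */
        .bus      = bus,
        .devfn    = devfn,
        .entry_nr = entry_nr,
    };
    int rc = HYPERVISOR_physdev_op(PHYSDEVOP_map_pirq, &map);

    return rc ? rc : map.pirq;     /* Xen returns the chosen pirq */
}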

[..]

And there you have it... GICv2m MSI(-X) support in Dom0 for Seattle
;)  Thanks to Ian's PCI patch, which made porting much simpler.

Next, I'll clean up the code and send out the Xen patches for review. I'll
also push the Linux changes (adding ARM64 PCI generic host controller
support and Marc's MSI IRQ domain work) to my Linux tree on GitHub.
Then you could give it a try on your Seattle box.

Congrats!

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

