Re: [Xen-devel] [RFC 06/19] xen/arm: Implement hypercall PHYSDEVOP_map_pirq
Hi Ian,

Sorry, I forgot to answer this mail.

On 03/07/14 13:53, Ian Campbell wrote:
> On Thu, 2014-07-03 at 13:02 +0100, Julien Grall wrote:
>>>> Handling virq != pirq will be more complex as we need to take into
>>>> account the hotplug solution.
>>> What's the issue here? Something to do with the irqdesc->irq-pending
>>> lookup? Seems like irqdesc needs to store the domain and virq number
>>> when the irq is passed through. I assume it must store the domain
>>> already.
>> The issues are mostly:
>>  - we need to defer the vGIC IRQ allocation
>>  - add a new hypercall to set up the number of IRQs
>>  - how do we handle hotplug?
> Those all sound like issues with having dynamic numbers of irqs,
> rather than with a non-1:1 virq<->pirq mapping per se.

Yes. I will stick with a static allocation and a 1:1 mapping for this
first version. We could rework it when PCI passthrough is added.

>>> Do we think it is the case that we are eventually going to need a
>>> guest cfg option pci = 0|1? I think the answer is yes. Assigning a
>>> pci device would cause pci=1, or you can set pci=1 to enable hotplug
>>> of pci devices later (i.e. mmio space is reserved, INTx interrupts
>>> are assigned etc).
>> I'm not sure I understand why we would need a "pci" cfg option... For
>> now, this series doesn't aim to support PCI. So I think we could defer
>> this problem until later.
> Yeah, we got onto PCI somehow. So long as we are happy not being able
> to hotplug platform devices I think we don't need an equivalent option
> (the point would be to only set up huge numbers of SPIs, reserve MMIO
> space etc if it was going to be used).

We can reuse the number of IRQs used by the HW. The current series
allocates SPIs for the guest even if we don't plan to pass through a
device. Are we fine with this drawback for Xen 4.5?

Regards,

--
Julien Grall
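
[Editor's note: a minimal C sketch of the static 1:1 virq<->pirq scheme
Julien settles on above. This is not the actual Xen API; all names here
(HOST_NR_SPIS, is_routable_spi, pirq_to_virq) are hypothetical and only
illustrate why the 1:1 model sidesteps the deferred-allocation and
sizing-hypercall issues listed in the thread: the guest IRQ space is
fixed at domain build time from the host GIC, so no translation table
is needed.]

    /*
     * Illustrative sketch only, not Xen code. With a static allocation
     * and a 1:1 virq<->pirq mapping, the guest's vGIC simply mirrors
     * the host SPI space. All names below are hypothetical.
     */
    #include <stdbool.h>

    #define NR_LOCAL_IRQS 32      /* SGIs (0-15) and PPIs (16-31)    */
    #define HOST_NR_SPIS  256     /* would be read from the HW GIC   */

    /* Only SPIs can be routed to a guest; SGIs/PPIs are per-CPU. */
    static bool is_routable_spi(unsigned int pirq)
    {
        return pirq >= NR_LOCAL_IRQS &&
               pirq < NR_LOCAL_IRQS + HOST_NR_SPIS;
    }

    /* Static 1:1 mapping: the guest sees the host IRQ number. */
    static unsigned int pirq_to_virq(unsigned int pirq)
    {
        return pirq;
    }

    /* The inverse lookup is equally trivial. */
    static unsigned int virq_to_pirq(unsigned int virq)
    {
        return virq;
    }

[The cost, as the thread notes, is that every guest's vGIC is sized to
the full host SPI count even when no device is passed through.]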