Re: [Xen-devel] [PATCH 1/5] x86/dpci: allow hvm_irq_dpci to handle a variable number of GSIs
On Tue, Apr 18, 2017 at 06:13:54AM -0600, Jan Beulich wrote:
> >>> On 27.03.17 at 12:44, <roger.pau@xxxxxxxxxx> wrote:
> > --- a/xen/include/xen/hvm/irq.h
> > +++ b/xen/include/xen/hvm/irq.h
> > @@ -81,14 +81,16 @@ struct hvm_girq_dpci_mapping {
> >  
> >  /* Protected by domain's event_lock */
> >  struct hvm_irq_dpci {
> > -    /* Guest IRQ to guest device/intx mapping. */
> > -    struct list_head girq[NR_HVM_IRQS];
> >      /* Record of mapped ISA IRQs */
> >      DECLARE_BITMAP(isairq_map, NR_ISAIRQS);
> >      /* Record of mapped Links */
> >      uint8_t link_cnt[NR_LINK];
> > +    /* Guest IRQ to guest device/intx mapping. */
> > +    struct list_head girq[];
> >  };
>
> Considering what you say in the overview mail I don't think the
> comment can be moved without adjusting it, as it doesn't seem
> to reflect Dom0 in any way. Which then puts under question
> whether struct hvm_girq_dpci_mapping is the right data format
> for Dom0 in the first place: With bus, device, and intx taken
> out, all that's left is machine_gsi, and iirc you identity map GSIs.

Yes, I got the same feeling. I've done it this way so that I don't
have to touch a lot of code: it mostly uses the same paths a HVM
guest would use for pass-through. OTOH, there's a lot of unneeded
faff here. I could certainly do with simpler structures for PVH Dom0
GSI passthrough, but then I would have to make more changes to
pt_irq_create_bind and probably the hvm/irq.c functions. I can look
into that.

> Even if the array needed to remain, the sparseness of the GSI
> space opens up the question whether using a simple array here
> is the right choice.
>
> The patch needs re-basing anyway afaict.

Sure, most of those were sent a while back.

Thanks, Roger.
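
For context, here is a minimal standalone sketch (not the actual Xen code) of
what turning girq[NR_HVM_IRQS] into a trailing flexible array member enables:
the structure can be allocated with exactly as many girq[] entries as the
domain needs. The simplified struct, the stand-in list_head, and the
alloc_hvm_irq_dpci()/nr_gsis names are illustrative assumptions, not part of
the series under review.

/*
 * Sketch only: allocate a struct ending in a flexible array member with
 * one list head per GSI.  Fields and helpers are simplified stand-ins.
 */
#include <stdlib.h>

struct list_head {
    struct list_head *next, *prev;
};

struct hvm_irq_dpci {
    /* ... isairq_map bitmap and link_cnt[] elided for brevity ... */
    unsigned int nr_girq;       /* hypothetical: remember how many entries */
    struct list_head girq[];    /* flexible array member, one list per GSI */
};

static struct hvm_irq_dpci *alloc_hvm_irq_dpci(unsigned int nr_gsis)
{
    /* One allocation covering the fixed fields plus nr_gsis list heads. */
    struct hvm_irq_dpci *dpci =
        calloc(1, sizeof(*dpci) + nr_gsis * sizeof(dpci->girq[0]));
    unsigned int i;

    if ( !dpci )
        return NULL;

    dpci->nr_girq = nr_gsis;
    for ( i = 0; i < nr_gsis; i++ )
    {
        /* Equivalent of INIT_LIST_HEAD(): an empty list points at itself. */
        dpci->girq[i].next = dpci->girq[i].prev = &dpci->girq[i];
    }

    return dpci;
}

Compared with a fixed girq[NR_HVM_IRQS] array, this lets the caller pick the
size at allocation time for a domain with many GSIs; the sparseness of the
GSI space Jan mentions is what then puts a plain array, flexible or not, into
question as the right data structure.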