
Re: [RFC XEN PATCH v2 2/3] x86/pvh: Open PHYSDEVOP_map_pirq for PVH dom0



On Thu, Nov 30, 2023 at 06:32:00AM +0000, Chen, Jiqian wrote:
> 
> On 2023/11/28 22:17, Roger Pau Monné wrote:
> > On Fri, Nov 24, 2023 at 06:41:35PM +0800, Jiqian Chen wrote:
> >> If we run Xen with a PVH dom0 and an HVM domU, a pirq is mapped for a
> >> passthrough device by using its GSI; see xen_pt_realize->xc_physdev_map_pirq
> >> and pci_add_dm_done->xc_physdev_map_pirq. xc_physdev_map_pirq then calls
> >> into Xen, but in hvm_physdev_op PHYSDEVOP_map_pirq is not allowed, because
> >> currd is the PVH dom0 and PVH has no X86_EMU_USE_PIRQ flag, so the call
> >> fails the has_pirq check.
> >>
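For anyone following along, the toolstack side of that path boils down to a
map call at assignment time plus an unmap on the error/teardown path. A
minimal sketch using the libxenctrl calls named above (the helper names and
error handling are made up for illustration):

    /* Illustrative sketch only -- not the QEMU/libxl code itself. */
    #include <xenctrl.h>

    /* Map a host GSI to a pIRQ for an HVM guest with a passed-through device. */
    static int map_gsi_for_guest(xc_interface *xch, uint32_t domid, int gsi)
    {
        int pirq = -1;

        /* PHYSDEVOP_map_pirq under the hood; this is the call that currently
         * fails when issued from a PVH dom0. */
        if ( xc_physdev_map_pirq(xch, domid, gsi, &pirq) )
            return -1;

        /* The pIRQ would then be bound to the guest (XEN_DOMCTL_bind_pt_irq);
         * if that fails, the error path has to unmap it again, which is why
         * PHYSDEVOP_unmap_pirq needs to be reachable as well. */
        return pirq;
    }

    /* Error/teardown path: undo the mapping (PHYSDEVOP_unmap_pirq). */
    static void unmap_guest_pirq(xc_interface *xch, uint32_t domid, int pirq)
    {
        xc_physdev_unmap_pirq(xch, domid, pirq);
    }
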
> >> So, I think we may need to allow PHYSDEVOP_map_pirq when currd is dom0 (at
> > 
> > And PHYSDEVOP_unmap_pirq also?
> Yes, in the failure path PHYSDEVOP_unmap_pirq will be called as well. I will
> add a description of this to the commit message.
> 
> > 
> >> present dom0 is PVH).
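For reference, the gate being hit is the has_pirq() check in hvm_physdev_op();
a rough sketch of the shape such a relaxation could take (illustrative only,
not the actual patch) might be:

    /* Sketch of hvm_physdev_op() (xen/arch/x86/hvm/hypercall.c) -- not the
     * real patch, just the shape of the carve-out being discussed. */
    static long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
    {
        struct domain *currd = current->domain;

        switch ( cmd )
        {
        case PHYSDEVOP_map_pirq:
        case PHYSDEVOP_unmap_pirq:
            /* Hypothetical carve-out: let the (PVH) hardware domain map and
             * unmap pIRQs on behalf of an HVM guest with a passed-through
             * device, even though dom0 itself has no X86_EMU_USE_PIRQ. */
            if ( is_hardware_domain(currd) )
                break;
            /* fall through */
        case PHYSDEVOP_eoi:
        case PHYSDEVOP_irq_status_query:
        case PHYSDEVOP_get_free_pirq:
            if ( !has_pirq(currd) )
                return -ENOSYS;
            break;

        default:
            return -ENOSYS;  /* other PHYSDEVOP cases elided for brevity */
        }

        return do_physdev_op(cmd, arg);
    }
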
> > 
> > IMO it would be better to implement a new set of DMOP hypercalls that
> > handle the setup of interrupts from physical devices that are assigned
> > to a guest.  That should allow us to get rid of the awkward PHYSDEVOP
> > + XEN_DOMCTL_{,un}bind_pt_irq hypercall interface, which currently
> > prevents QEMU from being hypervisor version agnostic (since the domctl
> > interface is not stable).
> > 
> > I understand this might be too much to ask for, but something to take
> > into account.
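To make that suggestion a little more concrete: the kind of stable interface
meant here could look roughly like the sketch below. The op name and fields
are purely hypothetical (no such DMOP exists), shown only to illustrate the
shape of something that could subsume the PHYSDEVOP_map_pirq +
XEN_DOMCTL_bind_pt_irq pair for this use case:

    /* Purely hypothetical -- invented for illustration, not an existing or
     * proposed interface. */
    #define XEN_DMOP_map_gsi_pirq 64        /* op number picked arbitrarily */

    struct xen_dm_op_map_gsi_pirq {
        uint32_t gsi;     /* IN:  host GSI of the passed-through device */
        uint32_t flags;   /* IN:  trigger/polarity, if ever needed      */
        uint32_t pirq;    /* OUT: pIRQ mapped and bound to the guest    */
    };
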
> Yes, that will be a complex project. I think the current change can meet the
> needs. We can take DMOP into account in the future. Thank you.

The issue with this approach is that we always do things in a rush and
cut corners, and then never pay back the debt.  Anyway, I'm not going
to block this, and I'm not blaming you.

Sadly this is just focused on getting something working in the short
term rather than thinking long term about a maintainable interface.

Regards, Roger.
