Re: [RFC XEN PATCH v3 2/3] x86/pvh: Add (un)map_pirq and setup_gsi for PVH dom0
On Thu, Dec 14, 2023 at 08:55:45AM +0000, Chen, Jiqian wrote:
> On 2023/12/13 15:03, Jan Beulich wrote:
> > On 13.12.2023 03:47, Chen, Jiqian wrote:
> >> On 2023/12/12 17:30, Jan Beulich wrote:
> >>> On 12.12.2023 07:49, Chen, Jiqian wrote:
> >>>> On 2023/12/11 23:31, Roger Pau Monné wrote:
> >>>>> On Mon, Dec 11, 2023 at 12:40:08AM +0800, Jiqian Chen wrote:
> >>>>>> --- a/xen/arch/x86/hvm/hypercall.c
> >>>>>> +++ b/xen/arch/x86/hvm/hypercall.c
> >>>>>> @@ -72,8 +72,11 @@ long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
> >>>>>>
> >>>>>>     switch ( cmd )
> >>>>>>     {
> >>>>>> +    case PHYSDEVOP_setup_gsi:
> >>>>>
> >>>>> I think given the new approach on the Linux side patches, where
> >>>>> pciback will configure the interrupt, there's no need to expose
> >>>>> setup_gsi anymore?
> >>>> The latest patch (the second patch of v3 on the kernel side) does
> >>>> setup_gsi and map_pirq for the passthrough device in pciback, so we
> >>>> need this and below.
> >>>>
> >>>>>
> >>>>>>     case PHYSDEVOP_map_pirq:
> >>>>>>     case PHYSDEVOP_unmap_pirq:
> >>>>>> +        if ( is_hardware_domain(currd) )
> >>>>>> +            break;
> >>>>>
> >>>>> Also Jan already pointed this out in v2: this hypercall needs to be
> >>>>> limited so a PVH dom0 cannot execute it against itself. IOW: refuse
> >>>>> the hypercall if DOMID_SELF or the passed domid matches the current
> >>>>> domain domid.
> >>>> Yes, I remember Jan's suggestion, but since the latest patch (the
> >>>> second patch of v3 on the kernel side) has changed the implementation,
> >>>> it does setup_gsi and map_pirq for dom0 itself, so I didn't add the
> >>>> DOMID_SELF check.
> >>>
> >>> And why exactly would it do specifically the map_pirq? (Even the
> >>> setup_gsi looks questionable to me, but there might be reasons there.)
> >> Map_pirq is to solve the check failure problem (pci_add_dm_done ->
> >> xc_domain_irq_permission -> XEN_DOMCTL_irq_permission ->
> >> pirq_access_permitted -> domain_pirq_to_irq -> returned irq is 0).
> >> Setup_gsi is because the gsi is never unmasked, so the gsi is never
> >> registered (vioapic_hwdom_map_gsi -> mp_register_gsi is never called).
> >
> > And it was previously made pretty clear by Roger, I think, that doing a
> > "map" just for the purpose of granting permission is, well, at best a
> > temporary workaround in the early development phase. If there's presently
> > no hypercall to _only_ grant permission to an IRQ, we need to add one.
> Could you please describe it in detail? Do you mean to add a new hypercall
> to grant irq access for dom0 or domU?
> It seems XEN_DOMCTL_irq_permission is the hypercall to grant irq access
> from dom0 to domU (see XEN_DOMCTL_irq_permission -> irq_permit_access).
> There is no need to add a hypercall to grant irq access.
> We fail here (XEN_DOMCTL_irq_permission -> pirq_access_permitted ->
> domain_pirq_to_irq -> returned irq is 0) because the PVH dom0 doesn't use
> PIRQs, so we can't get the irq from the pirq if "current" is a PVH dom0.

One way to bodge this would be to detect whether the caller of
XEN_DOMCTL_irq_permission is a PV or an HVM domain, and in case of HVM
assume the pirq field is a GSI.  I'm unsure however how that will work with
non-x86 architectures.

It would be better to introduce a new XEN_DOMCTL_gsi_permission, or maybe
XEN_DOMCTL_intr_permission that can take a struct we can use to accommodate
GSIs and other arch-specific interrupt identifiers.

I'm also wondering whether the hypercall should be in a stable interface so
it could be easily used from QEMU if needed.
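Something along the lines of the below, just to illustrate the idea: it is
untested, none of it exists in the tree, and the struct layout, the field
names and the assumption that the GSI can be used directly as the IRQ
number are made up for the sketch:

/* Sketch of a possible public interface (illustrative names only). */
struct xen_domctl_gsi_permission {
    uint32_t gsi;            /* GSI the permission change applies to */
    uint8_t  allow_access;   /* non-zero: grant access; zero: revoke it */
    uint8_t  pad[3];
};

/*
 * Possible handling in the domctl dispatcher.  Assumes the GSI can be used
 * directly as the IRQ number (only true for IO-APIC GSIs outside the legacy
 * ISA range on x86); a real implementation would translate properly and add
 * an XSM check akin to xsm_irq_permission().
 */
case XEN_DOMCTL_gsi_permission:
{
    unsigned int gsi = op->u.gsi_permission.gsi;

    ret = -EINVAL;
    if ( gsi >= nr_irqs_gsi )
        break;

    if ( op->u.gsi_permission.allow_access )
        ret = irq_permit_access(d, gsi);
    else
        ret = irq_deny_access(d, gsi);

    break;
}

The toolstack could then grow a small wrapper (an xc_domain_gsi_permission()
helper, equally hypothetical) for pci_add_dm_done() to call instead of
xc_domain_irq_permission() when dom0 is PVH.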
> So, it seems the logic of XEN_DOMCTL_irq_permission is not suitable for a
> PVH dom0? Maybe it directly needs to get the irq from the caller(domU)
> instead of "current" if the "current" has no PIRQ flag?

Hm, I'm kind of confused by this last sentence, as you mention "the
caller(domU)".  The caller of XEN_DOMCTL_irq_permission will always be dom0
or the hardware domain.

Thanks, Roger.