
Re: [Xen-devel] [PATCH v4 4/6] xen/x86: Allow stubdom access to irq created for msi.



On Thu, Feb 07, 2019 at 01:07:47AM +0100, Marek Marczykowski-Górecki wrote:
> From: Simon Gaiser <simon@xxxxxxxxxxxxxxxxxxxxxx>
> 
> A stubdomain needs to be given sufficient privilege over the guest for
> which it provides emulation in order for PCI passthrough to work
> correctly. When an HVM domain tries to enable MSI, QEMU in the stubdomain
> calls PHYSDEVOP_map_pirq, but later it also needs to call
> XEN_DOMCTL_bind_pt_irq as part of xc_domain_update_msi_irq. Allow for
> that as part of PHYSDEVOP_map_pirq.
> 
> This is not needed for PCI INTx, because the IRQ in that case is known
> beforehand and the stubdomain is given permission over this IRQ by
> libxl__device_pci_add (there's a do_pci_add against the stubdomain).
> 
> Based on 
> https://github.com/OpenXT/xenclient-oe/blob/5e0e7304a5a3c75ef01240a1e3673665b2aaf05e/recipes-extended/xen/files/stubdomain-msi-irq-access.patch
>  by Eric Chanudet <chanudete@xxxxxxxxxxxx>.
> 
> Signed-off-by: Simon Gaiser <simon@xxxxxxxxxxxxxxxxxxxxxx>
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@xxxxxxxxxxxxxxxxxxxxxx>
> ---
> Changes in v3:
>  - extend commit message
> Changes in v4:
>  - add missing destroy_irq on error path
> 
> With this patch, the stubdomain will be able to create and map multiple
> IRQs (DoS possibility?), as only the target domain is validated in
> practice. Is that ok? If not, what additional limits could be applied
> here?
> In the INTx case the problem doesn't apply, because the toolstack grants
> access to a particular IRQ and no allocation happens on the stubdomain's
> request. But in the MSI case it isn't that easy, as the IRQ number isn't
> known beforehand (as explained in the commit message).
> ---
>  xen/arch/x86/irq.c     | 24 ++++++++++++++++++++++++
>  xen/arch/x86/physdev.c |  9 +++++++++
>  2 files changed, 33 insertions(+)
> 
> diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
> index 8b44d6c..5e5dcac 100644
> --- a/xen/arch/x86/irq.c
> +++ b/xen/arch/x86/irq.c
> @@ -2674,6 +2674,22 @@ int allocate_and_map_msi_pirq(struct domain *d, int index, int *pirq_p,
>          {
>      case MAP_PIRQ_TYPE_MULTI_MSI:
>              irq = create_irq(NUMA_NO_NODE);
> +            if ( !(irq < nr_irqs_gsi || irq >= nr_irqs) &&
> +                    current->domain->target == d )
> +            {
> +                ret = irq_permit_access(current->domain, irq);
> +                if ( ret ) {
> +                    dprintk(XENLOG_G_ERR,
> +                            "dom%d: can't grant its stubdom (%d) access to "
> +                            "irq %d for msi: %d!\n",
> +                            d->domain_id,
> +                            current->domain->domain_id,
> +                            irq,
> +                            ret);
> +                    destroy_irq(irq);
> +                    return ret;

I'm afraid this won't work for devices that support multiple MSI vectors.
Note that map_domain_pirq also has a call to create_irq, and you are
not adding the stubdom permissions there.

IMO, the safer way to fix this would be to modify create_irq and
destroy_irq so that you give permissions to the stubdomain in the same
place that hardware domain permissions are given. Note that you will
have to change the function to take an extra domain parameter
AFAICT.
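
Something along these lines is what I have in mind (only a rough,
untested sketch on my side; the grant_to parameter name and the exact
error handling are placeholders, the rest of create_irq would stay as it
is today):

    /* Sketch: let the caller name a domain that should be granted access
     * to the newly created IRQ in addition to the hardware domain;
     * passing NULL keeps the current behaviour. */
    int create_irq(nodeid_t node, struct domain *grant_to)
    {
        int irq, ret;

        /* ... existing descriptor allocation and vector setup,
         * unchanged ... */

        if ( hardware_domain )
        {
            ret = irq_permit_access(hardware_domain, irq);
            if ( ret )
                printk(XENLOG_G_ERR
                       "Could not grant Dom%d access to IRQ%d (error %d)\n",
                       hardware_domain->domain_id, irq, ret);
        }

        /* New: also grant the domain the caller asked for (the stubdomain
         * in the PHYSDEVOP_map_pirq case). */
        if ( grant_to && grant_to != hardware_domain )
        {
            ret = irq_permit_access(grant_to, irq);
            if ( ret )
            {
                destroy_irq(irq);
                return ret;
            }
        }

        return irq;
    }

The call sites in allocate_and_map_msi_pirq and map_domain_pirq would
then pass current->domain when current->domain->target == d (and NULL
otherwise), and destroy_irq would need a matching irq_deny_access for
that domain.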

Alternatively the permissions could be granted/revoked in
{un}map_domain_pirq, which already contains a call to
irq_access_permitted/irq_deny_access; I think I've suggested this in a
previous version already [0]. This seems less intrusive than modifying
create_irq/destroy_irq, if viable.
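
Roughly, that could look like the following (again only an untested
sketch; where exactly it sits relative to the existing
irq_access_permitted() check in map_domain_pirq is an assumption):

    /* In map_domain_pirq(), once irq is known and the existing checks
     * have passed: also permit the calling stubdomain, not just the
     * target domain. */
    if ( current->domain != d && current->domain->target == d )
    {
        ret = irq_permit_access(current->domain, irq);
        if ( ret )
        {
            printk(XENLOG_G_ERR
                   "dom%d: cannot grant its stubdom%d access to IRQ%d: %d\n",
                   d->domain_id, current->domain->domain_id, irq, ret);
            return ret;
        }
    }

    /*
     * unmap_domain_pirq() would then need the matching irq_deny_access();
     * note that current there is not necessarily the stubdomain (e.g. on
     * domain destruction), so the revoke cannot simply use
     * current->domain.
     */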

Thanks, Roger.

[0] https://lists.xenproject.org/archives/html/xen-devel/2019-01/msg01240.html
