
Re: [PATCH v6 4/5] [FUTURE] xen/arm: enable vPCI for domUs



On Fri, Dec 01, 2023 at 06:56:32PM -0800, Stefano Stabellini wrote:
> On Fri, 1 Dec 2023, Roger Pau Monné wrote:
> > On Mon, Nov 13, 2023 at 05:21:13PM -0500, Stewart Hildebrand wrote:
> > > @@ -1618,6 +1630,14 @@ int iommu_do_pci_domctl(
> > >          bus = PCI_BUS(machine_sbdf);
> > >          devfn = PCI_DEVFN(machine_sbdf);
> > >  
> > > +        if ( needs_vpci(d) && !has_vpci(d) )
> > > +        {
> > > +            printk(XENLOG_G_WARNING "Cannot assign %pp to %pd: vPCI support not enabled\n",
> > > +                   &PCI_SBDF(seg, bus, devfn), d);
> > > +            ret = -EPERM;
> > > +            break;
> > 
> > I think this is likely too restrictive going forward.  The current
> > approach is indeed to enable vPCI on a per-domain basis because that's
> > how PVH dom0 uses it, due to being unable to use ioreq servers.
> > 
> > If we start to expose vPCI support to guests, the interface should be on
> > a per-device basis, so that vPCI could be enabled for some devices,
> > while others could still be handled by ioreq servers.
> > 
> > We might want to add a new flag to xen_domctl_assign_device (used by
> > XEN_DOMCTL_assign_device) in order to signal whether the device will
> > use vPCI.
> 
> Actually I don't think this is a good idea. I am all for flexibility, but
> supporting multiple different configurations comes at an extra cost for
> both maintainers and contributors. I think we should try to reduce the
> number of configurations we support rather than increase it
> (especially on x86, where we already have PV, PVH, and HVM).

I think it's perfectly fine to initially require a domain to have all
its devices passed through using either vPCI or ioreqs, but IMO the
interface should allow for such differentiation in the future.
That's why I think introducing a domain-wide vPCI flag might not be
the best option going forward.

It would be perfectly fine for XEN_DOMCTL_assign_device to set a
domain-wide vPCI flag, iow:

if ( HYPERCALL_VPCI_FLAG_SET && !has_vpci(d) )
{
    /* Refuse to mix passthrough backends within a single domain. */
    if ( has_arch_pdevs(d) )
    {
        printk(XENLOG_G_ERR
               "%pd: all passthrough devices must use the same backend\n", d);
        return -EINVAL;
    }

    /* Set vPCI domain flag */
}
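
FWIW, the per-device flag I mentioned above could simply be a new bit
in the flags field of struct xen_domctl_assign_device, next to the
existing XEN_DOMCTL_DEV_RDM_RELAXED.  A minimal sketch from memory,
with XEN_DOMCTL_DEV_USE_VPCI being a made-up name, not an existing
interface:

struct xen_domctl_assign_device {
    uint32_t dev;                      /* XEN_DOMCTL_DEV_* */
    uint32_t flags;
#define XEN_DOMCTL_DEV_RDM_RELAXED  (1u << 0) /* existing */
#define XEN_DOMCTL_DEV_USE_VPCI     (1u << 1) /* hypothetical: use vPCI */
    union {
        struct {
            uint32_t machine_sbdf;     /* machine PCI ID of the device */
        } pci;
    } u;
};

HYPERCALL_VPCI_FLAG_SET above would then just be a test of that bit in
the domctl handler.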

We have already agreed that we want to aim for a setup where ioreqs
and vPCI could be used for the same domain, but I guess you assumed
that ioreqs would be used for emulated devices exclusively and vPCI
for passthrough devices?

Overall, if we agree that ioreqs and vPCI should co-exist for a domain,
I'm not sure there's much reason to limit ioreqs to handling only
emulated devices; that seems like an arbitrary limitation.

> I don't think we should enable IOREQ servers to handle PCI passthrough
> for PVH guests and/or guests with vPCI. If the domain has vPCI, PCI
> passthrough can be handled by vPCI just fine. I think this would be a
> good anti-feature to have (a goal to explicitly not add this feature) to
> reduce complexity. Unless you see a specific use case to add support for
> it?

There are passthrough devices (GPUs) that might require some extra
mediation on dom0 (like the Intel GVT-g thing), which would force the
use of ioreqs for passthrough.

It's important IMO that the interfaces we introduce are correct,
because they end up shaping the configuration options that we expose
to users in xl/libxl.  While both XEN_DOMCTL_createdomain and
XEN_DOMCTL_assign_device are unstable interfaces, how the vPCI option
is placed there will ultimately influence how the option gets exposed
in xl/libxl, and keeping the interface there stable matters for end
user sanity.
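
E.g. if the choice ends up being per-device, it would naturally
surface in xl as a per-device key in the pci list, while a domain-wide
flag pushes towards a global option.  Both spellings below are made
up, purely to illustrate the difference:

# hypothetical per-device spelling
pci = [ '01:00.0,backend=vpci', '02:00.0,backend=ioreq' ]

# hypothetical domain-wide spelling
vpci = 1
pci = [ '01:00.0', '02:00.0' ]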

Thanks, Roger.



 

