
Re: [PATCH v6 4/5] [FUTURE] xen/arm: enable vPCI for domUs


  • To: Roger Pau Monné <roger.pau@xxxxxxxxxx>, "Stefano Stabellini" <sstabellini@xxxxxxxxxx>
  • From: Stewart Hildebrand <stewart.hildebrand@xxxxxxx>
  • Date: Mon, 4 Dec 2023 14:09:14 -0500
  • Cc: <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, "Bertrand Marquis" <bertrand.marquis@xxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, "Jan Beulich" <jbeulich@xxxxxxxx>, Wei Liu <wl@xxxxxxx>, Paul Durrant <paul@xxxxxxx>
  • Delivery-date: Mon, 04 Dec 2023 19:09:27 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 12/4/23 05:58, Roger Pau Monné wrote:
> On Fri, Dec 01, 2023 at 06:56:32PM -0800, Stefano Stabellini wrote:
>> On Fri, 1 Dec 2023, Roger Pau Monné wrote:
>>> On Mon, Nov 13, 2023 at 05:21:13PM -0500, Stewart Hildebrand wrote:
>>>> @@ -1618,6 +1630,14 @@ int iommu_do_pci_domctl(
>>>>          bus = PCI_BUS(machine_sbdf);
>>>>          devfn = PCI_DEVFN(machine_sbdf);
>>>>  
>>>> +        if ( needs_vpci(d) && !has_vpci(d) )
>>>> +        {
>>>> +            printk(XENLOG_G_WARNING "Cannot assign %pp to %pd: vPCI support not enabled\n",
>>>> +                   &PCI_SBDF(seg, bus, devfn), d);
>>>> +            ret = -EPERM;
>>>> +            break;
>>>
>>> I think this is likely too restrictive going forward.  The current
>>> approach is indeed to enable vPCI on a per-domain basis because that's
>>> how PVH dom0 uses it, due to being unable to use ioreq servers.
>>>
>>> If we start to expose vPCI support to guests the interface should be on
>>> a per-device basis, so that vPCI could be enabled for some devices,
>>> while others could still be handled by ioreq servers.
>>>
>>> We might want to add a new flag to xen_domctl_assign_device (used by
>>> XEN_DOMCTL_assign_device) in order to signal whether the device will
>>> use vPCI.
>>
>> Actually I don't think this is a good idea. I am all for flexibility but
>> supporting multiple different configurations comes at an extra cost for
>> both maintainers and contributors. I think we should try to reduce the
>> amount of configurations we support rather than increasing them
>> (especially on x86 where we have PV, PVH, HVM).
> 
> I think it's perfectly fine to initially require a domain to have all
> its devices either passed through using vPCI or ioreqs, but the
> interface IMO should allow for such differentiation in the future.
> That's why I think introducing a domain wide vPCI flag might not be
> the best option going forward.
> 
> It would be perfectly fine for XEN_DOMCTL_assign_device to set a
> domain wide vPCI flag, iow:
> 
> if ( HYPERCALL_VPCI_FLAG_SET && !has_vpci(d) )
> {
>     if ( has_arch_pdevs(d) )
>     {
>         printk("All passthrough devices must use the same backend\n");
>         return -EINVAL;
>     }
> 
>     /* Set vPCI domain flag */

There is a vPCI initializer that would need to be called too (on Arm, to set up 
mmio handlers). It is normally called earlier during arch_domain_create(), but 
it looks to be okay to call here as long as it is done before the guest boots.

    d->options |= XEN_DOMCTL_CDF_vpci;
    domain_vpci_init(d);

Perhaps the flag should be set inside domain_vpci_init(d).

> }
> 
> We have already agreed that we want to aim for a setup where ioreqs
> and vPCI could be used for the same domain, but I guess you assumed
> that ioreqs would be used for emulated devices exclusively and vPCI
> for passthrough devices?
> 
> Overall if we agree that ioreqs and vPCI should co-exist for a domain,
> I'm not sure there's much reason to limit ioreqs to only handle
> emulated devices, seems like an arbitrary limitation.
> 
>> I don't think we should enable IOREQ servers to handle PCI passthrough
>> for PVH guests and/or guests with vPCI. If the domain has vPCI, PCI
>> Passthrough can be handled by vPCI just fine. I think this should be a
>> good anti-feature to have (a goal to explicitly not add this feature) to
>> reduce complexity. Unless you see a specific usecase to add support for
>> it?
> 
> There are passthrough devices (GPUs) that might require some extra
> mediation on dom0 (like the Intel GVT-g thing) that would force the
> usage of ioreqs to passthrough.
> 
> It's important that the interfaces we introduce are correct IMO,
> because that ends up reflecting on the configuration options that we
> expose to users on xl/libxl.  While both XEN_DOMCTL_createdomain and
> XEN_DOMCTL_assign_device are unstable interfaces, how the vPCI option
> gets placed there will ultimately influence how the option gets
> exposed in xl/libxl, and the interface there is relevant to keep
> stable for end user sanity.
> 
> Thanks, Roger.



 

