
Re: [RFC XEN PATCH v2 2/3] x86/pvh: Open PHYSDEVOP_map_pirq for PVH dom0


  • To: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • From: "Chen, Jiqian" <Jiqian.Chen@xxxxxxx>
  • Date: Thu, 30 Nov 2023 06:32:00 +0000
  • Cc: Jan Beulich <jbeulich@xxxxxxxx>, Wei Liu <wl@xxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Juergen Gross <jgross@xxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "Hildebrand, Stewart" <Stewart.Hildebrand@xxxxxxx>, "Deucher, Alexander" <Alexander.Deucher@xxxxxxx>, "Ragiadakou, Xenia" <Xenia.Ragiadakou@xxxxxxx>, "Stabellini, Stefano" <stefano.stabellini@xxxxxxx>, "Huang, Ray" <Ray.Huang@xxxxxxx>, "Huang, Honglei1" <Honglei1.Huang@xxxxxxx>, "Zhang, Julia" <Julia.Zhang@xxxxxxx>, "Chen, Jiqian" <Jiqian.Chen@xxxxxxx>
  • Delivery-date: Thu, 30 Nov 2023 06:32:18 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [RFC XEN PATCH v2 2/3] x86/pvh: Open PHYSDEVOP_map_pirq for PVH dom0

On 2023/11/28 22:17, Roger Pau Monné wrote:
> On Fri, Nov 24, 2023 at 06:41:35PM +0800, Jiqian Chen wrote:
>> If we run Xen with a PVH dom0 and an HVM domU, a pirq is mapped for a
>> passthrough device by its GSI, see xen_pt_realize->xc_physdev_map_pirq
>> and pci_add_dm_done->xc_physdev_map_pirq. xc_physdev_map_pirq then calls
>> into Xen, but in hvm_physdev_op, PHYSDEVOP_map_pirq is not allowed:
>> currd is the PVH dom0, PVH has no X86_EMU_USE_PIRQ flag, and so the
>> has_pirq check fails.
>>
>> So, I think we may need to allow PHYSDEVOP_map_pirq when currd is dom0 (at
> 
> And PHYSDEVOP_unmap_pirq also?
Yes, in the failure path PHYSDEVOP_unmap_pirq will be called. I will add a
description of that to the commit message.

> 
>> present dom0 is PVH).
> 
> IMO it would be better to implement a new set of DMOP hypercalls that
> handle the setup of interrupts from physical devices that are assigned
> to a guest.  That should allow us to get rid of the awkward PHYSDEVOP
> + XEN_DOMCTL_{,un}bind_pt_irq hypercall interface, which currently
> prevents QEMU from being hypervisor version agnostic (since the domctl
> interface is not stable).
> 
> I understand this might be too much to ask for, but something to take
> into account.
Yes, that would be a complex project. I think the current change can meet the
needs; we can take DMOP into account in the future. Thank you.
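For readers following the thread, below is a minimal sketch of the kind of
relaxation being discussed, assuming the existing hvm_physdev_op() dispatcher
in xen/arch/x86/hvm/hypercall.c; the exact hunk in the v2 patch may be shaped
differently, and only the map/unmap sub-ops relevant to this thread are shown:

/*
 * Sketch only: let the PVH hardware domain (dom0) issue
 * PHYSDEVOP_map_pirq/PHYSDEVOP_unmap_pirq for passthrough setup,
 * while other HVM domains keep the existing has_pirq() gating.
 */
static long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
{
    struct domain *currd = current->domain;

    switch ( cmd )
    {
    case PHYSDEVOP_map_pirq:
    case PHYSDEVOP_unmap_pirq:
        /*
         * Allow dom0 to map/unmap pirqs for devices passed through to
         * an HVM guest (QEMU: xen_pt_realize -> xc_physdev_map_pirq),
         * even though PVH itself lacks X86_EMU_USE_PIRQ.
         */
        if ( is_hardware_domain(currd) )
            break;
        /* fall through */
    case PHYSDEVOP_eoi:
    case PHYSDEVOP_irq_status_query:
    case PHYSDEVOP_get_free_pirq:
        if ( !has_pirq(currd) )
            return -ENOSYS;
        break;

    default:
        /* Other PHYSDEVOP_* sub-ops elided in this sketch. */
        break;
    }

    return do_physdev_op(cmd, arg);
}

The unmap case is grouped with map here because, as noted above, the failure
path calls PHYSDEVOP_unmap_pirq as well.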

> 
> Thanks, Roger.

-- 
Best regards,
Jiqian Chen.

 

