
Re: [RFC XEN PATCH v2 2/3] x86/pvh: Open PHYSDEVOP_map_pirq for PVH dom0


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: "Chen, Jiqian" <Jiqian.Chen@xxxxxxx>
  • Date: Thu, 30 Nov 2023 09:43:19 +0000
  • Accept-language: en-US
  • Cc: "Hildebrand, Stewart" <Stewart.Hildebrand@xxxxxxx>, "Deucher, Alexander" <Alexander.Deucher@xxxxxxx>, "Ragiadakou, Xenia" <Xenia.Ragiadakou@xxxxxxx>, "Stabellini, Stefano" <stefano.stabellini@xxxxxxx>, "Huang, Ray" <Ray.Huang@xxxxxxx>, "Huang, Honglei1" <Honglei1.Huang@xxxxxxx>, "Zhang, Julia" <Julia.Zhang@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Juergen Gross <jgross@xxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "Chen, Jiqian" <Jiqian.Chen@xxxxxxx>
  • Delivery-date: Thu, 30 Nov 2023 09:43:42 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [RFC XEN PATCH v2 2/3] x86/pvh: Open PHYSDEVOP_map_pirq for PVH dom0

On 2023/11/30 17:00, Jan Beulich wrote:
> On 30.11.2023 07:44, Chen, Jiqian wrote:
>> On 2023/11/28 23:14, Jan Beulich wrote:
>>> On 24.11.2023 11:41, Jiqian Chen wrote:
>>>> --- a/xen/arch/x86/hvm/hypercall.c
>>>> +++ b/xen/arch/x86/hvm/hypercall.c
>>>> @@ -74,6 +74,8 @@ long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>>>>      {
>>>>      case PHYSDEVOP_map_pirq:
>>>>      case PHYSDEVOP_unmap_pirq:
>>>> +        if (is_hardware_domain(currd))
>>>> +            break;
>>>>      case PHYSDEVOP_eoi:
>>>>      case PHYSDEVOP_irq_status_query:
>>>>      case PHYSDEVOP_get_free_pirq:
>>>
>>> If you wouldn't go the route suggested by Roger, I think you will need
>>> to deny self-mapping requests here.
>> Do you mean below?
>> if (arg.domid == DOMID_SELF)
>>      return;
> 
> That's part of it, yes. You'd also need to check for the actual domain ID of
> the caller domain.
I will add more checks in the next version.
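Just to confirm my understanding, something along the lines of the below is what I will try (only a rough sketch for the map_pirq case; the unmap_pirq case would need the same check using struct physdev_unmap_pirq, and the exact error code is open for discussion):

    case PHYSDEVOP_map_pirq:
        if ( is_hardware_domain(currd) )
        {
            struct physdev_map_pirq map;

            if ( copy_from_guest(&map, arg, 1) )
                return -EFAULT;

            /* A PVH dom0 has no pirqs of its own, so refuse self-mapping. */
            if ( map.domid == DOMID_SELF || map.domid == currd->domain_id )
                return -EOPNOTSUPP;

            break;
        }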

> 
>>> Also note that both here and in patch 1 you will want to adjust a number
>>> of style violations.
>> Could you please describe them in detail? This will greatly assist me in making 
>> modifications in the next version. Thank you!
> 
> Well, in the code above you're missing blanks inside the if(). Please see
> ./CODING_STYLE.
Thank you very much! I will check and modify all my patches to meet the Xen 
code style in the next version.
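For the hunk above, I understand that means e.g.:

    if ( is_hardware_domain(currd) )
        break;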

> 
> Jan

-- 
Best regards,
Jiqian Chen.

 

