
Re: [RFC XEN PATCH v3 2/3] x86/pvh: Add (un)map_pirq and setup_gsi for PVH dom0


  • To: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Thu, 14 Dec 2023 10:58:24 +0100
  • Cc: "Daniel P . Smith" <dpsmith@xxxxxxxxxxxxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Juergen Gross <jgross@xxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "Hildebrand, Stewart" <Stewart.Hildebrand@xxxxxxx>, "Ragiadakou, Xenia" <Xenia.Ragiadakou@xxxxxxx>, "Huang, Ray" <Ray.Huang@xxxxxxx>, "Chen, Jiqian" <Jiqian.Chen@xxxxxxx>
  • Delivery-date: Thu, 14 Dec 2023 09:58:32 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 14.12.2023 10:55, Roger Pau Monné wrote:
> On Thu, Dec 14, 2023 at 08:55:45AM +0000, Chen, Jiqian wrote:
>> On 2023/12/13 15:03, Jan Beulich wrote:
>>> On 13.12.2023 03:47, Chen, Jiqian wrote:
>>>> On 2023/12/12 17:30, Jan Beulich wrote:
>>>>> On 12.12.2023 07:49, Chen, Jiqian wrote:
>>>>>> On 2023/12/11 23:31, Roger Pau Monné wrote:
>>>>>>> On Mon, Dec 11, 2023 at 12:40:08AM +0800, Jiqian Chen wrote:
>>>>>>>> --- a/xen/arch/x86/hvm/hypercall.c
>>>>>>>> +++ b/xen/arch/x86/hvm/hypercall.c
>>>>>>>> @@ -72,8 +72,11 @@ long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>>>>>>>>  
>>>>>>>>      switch ( cmd )
>>>>>>>>      {
>>>>>>>> +    case PHYSDEVOP_setup_gsi:
>>>>>>>
>>>>>>> I think given the new approach in the Linux-side patches, where
>>>>>>> pciback will configure the interrupt, there's no need to expose
>>>>>>> setup_gsi anymore?
>>>>>> The latest patch (the second patch of v3 on the kernel side) does
>>>>>> setup_gsi and map_pirq for the passthrough device in pciback, so we
>>>>>> need this case and the ones below.
>>>>>>
>>>>>>>
>>>>>>>>      case PHYSDEVOP_map_pirq:
>>>>>>>>      case PHYSDEVOP_unmap_pirq:
>>>>>>>> +        if ( is_hardware_domain(currd) )
>>>>>>>> +            break;
>>>>>>>
>>>>>>> Also, Jan already pointed this out in v2: this hypercall needs to be
>>>>>>> limited so a PVH dom0 cannot execute it against itself.  IOW: refuse
>>>>>>> the hypercall if the passed domid is DOMID_SELF or matches the
>>>>>>> current domain's domid.
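
For illustration, a minimal sketch of the guard being asked for, assuming it
sits in the PHYSDEVOP_map_pirq handling of hvm_physdev_op(); the error code
and exact placement are assumptions, while struct physdev_map_pirq,
copy_from_guest(), and currd are the interfaces already visible in the patch
context:

    case PHYSDEVOP_map_pirq:
    {
        struct physdev_map_pirq map;

        if ( copy_from_guest(&map, arg, 1) )
            return -EFAULT;

        /* Refuse an HVM/PVH caller mapping a pirq onto itself. */
        if ( map.domid == DOMID_SELF || map.domid == currd->domain_id )
            return -EOPNOTSUPP;

        break; /* other domids proceed to do_physdev_op() */
    }

PHYSDEVOP_unmap_pirq would need the analogous check on struct
physdev_unmap_pirq.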
>>>>>> Yes, I remember Jan's suggestion, but since the latest patch (the
>>>>>> second patch of v3 on the kernel side) has changed the implementation,
>>>>>> it does setup_gsi and map_pirq for dom0 itself, so I didn't add the
>>>>>> DOMID_SELF check.
>>>>>
>>>>> And why exactly would it do specifically the map_pirq? (Even the setup_gsi
>>>>> looks questionable to me, but there might be reasons there.)
>>>> Map_pirq is to solve the permission check failure (pci_add_dm_done ->
>>>> xc_domain_irq_permission -> XEN_DOMCTL_irq_permission ->
>>>> pirq_access_permitted -> domain_pirq_to_irq returns an irq of 0).
>>>> Setup_gsi is needed because the gsi is never unmasked, and so the gsi
>>>> is never registered (vioapic_hwdom_map_gsi -> mp_register_gsi is never
>>>> called).
>>>
>>> And it was previously made pretty clear by Roger, I think, that doing a
>>> "map" just for the purpose of granting permission is, well, at best a
>>> temporary workaround in the early development phase. If there's presently
>>> no hypercall to _only_ grant permission to an IRQ, we need to add one.
>> Could you please describe it in detail? Do you mean to add a new hypercall
>> to grant irq access for dom0 or for a domU?
>> It seems XEN_DOMCTL_irq_permission is the hypercall to grant irq access
>> from dom0 to a domU (see XEN_DOMCTL_irq_permission -> irq_permit_access),
>> so there is no need to add a hypercall to grant irq access.
>> The failure here (XEN_DOMCTL_irq_permission -> pirq_access_permitted ->
>> domain_pirq_to_irq returns an irq of 0) occurs because a PVH dom0 doesn't
>> use pirqs, so we can't get an irq from a pirq when "current" is a PVH
>> dom0.
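
To make the failure mode concrete, the hypervisor side being described looks
roughly like this; a simplified paraphrase of the XEN_DOMCTL_irq_permission
handling as outlined in the call chain above, not the literal code:

    /* The pirq is resolved through the *caller's* pirq-to-irq table. */
    irq = pirq_access_permitted(current->domain, pirq);
              /* -> domain_pirq_to_irq(current->domain, pirq) */
    if ( !irq )   /* a PVH dom0 never populates that table, so irq == 0 */
        return -EPERM;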
> 
> One way to bodge this would be to detect whether the caller of
> XEN_DOMCTL_irq_permission is a PV or an HVM domain, and in case of HVM
> assume the pirq field is a GSI.  I'm unsure, however, how that will
> work with non-x86 architectures.
> 
> It would be better to introduce a new XEN_DOMCTL_gsi_permission, or
> maybe a XEN_DOMCTL_intr_permission that takes a struct we can use to
> accommodate GSIs and other arch-specific interrupt identifiers.
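
A rough sketch of what such an interface might look like; the name, fields,
and layout here are purely hypothetical, not an existing Xen interface:

    struct xen_domctl_intr_permission {
        uint32_t intr;         /* x86: GSI; other arches would carry their
                                  own interrupt identifier here */
        uint8_t  allow_access; /* non-zero: grant access, zero: revoke */
    };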

How would you see MSI being handled then?

Jan
