Re: [PATCH v2 8/8] vpci/msix: Add function to clean MSIX resources
On 2025/4/17 15:34, Jan Beulich wrote:
> On 09.04.2025 08:45, Jiqian Chen wrote:
>> --- a/xen/arch/x86/hvm/intercept.c
>> +++ b/xen/arch/x86/hvm/intercept.c
>> @@ -276,6 +276,50 @@ void register_mmio_handler(struct domain *d,
>>      handler->mmio.ops = ops;
>>  }
>>
>> +void unregister_mmio_handler(struct domain *d,
>> +                             const struct hvm_mmio_ops *ops)
>> +{
>> +    unsigned int i, count = d->arch.hvm.io_handler_count;
>> +
>> +    ASSERT(d->arch.hvm.io_handler);
>> +
>> +    if ( !count )
>> +        return;
>> +
>> +    for ( i = 0; i < count; i++ )
>> +        if ( d->arch.hvm.io_handler[i].type == IOREQ_TYPE_COPY &&
>> +             d->arch.hvm.io_handler[i].mmio.ops == ops )
>> +            break;
>> +
>> +    if ( i == count )
>> +        return;
>> +
>> +    for ( ; i < count - 1; i++ )
>> +    {
>> +        struct hvm_io_handler *curr = &d->arch.hvm.io_handler[i];
>> +        struct hvm_io_handler *next = &d->arch.hvm.io_handler[i + 1];
>> +
>> +        curr->type = next->type;
>> +        curr->ops = next->ops;
>> +        if ( next->type == IOREQ_TYPE_COPY )
>> +        {
>> +            curr->portio.port = 0;
>> +            curr->portio.size = 0;
>> +            curr->portio.action = 0;
>> +            curr->mmio.ops = next->mmio.ops;
>> +        }
>> +        else
>> +        {
>> +            curr->mmio.ops = 0;
>> +            curr->portio.port = next->portio.port;
>> +            curr->portio.size = next->portio.size;
>> +            curr->portio.action = next->portio.action;
>> +        }
>> +    }
>> +
>> +    d->arch.hvm.io_handler_count--;
>> +}
>
> To add on what Roger said: This is inherently non-atomic, so the domain
> would need pausing to do such removal in a race-free way. Hence why we
> deliberately didn't have such a function so far, aiui. (The removal may
> be safe in the specific case you use it below, but there's nothing here
> preventing it from being used elsewhere, without paying attention to
> the raciness.)
Makes sense, thanks for your detailed input.
I will delete this part in the next version.
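(For the record, my understanding of the race concern: if such a helper is
ever needed again, the caller would have to pause the domain around the
removal, so that no vCPU can be walking io_handler[] while it is compacted.
A rough, untested sketch of what such a call site might look like, where
cleanup_msix_handler() is only a placeholder name, not an existing function:

    static void cleanup_msix_handler(struct domain *d,
                                     const struct hvm_mmio_ops *ops)
    {
        /* Pause all vCPUs so none can be dispatching through io_handler[]. */
        domain_pause(d);

        /* The array can now be compacted without racing in-flight accesses. */
        unregister_mmio_handler(d, ops);

        domain_unpause(d);
    }
)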
>
> Jan
--
Best regards,
Jiqian Chen.