Re: [XEN PATCH v4 12/14] x86/ioreq: guard VIO_realmode_completion with CONFIG_VMX
On 10.07.2024 12:10, Sergiy Kibrik wrote:
> 09.07.24 10:28, Jan Beulich:
>> On 09.07.2024 08:09, Sergiy Kibrik wrote:
>>> --- a/xen/arch/x86/include/asm/hvm/ioreq.h
>>> +++ b/xen/arch/x86/include/asm/hvm/ioreq.h
>>> @@ -13,6 +13,11 @@
>>> #define IOREQ_STATUS_UNHANDLED X86EMUL_UNHANDLEABLE
>>> #define IOREQ_STATUS_RETRY X86EMUL_RETRY
>>>
>>> +#ifdef CONFIG_VMX
>>> +bool arch_vcpu_ioreq_completion(enum vio_completion completion);
>>> +#define arch_vcpu_ioreq_completion
>>> +#endif
>>
>> Putting the (or some kind of) #define here is certainly fine, but moving ...
>>
>>> --- a/xen/include/xen/ioreq.h
>>> +++ b/xen/include/xen/ioreq.h
>>> @@ -111,7 +111,6 @@ void ioreq_domain_init(struct domain *d);
>>> int ioreq_server_dm_op(struct xen_dm_op *op, struct domain *d, bool *const_op);
>>>
>>> bool arch_ioreq_complete_mmio(void);
>>> -bool arch_vcpu_ioreq_completion(enum vio_completion completion);
>>> int arch_ioreq_server_map_pages(struct ioreq_server *s);
>>> void arch_ioreq_server_unmap_pages(struct ioreq_server *s);
>>> void arch_ioreq_server_enable(struct ioreq_server *s);
>>
>> ... the declaration from here requires that all architectures wanting to
>> implement the function need to have identical copies. That's an unnecessary
>> risk of going out of sync.
>>
>> As to the #define itself: It expanding to nothing means the call site
>> degenerates to
>>
>> #ifdef arch_vcpu_ioreq_completion
>>         res = (completion);
>> #else
>>
>> which hardly is what is meant (despite compiling fine, and it likely
>> only being Eclair which would then tell us about the issue). Further,
>> there you're also removing a blank line; I don't see why you're doing
>> that.
>>
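
(For context: the problem described above goes away if the marker macro expands to the function's own name rather than to nothing, with the declaration itself staying in the common header. A rough sketch of that arrangement, not taken from the patch and only one of several possible fixes:

    /* xen/include/xen/ioreq.h: the declaration stays in the common header */
    bool arch_vcpu_ioreq_completion(enum vio_completion completion);

    /* xen/arch/x86/include/asm/hvm/ioreq.h: the marker expands to the
     * function's own name instead of to nothing */
    #define arch_vcpu_ioreq_completion arch_vcpu_ioreq_completion

    /* a guarded use in common code then keeps its meaning: */
    #ifdef arch_vcpu_ioreq_completion
        res = arch_vcpu_ioreq_completion(completion);
    #else
        ASSERT_UNREACHABLE();
    #endif

With that, "#ifdef arch_vcpu_ioreq_completion" still tells common code whether an arch override exists, while the guarded call no longer collapses into "res = (completion);".)
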
>
> looking through these changes once again I wonder why we can't just move the
> stub to the header like this:
>
> in xen/include/xen/ioreq.h:
>
> #ifdef arch_vcpu_ioreq_completion
>
> #ifdef CONFIG_VMX
> bool arch_vcpu_ioreq_completion(enum vio_completion completion);
> #else
> static inline bool arch_vcpu_ioreq_completion(enum vio_completion completion)
> {
>     ASSERT_UNREACHABLE();
>     return true;
> }
> #endif
>
>
> and avoid additional pre-processor variables & conditionals, because it
> looks like we do need some kind of stub that does ASSERT_UNREACHABLE()
> anyway.
That's possible to do, yes, but not as long as you key it off of CONFIG_VMX.
This arch-specific setting would better not be used in a common code header.
You could introduce a helper CONFIG_* which VMX selects, at which point
doing what you suggest is an option.
However, in what you have above I can't figure out why "#ifdef
arch_vcpu_ioreq_completion" is still there.
Jan
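
(For reference, the helper-CONFIG_* arrangement suggested above could be sketched as follows. The option name ARCH_VCPU_IOREQ_COMPLETION is purely illustrative and not something this series defines; the idea is a hidden Kconfig option that VMX selects, so the common header no longer mentions CONFIG_VMX:

    /* xen/include/xen/ioreq.h, assuming a hypothetical Kconfig option
     * ARCH_VCPU_IOREQ_COMPLETION that CONFIG_VMX selects */
    #ifdef CONFIG_ARCH_VCPU_IOREQ_COMPLETION
    bool arch_vcpu_ioreq_completion(enum vio_completion completion);
    #else
    static inline bool arch_vcpu_ioreq_completion(enum vio_completion completion)
    {
        ASSERT_UNREACHABLE();
        return true;
    }
    #endif

Common code can then call arch_vcpu_ioreq_completion() unconditionally, and only configurations selecting the helper option need to provide a real implementation.)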