Re: [Xen-devel] [PATCH 3/3] xen/x86: Rename and simplify async_event_* infrastructure
On 18/02/2020 16:31, Jan Beulich wrote:
> On 17.02.2020 12:17, Andrew Cooper wrote:
>> --- a/xen/arch/x86/pv/iret.c
>> +++ b/xen/arch/x86/pv/iret.c
>> @@ -27,15 +27,15 @@ static void async_exception_cleanup(struct vcpu *curr)
>>  {
>>      unsigned int trap;
>>
>> -    if ( !curr->arch.async_exception_mask )
>> +    if ( !curr->arch.async_event_mask )
>>          return;
>>
>> -    if ( !(curr->arch.async_exception_mask &
>> -           (curr->arch.async_exception_mask - 1)) )
>> -        trap = __scanbit(curr->arch.async_exception_mask, VCPU_TRAP_NONE);
>> +    if ( !(curr->arch.async_event_mask & (curr->arch.async_event_mask - 1)) )
>> +        trap = __scanbit(curr->arch.async_event_mask, 0);
> The transformation just by itself is clearly not "no functional
> change"; it is, together with the prior if(), but it took me a little
> while to convince myself.
Well... It is no functional change, even if the fact isn't terribly obvious.
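
To spell the reasoning out: __scanbit(x, fallback)'s second argument is
only ever returned for a zero input (BSF leaves its destination undefined
on zero, so the x86 implementation follows it with a CMOVZ), and the
earlier if() guarantees the mask is non-zero by the time we get here.  A
standalone model of that claim (scanbit_model() is a stand-in for
__scanbit(), not Xen code; builds with GCC/Clang):

    #include <assert.h>

    static unsigned int scanbit_model(unsigned long x, unsigned int fallback)
    {
        /* Index of the lowest set bit, or 'fallback' when x is 0. */
        return x ? (unsigned int)__builtin_ctzl(x) : fallback;
    }

    int main(void)
    {
        unsigned long mask = 1ul << 1;   /* non-zero, per the earlier if() */

        /* !(mask & (mask - 1)) is true iff at most one bit is set, so
         * together with the zero check, exactly one bit is set here. */
        assert( !(mask & (mask - 1)) );

        /* With a non-zero input the fallback is dead code, so passing
         * VCPU_TRAP_NONE (i.e. 0) or a literal 0 is interchangeable. */
        assert( scanbit_model(mask, 0) == scanbit_model(mask, 123) );

        return 0;
    }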
> I don't recall why VCPU_TRAP_NONE was
> used here originally (possibly just because of it being zero),
> but I think the latest now it would be better to use
> VCPU_TRAP_LAST + 1 instead, as 0 now has an actual meaning.
Half poking out of context is:
    if ( unlikely(trap > VCPU_TRAP_LAST) )
    {
        ASSERT_UNREACHABLE();
        return;
    }
which would trigger on such an error path. That said, the following:
    /* Restore previous asynchronous exception mask. */
    curr->arch.async_event_mask = curr->arch.async_event[trap].old_mask;
will just pick the NMI old_mask in the case of something going wrong.
I deliberately made no adjustment here. I intend to replace it with
something which is correct, rather than spending time trying to figure
out how some clearly broken logic was intended to "work".
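
For illustration only, a hypothetical standalone model of that failure
mode (the earlier zero check makes it unreachable in the real code): if
the __scanbit() fallback were ever taken, trap would be 0, which now
reads as VCPU_TRAP_NMI, so the restore would silently use the NMI slot
rather than trip the assertion:

    #include <assert.h>
    #include <stdint.h>

    #define VCPU_TRAP_NMI  0
    #define VCPU_TRAP_MCE  1
    #define VCPU_TRAP_LAST VCPU_TRAP_MCE

    static struct { uint8_t old_mask; } async_event[VCPU_TRAP_LAST + 1] = {
        { 0xaa },   /* NMI slot */
        { 0xbb },   /* MCE slot */
    };

    int main(void)
    {
        unsigned int trap = 0;      /* the fallback value, == VCPU_TRAP_NMI */

        assert(!(trap > VCPU_TRAP_LAST));           /* sanity check passes... */
        assert(async_event[trap].old_mask == 0xaa); /* ...and NMI's mask wins */
        return 0;
    }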
>
>> @@ -557,12 +546,22 @@ struct arch_vcpu
>>
>>      struct vpmu_struct vpmu;
>>
>> -    struct {
>> -        bool pending;
>> -        uint8_t old_mask;
>> -    } async_exception_state[VCPU_TRAP_LAST];
>> -#define async_exception_state(t) async_exception_state[(t)-1]
>> -    uint8_t async_exception_mask;
>> +    union {
>> +#define VCPU_TRAP_NMI    0
>> +#define VCPU_TRAP_MCE    1
>> +#define VCPU_TRAP_LAST   VCPU_TRAP_MCE
>> +        struct {
>> +            bool pending;
>> +            uint8_t old_mask;
>> +        } async_event[VCPU_TRAP_LAST + 1];
>> +        struct {
>> +            bool nmi_pending;
>> +            uint8_t nmi_old_mask;
>> +            bool mce_pending;
>> +            uint8_t mce_old_mask;
>> +        };
>> +    };
> How about
>
>     union {
> #define VCPU_TRAP_NMI    0
> #define VCPU_TRAP_MCE    1
> #define VCPU_TRAP_LAST   VCPU_TRAP_MCE
>         struct async_event_state {
>             bool pending;
>             uint8_t old_mask;
>         } async_event[VCPU_TRAP_LAST + 1];
>         struct {
>             struct async_event_state nmi;
>             struct async_event_state mce;
>         };
>     };
>
> (structure tag subject to improvement)?
Can do. I don't expect any of this to survive, but I also don't yet
have a clear idea what form the eventual solution is going to take.
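
For what it's worth, either spelling of the union only works because
sizeof(bool) == 1 and the members pack without padding on the x86 ABI, so
the named view overlays the array exactly. A standalone sketch of the
suggested layout with that assumption checked at build time (any C11
compiler):

    #include <assert.h>
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define VCPU_TRAP_NMI  0
    #define VCPU_TRAP_MCE  1
    #define VCPU_TRAP_LAST VCPU_TRAP_MCE

    union async_events {
        struct async_event_state {
            bool pending;
            uint8_t old_mask;
        } async_event[VCPU_TRAP_LAST + 1];
        struct {
            struct async_event_state nmi;
            struct async_event_state mce;
        };
    };

    /* Each named view must overlay its array slot exactly. */
    static_assert(offsetof(union async_events, nmi) ==
                  offsetof(union async_events, async_event[VCPU_TRAP_NMI]),
                  "nmi must alias async_event[VCPU_TRAP_NMI]");
    static_assert(offsetof(union async_events, mce) ==
                  offsetof(union async_events, async_event[VCPU_TRAP_MCE]),
                  "mce must alias async_event[VCPU_TRAP_MCE]");

    int main(void) { return 0; }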
~Andrew
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel