Re: [Xen-devel] Single step in HVM domU on Intel machine may see wrong DB6
Jan Beulich wrote on 2014-03-07:
>>>> On 07.03.14 at 06:10, "Zhang, Yang Z" <yang.z.zhang@xxxxxxxxx> wrote:
>> How about the following changes: Intercept the TRAP_debug when schedule
>> out and drop it after restoring VCPU's DR register into hardware.
>
> Along those lines at least.
>
>> --- a/xen/arch/x86/hvm/vmx/vmx.c
>> +++ b/xen/arch/x86/hvm/vmx/vmx.c
>> @@ -372,6 +372,10 @@ static void vmx_save_dr(struct vcpu *v)
>> v->arch.hvm_vmx.exec_control |= CPU_BASED_MOV_DR_EXITING;
>> vmx_update_cpu_exec_control(v);
>> + /* Trap debug for single stepping. */
>> + v->arch.hvm_vmx.exception_bitmap |= 1 << TRAP_debug;
>> + vmx_update_exception_bitmap(v);
>> +
>> v->arch.debugreg[0] = read_debugreg(0);
>> v->arch.debugreg[1] = read_debugreg(1);
>> v->arch.debugreg[2] = read_debugreg(2);
>> @@ -388,6 +392,13 @@ static void __restore_debug_registers(struct vcpu *v)
>>
>> v->arch.hvm_vcpu.flag_dr_dirty = 1;
>> + /*
>> + * After restore, hardware has the right context.
>> + * So no need to trap debug anymore.
>> + */
>> + v->arch.hvm_vmx.exception_bitmap |= ~(1 << TRAP_debug);
>
> &=
>
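Right, that should be:

    v->arch.hvm_vmx.exception_bitmap &= ~(1 << TRAP_debug);
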
>> + vmx_update_exception_bitmap(v);
>> +
>> write_debugreg(0, v->arch.debugreg[0]);
>> write_debugreg(1, v->arch.debugreg[1]);
>> write_debugreg(2, v->arch.debugreg[2]);
>> @@ -1171,8 +1182,6 @@ void vmx_update_debug_state(struct vcpu *v)
>> unsigned long mask;
>>
>> mask = 1u << TRAP_int3;
>> - if ( !cpu_has_monitor_trap_flag )
>> - mask |= 1u << TRAP_debug;
>>
>> if ( v->arch.hvm_vcpu.debug_state_latch )
>> v->arch.hvm_vmx.exception_bitmap |= mask;
>
> If the intercept now really becomes independent of MTF
> availability, then I guess this code should be further cleaned up
> ("mask" becoming pointless as a separate variable).
>
Yes, I will clean it up when sending the formal patch.
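Something along these lines, presumably (untested sketch, only showing the
shape of the cleanup, with TRAP_int3 left as the sole bit managed here):

    void vmx_update_debug_state(struct vcpu *v)
    {
        /* int3 interception tracks the debugger latch; TRAP_debug is
         * now handled in vmx_save_dr()/__restore_debug_registers(). */
        if ( v->arch.hvm_vcpu.debug_state_latch )
            v->arch.hvm_vmx.exception_bitmap |= 1u << TRAP_int3;
        else
            v->arch.hvm_vmx.exception_bitmap &= ~(1u << TRAP_int3);
        vmx_update_exception_bitmap(v);
    }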
>> @@ -2689,10 +2698,18 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
>> */
>> __vmread(EXIT_QUALIFICATION, &exit_qualification);
>> HVMTRACE_1D(TRAP_DEBUG, exit_qualification);
>> - write_debugreg(6, exit_qualification | 0xffff0ff0);
>> - if ( !v->domain->debugger_attached || cpu_has_monitor_trap_flag )
>> -     goto exit_and_crash;
>> - domain_pause_for_debugger();
>> + exit_qualification |= 0xffff0ff0;
>
> Is this really needed?
Yes. The reserved bits need to be set to 1 in DB6, but they are cleared in
the exit qualification.
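To illustrate with the DB6 layout from the SDM: bits 0-3 are B0-B3, bit 13
is BD, bit 14 is BS, and bits 4-11 and 16-31 are reserved and always read
as 1, while the exit qualification field only carries the B0-B3/BD/BS bits
(the macro name below is just for illustration):

    /* Fixed-1 bits of DB6 that the exit qualification reports as 0. */
    #define DB6_FIXED_1 0xffff0ff0UL

    exit_qualification |= DB6_FIXED_1;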
>
>> + if ( v->domain->debugger_attached )
>> + {
>> + write_debugreg(6, exit_qualification);
>> + domain_pause_for_debugger();
>> + }
>> + else
>> + {
>> + __restore_debug_registers(v);
>> + write_debugreg(6, exit_qualification | read_debugreg(6));
>
> I still wonder whether it wouldn't be more efficient to simply or
> exit_qualification into v->arch.debugreg[6] before calling
> __restore_debug_registers().
>
__restore_debug_registers() only copies v->arch.debugreg[6] into the hardware
DB6 when flag_dr_dirty is clear. So, as I mentioned before, the hardware DB
registers hold the latest values when flag_dr_dirty is set, and in that case
we should write the hardware DB6 directly.
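For reference, the early-out in question looks like this (DR0-DR3 writes
elided):

    static void __restore_debug_registers(struct vcpu *v)
    {
        /* If the guest's DR state is already live in hardware, leave it. */
        if ( v->arch.hvm_vcpu.flag_dr_dirty )
            return;

        v->arch.hvm_vcpu.flag_dr_dirty = 1;

        /* ... write_debugreg(0) .. write_debugreg(3) ... */
        write_debugreg(6, v->arch.debugreg[6]);
        write_debugreg(7, v->arch.debugreg[7]);
    }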
> Jan
>
>> + hvm_inject_hw_exception(TRAP_debug, HVM_DELIVER_NO_ERROR_CODE);
>> + }
>> break;
>> case TRAP_int3:
>> {
>> diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
>> index dcc3483..0d76d8c 100644
>> --- a/xen/include/asm-x86/hvm/hvm.h
>> +++ b/xen/include/asm-x86/hvm/hvm.h
>> @@ -378,7 +378,8 @@ static inline int hvm_event_pending(struct vcpu *v)
>> (cpu_has_xsave ? X86_CR4_OSXSAVE : 0))))
>>
>> /* These exceptions must always be intercepted. */
>> -#define HVM_TRAP_MASK ((1U << TRAP_machine_check) | (1U << TRAP_invalid_op))
>> +#define HVM_TRAP_MASK ((1U << TRAP_machine_check) | (1U << TRAP_invalid_op) | \
>> +                       (1U << TRAP_debug))
>>
>> /*
>> * x86 event types. This enumeration is valid for:
>>
>>
>> Best regards,
>> Yang
>
>
Best regards,
Yang
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel