
Re: [Xen-devel] Single step in HVM domU on Intel machine may see wrong DB6



Jan Beulich wrote on 2014-03-05:
>>>> On 05.03.14 at 03:22, "Zhang, Yang Z" <yang.z.zhang@xxxxxxxxx> wrote:
>> Jan Beulich wrote on 2014-02-27:
>>>>>> On 27.02.14 at 02:31, "Zhang, Yang Z" <yang.z.zhang@xxxxxxxxx> wrote:
>>>> Jan Beulich wrote on 2014-02-27:
>>>>>>>> On 26.02.14 at 06:15, "Zhang, Yang Z" <yang.z.zhang@xxxxxxxxx> wrote:
>>>>>> @@ -2690,9 +2688,13 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
>>>>>>              __vmread(EXIT_QUALIFICATION, &exit_qualification);
>>>>>>              HVMTRACE_1D(TRAP_DEBUG, exit_qualification);
>>>>>>              write_debugreg(6, exit_qualification | 0xffff0ff0);
>>>>>> -            if ( !v->domain->debugger_attached || cpu_has_monitor_trap_flag )
>>>>>> -                goto exit_and_crash;
>>>>>> -            domain_pause_for_debugger();
>>>>>> +            if ( v->domain->debugger_attached )
>>>>>> +                domain_pause_for_debugger();
>>>>>> +            else
>>>>>> +            {
>>>>>> +                __restore_debug_registers(v);
>>>>>> +                hvm_inject_hw_exception(TRAP_debug, HVM_DELIVER_NO_ERROR_CODE);
>>>>>> +            }
>>>>> 
>>>>> I suppose you need to set DR6.BS after restoring the registers?
>>>> 
>>>> Right, but that alone is not enough. If flag_dr_dirty is set, the
>>>> registers are live, so we need to take them from hardware; otherwise
>>>> we restore from the saved debugreg copy and set DR6 to
>>>> exit_qualification.
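
(Just to illustrate the case split meant here, reusing the existing names
that also appear in the patch below; this is a sketch, not a proposed hunk:)

    /*
     * Where the guest's DR6 image lives depends on flag_dr_dirty: if it
     * is set, the guest's debug registers are already loaded in hardware;
     * otherwise the saved copy in v->arch.debugreg[] is the live one.
     */
    if ( v->arch.hvm_vcpu.flag_dr_dirty )
        write_debugreg(6, read_debugreg(6) | exit_qualification | 0xffff0ff0);
    else
        v->arch.debugreg[6] |= exit_qualification | 0xffff0ff0;
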
>>> 
>>> After some more thought, I in fact doubt that restoring the debug
>>> registers is in line with the current model: We should simply set
>>> DR6.BS in the in-memory copy when the debug registers aren't live
>>> yet (and it doesn't hurt to always do that). And since DR6 bits
>>> generally are sticky, I think exit_qualification actually needs to
>>> be or-ed into the in-memory copy.
>> 
>> Will the guest be confused to see DR6.BS always set?
> 
> It certainly shouldn't when single stepping. And as long as the guest
> clears the bit while handling the single step trap, it won't see it set on 
> other kinds of #DB.
> After all, that's how hardware behaves.
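
(For illustration only, that sticky behaviour is what a guest's #DB handler
relies on; the handler name and the open-coded BS bit below are made up for
the example:)

    /* Guest-side sketch: consume and then clear the sticky DR6 bits. */
    void guest_debug_trap(void)
    {
        unsigned long dr6;

        asm volatile ( "mov %%dr6, %0" : "=r" (dr6) );

        if ( dr6 & (1ul << 14) )          /* DR6.BS: single-step trap */
        {
            /* ... act on the single step ... */
        }

        /* Hardware does not clear DR6, so the handler must. */
        asm volatile ( "mov %0, %%dr6" : : "r" (dr6 & ~(1ul << 14)) );
    }
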
> 
>>> And presumably we would be better off if we dropped the
>>> interception of TRAP_debug once we restored the debug registers.
>> 
>> Yes, it's unnecessary to always trap into the hypervisor. Also, if we
>> always set DR6.BS, I guess there is no need to intercept TRAP_debug.
> 
> Oh, perhaps you misunderstood then: I didn't suggest to always set that flag.
> What I suggested is that during the intercepted TRAP_debug we should
> or exit_qualification (which ought to have BS set when single stepping
> with no other breakpoint enabled in DR7) into the in-memory copy of
> DR6. Once the intercept got dropped (as also suggested), hardware
> would again take care of setting DR6 correctly.
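
(If I read that correctly, and assuming the guest's debug registers are not
live at that point, the intercepted path would accumulate rather than
overwrite, roughly like the sketch below; illustration only, not the patch
that follows:)

    /*
     * While #DB is still intercepted: OR the exit qualification into the
     * guest's DR6 image and hand the exception back to the guest.
     */
    __vmread(EXIT_QUALIFICATION, &exit_qualification);
    v->arch.debugreg[6] |= exit_qualification | 0xffff0ff0;
    hvm_inject_hw_exception(TRAP_debug, HVM_DELIVER_NO_ERROR_CODE);

    /*
     * Once the guest's debug registers have been loaded, drop the
     * intercept and let hardware set DR6 itself.
     */
    v->arch.hvm_vmx.exception_bitmap &= ~(1u << TRAP_debug);
    vmx_update_exception_bitmap(v);
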

Oh, I am sorry, I misunderstood you.
How about the following change: intercept TRAP_debug when the vCPU's debug
registers are saved on schedule-out, and drop the intercept after restoring
them into hardware.

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index b128e81..5784dd1 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -372,6 +372,10 @@ static void vmx_save_dr(struct vcpu *v)
     v->arch.hvm_vmx.exec_control |= CPU_BASED_MOV_DR_EXITING;
     vmx_update_cpu_exec_control(v);
 
+    /* Trap debug for single stepping. */
+    v->arch.hvm_vmx.exception_bitmap |= 1 << TRAP_debug;
+    vmx_update_exception_bitmap(v);
+
     v->arch.debugreg[0] = read_debugreg(0);
     v->arch.debugreg[1] = read_debugreg(1);
     v->arch.debugreg[2] = read_debugreg(2);
@@ -388,6 +392,13 @@ static void __restore_debug_registers(struct vcpu *v)
 
     v->arch.hvm_vcpu.flag_dr_dirty = 1;
 
+    /*
+     * After restore, hardware has the right context.
+     * So there is no need to trap debug anymore.
+     */
+    v->arch.hvm_vmx.exception_bitmap &= ~(1 << TRAP_debug);
+    vmx_update_exception_bitmap(v);
+
     write_debugreg(0, v->arch.debugreg[0]);
     write_debugreg(1, v->arch.debugreg[1]);
     write_debugreg(2, v->arch.debugreg[2]);
@@ -1171,8 +1182,6 @@ void vmx_update_debug_state(struct vcpu *v)
     unsigned long mask;
 
     mask = 1u << TRAP_int3;
-    if ( !cpu_has_monitor_trap_flag )
-        mask |= 1u << TRAP_debug;
 
     if ( v->arch.hvm_vcpu.debug_state_latch )
         v->arch.hvm_vmx.exception_bitmap |= mask;
@@ -2689,10 +2698,18 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
              */
             __vmread(EXIT_QUALIFICATION, &exit_qualification);
             HVMTRACE_1D(TRAP_DEBUG, exit_qualification);
-            write_debugreg(6, exit_qualification | 0xffff0ff0);
-            if ( !v->domain->debugger_attached || cpu_has_monitor_trap_flag )
-                goto exit_and_crash;
-            domain_pause_for_debugger();
+            exit_qualification |= 0xffff0ff0;
+            if ( v->domain->debugger_attached )
+            {
+                write_debugreg(6, exit_qualification);
+                domain_pause_for_debugger();
+            }
+            else
+            {
+                __restore_debug_registers(v);
+                write_debugreg(6, exit_qualification | read_debugreg(6));
+                hvm_inject_hw_exception(TRAP_debug, HVM_DELIVER_NO_ERROR_CODE);
+            }
             break;
         case TRAP_int3: 
         {
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index dcc3483..0d76d8c 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -378,7 +378,8 @@ static inline int hvm_event_pending(struct vcpu *v)
         (cpu_has_xsave ? X86_CR4_OSXSAVE : 0))))
 
 /* These exceptions must always be intercepted. */
-#define HVM_TRAP_MASK ((1U << TRAP_machine_check) | (1U << TRAP_invalid_op))
+#define HVM_TRAP_MASK ((1U << TRAP_machine_check) | (1U << TRAP_invalid_op) |\
+                       (1U << TRAP_debug))
 
 /*
  * x86 event types. This enumeration is valid for:



Best regards,
Yang





 

