
Re: [Xen-devel] [PATCH v3 2/3] x86/PV: support data breakpoint extension registers



>>> On 11.04.14 at 16:58, <George.Dunlap@xxxxxxxxxxxxx> wrote:
> On Mon, Apr 7, 2014 at 10:38 AM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>> @@ -854,7 +854,42 @@ long arch_do_domctl(
>>              evc->vmce.mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2;
>>              evc->vmce.mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2;
>>
>> -            ret = 0;
>> +            i = ret = 0;
>> +            if ( boot_cpu_has(X86_FEATURE_DBEXT) )
>> +            {
>> +                unsigned int j;
>> +
>> +                if ( v->arch.pv_vcpu.dr_mask[0] )
>> +                {
>> +                    if ( i < evc->msr_count && !ret )
>> +                    {
>> +                        msr.index = MSR_AMD64_DR0_ADDRESS_MASK;
>> +                        msr.reserved = 0;
>> +                        msr.value = v->arch.pv_vcpu.dr_mask[0];
>> +                        if ( copy_to_guest_offset(evc->msrs, i, &msr, 1) )
> 
> Sorry if I missed something, but is there any bounds-checking on this
> buffer?  I don't see any specification in the header file about how
> big the buffer needs to be, nor any conceptual limit to the number of
> elements which might be copied into it -- at least in this path.

The conceptual limit is 64k entries (evc->msr_count is a 16-bit
quantity). The limit up to which we may write to the buffer is the
passed-in evc->msr_count, which gets updated at the end to the
number of entries we would have wanted to write.
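To make the contract concrete, here is a minimal standalone sketch of that pattern: write at most the caller-supplied msr_count entries, but keep counting the entries we *wanted* to write and report that back so the caller can retry with a larger buffer. The structure names, the helper, and the MSR index values are illustrative stand-ins, not the actual Xen definitions (the real code copies via copy_to_guest_offset()).

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins for the hypercall structures: msr_count is a
 * 16-bit field, hence the conceptual 64k-entry limit. */
struct msr_entry { uint32_t index, reserved; uint64_t value; };

struct evc {
    uint16_t msr_count;       /* in: buffer size; out: entries wanted */
    struct msr_entry *msrs;   /* guest buffer, mocked as a plain array */
};

/* Copy the nonzero DR address masks out, never writing past the
 * caller's msr_count; return nonzero if the buffer was too small. */
static int copy_masks(struct evc *evc, const uint64_t dr_mask[4])
{
    unsigned int i = 0, j;
    int ret = 0;

    for ( j = 0; j < 4; j++ )
    {
        if ( !dr_mask[j] )
            continue;
        if ( i < evc->msr_count )   /* bound: caller-supplied size */
        {
            struct msr_entry msr = {
                .index = 0x100 + j, /* placeholder, not the real MSR number */
                .reserved = 0,
                .value = dr_mask[j],
            };
            evc->msrs[i] = msr;     /* stands in for copy_to_guest_offset() */
        }
        i++;                        /* count even when out of space */
    }

    if ( i > evc->msr_count )
        ret = -1;                   /* buffer too small */
    evc->msr_count = i;             /* report entries wanted */
    return ret;
}
```

With a two-entry buffer and three nonzero masks, the sketch fails the copy but reports msr_count = 3, so the caller knows how much space to provide on retry.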

>> @@ -3628,7 +3660,27 @@ long do_set_trap_table(XEN_GUEST_HANDLE_
>>      return rc;
>>  }
>>
>> -long set_debugreg(struct vcpu *v, int reg, unsigned long value)
>> +void activate_debugregs(const struct vcpu *curr)
>> +{
>> +    ASSERT(curr == current);
>> +
>> +    write_debugreg(0, curr->arch.debugreg[0]);
>> +    write_debugreg(1, curr->arch.debugreg[1]);
>> +    write_debugreg(2, curr->arch.debugreg[2]);
>> +    write_debugreg(3, curr->arch.debugreg[3]);
>> +    write_debugreg(6, curr->arch.debugreg[6]);
>> +    write_debugreg(7, curr->arch.debugreg[7]);
>> +
>> +    if ( boot_cpu_has(X86_FEATURE_DBEXT) )
>> +    {
>> +        wrmsrl(MSR_AMD64_DR0_ADDRESS_MASK, curr->arch.pv_vcpu.dr_mask[0]);
>> +        wrmsrl(MSR_AMD64_DR1_ADDRESS_MASK, curr->arch.pv_vcpu.dr_mask[1]);
>> +        wrmsrl(MSR_AMD64_DR2_ADDRESS_MASK, curr->arch.pv_vcpu.dr_mask[2]);
>> +        wrmsrl(MSR_AMD64_DR3_ADDRESS_MASK, curr->arch.pv_vcpu.dr_mask[3]);
>> +    }
>> +}
>> +
>> +long set_debugreg(struct vcpu *v, unsigned int reg, unsigned long value)
>>  {
>>      int i;
>>      struct vcpu *curr = current;
>> @@ -3709,11 +3761,8 @@ long set_debugreg(struct vcpu *v, int re
>>              if ( (v == curr) &&
>>                   !(v->arch.debugreg[7] & DR7_ACTIVE_MASK) )
>>              {
>> -                write_debugreg(0, v->arch.debugreg[0]);
>> -                write_debugreg(1, v->arch.debugreg[1]);
>> -                write_debugreg(2, v->arch.debugreg[2]);
>> -                write_debugreg(3, v->arch.debugreg[3]);
>> -                write_debugreg(6, v->arch.debugreg[6]);
>> +                activate_debugregs(curr);
> 
> So in the other place you replaced with activate_debugregs(), it
> writes DR7; but here it doesn't -- and yet the function will write
> DR7.  Is that a problem?

It's not strictly a problem (right above we check that no breakpoints
are active, and right below we write DR7 with the intended value), but
it shouldn't happen anyway - both to be on the safe side and because
the double write is needlessly expensive. I do recall making a mental
note that the DR7 write needs to remain separate, only to then forget
it; I clearly should have written it down. So thanks a lot for
spotting this!

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
