
Re: [Xen-devel] [PATCH v6 12/29] xen/x86: allow disabling the emulated local apic



On 16/09/15 at 12:05, Jan Beulich wrote:
>>>> On 04.09.15 at 14:08, <roger.pau@xxxxxxxxxx> wrote:
>> --- a/xen/arch/x86/hvm/svm/svm.c
>> +++ b/xen/arch/x86/hvm/svm/svm.c
>> @@ -1035,6 +1035,7 @@ static void noreturn svm_do_resume(struct vcpu *v)
>>      struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
>>      bool_t debug_state = v->domain->debugger_attached;
>>      bool_t vcpu_guestmode = 0;
>> +    struct vlapic *vlapic = vcpu_vlapic(v);
>>  
>>      if ( nestedhvm_enabled(v->domain) && nestedhvm_vcpu_in_guestmode(v) )
>>          vcpu_guestmode = 1;
>> @@ -1058,14 +1059,14 @@ static void noreturn svm_do_resume(struct vcpu *v)
>>          hvm_asid_flush_vcpu(v);
>>      }
>>  
>> -    if ( !vcpu_guestmode )
>> +    if ( !vcpu_guestmode && !vlapic_hw_disabled(vlapic) )
>>      {
>>          vintr_t intr;
>>  
>>          /* Reflect the vlapic's TPR in the hardware vtpr */
>>          intr = vmcb_get_vintr(vmcb);
>>          intr.fields.tpr =
>> -            (vlapic_get_reg(vcpu_vlapic(v), APIC_TASKPRI) & 0xFF) >> 4;
>> +            (vlapic_get_reg(vlapic, APIC_TASKPRI) & 0xFF) >> 4;
>>          vmcb_set_vintr(vmcb, intr);
>>      }
>>  
>> @@ -2294,6 +2295,7 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
>>      int inst_len, rc;
>>      vintr_t intr;
>>      bool_t vcpu_guestmode = 0;
>> +    struct vlapic *vlapic = vcpu_vlapic(v);
>>  
>>      hvm_invalidate_regs_fields(regs);
>>  
>> @@ -2311,11 +2313,11 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
>>       * NB. We need to preserve the low bits of the TPR to make checked builds
>>       * of Windows work, even though they don't actually do anything.
>>       */
>> -    if ( !vcpu_guestmode ) {
>> +    if ( !vcpu_guestmode && !vlapic_hw_disabled(vlapic) ) {
>>          intr = vmcb_get_vintr(vmcb);
>> -        vlapic_set_reg(vcpu_vlapic(v), APIC_TASKPRI,
>> +        vlapic_set_reg(vlapic, APIC_TASKPRI,
>>                     ((intr.fields.tpr & 0x0F) << 4) |
>> -                   (vlapic_get_reg(vcpu_vlapic(v), APIC_TASKPRI) & 0x0F));
>> +                   (vlapic_get_reg(vlapic, APIC_TASKPRI) & 0x0F));
>>      }
>>  
>>      exit_reason = vmcb->exitcode;
>> @@ -2697,14 +2699,14 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
>>      }
>>  
>>    out:
>> -    if ( vcpu_guestmode )
>> +    if ( vcpu_guestmode || vlapic_hw_disabled(vlapic) )
>>          /* Don't clobber TPR of the nested guest. */
> 
> The comment is now stale.

Thanks, I've removed it.
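
FWIW, with the comment gone that path is now just (sketch of my working
copy, may still change for v7):

  out:
    if ( vcpu_guestmode || vlapic_hw_disabled(vlapic) )
        return;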

> Also - aren't all the changes to this file (and perhaps others further
> down) bug fixes in their own right?

Whether they should be considered bugs or not is hard to tell. There
was no code that executed these paths before with this configuration,
and probably nobody considered running HVM guests without an emulated
lapic a possibility, so I would argue that they are merely omissions.

>> @@ -1042,8 +1045,7 @@ void vlapic_tdt_msr_set(struct vlapic *vlapic, uint64_t value)
>>      uint64_t guest_tsc;
>>      struct vcpu *v = vlapic_vcpu(vlapic);
>>  
>> -    /* may need to exclude some other conditions like vlapic->hw.disabled */
>> -    if ( !vlapic_lvtt_tdt(vlapic) )
>> +    if ( !vlapic_lvtt_tdt(vlapic) || vlapic_hw_disabled(vlapic) )
>>      {
>>          HVM_DBG_LOG(DBG_LEVEL_VLAPIC_TIMER, "ignore tsc deadline msr write");
> 
> The newly added condition needlessly triggers the HVM_DBG_LOG().
> Please separate it.

Done.
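
FWIW, I've split it along these lines (minimal sketch, exact placement
might still differ in v7):

    if ( vlapic_hw_disabled(vlapic) )
        return;

    if ( !vlapic_lvtt_tdt(vlapic) )
    {
        HVM_DBG_LOG(DBG_LEVEL_VLAPIC_TIMER, "ignore tsc deadline msr write");
        return;
    }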

>> @@ -1328,7 +1339,10 @@ static int lapic_load_hidden(struct domain *d, hvm_domain_context_t *h)
>>      uint16_t vcpuid;
>>      struct vcpu *v;
>>      struct vlapic *s;
>> -    
>> +
>> +    if ( !has_vlapic(d) )
>> +        return 0;
>> +
>>      /* Which vlapic to load? */
>>      vcpuid = hvm_load_instance(h); 
>>      if ( vcpuid >= d->max_vcpus || (v = d->vcpu[vcpuid]) == NULL )
>> @@ -1360,7 +1374,10 @@ static int lapic_load_regs(struct domain *d, hvm_domain_context_t *h)
>>      uint16_t vcpuid;
>>      struct vcpu *v;
>>      struct vlapic *s;
>> -    
>> +
>> +    if ( !has_vlapic(d) )
>> +        return 0;
> 
> I agree that the save side should return zero in that case, but aren't
> load attempts invalid (and hence warrant an error return)?

Good point; ENODEV seems like the most appropriate error code here, IMHO.
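
So both lapic_load_hidden() and lapic_load_regs() will get something
like:

    if ( !has_vlapic(d) )
        return -ENODEV;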

>> @@ -1399,7 +1416,7 @@ int vlapic_init(struct vcpu *v)
>>  
>>      HVM_DBG_LOG(DBG_LEVEL_VLAPIC, "%d", v->vcpu_id);
>>  
>> -    if ( is_pvh_vcpu(v) )
>> +    if ( is_pvh_vcpu(v) || !has_vlapic(v->domain) )
> 
> Isn't the latter alone sufficient?

Yes.
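
Will simplify it to:

-    if ( is_pvh_vcpu(v) || !has_vlapic(v->domain) )
+    if ( !has_vlapic(v->domain) )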

>> @@ -1452,6 +1469,9 @@ void vlapic_destroy(struct vcpu *v)
>>  {
>>      struct vlapic *vlapic = vcpu_vlapic(v);
>>  
>> +    if ( !has_vlapic(vlapic_domain(vlapic)) )
> 
> I think v->domain would be better here.

TBH, I don't have a strong opinion; if you think v->domain is clearer
I'm fine with that.
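
I.e.:

-    if ( !has_vlapic(vlapic_domain(vlapic)) )
+    if ( !has_vlapic(v->domain) )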

>> --- a/xen/arch/x86/hvm/vmsi.c
>> +++ b/xen/arch/x86/hvm/vmsi.c
>> @@ -482,6 +482,9 @@ found:
>>  
>>  void msixtbl_init(struct domain *d)
>>  {
>> +    if ( !has_vlapic(d) )
>> +        return;
>> +
>>      INIT_LIST_HEAD(&d->arch.hvm_domain.msixtbl_list);
>>      spin_lock_init(&d->arch.hvm_domain.msixtbl_list_lock);
> 
> Don't you also need to add guards to msixtbl_pt_{,un}register()?

Yes, that's right. XEN_DOMCTL_bind_pt_irq (which calls
msixtbl_pt_register) can be executed against HVM guests, and by
extension against HVMlite guests.
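
I plan to add an early return to both, something like (assuming they
keep their current struct domain parameter):

    if ( !has_vlapic(d) )
        return -ENODEV;

at the top of msixtbl_pt_register(), and a plain "return;" in
msixtbl_pt_unregister().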

>> --- a/xen/arch/x86/hvm/vmx/vmcs.c
>> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
>> @@ -1002,7 +1002,7 @@ static int construct_vmcs(struct vcpu *v)
>>          ~(SECONDARY_EXEC_ENABLE_VM_FUNCTIONS |
>>            SECONDARY_EXEC_ENABLE_VIRT_EXCEPTIONS);
>>  
>> -    if ( is_pvh_domain(d) )
>> +    if ( is_pvh_domain(d) || !has_vlapic(d) )
> 
> See above.

Fixed.
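
The check in construct_vmcs() then becomes:

-    if ( is_pvh_domain(d) || !has_vlapic(d) )
+    if ( !has_vlapic(d) )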

Thanks for the thorough review!

Roger.

