Re: [Xen-devel] [PATCH 3/3] vVMX: use latched VMCS machine address
>>> On 23.02.16 at 09:34, <liang.z.li@xxxxxxxxx> wrote:
> I found some issues in your patch, see the comments below.
Thanks for getting back on this.
>> --- a/xen/arch/x86/hvm/vmx/vmcs.c
>> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
>> @@ -932,37 +932,36 @@ void vmx_vmcs_switch(paddr_t from, paddr
>> spin_unlock(&vmx->vmcs_lock);
>> }
>>
>> -void virtual_vmcs_enter(void *vvmcs)
>> +void virtual_vmcs_enter(const struct vcpu *v)
>> {
>> - __vmptrld(pfn_to_paddr(domain_page_map_to_mfn(vvmcs)));
>> + __vmptrld(v->arch.hvm_vmx.vmcs_shadow_maddr);
>
> Debug shows v->arch.hvm_vmx.vmcs_shadow_maddr will be 0 at this
> point, which will crash the system.
>
>> }
>>
>> -void virtual_vmcs_exit(void *vvmcs)
>> +void virtual_vmcs_exit(const struct vcpu *v)
>> {
>> paddr_t cur = this_cpu(current_vmcs);
>>
>> - __vmpclear(pfn_to_paddr(domain_page_map_to_mfn(vvmcs)));
>> + __vmpclear(v->arch.hvm_vmx.vmcs_shadow_maddr);
>
> Debug shows v->arch.hvm_vmx.vmcs_shadow_maddr will be 0 at this
> point, which will crash the system.
For both of these you provide too little context. In particular ...
> Maybe we should use pfn_to_paddr(domain_page_map_to_mfn(vvmcs)) here.
... this shouldn't be necessary, since the whole purpose of the
patch is to avoid that, by making sure
v->arch.hvm_vmx.vmcs_shadow_maddr always holds
pfn_to_paddr(domain_page_map_to_mfn(vvmcs)). Hence if you find
the latched field to be zero, we'll need to understand _why_ that
is, i.e. which code path cleared the field (perhaps prematurely).
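To help narrow that down, a temporary debugging aid along these lines
(just a sketch, re-using the accessor this patch introduces) may tell
us which caller observes the zero value:

void virtual_vmcs_enter(const struct vcpu *v)
{
    if ( unlikely(!v->arch.hvm_vmx.vmcs_shadow_maddr) )
    {
        /* Log vCPU identity and dump a stack trace to identify the path. */
        gdprintk(XENLOG_ERR, "d%dv%d: zero vmcs_shadow_maddr\n",
                 v->domain->domain_id, v->vcpu_id);
        WARN();
    }

    __vmptrld(v->arch.hvm_vmx.vmcs_shadow_maddr);
}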
>> --- a/xen/arch/x86/hvm/vmx/vvmx.c
>> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
>
>>  static void vvmcs_to_shadow_bulk(struct vcpu *v, unsigned int n,
>> @@ -950,15 +926,15 @@ static void vvmcs_to_shadow_bulk(struct
>>
>> fallback:
>> for ( i = 0; i < n; i++ )
>> - vvmcs_to_shadow(vvmcs, field[i]);
>> + vvmcs_to_shadow(v, field[i]);
>> }
>
> You omitted something in function vvmcs_to_shadow_bulk()
> =====================================
> static void vvmcs_to_shadow_bulk(struct vcpu *v, unsigned int n,
> const u16 *field)
> {
> struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
> void *vvmcs = nvcpu->nv_vvmcx;
> u64 *value = this_cpu(vvmcs_buf);
> unsigned int i;
>
> if ( !cpu_has_vmx_vmcs_shadowing )
> goto fallback;
>
> if ( !value || n > VMCS_BUF_SIZE )
> {
> gdprintk(XENLOG_DEBUG, "vmcs sync fall back to non-bulk mode, \
> buffer: %p, buffer size: %d, fields number: %d.\n",
> value, VMCS_BUF_SIZE, n);
> goto fallback;
> }
>
> virtual_vmcs_enter(vvmcs);
> for ( i = 0; i < n; i++ )
> __vmread(field[i], &value[i]);
> virtual_vmcs_exit(vvmcs);
>
> for ( i = 0; i < n; i++ )
> __vmwrite(field[i], value[i]);
>
> return;
>
> fallback:
> for ( i = 0; i < n; i++ )
> vvmcs_to_shadow(v, field[i]);
> }
> ================================
> You should use virtual_vmcs_enter(v) in this function, not
> virtual_vmcs_enter(vvmcs).
> The same thing happens in shadow_to_vvmcs_bulk().
Oh, indeed. It's not helpful that vvmcs is of type void *.
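For reference, with the new signatures the middle of that function
(and likewise of shadow_to_vvmcs_bulk()) would simply become the
following (sketch only; the local vvmcs variable then goes away
entirely):

    /* Pass the vcpu, not the (now unneeded) vvmcs mapping. */
    virtual_vmcs_enter(v);
    for ( i = 0; i < n; i++ )
        __vmread(field[i], &value[i]);
    virtual_vmcs_exit(v);

    for ( i = 0; i < n; i++ )
        __vmwrite(field[i], value[i]);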
>> @@ -1694,10 +1657,10 @@ int nvmx_handle_vmclear(struct cpu_user_
>> rc = VMFAIL_INVALID;
>> else if ( gpa == nvcpu->nv_vvmcxaddr )
>> {
>> - if ( cpu_has_vmx_vmcs_shadowing )
>> - nvmx_clear_vmcs_pointer(v, nvcpu->nv_vvmcx);
>> - clear_vvmcs_launched(&nvmx->launched_list,
>> - domain_page_map_to_mfn(nvcpu->nv_vvmcx));
>> + unsigned long mfn =
>> + PFN_DOWN(v->arch.hvm_vmx.vmcs_shadow_maddr);
>> +
>> + nvmx_clear_vmcs_pointer(v, nvcpu->nv_vvmcx);
>> + clear_vvmcs_launched(&nvmx->launched_list, mfn);
>
> v->arch.hvm_vmx.vmcs_shadow_maddr will be set to 0 in
> nvmx_clear_vmcs_pointer(), so mfn will be 0 at this point, which is
> incorrect.
How so? mfn gets latched before nvmx_clear_vmcs_pointer() is called,
precisely because that function clears
v->arch.hvm_vmx.vmcs_shadow_maddr. If mfn were zero here,
v->arch.hvm_vmx.vmcs_shadow_maddr would have had to be zero
already before the call.
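To spell out the ordering (the same code as in the hunk above, with
comments added purely for illustration):

    else if ( gpa == nvcpu->nv_vvmcxaddr )
    {
        /* Latch the MFN while vmcs_shadow_maddr is still populated ... */
        unsigned long mfn = PFN_DOWN(v->arch.hvm_vmx.vmcs_shadow_maddr);

        /* ... because this call is what zeroes vmcs_shadow_maddr. */
        nvmx_clear_vmcs_pointer(v, nvcpu->nv_vvmcx);
        clear_vvmcs_launched(&nvmx->launched_list, mfn);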
Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel