
Re: [Xen-devel] [PATCH 3/3] vVMX: use latched VMCS machine address



> Thanks for getting back on this.
> 
> >> --- a/xen/arch/x86/hvm/vmx/vmcs.c
> >> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
> >> @@ -932,37 +932,36 @@ void vmx_vmcs_switch(paddr_t from, paddr
> >>      spin_unlock(&vmx->vmcs_lock);
> >>  }
> >>
> >> -void virtual_vmcs_enter(void *vvmcs)
> >> +void virtual_vmcs_enter(const struct vcpu *v)
> >>  {
> >> -    __vmptrld(pfn_to_paddr(domain_page_map_to_mfn(vvmcs)));
> >> +    __vmptrld(v->arch.hvm_vmx.vmcs_shadow_maddr);
> >
> > Debug shows v->arch.hvm_vmx.vmcs_shadow_maddr will be 0 at this
> > point, which will crash the system.
> >
> >>  }
> >>
> >> -void virtual_vmcs_exit(void *vvmcs)
> >> +void virtual_vmcs_exit(const struct vcpu *v)
> >>  {
> >>      paddr_t cur = this_cpu(current_vmcs);
> >>
> >> -    __vmpclear(pfn_to_paddr(domain_page_map_to_mfn(vvmcs)));
> >> +    __vmpclear(v->arch.hvm_vmx.vmcs_shadow_maddr);
> >
> > Debug shows v->arch.hvm_vmx.vmcs_shadow_maddr will be 0 at this
> > point, which will crash the system.
> 
> For both of these you provide too little context. In particular ...
> 
> > Maybe we should use pfn_to_paddr(domain_page_map_to_mfn(vvmcs))
> > here.
> 
> ... this shouldn't be necessary, since the whole purpose of the patch
> is to avoid this, making sure v->arch.hvm_vmx.vmcs_shadow_maddr always
> matches pfn_to_paddr(domain_page_map_to_mfn(vvmcs)). Hence if you find
> the latched field to be zero, we'll need to understand _why_ this is
> so, i.e. what code path cleared the field (perhaps prematurely).

Yes, it's better to find out the reason for this.
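
As a first step, an assertion in the two helpers should pinpoint
whichever code path reaches them after the field has been cleared.
Just a debugging sketch on top of your patch (the ASSERT placement is
my suggestion only, not part of the patch):

    void virtual_vmcs_enter(const struct vcpu *v)
    {
        /* Catch a zero latched address at the point of use, so the
         * offending caller shows up directly in the stack trace. */
        ASSERT(v->arch.hvm_vmx.vmcs_shadow_maddr);
        __vmptrld(v->arch.hvm_vmx.vmcs_shadow_maddr);
    }

with the same ASSERT ahead of the __vmpclear() in virtual_vmcs_exit().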

> >> @@ -1694,10 +1657,10 @@ int nvmx_handle_vmclear(struct cpu_user_
> >>          rc = VMFAIL_INVALID;
> >>      else if ( gpa == nvcpu->nv_vvmcxaddr )
> >>      {
> >> -        if ( cpu_has_vmx_vmcs_shadowing )
> >> -            nvmx_clear_vmcs_pointer(v, nvcpu->nv_vvmcx);
> >> -        clear_vvmcs_launched(&nvmx->launched_list,
> >> -            domain_page_map_to_mfn(nvcpu->nv_vvmcx));
> >> +        unsigned long mfn = PFN_DOWN(v->arch.hvm_vmx.vmcs_shadow_maddr);
> >> +
> >> +        nvmx_clear_vmcs_pointer(v, nvcpu->nv_vvmcx);
> >> +        clear_vvmcs_launched(&nvmx->launched_list, mfn);
> >
> > v->arch.hvm_vmx.vmcs_shadow_maddr will be set to 0 in
> > nvmx_clear_vmcs_pointer(), so mfn will be 0 at this point, which is
> > incorrect.
> 
> How so? mfn gets latched before calling nvmx_clear_vmcs_pointer(),
> precisely because that function would clear
> v->arch.hvm_vmx.vmcs_shadow_maddr. If mfn were zero here,
> v->arch.hvm_vmx.vmcs_shadow_maddr would have to have been zero
> already before the call.
> 
> Jan

You are right, I misread the code: mfn is latched before
nvmx_clear_vmcs_pointer(). Indeed, v->arch.hvm_vmx.vmcs_shadow_maddr
may be 0 at this point, which will cause clear_vvmcs_launched() to fail
to remove the vvmcs from the list.
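
Until the clearing path is identified, a defensive variant of the
vmclear hunk (again just a sketch for debugging, not a proposed fix)
could warn and fall back to the mapping-derived MFN, so the vvmcs
still gets removed from the list:

    unsigned long mfn = PFN_DOWN(v->arch.hvm_vmx.vmcs_shadow_maddr);

    if ( unlikely(!mfn) )
    {
        /* Should not happen; warn so the clearing path can be found. */
        gdprintk(XENLOG_WARNING, "zero vmcs_shadow_maddr on vmclear\n");
        mfn = domain_page_map_to_mfn(nvcpu->nv_vvmcx);
    }

    nvmx_clear_vmcs_pointer(v, nvcpu->nv_vvmcx);
    clear_vvmcs_launched(&nvmx->launched_list, mfn);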

Liang

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
