
Re: [Xen-devel] [PATCH 3/3] vVMX: use latched VMCS machine address



> >>> On 24.02.16 at 08:04, <liang.z.li@xxxxxxxxx> wrote:
> >  I found the code path when creating the L2 guest:
> 
> Thanks for the analysis!
> 
> > (XEN)nvmx_handle_vmclear
> > (XEN)nvmx_handle_vmptrld
> > (XEN)map_io_bitmap_all
> > (XEN)_map_io_bitmap
> > (XEN)virtual_vmcs_enter
> > (XEN)_map_io_bitmap
> > (XEN)virtual_vmcs_enter
> > (XEN)_map_msr_bitmap
> > (XEN)virtual_vmcs_enter
> > (XEN)nvmx_set_vmcs_pointer
> > (XEN)nvmx_handle_vmwrite
> > ....
> >
> > so virtual_vmcs_enter() is called before nvmx_set_vmcs_pointer(),
> > and at that point 'v->arch.hvm_vmx.vmcs_shadow_maddr' still equals 0.
> 
> So this finally explains the difference in behavior between different
> hardware - without VMCS shadowing we wouldn't reach
> virtual_vmcs_enter() here.
> 
> > Maybe 'v->arch.hvm_vmx.vmcs_shadow_maddr' should be set when setting
> > 'nvcpu->nv_vvmcx' in nvmx_handle_vmptrld().
> 
> Right, this looks to be the only viable option. In particular,
> map_io_bitmap_all() and _map_msr_bitmap() cannot reasonably be deferred
> past nvmx_set_vmcs_pointer(), since failure of either would then
> require further unrolling, which seems undesirable. Plus, doing it this
> way allows undoing some of the changes made earlier.
> 
> Attached is the updated patch - could you please give it another try
> (on top of current staging or master)?
> 
> Jan

No problem, I will let you know the result later.

Liang

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
