
Re: [Xen-devel] [PATCH] nested vmx: Fix the booting of L2 PAE guest



>>> On 27.06.13 at 03:14, "Xu, Dongxiao" <dongxiao.xu@xxxxxxxxx> wrote:
> Hi stakeholders,
> 
> I see the patch has not been merged yet. Do you have any other comments 
> about it? I think it is a critical fix on the nested virtualization side 
> for the 4.3 release.

Irrespective of Keir's ack, I was hoping for an ack from one of the
VMX maintainers. Even more so since they, just like you, work for
Intel, I think it would be appropriate for you to get in touch with
them so that they can fulfill their maintainer duty here. In fact,
you should have Cc-ed them on your initial patch submission.

Independently of that, you should also have Cc-ed George if you
want this to go in for 4.3. In the absence of that, I had simply put
this on my post-4.3 queue...

Jan

>> -----Original Message-----
>> From: Keir Fraser [mailto:keir.xen@xxxxxxxxx]
>> Sent: Monday, June 24, 2013 2:46 PM
>> To: Xu, Dongxiao; xen-devel@xxxxxxxxxxxxx 
>> Subject: Re: [Xen-devel] [PATCH] nested vmx: Fix the booting of L2 PAE guest
>> 
>> On 24/06/2013 06:55, "Dongxiao Xu" <dongxiao.xu@xxxxxxxxx> wrote:
>> 
>> > When doing virtual VM entry and virtual VM exit, we need to
>> > synchronize the PAE PDPTR-related VMCS registers. With this fix,
>> > we can boot 32-bit PAE L2 guests (Win7 & RHEL6.4) in a "Xen on
>> > Xen" environment.
>> >
>> > Signed-off-by: Dongxiao Xu <dongxiao.xu@xxxxxxxxx>
>> > Tested-by: Yongjie Ren <yongjie.ren@xxxxxxxxx>
>> 
>> Acked-by: Keir Fraser <keir@xxxxxxx>
>> 
>> > ---
>> >  xen/arch/x86/hvm/vmx/vvmx.c | 27 +++++++++++++++------------
>> >  1 file changed, 15 insertions(+), 12 deletions(-)
>> >
>> > diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
>> > index bb7688f..5dfbc54 100644
>> > --- a/xen/arch/x86/hvm/vmx/vvmx.c
>> > +++ b/xen/arch/x86/hvm/vmx/vvmx.c
>> > @@ -864,6 +864,13 @@ static const u16 vmcs_gstate_field[] = {
>> >      GUEST_SYSENTER_EIP,
>> >  };
>> >
>> > +static const u16 gpdptr_fields[] = {
>> > +    GUEST_PDPTR0,
>> > +    GUEST_PDPTR1,
>> > +    GUEST_PDPTR2,
>> > +    GUEST_PDPTR3,
>> > +};
>> > +
>> >  /*
>> >   * Context: shadow -> virtual VMCS
>> >   */
>> > @@ -1053,18 +1060,6 @@ static void load_shadow_guest_state(struct vcpu *v)
>> >                       (__get_vvmcs(vvmcs, CR4_READ_SHADOW) & cr_gh_mask);
>> >      __vmwrite(CR4_READ_SHADOW, cr_read_shadow);
>> >
>> > -    if ( nvmx_ept_enabled(v) && hvm_pae_enabled(v) &&
>> > -         (v->arch.hvm_vcpu.guest_efer & EFER_LMA) )
>> > -    {
>> > -        static const u16 gpdptr_fields[] = {
>> > -            GUEST_PDPTR0,
>> > -            GUEST_PDPTR1,
>> > -            GUEST_PDPTR2,
>> > -            GUEST_PDPTR3,
>> > -        };
>> > -        vvmcs_to_shadow_bulk(v, ARRAY_SIZE(gpdptr_fields), gpdptr_fields);
>> > -    }
>> > -
>> >      /* TODO: CR3 target control */
>> >  }
>> >
>> > @@ -1159,6 +1154,10 @@ static void virtual_vmentry(struct cpu_user_regs *regs)
>> >      if ( lm_l1 != lm_l2 )
>> >          paging_update_paging_modes(v);
>> >
>> > +    if ( nvmx_ept_enabled(v) && hvm_pae_enabled(v) &&
>> > +         !(v->arch.hvm_vcpu.guest_efer & EFER_LMA) )
>> > +        vvmcs_to_shadow_bulk(v, ARRAY_SIZE(gpdptr_fields), gpdptr_fields);
>> > +
>> >      regs->eip = __get_vvmcs(vvmcs, GUEST_RIP);
>> >      regs->esp = __get_vvmcs(vvmcs, GUEST_RSP);
>> >      regs->eflags = __get_vvmcs(vvmcs, GUEST_RFLAGS);
>> > @@ -1294,6 +1293,10 @@ static void virtual_vmexit(struct cpu_user_regs *regs)
>> >      sync_vvmcs_guest_state(v, regs);
>> >      sync_exception_state(v);
>> >
>> > +    if ( nvmx_ept_enabled(v) && hvm_pae_enabled(v) &&
>> > +         !(v->arch.hvm_vcpu.guest_efer & EFER_LMA) )
>> > +        shadow_to_vvmcs_bulk(v, ARRAY_SIZE(gpdptr_fields), gpdptr_fields);
>> > +
>> >      vmx_vmcs_switch(v->arch.hvm_vmx.vmcs, nvcpu->nv_n1vmcx);
>> >
>> >      nestedhvm_vcpu_exit_guestmode(v);
>> 
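For context on the hunks above: the PDPTRs are only in play when the guest
uses PAE paging outside long mode (CR4.PAE set, EFER.LMA clear), which is why
the relocated synchronization is guarded by !(guest_efer & EFER_LMA); the
removed code tested the non-negated condition and hence never fired for
32-bit PAE L2 guests. The bulk helpers copy a list of fields between the
virtual VMCS kept for L1 and the shadow VMCS used to run L2. Below is a
minimal sketch of what they amount to, assuming the Xen 4.3-era
__get_vvmcs()/__set_vvmcs() accessors and the single-argument __vmread();
the _sketch suffix marks these as illustrative, not the actual vvmx.c code:

    static void vvmcs_to_shadow_bulk_sketch(void *vvmcs, unsigned int n,
                                            const u16 *fields)
    {
        unsigned int i;

        /* Virtual VMCS -> shadow VMCS: make L1's settings take effect for L2. */
        for ( i = 0; i < n; i++ )
            __vmwrite(fields[i], __get_vvmcs(vvmcs, fields[i]));
    }

    static void shadow_to_vvmcs_bulk_sketch(void *vvmcs, unsigned int n,
                                            const u16 *fields)
    {
        unsigned int i;

        /* Shadow VMCS -> virtual VMCS: reflect updated state back to L1. */
        for ( i = 0; i < n; i++ )
            __set_vvmcs(vvmcs, fields[i], __vmread(fields[i]));
    }

The patch applies exactly this pattern to the four GUEST_PDPTRn fields: the
virtual-to-shadow direction on virtual VM entry, and the reverse direction on
virtual VM exit.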



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel