
Re: [Xen-devel] RFC: PVH set vcpu info context in vmcs....



On Fri, 16 Aug 2013 15:28:37 -0700
Mukesh Rathor <mukesh.rathor@xxxxxxxxxx> wrote:

> On Fri, 16 Aug 2013 08:28:12 +0100
> "Jan Beulich" <JBeulich@xxxxxxxx> wrote:
......... 
> Ok, I'll have to change the tools to make sure the fields are zeroed
> in the create path. I can do that. I'll also take that to mean you
> are ok with the above short function and its checks for zero... I'll
> have it that way in the next version. Thank you very much :), I'm
> glad this is finally resolved.

Ok, I've changed the tools to clear the fields, tested it, and everything works:

/*
 * Set VMCS fields during boot of a vcpu. Called from arch_set_info_guest.
 *
 * The boot vcpu call comes from the tools via:
 *     do_domctl -> XEN_DOMCTL_setvcpucontext -> arch_set_info_guest
 *
 * Secondary vcpus are brought up by the guest itself via:
 *     do_vcpu_op -> VCPUOP_initialise -> arch_set_info_guest
 *     (In the case of Linux, the call comes from cpu_initialize_context().)
 *
 * Note: PVH save/restore is expected to happen the HVM way, i.e.,
 *        do_domctl -> XEN_DOMCTL_sethvmcontext -> hvm_load/save
 * and not get here.
 *
 * PVH 32bitfixme: this function needs to be modified for 32-bit guests.
 */
int vmx_pvh_vcpu_boot_set_info(struct vcpu *v, 
                               struct vcpu_guest_context *ctxtp)
{
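    /* Reject any state PVH boot does not consume; the tools now zero these. */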
    if ( ctxtp->ldt_base || ctxtp->ldt_ents ||
         ctxtp->user_regs.cs || ctxtp->user_regs.ss || ctxtp->user_regs.es ||
         ctxtp->user_regs.ds || ctxtp->user_regs.fs || ctxtp->user_regs.gs ||
         ctxtp->gdt.pvh.addr || ctxtp->gdt.pvh.limit ||
         ctxtp->fs_base || ctxtp->gs_base_user )
        return -EINVAL;

    vmx_vmcs_enter(v);
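    /* The only piece of context consumed here: the kernel GS base. */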
    __vmwrite(GUEST_GS_BASE, ctxtp->gs_base_kernel);
    vmx_vmcs_exit(v);

    return 0;
}
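
For reference, a rough sketch of what the toolstack side has to do now
(illustrative only, not the actual libxc patch; set_pvh_boot_context and
the entry/start_info arguments are made-up names for the example):

#include <stdint.h>
#include <string.h>
#include <xenctrl.h>

/* Zero the whole context first so the hypervisor's zero-checks above
 * pass, then fill in only what boot actually consumes. */
static int set_pvh_boot_context(xc_interface *xch, uint32_t domid,
                                uint64_t entry, uint64_t start_info)
{
    vcpu_guest_context_any_t ctxt;

    memset(&ctxt, 0, sizeof(ctxt));      /* all unused fields stay zero */

    ctxt.x64.user_regs.rip = entry;      /* guest entry point */
    ctxt.x64.user_regs.rsi = start_info; /* PV convention: start_info in rsi */
    ctxt.x64.flags = VGCF_in_kernel;

    /* do_domctl -> XEN_DOMCTL_setvcpucontext -> arch_set_info_guest */
    return xc_vcpu_setcontext(xch, domid, 0 /* boot vcpu */, &ctxt);
}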

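And the guest side for secondary vcpus, loosely following Linux's
cpu_initialize_context() (again just a sketch, assuming the 64-bit
cpu_user_regs layout; bring_up_vcpu and the entry/stack arguments are
illustrative):

#include <linux/errno.h>
#include <linux/slab.h>
#include <xen/interface/vcpu.h>
#include <asm/xen/hypercall.h>

static int bring_up_vcpu(int cpu, unsigned long entry, unsigned long stack)
{
    struct vcpu_guest_context *ctxt;
    int rc;

    /* kzalloc gives a zeroed context, so the hypervisor's checks pass. */
    ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL);
    if (ctxt == NULL)
        return -ENOMEM;

    ctxt->user_regs.rip = entry;  /* where the secondary vcpu starts */
    ctxt->user_regs.rsp = stack;  /* its boot stack */

    /* do_vcpu_op -> VCPUOP_initialise -> arch_set_info_guest */
    rc = HYPERVISOR_vcpu_op(VCPUOP_initialise, cpu, ctxt);
    kfree(ctxt);
    return rc;
}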
