
Re: [Xen-devel] [V10 PATCH 09/23] PVH xen: introduce pvh_set_vcpu_info() and vmx_pvh_set_vcpu_info()



>>> On 10.08.13 at 01:41, Mukesh Rathor <mukesh.rathor@xxxxxxxxxx> wrote:
> On Thu, 08 Aug 2013 07:56:41 +0100
> "Jan Beulich" <JBeulich@xxxxxxxx> wrote:
> 
>> >>> On 08.08.13 at 03:05, Mukesh Rathor <mukesh.rathor@xxxxxxxxxx>
>> >>> wrote:
>> > On Mon, 05 Aug 2013 12:10:15 +0100
>> > "Jan Beulich" <JBeulich@xxxxxxxx> wrote:
>> > 
>> >> >>> On 24.07.13 at 03:59, Mukesh Rathor <mukesh.rathor@xxxxxxxxxx>
>> >> >>> wrote:
>> >> > +int vmx_pvh_set_vcpu_info(struct vcpu *v, struct vcpu_guest_context *ctxtp)
>> >> > +{
>> >> > +    if ( v->vcpu_id == 0 )
>> >> > +        return 0;
>> >> > +
>> >> > +    if ( !(ctxtp->flags & VGCF_in_kernel) )
>> >> > +        return -EINVAL;
>> >> > +
>> >> > +    vmx_vmcs_enter(v);
>> >> > +    __vmwrite(GUEST_GDTR_BASE, ctxtp->gdt.pvh.addr);
>> >> > +    __vmwrite(GUEST_GDTR_LIMIT, ctxtp->gdt.pvh.limit);
>> >> > +    __vmwrite(GUEST_LDTR_BASE, ctxtp->ldt_base);
>> >> > +    __vmwrite(GUEST_LDTR_LIMIT, ctxtp->ldt_ents);
>> >> 
>> >> Just noticed: Aren't you mixing up entries and bytes here?
>> > 
>> > Right:
>> > 
>> > __vmwrite(GUEST_LDTR_LIMIT, (ctxtp->ldt_ents * 8 - 1) );
>> > 
>> > Any formatting issues here? I don't see anything in CODING_STYLE,
>> > and I see code both with and without spaces around '*'.
>> 
>> The inner parentheses are superfluous.
>> 
>> CODING_STYLE is pretty explicit about there needing to be white
>> space around operators: "Spaces are placed [...], and around
>> binary operators (except the structure access operators, '.' and
>> '->')."
>> 
>> > Also, when setting the limit, do we need to worry about the G flag?
>> > Or, for that matter, the D/B flag, i.e. whether the segment grows
>> > up or down? It appears we don't need to worry about that for the
>> > LDT, but I'm not sure from reading the SDMs...
>> 
>> The D/B bit doesn't matter for LDT (and TSS), but the G bit would.
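For reference, the granularity expansion the SDM describes can be sketched as follows (a standalone illustration, not patch code; the helper name is made up):

```c
#include <stdint.h>

/* Expand a descriptor's 20-bit raw limit field to the effective byte
 * limit.  With G=0 the limit is in bytes; with G=1 it is in 4K pages,
 * so the effective limit is (raw << 12) | 0xFFF. */
static uint32_t effective_limit(uint32_t raw_limit, int g_bit)
{
    return g_bit ? (raw_limit << 12) | 0xFFF : raw_limit;
}
```

This is why ignoring the G bit would understate the limit by a factor of 4096 for page-granular descriptors.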
> 
> Ugh, to find the G bit, I need to walk the GDT to find the LDT descriptor.
> Walking the GDT to look for a system descriptor means mapping guest GDT
> pages as I go through the table, and the system descriptor sizes also
> differ between 32-bit and IA-32e modes, adding extra code... All that
> just doesn't seem worth it to me for supporting an LDT during vcpu bringup.

Which is why I suggested requiring the LDT to be empty.
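The entries-vs-bytes conversion under discussion can be sketched as follows (an illustration only; the helper name and return convention are invented, and per the suggestion above a non-zero entry count could instead simply be rejected):

```c
#include <stdint.h>

/* ldt_ents counts descriptors; each LDT descriptor occupies 8 bytes,
 * so the byte-granular limit to load is ldt_ents * 8 - 1.  A zero
 * count means a null LDT, for which nothing should be written.
 * Returns 1 if *limit_out was filled in, 0 for an empty LDT. */
static int pvh_ldt_limit(unsigned int ldt_ents, uint32_t *limit_out)
{
    if ( ldt_ents == 0 )
        return 0;                   /* empty LDT: leave LDTR alone */

    *limit_out = ldt_ents * 8 - 1;
    return 1;
}
```

Requiring `ldt_ents == 0` at vcpu-initialise time, as suggested, sidesteps the G-bit question entirely.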

> Keir, do you have any thoughts? Basically, I'm trying to support
> VCPUOP_initialise here, which is used by a PV guest's boot vcpu to
> set the context of another vcpu it's trying to bring up. In retrospect, I
> should have just created a VCPUOP_initialise_pvh with the limited fields
> needed for PVH. (We already ignore a bunch of stuff for PVH in
> VCPUOP_initialise, like trap_ctxt, event_callback*, syscall_callback*,
> etc...). But anyway, can't we just document in VCPUOP_initialise that
> only the following fields are relevant and honored for PVH:
> 
>    gdt.pvh.addr/limit, and ctxtp->user_regs.cs/ds/ss
> 
> (and the others used in arch_set_info_guest, like user_regs, flags, ...)
> 
> Since we are loading gdtr and the cs/ds/ss selectors, we should also load
> the hidden fields for cs/ds/ss. That, IMO, is plenty of support for
> the vcpu to come up, and the vcpu itself can then load ldtr, fs base, gs
> base, etc. first thing in its HVM container. What do you all think?

If you implement loading the hidden fields of CS, then doing the
same for the LDT shouldn't be that much more code (and if you
permit a non-null LDT selector, then having it in place would even
be a requirement before validating the CS selector). But again,
I had already indicated that I'd be fine with requiring the state to
be truly minimal: CS -> flat 64-bit code descriptor, SS, DS, ES, FS
and GS holding null selectors. And CS descriptor validation done
only in debug mode.
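A check for "flat 64-bit code descriptor" along these lines might look as follows (a sketch only; the bit positions are from the SDM's descriptor layout, but the function and the exact policy are assumptions, not the actual patch):

```c
#include <stdint.h>

/* Rough validation that a raw 8-byte GDT descriptor describes a flat
 * 64-bit code segment: P=1 (bit 47), S=1 (bit 44, non-system),
 * executable (bit 43), L=1 (bit 53), and D=0 (bit 54, must be clear
 * when L is set).  Base/limit can be ignored, as they are in long mode. */
static int is_flat_64bit_code_desc(uint64_t desc)
{
    if ( !(desc & (1ULL << 47)) )   /* present */
        return 0;
    if ( !(desc & (1ULL << 44)) )   /* code/data, not a system descriptor */
        return 0;
    if ( !(desc & (1ULL << 43)) )   /* executable */
        return 0;
    if ( !(desc & (1ULL << 53)) )   /* L bit: 64-bit mode */
        return 0;
    if ( desc & (1ULL << 54) )      /* D must be 0 when L=1 */
        return 0;
    return 1;
}
```

With SS/DS/ES/FS/GS required to hold null selectors, this single descriptor check is all the GDT inspection the minimal-state approach would need.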

Talking of the LDT selector: IIRC you modify struct
vcpu_guest_context's GDT to match PVH needs, but if I'm not
mistaken you don't do the same for the LDT - PVH would require
merely a selector here, not a base/ents pair.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
