
Re: [Xen-devel] [PATCH] x86/svm: Fix svm_update_guest_efer() for domains using shadow paging



>>> On 08.10.18 at 13:03, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 08/10/18 11:12, Jan Beulich wrote:
>>>>> On 05.10.18 at 19:02, <andrew.cooper3@xxxxxxxxxx> wrote:
>>> --- a/xen/arch/x86/hvm/svm/svm.c
>>> +++ b/xen/arch/x86/hvm/svm/svm.c
>>> @@ -649,13 +649,32 @@ void svm_update_guest_cr(struct vcpu *v, unsigned int cr, unsigned int flags)
>>>  static void svm_update_guest_efer(struct vcpu *v)
>>>  {
>>>      struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
>>> -    bool lma = v->arch.hvm.guest_efer & EFER_LMA;
>>> -    uint64_t new_efer;
>>> +    unsigned long guest_efer = v->arch.hvm.guest_efer,
>>> +        xen_efer = read_efer();
>>>  
>>> -    new_efer = (v->arch.hvm.guest_efer | EFER_SVME) & ~EFER_LME;
>>> -    if ( lma )
>>> -        new_efer |= EFER_LME;
>>> -    vmcb_set_efer(vmcb, new_efer);
>>> +    if ( paging_mode_shadow(v->domain) )
>>> +    {
>>> +        /* EFER.NX is a Xen-owned bit and is not under guest control. */
>>> +        guest_efer &= ~EFER_NX;
>>> +        guest_efer |= xen_efer & EFER_NX;
>>> +
>>> +        /*
>>> +         * CR0.PG is a Xen-owned bit, and remains set even when the guest has
>>> +         * logically disabled paging.
>>> +         *
>>> +         * LMA was calculated using the guest CR0.PG setting, but LME needs
>>> +         * clearing to avoid interacting with Xen's CR0.PG setting.  As writes
>>> +         * to CR0 are intercepted, it is safe to leave LME clear at this
>>> +         * point, and fix up both LME and LMA when CR0.PG is set.
>>> +         */
>>> +        if ( !(guest_efer & EFER_LMA) )
>>> +            guest_efer &= ~EFER_LME;
>>> +    }
>> I think this wants an "else", either ASSERT()ing that what the removed
>> code did is actually the case (arranged for by the callers), or
>> retaining the original logic in some form.
> 
> No - the original logic does not want keeping.  It is a latent bug in
> the HAP case, because if the guest could write to CR0, setting CR0.PG
> would fail to activate long mode.

Hmm, perhaps we're talking of different parts of the original code:
I think you talk about the clearing of LME, whereas I am mostly
concerned about the dropped implication of LME from LMA. Also
note that I suggested keeping some form of the old code only as
one option; the other was to add an ASSERT() verifying that the
callers have taken care of that implication. In particular for these
...

>> This looks particularly relevant
>> when hvm_efer_valid() was called with -1 as its cr0_pg argument, as
>> in that case there was not necessarily any correlation enforced
>> between EFER.LMA and CR0.PG.
> 
> On the subject of passing -1, all of that logic is horrible, but it is
> only ever used when LME/LMA is being recalculated around other state
> changes.

... cases I couldn't really convince myself that all callers really
guarantee this.

Anyway - don't read this as an objection to the change. I agree
SVM and VMX should be in line with one another wherever
possible.

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

