
Re: [Xen-devel] hap_invlpg() vs INVLPGA



>>> On 01.02.16 at 10:41, <chegger@xxxxxxxxx> wrote:
> On 01/02/16 10:00, Jan Beulich wrote:
>>>>> On 01.02.16 at 09:14, <chegger@xxxxxxxxx> wrote:
>>> On 01/02/16 09:04, Jan Beulich wrote:
>>>>>> This, otoh, reads as if you imply we intercept the L2's INVLPG.
>>>>>> Yet the INVLPG intercept gets cleared when the domain uses
>>>>>> NPT (and your original change also didn't alter any intercept
>>>>>> settings). Hence I'm still lost how hap_invlpg() can be reached
>>>>>> in that case other than via emulating INVLPG in the instruction
>>>>>> emulator.
>>>>>
>>>>> svm_invlpg_intercept() and vmx_invlpg_intercept() call
>>>>> paging_invlpg().  paging_invlpg() calls hap_invlpg()
>>>>> as initialized in xen/arch/x86/mm/hap/hap.c
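
[A minimal, self-contained sketch of the dispatch path being described
here -- the types and signatures are simplified stand-ins, not the real
Xen declarations:]

/*
 * Rough illustration of the call chain named above: the intercept
 * handler calls paging_invlpg(), which dispatches through the
 * paging-mode function table that hap.c populates with hap_invlpg().
 */
#include <stdbool.h>
#include <stdio.h>

struct vcpu;                                  /* opaque for this sketch */

struct paging_mode {
    bool (*invlpg)(struct vcpu *v, unsigned long va);
};

static bool hap_invlpg(struct vcpu *v, unsigned long va)
{
    (void)v;
    printf("hap_invlpg(va=%#lx)\n", va);      /* stand-in for the real work */
    return true;                              /* "please flush" */
}

/* hap.c initialises its paging_mode structures with .invlpg = hap_invlpg */
static const struct paging_mode hap_mode = { .invlpg = hap_invlpg };

static bool paging_invlpg(struct vcpu *v, unsigned long va)
{
    return hap_mode.invlpg(v, va);            /* table dispatch in real code */
}

/* svm_invlpg_intercept()/vmx_invlpg_intercept() boil down to roughly this */
static void invlpg_intercept(struct vcpu *v, unsigned long va)
{
    if ( paging_invlpg(v, va) )
        printf("flush guest TLB entry for va=%#lx\n", va);
}

int main(void)
{
    invlpg_intercept(NULL, 0x1000UL);
    return 0;
}
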
>>>>
>>>> That's all fine, but according to my previous reply: How does
>>>> execution reach svm_invlpg_intercept() when the INVLPG
>>>> intercept gets disabled for domains using HAP (NPT)?
>>>
>>> The intercept bitmasks for the L1 guest and the L2 guest get binary
>>> OR'ed when emulating the VMENTRY for the L1 guest.
>>> That way the intercepts requested by the L1 hypervisor take effect as
>>> well.
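
[For illustration, a standalone sketch of the merge being described --
the field and function names are invented, the real nestedsvm
structures differ:]

/*
 * When L0 emulates the L1 hypervisor's VMRUN, the intercept words the
 * host uses for the L1 guest and the ones the L1 hypervisor set for its
 * L2 guest get OR'ed into the control area actually used to run L2.
 */
#include <stdint.h>
#include <stdio.h>

struct intercepts {                /* hypothetical, simplified */
    uint32_t general1;
    uint32_t general2;
};

static void nested_merge_intercepts(struct intercepts *merged,
                                    const struct intercepts *host,  /* L0's settings for L1 */
                                    const struct intercepts *guest) /* L1's settings for L2 */
{
    merged->general1 = host->general1 | guest->general1;
    merged->general2 = host->general2 | guest->general2;
}

int main(void)
{
    struct intercepts host  = { .general1 = 0x00000020u, .general2 = 0x1u };
    struct intercepts guest = { .general1 = 0x02000000u, .general2 = 0x0u };
    struct intercepts merged;

    nested_merge_intercepts(&merged, &host, &guest);
    printf("general1=%#x general2=%#x\n", merged.general1, merged.general2);
    return 0;
}
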
>> 
>> Okay, I can see this perhaps being correct (albeit unexpected)
>> for general1-intercepts (because all 32 bits are defined), but
>> clearly this is broken for e.g. general2-intercepts (where the
>> guest could set flags the hypervisor doesn't know about),
>> leading to the BUG() in nsvm_vmcb_guest_intercepts_exitcode().
>> Hence I didn't expect such behavior to be there in the first place.
> 
> Whenever new intercepts get defined, they must be added.

I'm sorry, but no - this attitude is why nested mode can't be
expected to become supported any time soon. Unknown intercepts
must be explicitly filtered out and/or unknown L2 exits must be
handled gracefully (at least as far as the hypervisor is concerned).
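
[A concrete sketch of the two options just mentioned -- the mask value,
names, and exit-code handling below are hypothetical, not the existing
nestedsvm code:]

/*
 * Two mitigations, sketched: (a) strip intercept bits L0 does not know
 * about before OR'ing them in, (b) fail an unrecognised L2 exit code
 * back to the guest instead of hitting a BUG() in the hypervisor.
 */
#include <stdint.h>
#include <stdio.h>

#define GENERAL2_KNOWN_BITS  0x0000ffffu   /* hypothetical: bits L0 understands */

/* (a) filter unknown general2 intercepts before merging */
static uint32_t merge_general2(uint32_t host_bits, uint32_t guest_bits)
{
    return host_bits | (guest_bits & GENERAL2_KNOWN_BITS);
}

/* (b) handle an unknown L2 exit code gracefully: crash/fault the guest,
 * never the hypervisor */
static int handle_l2_exitcode(uint32_t exitcode)
{
    switch ( exitcode )
    {
    /* ... known exit codes handled here ... */
    default:
        fprintf(stderr, "unhandled nested exitcode %#x\n", exitcode);
        return -1;                         /* caller crashes only the guest */
    }
}

int main(void)
{
    printf("merged general2 = %#x\n", merge_general2(0x3u, 0xdead0001u));
    printf("rc = %d\n", handle_l2_exitcode(0x400u));
    return 0;
}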

>> And then this still doesn't make svm_invlpg_intercept() reachable:
>> While the L2 guest runs, the INVLPG intercept would be reflected
>> to the L1 guest. Whereas while the L1 guest runs, the intercept
>> would be off.
> 
> While this is correct, the L0 hypervisor must flush the nested hap, or
> else whatever the L1 hypervisor does has no real effect on the L2
> guest, because the TLB/MMU pagetable walk is not different.

I don't understand: You agree that svm_invlpg_intercept() is
unreachable when the guest uses HAP, but at the same time
you say that what it does is required for correct operation?

Jan

