
Re: [Xen-devel] [PATCHv3 1/2] x86/ept: invalidate guest physical mappings on VMENTER



On Fri, Dec 4, 2015 at 1:39 PM, David Vrabel <david.vrabel@xxxxxxxxxx> wrote:
> On 04/12/15 11:00, George Dunlap wrote:
>> On 03/12/15 16:42, David Vrabel wrote:
>>> If a guest allocates a page and the tlbflush_timestamp on the page
>>> indicates that a TLB flush of the previous owner is required, only the
>>> linear and combined mappings are invalidated.  The guest-physical
>>> mappings are not invalidated.
>>>
>>> This is currently safe because the EPT code ensures that the
>>> guest-physical and combined mappings are invalidated /before/ the page
>>> is freed.  However, this prevents us from deferring the EPT invalidate
>>> until after the page is freed (e.g., to defer the invalidate until the
>>> p2m locks are released).
>>>
>>> The TLB flush that may be done after allocating a page already
>>> causes the original guest to VMEXIT, thus on VMENTER we can do an
>>> INVEPT if one is still pending.
>>>
>>> ept_sync_domain() now marks all PCPUs as needing to be invalidated,
>>> including PCPUs that the domain has not run on.  We still only IPI
>>> those PCPUs that are active so this does not result in any more[1]
>>> INVEPT calls.
>>>
>>> We do not attempt to track when PCPUs may have cached translations
>>> because the only safe time to clear this per-CPU state is when,
>>> immediately after an invalidate, the PCPU is not active (i.e., the
>>> PCPU is not in d->domain_dirty_cpumask).  Since we only invalidate
>>> on VMENTER or by IPIing active PCPUs, this can never happen.
>>>
>>> [1] There is one unnecessary INVEPT when the domain runs on a PCPU for
>>>     the very first time.  But this is: a) only 1 additional invalidate
>>>     per PCPU for the lifetime of the domain; and b) we can avoid it
>>>     with a subsequent change.
>>>
>>> Signed-off-by: David Vrabel <david.vrabel@xxxxxxxxxx>
>>
>> This looks like a definite improvement.
>>
>> One thing you missed is that ept_p2m_init() calls
>> __ept_sync_domain() on all CPUs; but because the "invalidate" mask
>> is clear at that point, nothing will actually happen.
>
> Good point.  I'd missed this because I'd planned to replace this initial
> invalidate by initializing ept->invalidate to all ones (as I alluded to
> in the [1] footnote).
>
>> Part of this, I think, comes from inverting what the mask means.
>> Before this patch, "not synced" is the default, and therefore you
>> need to invalidate unless someone has set the bit saying you don't
>> have to.  After this, "don't invalidate" is the default and you do
>> nothing unless someone has set a bit saying you do have to.
>>
>> I think I'd prefer it if you left the mask as "synced_mask"; then
>> you can actually take that on_each_cpu() out of ept_p2m_init()
>> entirely.  (Probably wants a comment pointing that out.)
>
> I changed its name because its old use as synced_mask (i.e., the set
> of CPUs needing to be notified of a required invalidation) did not
> match its name.  I'd rather not keep the name and invert its use.

I took the past tense ("synced") to mean, "These CPUs have been
brought into sync (or are no longer out of sync)".  CPUs start out
not-synced, so the bit is initialized clear; when an INVEPT is
executed, they become synced, so you set the bit; and when you change
the EPT tables, they are no longer synced, so you clear the bit.
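
To make that lifecycle concrete, here is a minimal toy sketch of the
"synced" convention as I read it.  This is standalone C: a plain
bitmask stands in for the cpumask, and vmenter() and ept_sync_domain()
are just stand-ins for the real Xen entry points, not the actual patch
code:

    #include <stdio.h>

    static unsigned long synced_mask;  /* bit set => CPU is in sync */

    /* EPT tables changed: every CPU may hold stale translations. */
    static void ept_sync_domain(void)
    {
        synced_mask = 0;  /* mark all CPUs not-synced */
        /* the real code would also IPI active CPUs here */
    }

    /* VMENTER path: invalidate only if this CPU is not yet synced. */
    static void vmenter(unsigned int cpu)
    {
        if ( !(synced_mask & (1UL << cpu)) )
        {
            printf("cpu%u: INVEPT\n", cpu);  /* __invept() here */
            synced_mask |= 1UL << cpu;       /* now in sync */
        }
    }

    int main(void)
    {
        vmenter(0);        /* first entry: INVEPT */
        vmenter(0);        /* already synced: nothing */
        ept_sync_domain(); /* tables changed */
        vmenter(0);        /* INVEPT again */
        return 0;
    }

Because the mask starts all-clear, the first VMENTER on each CPU
invalidates automatically, which is why the on_each_cpu() call in
ept_p2m_init() becomes unnecessary.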

I still think defaulting to zero and "not-synced" will minimize the
risk of people making a mistake later (e.g., by forgetting to
initialize them to 1 instead of 0), but it's not a big deal either
way.
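
For comparison, the inverted "invalidate" convention can be modelled
by swapping these definitions into the sketch above (again toy names,
not the patch's actual code):

    /* Bit set => CPU still needs an INVEPT before running the guest. */
    static unsigned long invalidate_mask;

    static void ept_sync_domain(void)
    {
        invalidate_mask = ~0UL;  /* every CPU needs invalidation */
    }

    static void vmenter(unsigned int cpu)
    {
        if ( invalidate_mask & (1UL << cpu) )
        {
            printf("cpu%u: INVEPT\n", cpu);  /* __invept() here */
            invalidate_mask &= ~(1UL << cpu);
        }
    }

Here a zero-initialized mask means "nothing to do", so it must be
explicitly initialized to all ones at domain creation (or an initial
invalidate done); forgetting that is exactly the mistake I'm worried
about.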

 -George
