
Re: [Xen-devel] [PATCH] xen/arm: p2m: Correctly flush TLB in create_p2m_entries



On 01/09/2014 02:05 PM, Ian Campbell wrote:
> On Thu, 2014-01-09 at 13:14 +0000, Julien Grall wrote:
>>
>> On 01/09/2014 10:59 AM, Ian Campbell wrote:
>>> On Wed, 2014-01-08 at 17:13 +0000, Stefano Stabellini wrote:
>>>> On Wed, 8 Jan 2014, Julien Grall wrote:
>>>>> The p2m is shared between VCPUs for each domain. Currently Xen only flush
>>>>> TLB on the local PCPU. This could result to mismatch between the mapping
>>>>> in the p2m and TLBs.
>>>>>
>>>>> Flush TLBs used by this domain on every PCPU.
>>>>>
>>>>> Signed-off-by: Julien Grall <julien.grall@xxxxxxxxxx>
>>>>
>>>> The fix makes sense to me.
>>>
>>> Me too. Has anyone had a grep for similar issues?
>>
>> I think flush_xen_data_tlb and flush_xen_text_tlb should also be 
>> innershareable.
> 
> I think text_tlb is ok, it's only used at start of day. The usage in
> setup_pagetables and mmu_init_secondary_cpus are both explicitly local I
> think. Perhaps it should be renamed _local.

After looking at the code, both functions are only used to flush the
TLB for the current CPU. Suffixing flush_xen_data_tlb, flush_xen_text_tlb
and flush_xen_data_tlb_range_va with _local sounds like a good idea. Is
this 4.4 material?
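
Something along these lines for the local variant (rough sketch only:
the name and the raw CP15 encoding are my assumptions, for ARMv7 with
the Virtualization Extensions where TLBIALLH is p15, 4, <Rt>, c8, c7, 0):

/*
 * Sketch: flush all hypervisor (Hyp-mode) TLB entries on the local
 * PCPU only.
 */
static inline void flush_xen_data_tlb_local(void)
{
    unsigned long zero = 0;

    asm volatile("dsb\n\t"                       /* make preceding PT writes visible */
                 "mcr p15, 4, %0, c8, c7, 0\n\t" /* TLBIALLH: Hyp TLB, this CPU only */
                 "dsb\n\t"                       /* wait for the invalidation to complete */
                 "isb\n\t"
                 : : "r" (zero) : "memory");
}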

> The case in set_pte_flags_on_range via free_init_memory I'm less sure
> about, but I don't think stale tlb entries are actually an issue here,
> since there will certainly be one when the page becomes used again. But
> maybe it would be safest to make it global.

These translations are only used in hypervisor mode, so it should be
safe ...

We could flush the TLB on every CPU; that would mean providing both
flush_xen_text_tlb and flush_xen_text_tlb_local.
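
For the version that targets every PCPU, the inner-shareable operation
should be enough, e.g. (sketch: TLBIALLHIS as p15, 4, <Rt>, c8, c3, 0
is my reading of the ARM ARM, and the I-cache/branch-predictor
maintenance a real flush_xen_text_tlb needs is left out):

static inline void flush_xen_text_tlb(void)
{
    unsigned long zero = 0;

    /*
     * Sketch: invalidate Hyp TLB entries on every PCPU in the
     * inner-shareable domain. The _local variant would use TLBIALLH
     * (p15, 4, <Rt>, c8, c7, 0) instead.
     */
    asm volatile("isb\n\t"                       /* sync with previous changes to text */
                 "mcr p15, 4, %0, c8, c3, 0\n\t" /* TLBIALLHIS: Hyp TLB, inner-shareable */
                 "dsb\n\t"                       /* ensure completion of the flush */
                 "isb\n\t"
                 : : "r" (zero) : "memory");
}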

>> The former one is used by flush_tlb_mask.
> 
> Yes, the comment there is just wrong. I think this was my doing based on
> the confusion I mentioned before.
> 
> We need to be careful not to change the (un)map_domain_page since those
> are not shared between processors, I don't think this change would do
> that.

Right, those functions don't need to be changed. We only need to modify
the behaviour of flush_tlb_mask.
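
Concretely, keep the prototype but stop flushing Xen's own TLB there. A
rough sketch (flush_guest_tlb_all_is() is a name I made up, sketched
further down in this mail):

#include <xen/cpumask.h>

/* Hypothetical helper, sketched at the end of this mail. */
void flush_guest_tlb_all_is(void);

/*
 * Sketch: the inner-shareable TLB operations are broadcast to every
 * PCPU in the inner-shareable domain, so the cpumask can be ignored;
 * the important change is flushing the guest TLBs instead of Xen's.
 */
void flush_tlb_mask(const cpumask_t *mask)
{
    flush_guest_tlb_all_is();
}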

>>  But ... this function seems 
>> badly implement, it's weird to use flush_xen_data_tlb because we are 
>> mainly using flush_tlb_mask in common/grant-table.c. Any ideas?
> 
> Do you mean that this should be flushing the guest TLBs and not Xen's?
> That does seem right... We actually need to be flushing for all vmid's
> too I think -- for the alloc_heap_pages case.

After looking at the code, flush_tlb_mask is called in two specific
places for ARM:
   - alloc_heap_pages: the flush is only called if the newly allocated
page was previously used by a domain, so a non-secure non-hyp
inner-shareable TLB flush is enough. For Xen 4.5 this flush could even
be dropped on ARM, as Xen already flushes the TLB in create_p2m_entries.
   - common/grant_table.c: every call to flush_tlb_mask is made for the
current domain, so a simple TLB flush by VMID, inner-shareable, should
be enough (see the sketch below).
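
For that per-VMID flush I have in mind something like this (sketch: it
relies on my reading of the ARM ARM that TLBIALLIS executed in Hyp mode
only targets the Non-secure PL1&0 regime for the VMID currently in
VTTBR; the name is made up):

/*
 * Sketch: flush the TLB entries of the current VMID on every PCPU in
 * the inner-shareable domain.
 */
static inline void flush_guest_tlb_current_vmid_is(void)
{
    unsigned long zero = 0;

    asm volatile("dsb\n\t"                       /* complete preceding p2m writes */
                 "mcr p15, 0, %0, c8, c3, 0\n\t" /* TLBIALLIS: current VMID only */
                 "dsb\n\t"
                 "isb\n\t"
                 : : "r" (zero) : "memory");
}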

For Xen 4.4, I suggest making flush_tlb_mask perform a non-secure
non-hyp inner-shareable TLB flush.
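
i.e. flush_tlb_mask would end up calling something like the following
(sketch: TLBIALLNSNHIS as p15, 4, <Rt>, c8, c3, 4 is my reading of the
ARM ARM, not a verified encoding):

/*
 * Sketch: invalidate all non-secure, non-hyp TLB entries (all VMIDs,
 * stage 1 and stage 2) on every PCPU in the inner-shareable domain.
 */
void flush_guest_tlb_all_is(void)
{
    unsigned long zero = 0;

    asm volatile("dsb\n\t"
                 "mcr p15, 4, %0, c8, c3, 4\n\t" /* TLBIALLNSNHIS */
                 "dsb\n\t"
                 "isb\n\t"
                 : : "r" (zero) : "memory");
}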

We can then rework this after 4.4 as an optimisation.

Sincerely yours,

-- 
Julien Grall
