
Re: [Xen-devel] [PATCH] xen/arm: create_p2m_entries has to flush TLBs on every CPU



Hi Ian,

On 04/23/2014 12:06 PM, Ian Campbell wrote:
> On Fri, 2014-04-18 at 17:12 +0100, Julien Grall wrote:
>> The function create_p2m_entries creates mappings in second-level page tables
>> which are shared between every CPU.
> 
> You say create_p2m_entries twice but then patch create_xen_entries. Is
> it the patch or the description which is wrong?

The description is wrong, sorry. I will send a new version of this patch.

> 
>> Only flushing TLBs on the local processor may result in wrong behaviour
>> when io{re,un}map is used.
> 
> From this, it sounds like it's the description?
> 
>> Signed-off-by: Julien Grall <julien.grall@xxxxxxxxxx>
>> ---
>>
>> This patch is candidate to be backported to Xen 4.4.
>>
>> create_p2m_entries is only used by the vmap and io{re,un}map functions.
>>
>> Upstream Xen 4.4 calls these functions only when a single CPU is online, so
>> it's "safe". People might want to use them when multiple CPUs are online.
>>
>> Ian: Do you plan to backport your TLB series? If not, this patch will have
>> to change slightly because Xen 4.4 doesn't have the TLB helpers to flush Xen
>> data on every CPU.
> 
> I wasn't planning to backport the tlb series, but I could look at
> backporting the required bits I suppose.
> 
> However, is it needed? Does 4.4 make any ioremap calls where this would
> actually hurt? I think all the uses in xen/arch/arm/platforms/*.c are
> short lived and only accessed from the current cpu. The one in
> xen/drivers/video/arm_hdlcd.c is done before we go SMP anyway.

There are some in the serial drivers. I was also thinking about people making
products based on Xen 4.4. They might use io{re,un}map in their own code
with all CPUs up.

Regards,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel