Re: [Xen-devel] [PATCH v2 4/7] xen/arm: page: Clarify the Xen TLBs helpers name
Hi Stefano,

On 6/10/19 9:51 PM, Stefano Stabellini wrote:
> On Mon, 20 May 2019, Julien Grall wrote:
>> On 20/05/2019 22:01, Stefano Stabellini wrote:
>>> On Fri, 10 May 2019, Julien Grall wrote:
>>>> Feel free to suggest an in-code comment so we can discuss its worthiness.
>>>
>>> I suggest something like the following:
>>>
>>> /*
>>>  * Flush all hypervisor mappings from the TLB of the local processor. Note
>>>  * that instruction cache maintenance might also be required when self
>>>  * modifying Xen code, see D5-2522 in ARM DDI 0487D.a and B3.11.2 in ARM
>>>  * DDI 0406C.c.
>>>  */
>>
>> This looks quite out of context; what is the relation between
>> self-modifying code and a TLB flush?
>
> "Flush all hypervisor mappings from the TLB of the local processor" is
> the description of the function below (it cannot be seen here, but it is
> the function on top of which this comment is supposed to go,
> flush_xen_data_tlb_local). The rest of the comment is informative
> regarding difficult cases such as self-modifying code, which was present
> in the previous version of the code and I would like to retain. The
> relation is that there is a good chance you need to do both.

Sorry, but this doesn't make sense to me. You are unlikely to be modifying
mappings when using self-modifying code. And if you were, then because the
instruction caches implement the IVIPT extension (assuming we forbid IVIVT
caches, as suggested by patch #1 for Arm32), there would be no need to
maintain the cache, because the physical address would be different.

None of the self-modifying code in Xen (i.e. alternatives, livepatch)
requires TLB maintenance.

I also can't see when the two would be necessary at the same time. Can you
please give a concrete example where they would be?

Cheers,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
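For context, the helper under discussion, flush_xen_data_tlb_local(), performs
TLB maintenance only; instruction cache maintenance is a separate operation.
Below is a minimal sketch of the two on arm64 (where Xen runs at EL2). The
helper names, comments, and exact barrier placement are illustrative and may
not match the Xen tree verbatim.

    /* Flush all hypervisor mappings from the TLB of the local processor. */
    static inline void flush_xen_data_tlb_local(void)
    {
        asm volatile (
            "dsb    sy;"        /* Ensure prior PTE writes are visible */
            "tlbi   alle2;"     /* Invalidate all EL2 (hypervisor) TLB entries */
            "dsb    sy;"        /* Wait for the invalidation to complete */
            "isb;"              /* Synchronize the instruction stream */
            : : : "memory");
    }

    /* Invalidate all instruction caches on the local processor to PoU. */
    static inline void invalidate_icache_local(void)
    {
        asm volatile (
            "ic     iallu;"     /* Invalidate all I-caches to PoU */
            "dsb    sy;"        /* Wait for the invalidation to complete */
            "isb;"              /* Refetch instructions after the invalidation */
            : : : "memory");
    }

Nothing in the first helper touches the instruction cache, which is why the
suggested comment reads as out of context when attached to it: self-modifying
code would call something like the second helper, not the TLB flush.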