Re: [Xen-devel] [PATCH] xen/arm: fix affected memory range by dcache clean functions
On Thu, Mar 02, 2017 at 05:15:26PM -0800, Stefano Stabellini wrote:
> clean_dcache_va_range and clean_and_invalidate_dcache_va_range don't
> calculate the range correctly when "end" is not cacheline aligned: the
> loop can stop short, so the last cacheline of the range is skipped and
> never flushed. Fix the issue by aligning the start address down to a
> cacheline boundary.
>
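To make the failure mode concrete (a standalone demo of mine, not part
of the patch; it assumes 64-byte cachelines):

#include <stdio.h>
#include <stdint.h>

#define CL 64  /* assumed cacheline size, for illustration only */

int main(void)
{
    uintptr_t p = 0x10, end = p + 0x40;

    /* Shape of the old loop: step one line at a time from an
     * unaligned start address. */
    for ( ; p < end; p += CL )
        printf("touches line 0x%lx\n",
               (unsigned long)(p & ~(uintptr_t)(CL - 1)));

    /* Prints only "touches line 0x0".  The line at 0x40, which holds
     * bytes 0x40..0x4f of the requested range, is never visited. */
    return 0;
}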
> In addition, make the code simpler and faster in
> invalidate_dcache_va_range by removing the modulo operations and using
> bitmasks instead.
>
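As a side note for readers, the bitmask form is only equivalent to the
modulo form because cacheline_bytes is a power of two. A minimal sketch
of the identity (the helper name is mine, not from the tree):

#include <stdint.h>
#include <stddef.h>

/* Align an address down to the start of its cacheline.  Valid only
 * when line_bytes is a power of two, which dcache line sizes are. */
static inline uintptr_t align_down(uintptr_t addr, size_t line_bytes)
{
    return addr & ~(uintptr_t)(line_bytes - 1);
}

/* The removed form computes the same value:
 *     addr - (addr % line_bytes)
 * but since cacheline_bytes is a runtime variable (extern size_t),
 * the % is a real division the compiler cannot fold away. */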
> Signed-off-by: Stefano Stabellini <sstabellini@xxxxxxxxxx>
> Reported-by: edgar.iglesias@xxxxxxxxxx
> CC: edgar.iglesias@xxxxxxxxxx
This looks good to me and it works on my side.
In the future, perhaps we could convert invalidate_dcache_va_range() to
a smaller implementation along the lines of clean_dcache_va_range().
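One way to read that suggestion (an untested sketch on my side, not a
proposal for this patch): align down once and clean+invalidate every
line, same loop shape as clean_dcache_va_range(). The partial first and
last lines still get the clean half, so bystander data sharing them is
not discarded; the cost is a needless write-back of the interior lines.

static inline int invalidate_dcache_va_range(const void *p, unsigned long size)
{
    const void *end = p + size;

    dsb(sy);           /* So the CPU issues all writes to the range */
    p = (void *)((uintptr_t)p & ~(cacheline_bytes - 1));
    for ( ; p < end; p += cacheline_bytes )
        asm volatile (__clean_and_invalidate_dcache_one(0) : : "r" (p));
    dsb(sy);           /* So we know the flushes happen before continuing */
    /* ARM callers assume that dcache_* functions cannot fail. */
    return 0;
}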
Anyway:
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xxxxxxxxxx>
Tested-by: Edgar E. Iglesias <edgar.iglesias@xxxxxxxxxx>
And super thanks for helping me resolve this issue!
Cheers,
Edgar
> ---
> xen/include/asm-arm/page.h | 24 +++++++++++-------------
> 1 file changed, 11 insertions(+), 13 deletions(-)
>
> diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
> index 86de0b6..4b46e88 100644
> --- a/xen/include/asm-arm/page.h
> +++ b/xen/include/asm-arm/page.h
> @@ -291,24 +291,20 @@ extern size_t cacheline_bytes;
>
> static inline int invalidate_dcache_va_range(const void *p, unsigned long size)
> {
> - size_t off;
> const void *end = p + size;
> + size_t cacheline_mask = cacheline_bytes - 1;
>
> dsb(sy); /* So the CPU issues all writes to the range */
>
> - off = (unsigned long)p % cacheline_bytes;
> - if ( off )
> + if ( (uintptr_t)p & cacheline_mask )
> {
> - p -= off;
> + p = (void *)((uintptr_t)p & ~cacheline_mask);
> asm volatile (__clean_and_invalidate_dcache_one(0) : : "r" (p));
> p += cacheline_bytes;
> - size -= cacheline_bytes - off;
> }
> - off = (unsigned long)end % cacheline_bytes;
> - if ( off )
> + if ( (uintptr_t)end & cacheline_mask )
> {
> - end -= off;
> - size -= off;
> + end = (void *)((uintptr_t)end & ~cacheline_mask);
> asm volatile (__clean_and_invalidate_dcache_one(0) : : "r" (end));
> }
>
> @@ -322,9 +318,10 @@ static inline int invalidate_dcache_va_range(const void *p, unsigned long size)
>
> static inline int clean_dcache_va_range(const void *p, unsigned long size)
> {
> - const void *end;
> + const void *end = p + size;
> dsb(sy); /* So the CPU issues all writes to the range */
> - for ( end = p + size; p < end; p += cacheline_bytes )
> + p = (void *)((uintptr_t)p & ~(cacheline_bytes - 1));
> + for ( ; p < end; p += cacheline_bytes )
> asm volatile (__clean_dcache_one(0) : : "r" (p));
> dsb(sy); /* So we know the flushes happen before continuing */
> /* ARM callers assume that dcache_* functions cannot fail. */
> @@ -334,9 +331,10 @@ static inline int clean_dcache_va_range(const void *p, unsigned long size)
> static inline int clean_and_invalidate_dcache_va_range
> (const void *p, unsigned long size)
> {
> - const void *end;
> + const void *end = p + size;
> dsb(sy); /* So the CPU issues all writes to the range */
> - for ( end = p + size; p < end; p += cacheline_bytes )
> + p = (void *)((uintptr_t)p & ~(cacheline_bytes - 1));
> + for ( ; p < end; p += cacheline_bytes )
> asm volatile (__clean_and_invalidate_dcache_one(0) : : "r" (p));
> dsb(sy); /* So we know the flushes happen before continuing */
> /* ARM callers assume that dcache_* functions cannot fail. */
> --
> 1.9.1
>