[Xen-changelog] [xen stable-4.8] xen/arm: fix affected memory range by dcache clean functions
commit 2e68fda962226d4de916d5ceab9d9d6037d94d45
Author: Stefano Stabellini <sstabellini@xxxxxxxxxx>
AuthorDate: Thu Mar 2 17:15:26 2017 -0800
Commit: Stefano Stabellini <sstabellini@xxxxxxxxxx>
CommitDate: Tue Mar 7 11:23:43 2017 -0800
xen/arm: fix affected memory range by dcache clean functions
clean_dcache_va_range and clean_and_invalidate_dcache_va_range don't
calculate the range correctly when "end" is not cacheline aligned. As a
result, the last cacheline is not cleaned. Fix the issue by aligning the
start address to the cacheline size.
In addition, make the code simpler and faster in
invalidate_dcache_va_range, by removing the modulo operation and using
bitmasks instead. Also remove the size adjustments in
invalidate_dcache_va_range, because the size variable is not used later
on.
Signed-off-by: Stefano Stabellini <sstabellini@xxxxxxxxxx>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xxxxxxxxxx>
Reviewed-by: Julien Grall <julien.grall@xxxxxxx>
Tested-by: Edgar E. Iglesias <edgar.iglesias@xxxxxxxxxx>
---
xen/include/asm-arm/page.h | 24 +++++++++++-------------
1 file changed, 11 insertions(+), 13 deletions(-)
diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
index c492d6d..a0f9344 100644
--- a/xen/include/asm-arm/page.h
+++ b/xen/include/asm-arm/page.h
@@ -292,24 +292,20 @@ extern size_t cacheline_bytes;
static inline int invalidate_dcache_va_range(const void *p, unsigned long size)
{
- size_t off;
const void *end = p + size;
+ size_t cacheline_mask = cacheline_bytes - 1;
dsb(sy); /* So the CPU issues all writes to the range */
- off = (unsigned long)p % cacheline_bytes;
- if ( off )
+ if ( (uintptr_t)p & cacheline_mask )
{
- p -= off;
+ p = (void *)((uintptr_t)p & ~cacheline_mask);
asm volatile (__clean_and_invalidate_dcache_one(0) : : "r" (p));
p += cacheline_bytes;
- size -= cacheline_bytes - off;
}
- off = (unsigned long)end % cacheline_bytes;
- if ( off )
+ if ( (uintptr_t)end & cacheline_mask )
{
- end -= off;
- size -= off;
+ end = (void *)((uintptr_t)end & ~cacheline_mask);
asm volatile (__clean_and_invalidate_dcache_one(0) : : "r" (end));
}
@@ -323,9 +319,10 @@ static inline int invalidate_dcache_va_range(const void *p, unsigned long size)
static inline int clean_dcache_va_range(const void *p, unsigned long size)
{
- const void *end;
+ const void *end = p + size;
dsb(sy); /* So the CPU issues all writes to the range */
- for ( end = p + size; p < end; p += cacheline_bytes )
+ p = (void *)((uintptr_t)p & ~(cacheline_bytes - 1));
+ for ( ; p < end; p += cacheline_bytes )
asm volatile (__clean_dcache_one(0) : : "r" (p));
dsb(sy); /* So we know the flushes happen before continuing */
/* ARM callers assume that dcache_* functions cannot fail. */
@@ -335,9 +332,10 @@ static inline int clean_dcache_va_range(const void *p, unsigned long size)
static inline int clean_and_invalidate_dcache_va_range
(const void *p, unsigned long size)
{
- const void *end;
+ const void *end = p + size;
dsb(sy); /* So the CPU issues all writes to the range */
- for ( end = p + size; p < end; p += cacheline_bytes )
+ p = (void *)((uintptr_t)p & ~(cacheline_bytes - 1));
+ for ( ; p < end; p += cacheline_bytes )
asm volatile (__clean_and_invalidate_dcache_one(0) : : "r" (p));
dsb(sy); /* So we know the flushes happen before continuing */
/* ARM callers assume that dcache_* functions cannot fail. */
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.8
_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxx
https://lists.xenproject.org/xen-changelog