This repository was archived by the owner on May 21, 2019. It is now read-only.

Commit bd2cfdd

Merging r323315:
------------------------------------------------------------------------
r323315 | mstorsjo | 2018-01-24 11:14:52 +0100 (Wed, 24 Jan 2018) | 9 lines

[builtins] Align addresses to cache lines in __clear_cache for aarch64

This makes sure that the last cache line gets invalidated properly.

This matches the example code at
http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.den0024a/BABJDBHI.html,
and also matches what libgcc does.

Differential Revision: https://reviews.llvm.org/D42196
------------------------------------------------------------------------

git-svn-id: https://llvm.org/svn/llvm-project/compiler-rt/branches/release_60@323338 91177308-0d34-0410-b5e6-96231b3b80d8
1 parent 8d6a4ba commit bd2cfdd

1 file changed, 4 insertions(+), 2 deletions(-)

lib/builtins/clear_cache.c

@@ -163,12 +163,14 @@ void __clear_cache(void *start, void *end) {
    * uintptr_t in case this runs in an IPL32 environment.
    */
   const size_t dcache_line_size = 4 << ((ctr_el0 >> 16) & 15);
-  for (addr = xstart; addr < xend; addr += dcache_line_size)
+  for (addr = xstart & ~(dcache_line_size - 1); addr < xend;
+       addr += dcache_line_size)
     __asm __volatile("dc cvau, %0" :: "r"(addr));
   __asm __volatile("dsb ish");

   const size_t icache_line_size = 4 << ((ctr_el0 >> 0) & 15);
-  for (addr = xstart; addr < xend; addr += icache_line_size)
+  for (addr = xstart & ~(icache_line_size - 1); addr < xend;
+       addr += icache_line_size)
     __asm __volatile("ic ivau, %0" :: "r"(addr));
   __asm __volatile("isb sy");
 #elif defined (__powerpc64__)
