Re: [Xen-devel] [PATCH ARM v6 07/14] mini-os: arm: boot code
On 17 July 2014 17:28, Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:
> On Wed, 2014-07-16 at 12:07 +0100, Thomas Leonard wrote:
>> +       @ Enable MMU / SCTLR
>> +       @ We leave caches off for now because we're going to be changing the
>> +       @ TLB a lot and this avoids having to flush the caches each time.
>
> ARMv7 (or perhaps it was the virt extensions, either way you can rely on
> it here) makes the MMU cache coherent, so you don't need to flush the
> caches when doing a PTE write any more.

I tried enabling the D-cache here, but then it frequently (but not
always) fails to boot, even with various dsb and isb instructions
scattered around.

If I enable the D-cache immediately before the loop that writes the
translation table entries, it fails. Adding a dsb after the loop
doesn't help. But if I enable it after the loop then it's fine. I
don't know what's going on there - any suggestions?

Here's my (failing) test-case (tested using Xen/stable-4.4 -
4.4.1-rc1-28-gee81dda):

https://github.com/talex5/xen/commit/adadfa7e483aa3f3a351c0aeca6a560614fcdead

>> +       @ Enable MMU / SCTLR
>> +       @ We leave caches off for now because we're going to be changing the
>> +       @ TLB a lot and this avoids having to flush the caches each time.
>> +       mrc     p15, 0, r1, c1, c0, 0           @ SCTLR
>> +       orr     r1, r1, #0x1                    @ Enable MMU
>
> I think you want a dsb here to ensure that any previous PTE write you
> did when setting up the PTs have actually occurred.

Presumably this isn't needed in the current code, since the caches are off?

>> +       mcr     p15, 0, r1, c1, c0, 0           @ SCTLR
>> +       isb
>> +
>
>> +       adr     r1, stage2              @ Physical address of stage2
>> +       sub     r1, r1, r9              @ Virtual address of stage2
>
>         ldr r1, =stage2
>
> would do this in one, I think.

Yes, OK.

>> +       bx      r1
>> +
>> +@ Called once the MMU is enabled. The boot code and the page table are
>> +@ mapped, but nothing else is yet.
>> +@
>> +@ => r2 -> dtb (physical)
>> +@    r7 = virtual address of page table
>> +@    r8 = section entry template (flags)
>> +@    r9 = desired physical - virtual offset
>> +@    pc -> somewhere in newly-mapped virtual code section
>> +stage2:
>
> I think you want another dsb around here somewhere to ensure that none
> of the PT updates you do in a moment float up and interfere with the
> mapping of the code between the write to SCTLR until the bx r1.

How does that happen? Presumably the isb below will block that.

>> +       @ Invalidate TLB
>> +       mcr     p15, 0, r1, c8, c7, 0           @ TLBIALL
>> +       isb
>> +
>> +       @ The new mapping has now taken effect:
>> +       @ r7 -> page_dir
>> +
>> +       @ Fill in the whole top-level translation table (at page_dir).
>> +       @ Populate the whole pagedir with 1MB section descriptors.
>> +
>> +       mov     r1, r7                  @ r1 -> first section entry
>> +       add     r3, r1, #4*4*1024       @ limit (4 GB address space, 4 byte entries)
>> +       orr     r0, r8, r9              @ r0 = entry mapping section zero to start of physical RAM
>> +1:
>> +       str     r0, [r1],#4             @ write the section entry
>> +       add     r0, r0, #1 << 20        @ next physical page (wraps)
>> +       cmp     r1, r3
>> +       bne     1b
>
> dsb here to make sure those writes have actually occurred before you
> come to rely on them.
>
> If there is any chance of any of those writes going through a mapping
> which you are then changing at the same time then I worry that you might
> also need a dsb before each write.

The mapping for the translation table itself doesn't change (its entry
will be written with the same value as previously).
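So I'd expect one barrier after the loop to cover everything. For
concreteness, here's an untested sketch of that arrangement - the same
loop as in the patch, with a single dsb added after it:

        mov     r1, r7                  @ r1 -> first section entry
        add     r3, r1, #4*4*1024       @ limit (4 GB address space, 4 byte entries)
        orr     r0, r8, r9              @ r0 = entry mapping section zero to start of physical RAM
1:
        str     r0, [r1],#4             @ write the section entry
        add     r0, r0, #1 << 20        @ next physical page (wraps)
        cmp     r1, r3
        bne     1b
        dsb                             @ complete the table writes before relying on the new entries

If that still isn't enough with the D-cache on, a dsb before each
write would be the next thing to try.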
> I'm not 100% sure what would happen
> in this case, I think you'd be better off creating a whole new set of
> PTs rather than pulling the rug out from under yourself. It's either
> that or cold towel on head time ;-)
>
> Ian.


--
Dr Thomas Leonard        http://0install.net/
GPG: 9242 9807 C985 3C07 44A6  8B9A AE07 8280 59A5 3CC1
GPG: DA98 25AE CAD0 8975 7CDA  BD8E 0713 3F96 CA74 D8BA

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel