Re: [Xen-devel] [PATCH 6/7] xen/arm: flush D-cache and I-cache when appropriate
At 16:03 +0100 on 24 Oct (1351094626), Stefano Stabellini wrote:
> - invalidate tlb after setting WXN;
> - flush D-cache and I-cache after relocation;
> - flush D-cache after writing to smp_up_cpu;
> - flush TLB before changing HTTBR;
> - flush I-cache after changing HTTBR;
> - flush I-cache and branch predictor after writing Xen text ptes.
>
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>

> @@ -244,10 +245,18 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
>
>      /* Change pagetables to the copy in the relocated Xen */
>      boot_httbr = (unsigned long) xen_pgtable + phys_offset;
> +    flush_xen_dcache_va((unsigned long)&boot_httbr);
> +    for ( i = 0; i < _end - _start; i += cacheline_size )
> +        flush_xen_dcache_va(dest_va + i);
> +    flush_xen_text_tlb();
> +
>      asm volatile (
> +        "dsb;"                  /* Ensure visibility of HTTBR update */

That comment should be changed -- this dsb is to make sure all the PT
changes are completed, right?

>          STORE_CP64(0, HTTBR)    /* Change translation base */
>          "dsb;"                  /* Ensure visibility of HTTBR update */
> +        "isb;"
>          STORE_CP32(0, TLBIALLH) /* Flush hypervisor TLB */
> +        STORE_CP32(0, ICIALLU)  /* Flush I-cache */
>          STORE_CP32(0, BPIALL)   /* Flush branch predictor */
>          "dsb;"                  /* Ensure completion of TLB+BP flush */
>          "isb;"
> diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
> index 9511c45..7d70d8c 100644
> --- a/xen/include/asm-arm/page.h
> +++ b/xen/include/asm-arm/page.h
> @@ -232,13 +232,26 @@ static inline lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr)
>  static inline void write_pte(lpae_t *p, lpae_t pte)
>  {
>      asm volatile (
> +        "dsb;"

I guess this is to make sure all writes that used the old mapping have
completed?  Can you add a comment?

>          /* Safely write the entry (STRD is atomic on CPUs that support LPAE) */
>          "strd %0, %H0, [%1];"
> +        "dsb;"
>          /* Push this cacheline to the PoC so the rest of the system sees it. */
>          STORE_CP32(1, DCCMVAC)
> +        "isb;"

This is for code modifications, right?  I think we can drop it, and
have all the paths that modify text mappings do explicit isb()s there
-- the vast majority of PTE writes don't need it.

>          : : "r" (pte.bits), "r" (p) : "memory");
>  }
>
> +static inline void flush_xen_dcache_va(unsigned long va)

Three of the four users of this function cast their arguments from
pointer types -- maybe it should take a void * instead?

> +{
> +    register unsigned long r0 asm ("r0") = va;

I don't think this is necessary -- why not just pass va directly to
the inline asm?  We don't care what register it's in (and if we did,
I'm not convinced this would guarantee it was r0).

> +    asm volatile (
> +        "dsb;"
> +        STORE_CP32(0, DCCMVAC)
> +        "isb;"
> +        : : "r" (r0) : "memory");

Does this need a 'memory' clobber?  Can we get away with just saying
it consumes *va as an input?  All we need to be sure of is that the
particular thing we're flushing has been written out; there's no need
to stop any other optimizations.  I guess it might need to be re-cast
as a macro so the compiler knows how big *va is?

Tim.

> +}
> +
>  /*
>   * Flush all hypervisor mappings from the TLB and branch predictor.
>   * This is needed after changing Xen code mappings.
> @@ -249,6 +262,7 @@ static inline void flush_xen_text_tlb(void)
>      asm volatile (
>          "dsb;"                  /* Ensure visibility of PTE writes */
>          STORE_CP32(0, TLBIALLH) /* Flush hypervisor TLB */
> +        STORE_CP32(0, ICIALLU)  /* Flush I-cache */
>          STORE_CP32(0, BPIALL)   /* Flush branch predictor */
>          "dsb;"                  /* Ensure completion of TLB+BP flush */
>          "isb;"
> --
> 1.7.2.5
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxx
> http://lists.xen.org/xen-devel