Re: [PATCH v1 03/14] xen/riscv: introduce ioremap()
On 4/10/25 5:13 PM, Jan Beulich wrote:
> On 08.04.2025 17:57, Oleksii Kurochko wrote:
>> Based on the RISC-V unprivileged spec (version 20240411):
>>
>> ```
>> For implementations that conform to the RISC-V Unix Platform
>> Specification, I/O devices and DMA operations are required to access
>> memory coherently and via strongly ordered I/O channels. Therefore,
>> accesses to regular main memory regions that are concurrently accessed
>> by external devices can also use the standard synchronization
>> mechanisms. Implementations that do not conform to the Unix Platform
>> Specification and/or in which devices do not access memory coherently
>> will need to use mechanisms (which are currently platform-specific or
>> device-specific) to enforce coherency.
>>
>> I/O regions in the address space should be considered non-cacheable
>> regions in the PMAs for those regions. Such regions can be considered
>> coherent by the PMA if they are not cached by any agent.
>> ```
>>
>> and [1]:
>>
>> ```
>> The current riscv linux implementation requires SOC system to support
>> memory coherence between all I/O devices and CPUs. But some SOC
>> systems cannot maintain the coherence and they need support cache
>> clean/invalid operations to synchronize data.
>>
>> Current implementation is no problem with SiFive FU540, because FU540
>> keeps all IO devices and DMA master devices coherence with CPU. But to
>> a traditional SOC vendor, it may already have a stable non-coherency
>> SOC system, the need is simply to replace the CPU with RV CPU and
>> rebuild the whole system with IO-coherency is very expensive.
>> ```
>>
>> and the fact that all CPUs known to me that support the H extension,
>> and that are going to be supported by Xen, maintain memory coherency
>> between all I/O devices and CPUs, it is currently safe to use the
>> PAGE_HYPERVISOR attribute.
>>
>> However, a platform that does not provide memory coherency should
>> support the CMO extensions and Svpbmt, and in that scenario updates to
>> ioremap() will be necessary. For now, a compilation error is generated
>> to ensure that the need to update ioremap() is not overlooked.
>>
>> [1] https://patchwork.kernel.org/project/linux-riscv/patch/1555947870-23014-1-git-send-email-guoren@xxxxxxxxxx/
>
> But MMIO access correctness isn't just a matter of coherency. There may
> not be any caching involved in most cases, or else you may observe
> significantly delayed or even dropped (folded with later ones) writes,
> and reads may be serviced from the cache instead of going to actual
> MMIO. Therefore ...
>
>> --- a/xen/arch/riscv/Kconfig
>> +++ b/xen/arch/riscv/Kconfig
>> @@ -15,6 +15,18 @@ config ARCH_DEFCONFIG
>>  	string
>>  	default "arch/riscv/configs/tiny64_defconfig"
>>  
>> +config HAS_SVPBMT
>> +	bool
>> +	help
>> +	  This config enables usage of the Svpbmt ISA extension
>> +	  (Supervisor-mode: page-based memory types).
>> +
>> +	  The memory type for a page contains a combination of attributes
>> +	  that indicate the cacheability, idempotency, and ordering
>> +	  properties for access to that page.
>> +
>> +	  The Svpbmt extension is only available on 64-bit CPUs.
>
> ... I kind of expect this extension (or anything else that there might
> be) will need making use of.

In cases where the Svpbmt extension isn't available, PMA (Physical
Memory Attributes) is used to control which memory regions are
cacheable, non-cacheable, readable, writable, etc. PMA is configured in
M-mode by the firmware (e.g. OpenSBI), as is done on Andes cores, or it
can be fixed at design time, as on SiFive cores.

In the case of QEMU, I assume it is QEMU's responsibility to properly
emulate accesses to device memory regions. Since QEMU does not appear to
provide registers for configuring PMA, it seems that PMA is not
emulated. Additionally, QEMU does not emulate caches. Based on that, I
expect that it is the responsibility of the firmware or the hardware
itself to provide the correct PMA configuration.

I want to note that even when Svpbmt is available, the PMA settings can
still be used, or be overridden by Svpbmt's attribute value.

I will update the commit message to make this clearer.
>> @@ -548,3 +549,21 @@ void clear_fixmap(unsigned int map)
>>                       FIXMAP_ADDR(map) + PAGE_SIZE) != 0 )
>>          BUG();
>>  }
>> +
>> +void *ioremap(paddr_t pa, size_t len)
>> +{
>> +    mfn_t mfn = _mfn(PFN_DOWN(pa));
>> +    unsigned int offs = pa & (PAGE_SIZE - 1);
>> +    unsigned int nr = PFN_UP(offs + len);
>> +
>> +#ifdef CONFIG_HAS_SVPBMT
>> +    #error "an introduction of PAGE_HYPERVISOR_IOREMAP is needed for __vmap()"
>> +#endif
>
> While, as per above, I don't think this can stay, just in case: as
> indicated earlier, pre-processor directives want to have the # in the
> first column.

I think it can be safely dropped now.

~ Oleksii