Re: [RFC v1 3/5] xen/arm: introduce SCMI-SMC mediator driver
On Thu, Jan 06, 2022 at 02:02:10PM +0000, Julien Grall wrote:
> On 06/01/2022 13:53, Oleksii Moisieiev wrote:
> > Hi Julien,
>
> Hi,
>
> > On Mon, Jan 03, 2022 at 01:14:17PM +0000, Julien Grall wrote:
> > > Hi,
> > >
> > > On 24/12/2021 17:02, Oleksii Moisieiev wrote:
> > > > On Fri, Dec 24, 2021 at 03:42:42PM +0100, Julien Grall wrote:
> > > > > On 20/12/2021 16:41, Oleksii Moisieiev wrote:
> > > > > > > 2) What are the expected memory attributes for the regions?
> > > > > >
> > > > > > Xen uses iommu_permit_access to pass the agent page to the guest,
> > > > > > so the guest can access the page directly.
> > > > >
> > > > > I think you misunderstood my comment. Memory can be mapped with
> > > > > various types (e.g. Device, Memory) and attributes (cacheable,
> > > > > non-cacheable...). What will the firmware expect? What will the
> > > > > guest OS usually use?
> > > > >
> > > > > The reason I am asking is that the attributes have to match to
> > > > > avoid any coherency issues. At the moment, if you use
> > > > > XEN_DOMCTL_memory_mapping, Xen will configure the stage-2 to use
> > > > > Device nGnRnE. As a result, the resulting memory access will be
> > > > > Device nGnRnE, which is very strict.
> > > >
> > > > Let me share a configuration example with you:
> > > > SCMI expects the memory to be configured in the device tree:
> > > >
> > > > cpu_scp_shm: scp-shmem@0xXXXXXXX {
> > > >     compatible = "arm,scmi-shmem";
> > > >     reg = <0x0 0xXXXXXX 0x0 0x1000>;
> > > > };
> > > >
> > > > where the XXXXXX address is the one I allocate in the
> > > > alloc_magic_pages function.
> > >
> > > The goal of alloc_magic_pages() is to allocate RAM. However, what you
> > > want is a guest physical address and then to map a part of the shared
> > > page.
> >
> > Do you mean that I can't use alloc_magic_pages to allocate the shared
> > memory region for SCMI?
>
> Correct. alloc_magic_pages() will allocate a RAM page and then assign it
> to the guest. From your description, this is not what you want because
> you will call XEN_DOMCTL_memory_mapping (and therefore replace the
> mapping).

Ok, thanks, I will refactor this part in v2.

> > > I can see two options here:
> > >    1) Hardcode the SCMI region in the memory map
> > >    2) Create a new region in the memory map that can be used for
> > >       reserving memory for mapping.
> >
> > Could you please explain what you mean by the "new region in the
> > memory map"?
>
> I mean reserving some guest physical address range that could be used to
> map a host physical address (e.g. SCMI region, GIC CPU interface...).
>
> So rather than hardcoding the address, we have something more flexible.

Ok, I will fix that in v2.

> > > We still have plenty of space in the guest memory map. So the former
> > > is probably going to be fine for now.
> > >
> > > > Then I get the paddr of the SCMI channel for this domain and use
> > > > XEN_DOMCTL_memory_mapping to map the SCMI channel address to a gfn.
> > > > Hope I was able to answer your question.
> > >
> > > What you provided me is how the guest OS will locate the shared
> > > memory. This still doesn't tell me which memory attribute will be
> > > used to map the page in Stage-1 (guest page-tables).
> > >
> > > To find that out, you want to look at the driver and how the mapping
> > > is done. The Linux driver (drivers/firmware/arm_scmi) is using
> > > devm_ioremap() (see smc_chan_setup()).
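For reference, here is a hedged sketch of the guest-side path described above: the Linux SCMI SMC transport resolves the "shmem" phandle to the arm,scmi-shmem node and maps it with devm_ioremap(). This is a paraphrase of smc_chan_setup() in drivers/firmware/arm_scmi/smc.c, not the verbatim kernel code; the function name scmi_smc_map_shmem() is illustrative, and error handling and transport bookkeeping are trimmed.

#include <linux/device.h>
#include <linux/err.h>
#include <linux/errno.h>
#include <linux/io.h>
#include <linux/ioport.h>
#include <linux/of.h>
#include <linux/of_address.h>

/*
 * Simplified sketch of the shmem lookup and mapping done by the Linux
 * SCMI SMC transport; not the exact kernel code.
 */
static void __iomem *scmi_smc_map_shmem(struct device *dev)
{
        struct device_node *shmem_np;
        struct resource res;
        int ret;

        /* "shmem" points at the cpu_scp_shm node shown earlier in the thread. */
        shmem_np = of_parse_phandle(dev->of_node, "shmem", 0);
        if (!shmem_np)
                return IOMEM_ERR_PTR(-ENODEV);

        ret = of_address_to_resource(shmem_np, 0, &res);
        of_node_put(shmem_np);
        if (ret)
                return IOMEM_ERR_PTR(ret);

        /*
         * devm_ioremap() uses PROT_DEVICE_nGnRE on arm64, so the guest's
         * stage-1 mapping is Device memory; combined with the Device-nGnRnE
         * stage-2 entry Xen installs via XEN_DOMCTL_memory_mapping, the
         * resulting access is Device-nGnRnE.
         */
        return devm_ioremap(dev, res.start, resource_size(&res));
}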
> > > Under the hood, the function devm_ioremap() is using
> > > PROT_DEVICE_nGnRE (arm64), which is one of the most restrictive
> > > memory attributes.
> > >
> > > This means the firmware should be able to deal with the most
> > > restrictive attribute, and therefore using XEN_DOMCTL_memory_mapping
> > > to map the shared page in stage-2 should be fine.
> >
> > I'm using a vmap call to map the channel memory (see
> > smc_create_channel()). The vmap call sets the PAGE_HYPERVISOR flag,
> > which sets the MT_NORMAL (0x7) attribute.
>
> You want to use ioremap().

I used ioremap originally, but changed it to vmap because ioremap doesn't
support memcpy. What if I use __vmap with the MT_DEVICE_nGnRE flag?

> > Considering that the protocol is synchronous and only one agent per
> > channel is expected - this works fine for now.
> > But I agree that the same memory attributes should be used in Xen and
> > the kernel. I will fix that in v2.
>
> I am a bit confused. Are you mapping the full shared memory area in Xen?
> If yes, why do you need to map the memory that is going to be shared
> with a domain?

Xen should have access to all agent channels because it should send
SCMI_BASE_DISCOVER_AGENT to each channel and receive the agent_id during
the scmi_probe call.

Best regards,
Oleksii
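Following up on the ioremap()/__vmap() exchange above, this is a minimal sketch of one way the Xen side could map a channel with Device attributes instead of the Normal-memory mapping that plain vmap()/PAGE_HYPERVISOR gives. It is an illustration under stated assumptions, not the actual v2 patch: the function name scmi_map_channel() is made up, while ioremap_attr() and PAGE_HYPERVISOR_NOCACHE are existing Xen/Arm interfaces, the latter corresponding to Device-nGnRE, which is what "__vmap with MT_DEVICE_nGnRE" amounts to.

#include <xen/types.h>
#include <asm/mm.h>
#include <asm/page.h>

/*
 * Illustrative sketch only: map the per-domain SCMI channel page in Xen
 * with Device attributes so that Xen, the firmware and the guest all use
 * compatible memory types.
 */
static void *scmi_map_channel(paddr_t chan_paddr, size_t len)
{
    /*
     * ioremap_attr() wraps __vmap() and lets the caller choose the memory
     * attributes.  Note that memcpy() to/from Device memory can fault on
     * unaligned accesses, so copies into the shared area need to use
     * aligned accesses.
     */
    return ioremap_attr(chan_paddr, len, PAGE_HYPERVISOR_NOCACHE);
}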