Re: [PATCH v3 0/8] Follow-up static shared memory PART I
Hi Michal,

Sorry for the delayed response, I have been caught up in an internal
release lately. :\

On 2023/8/22 16:57, Michal Orzel wrote:
> Hi Penny,
>
> On 22/08/2023 07:32, Penny Zheng wrote:
>> Hi, Michal
>>
>> On 2023/8/21 18:49, Michal Orzel wrote:
>>> Hi Penny,
>>>
>>> On 21/08/2023 06:00, Penny Zheng wrote:
>>>> There are some unresolved issues in the current 4.17 static shared
>>>> memory feature[1], including:
>>>> - In order to avoid endlessly growing 'membank', keeping the shared
>>>>   memory info in separate structures is preferred.
>>>> - Missing implementation of making the host address optional in the
>>>>   "xen,shared-mem" property
>>>> - Removing static shared memory from extended regions
>>>> - Missing reference release on foreign superpages
>>>> - Missing "xen,offset" feature, which is introduced in the Linux
>>>>   DOC[2]
>>>> All the above items have been divided into two parts to complete,
>>>> and this patch series is PART I.
>>>>
>>>> [1] https://lore.kernel.org/all/20220908135513.1800511-1-Penny.Zheng@xxxxxxx/
>>>> [2] https://www.kernel.org/doc/Documentation/devicetree/bindings/reserved-memory/xen%2Cshared-memory.txt
>>>
>>> It looks like there is a problem with the changes introduced in this
>>> series. The gitlab static shared memory tests failed:
>>> https://gitlab.com/xen-project/patchew/xen/-/pipelines/973985190
>>> There are no Xen logs, meaning the failure occurred before serial
>>> console initialization.
>>>
>>> Now, I would like to share some observations after playing around
>>> with the current static shared mem code today.
>>>
>>> 1) A static shared memory region is advertised to a domain by
>>> creating a child node under /reserved-memory. /reserved-memory is
>>> nothing but a way to carve out a region from the normal memory
>>> specified in the /memory node. To me, such regions should be
>>> described in the domain's /memory node as well. This is not the
>>> case at the moment for static shm, unlike other sub-nodes of
>>> /reserved-memory (present in the host dtb) for which Xen creates
>>> separate /memory nodes.
>>
>> Hmm, correct me if I'm wrong: if we describe it a second time, in
>> the domain's /memory node too, it will be treated as normal memory,
>> and then any application could use it. The reason why we put static
>> shm under /reserved-memory is that we only want a special driver,
>> like the static shm Linux driver, to be able to access it.
>>
>> If you track down into make_memory_node(), only memory ranges that
>> are reserved for a device (or firmware) are described a second time
>> as normal memory in Dom0. Memory like static shm gets skipped.
>
> Reserved memory is a region of RAM (not MMIO) carved out for a
> special purpose, which can be used by a driver e.g. for a shared DMA
> pool. Therefore, such a region shall be described both under /memory
> (used to present the total RAM; reserved memory is in RAM) and under
> /reserved-memory. The OS parses /memory and then parses
> /reserved-memory to exclude those regions from normal usage (there is
> also the no-map property to tell the OS not to create a virtual
> mapping). So you do not need to worry about the OS making use of
> something we marked as reserved.
>
> This is exactly what Xen does if there are regions described as
> reserved in the host dtb:
> 1. Xen parses the host dtb and adds the reserved regions into
>    bootinfo.reserved_mem so that they will not be used e.g. by the
>    allocator
> 2. While copying nodes from the host dtb, Xen copies the reserved
>    memory nodes to the dom0 dtb and only maps the regions in p2m
>    without permitting iomem access
> 3. Xen creates another /memory node to contain the reserved memory
>    ranges
>
> I guess static shm is no exception to this flow. It is a part of RAM
> suited for memory sharing.

Understood!!! Thanks for the detailed explanation.
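Just to confirm my understanding of step 3, here is a minimal sketch
(not Xen's actual make_memory_node()) of emitting the extra /memory
node for an shm range with libfdt's sequential writer. The helper name
and the 2-cell address/size layout are my own assumptions for
illustration:

```c
#include <inttypes.h>
#include <libfdt.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/*
 * Hypothetical helper, not Xen's actual code: emit an extra /memory
 * node covering a static shm range, mirroring step 3 of the flow above.
 */
static int make_shm_memory_node(void *fdt, uint64_t base, uint64_t size)
{
    char name[32];
    /* "reg" encoded as <base size>, assuming 2 address and 2 size cells. */
    fdt64_t reg[2] = { cpu_to_fdt64(base), cpu_to_fdt64(size) };
    int ret;

    snprintf(name, sizeof(name), "memory@%"PRIx64, base);

    ret = fdt_begin_node(fdt, name);
    if ( ret )
        return ret;

    /* A /memory node is identified by device_type = "memory". */
    ret = fdt_property_string(fdt, "device_type", "memory");
    if ( ret )
        return ret;

    ret = fdt_property(fdt, "reg", reg, sizeof(reg));
    if ( ret )
        return ret;

    return fdt_end_node(fdt);
}
```

So the guest sees the range as RAM in /memory, while the matching
/reserved-memory child keeps it out of normal allocation.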
I've created a new commit "xen/arm: create another /memory node for
static shm" to fix this problem in v4[1], plz feel free to review.

>>> 2) Domain dtb parsing issue with two /reserved-memory nodes present.
>>> In case there is a /reserved-memory node already present in the host
>>> dtb, Xen would create yet another /reserved-memory node for the
>>> static shm (to be observed in the case of dom0). This is a bug, as
>>> there can be only one /reserved-memory node. It leads to an error
>>> when dumping with dtc, and to the shm node not being visible to a
>>> domain (a guest OS relies on the presence of a single
>>> /reserved-memory node). The issue is that in make_resv_memory_node()
>>> you are not checking whether such a node already exists.
>>
>> Yes, you're right. In Dom0, we could see two /reserved-memory nodes.
>> I think, if there is a /reserved-memory node already present in the
>> host dtb, we shall record it in kinfo for make_resv_memory_node().
>
> This way you will only take a reference to the region, but what about
> all the properties, node names, etc. that you need to copy? This is
> why Xen first copies the reserved memory nodes from the host dtb and
> then adds the ranges to /memory. In our shm case, we would need to
> insert the shm node into the existing reserved memory node. This is a
> bit tricky, as you can no longer make use of fdt_{begin,end}_node and
> instead have to use the helpers operating on offsets.
>
> ~Michal

I've also created a new commit "xen/arm: fix duplicate /reserved-memory
node in Dom0" to fix this problem in v4[1] too, plz feel free to
review. A rough sketch of how I understand the offset-based insertion
is at the end of this mail.

[1] https://lore.kernel.org/xen-devel/20230911040442.2541398-1-Penny.Zheng@xxxxxxx/
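Here is that sketch, assuming the dom0 dtb has already been finalized
and reopened for libfdt's read-write API (e.g. via fdt_open_into()),
which is exactly why mixing this with the sequential build is tricky.
The helper name, the node name, and the 2-cell reg layout are my
assumptions; the compatible string comes from the binding in [2]:

```c
#include <inttypes.h>
#include <libfdt.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/*
 * Hypothetical helper, not the actual v4 patch: add the static shm
 * sub-node to an already existing /reserved-memory node using the
 * offset-based read-write API instead of fdt_begin_node()/fdt_end_node().
 */
static int insert_shm_subnode(void *fdt, uint64_t base, uint64_t size)
{
    char name[40];
    /* "reg" encoded as <base size>, assuming 2 address and 2 size cells. */
    fdt64_t reg[2] = { cpu_to_fdt64(base), cpu_to_fdt64(size) };
    int parent, node, ret;

    /* Reuse the /reserved-memory node copied over from the host dtb. */
    parent = fdt_path_offset(fdt, "/reserved-memory");
    if ( parent < 0 )
        return parent; /* -FDT_ERR_NOTFOUND: fall back to creating it */

    snprintf(name, sizeof(name), "xen-shmem@%"PRIx64, base);

    node = fdt_add_subnode(fdt, parent, name);
    if ( node < 0 )
        return node;

    /* Compatible string as defined by the Linux binding in [2]. */
    ret = fdt_setprop_string(fdt, node, "compatible",
                             "xen,shared-memory-v1");
    if ( ret )
        return ret;

    return fdt_setprop(fdt, node, "reg", reg, sizeof(reg));
}
```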