Re: [PATCH] xen/evtchn: Add design for static event channel signaling for domUs
Hi,

On 09/04/2022 02:00, Stefano Stabellini wrote:
> On Wed, 23 Mar 2022, Rahul Singh wrote:
>> ... in dom0less system. This patch introduces the new feature to support
>> signaling between two domUs in a dom0less system.
>>
>> Signed-off-by: Rahul Singh <rahul.singh@xxxxxxx>
>> ---
>>  docs/designs/dom0less-evtchn.md | 96 +++++++++++++++++++++++++++++++++
>>  1 file changed, 96 insertions(+)
>>  create mode 100644 docs/designs/dom0less-evtchn.md
>>
>> diff --git a/docs/designs/dom0less-evtchn.md b/docs/designs/dom0less-evtchn.md
>> new file mode 100644
>> index 0000000000..6a1b7e8c22
>> --- /dev/null
>> +++ b/docs/designs/dom0less-evtchn.md
>> @@ -0,0 +1,96 @@
>> +# Signaling support between two domUs on dom0less system
>> +
>> +## Current state: Draft version
>> +
>> +## Proposer(s): Rahul Singh, Bertrand Marquis
>> +
>> +## Problem Statement:
>> +
>> +The goal of this work is to define a simple signaling system between Xen guests
>> +in dom0less systems.
>> +
>> +In dom0less system, we cannot make use of xenbus and xenstore that are used in
>> +normal systems with dynamic VMs to communicate between domains by providing a
>> +bus abstraction for paravirtualized drivers.
>> +
>> +One possible solution to implement the signaling system between domUs is based
>> +on event channels.
>
> I suggest rewording this as follows:
>
> ---
> Dom0less guests would benefit from a statically-defined memory sharing and
> signaling system for communication: one that would be immediately available
> at boot without any need for dynamic configuration.
>
> In embedded, a great variety of guest operating system kernels exists, many
> of which don't have support for xenstore, grant tables, or other complex
> drivers. Some of them are small kernel-space applications (often called
> "baremetal", not to be confused with the term "baremetal" used in
> datacenters, which means "without hypervisors") or RTOSes. Additionally,
> for safety reasons, users often need to be able to configure the full
> system statically so that it can be verified statically.
>
> Event channels are very simple and can be added even to baremetal
> applications. This proposal introduces a way to define them statically to
> make them suitable for dom0less embedded deployments.
> ---
>
> [...]
>
> As for 1), you could combine all the ports in one node. This format has
> the advantage that it doesn't need a new top-level node type under
> /chosen. That is desirable for a few reasons. Today we only have domains
> (dom0 is legacy). In the future we might have nested domains and non-Xen
> domains. With System Device Tree, domains are under /domains instead of
> /chosen.
>
> So normally I would argue for the sub-node format because it doesn't need
> a new top-level node under /chosen. However, in this case it looks like
> the 1) format is simpler to write and also simpler to parse in Xen.

I am not sure about either. For the writing part: on one hand, it is nice
to have all the event channels defined in one place. On the other hand, it
is messier if you want to visually check that you don't duplicate event
channels. It may also end up being complex to read if you have many event
channels. So if we go with 1), I think we would want to allow multiple
nodes to help the user keep a clean/readable DT.

> In 1), we would need to loop over xen,evtchn and for each tuplet we would
> only need to fetch the foreign domid.

You will need to loop over all the nodes in chosen to find "xen,evtchn" and
also fetch two phandles.

> In 2), we would need to check the compatible string of every
> "xen,evtchn-v1" node, and we would have to fetch from the phandle both the
> remote event channel number and the domain-id of the parent.
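To make the comparison concrete, here is a rough sketch of what the two
formats might look like. Neither format is quoted in full in this message,
so everything below is reconstructed from the properties mentioned in the
exchange: the port numbers, node names, and labels are made up, and the
exact layout of the 1) tuplets is only a guess.

Format 1), all channels combined in a single "xen,evtchn" property:

    chosen {
        /* Assumed tuplet layout: <portA &domUA portB &domUB>, i.e. each
         * tuplet connects portA in domUA to portB in domUB. domU1..domU3
         * would be labels on the respective "xen,domain" nodes. */
        xen,evtchn = <0xa &domU1 0xb &domU2>,
                     <0xc &domU1 0xd &domU3>;
    };

Format 2), one "xen,evtchn-v1" sub-node per local channel under each domU
node, cross-linked by phandles; the remote port is read from the referenced
node, and the remote domain-id from that node's parent:

    chosen {
        domU1: domU1 {
            compatible = "xen,domain";
            /* memory, cpus, kernel, ... */

            ec1: evtchn@1 {
                compatible = "xen,evtchn-v1";
                /* local port 0xa, remote end described by &ec2 */
                xen,evtchn = <0xa &ec2>;
            };
        };

        domU2: domU2 {
            compatible = "xen,domain";

            ec2: evtchn@2 {
                compatible = "xen,evtchn-v1";
                /* local port 0xb, remote end described by &ec1 */
                xen,evtchn = <0xb &ec1>;
            };
        };
    };

This sketch also illustrates the parsing trade-off under discussion: 1)
means scanning for "xen,evtchn" properties and dereferencing two phandles
per tuplet, while 2) means matching the "xen,evtchn-v1" compatible on every
sub-node and then following one phandle to fetch both the remote port and
the parent domain.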
> So it looks like 1) is better because it is much simpler to parse. Do you
> agree?

See above: I am not sure the code to parse will end up being much bigger,
because we would still need to loop over the nodes in chosen and fetch two
phandles. That said, we are potentially going to need to loop over more
nodes. So overall, I am 50/50 on which one to choose.

Cheers,

--
Julien Grall