
Re: [PATCH v3 8/8] xen/dom0less: store xenstore event channel in page


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Jason Andryuk <jason.andryuk@xxxxxxx>
  • Date: Wed, 27 Aug 2025 09:19:42 -0400
  • Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Wed, 27 Aug 2025 13:19:36 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 2025-08-27 03:58, Jan Beulich wrote:
> On 26.08.2025 23:08, Jason Andryuk wrote:
>> --- a/xen/common/device-tree/dom0less-build.c
>> +++ b/xen/common/device-tree/dom0less-build.c
>> @@ -26,6 +26,7 @@
>>  #include <public/event_channel.h>
>>  #include <public/io/xs_wire.h>
>> +#include <asm/guest_access.h>
>>  #include <asm/setup.h>
>>  #include <xen/static-memory.h>
>> @@ -120,8 +121,14 @@ static void __init initialize_domU_xenstore(void)
>>          if ( gfn != XENSTORE_PFN_LATE_ALLOC && IS_ENABLED(CONFIG_GRANT_TABLE) )
>>          {
>> +            evtchn_port_t port = d->arch.hvm.params[HVM_PARAM_STORE_EVTCHN];
>> +            paddr_t evtchn_gaddr = gfn_to_gaddr(_gfn(gfn)) +
>> +                offsetof(struct xenstore_domain_interface, evtchn_port);
>> +
>>              ASSERT(gfn < UINT32_MAX);
>>              gnttab_seed_entry(d, GNTTAB_RESERVED_XENSTORE, xs_domid, gfn);
>> +            access_guest_memory_by_gpa(d, evtchn_gaddr, &port, sizeof(port),
>> +                                       true /* is_write */);
>
> Isn't the use of an arch-specific function going to pose yet another issue
> for making this code usable on x86? Can't you use copy_to_guest_phys() here?
> Which may in turn need to be passed in by the caller, see e.g. dtb_load()
> and initrd_load() (i.e. cache flushing may also be necessary for Arm).

Yes, that could be done, but it's not my preferred approach. Using a function pointer to pass a compile-time constant seems to me like a misuse of a function pointer.

I'd rather have each arch using dom0less define:

unsigned long copy_to_guest_phys(struct domain *d,
                                 paddr_t gpa,
                                 void *buf,
                                 unsigned int len);

which does the correct thing for that arch.

Alejandro was able to rework things to reuse the dom0less parsing code (dom0less-bindings.c), but he has so far kept the x86 domain construction separate, such that it does not use dom0less-build.c. So I don't know how that will shake out.

But, yeah, I can just pass in a function pointer if that is what is agreed upon.

Regards,
Jason
