
[PATCH] xen/arm: fix unmapped access trapping on GICv2 hardware


  • To: <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Stewart Hildebrand <stewart.hildebrand@xxxxxxx>
  • Date: Thu, 5 Feb 2026 14:01:27 -0500
  • Cc: Stewart Hildebrand <stewart.hildebrand@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, "Volodymyr Babchuk" <Volodymyr_Babchuk@xxxxxxxx>
  • Delivery-date: Thu, 05 Feb 2026 19:10:49 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

Since 4dbcb0653621, the vGICv2 CPU interface is mapped in a deferred
manner. For domains created with XEN_DOMCTL_CDF_trap_unmapped_accesses
unset on GICv2 hardware, the vGICv2 CPU interface never gets mapped. A
visible symptom is a domU getting stuck at:

  [    0.177983] smp: Bringing up secondary CPUs ...

Move the second check_p2m() call earlier so it takes priority over
try_handle_mmio().

Fixes: 980aff4e8fcd ("xen/arm: Add way to disable traps on accesses to unmapped addresses")
Signed-off-by: Stewart Hildebrand <stewart.hildebrand@xxxxxxx>
---
This should be backported to 4.21.

Pipeline: 
https://gitlab.com/xen-project/people/stewarthildebrand/xen/-/pipelines/2010469665
---
 xen/arch/arm/traps.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 040c0f2e0db1..0c01f37ad6b4 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1915,6 +1915,14 @@ static void do_trap_stage2_abort_guest(struct cpu_user_regs *regs,
         if ( info.dabt_instr.state == INSTR_ERROR )
             goto inject_abt;
 
+        /*
+         * If the instruction syndrome was invalid, then we already checked if
+         * this was due to a P2M fault. So no point to check again as the result
+         * will be the same.
+         */
+        if ( (info.dabt_instr.state == INSTR_VALID) && check_p2m(is_data, gpa) )
+            return;
+
         state = try_handle_mmio(regs, &info);
 
         switch ( state )
@@ -1939,14 +1947,6 @@ static void do_trap_stage2_abort_guest(struct cpu_user_regs *regs,
                 break;
         }
 
-        /*
-         * If the instruction syndrome was invalid, then we already checked if
-         * this was due to a P2M fault. So no point to check again as the result
-         * will be the same.
-         */
-        if ( (info.dabt_instr.state == INSTR_VALID) && check_p2m(is_data, gpa) )
-            return;
-
         break;
     }
     default:
-- 
2.52.0
