
Re: [PATCH v3 3/6] arm/mpu: Populate a new region in Xen MPU mapping table


  • To: Hari Limaye <hari.limaye@xxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • From: "Orzel, Michal" <michal.orzel@xxxxxxx>
  • Date: Tue, 15 Jul 2025 10:20:49 +0200
  • Cc: luca.fancellu@xxxxxxx, Penny Zheng <Penny.Zheng@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Wei Chen <wei.chen@xxxxxxx>
  • Delivery-date: Tue, 15 Jul 2025 08:21:07 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>


On 15/07/2025 09:45, Hari Limaye wrote:
> From: Penny Zheng <Penny.Zheng@xxxxxxx>
> 
> Introduce map_pages_to_xen(), implemented using a new helper,
> xen_mpumap_update(), which is responsible for updating the Xen MPU memory
> mapping table (xen_mpumap): creating a new entry, or updating or destroying
> an existing one. It is the equivalent of xen_pt_update in the MMU code.
> 
> This commit only implements populating a new entry in the Xen MPU memory
> mapping table (xen_mpumap).
> 
> Signed-off-by: Penny Zheng <penny.zheng@xxxxxxx>
> Signed-off-by: Wei Chen <wei.chen@xxxxxxx>
> Signed-off-by: Luca Fancellu <luca.fancellu@xxxxxxx>
> Signed-off-by: Hari Limaye <hari.limaye@xxxxxxx>
> ---
> Changes from v1:
> - Simplify if condition
> - Use normal printk
> - Use %# over 0x%
> - Add same asserts as in Patch 4
> 
> Changes from v2:
> - Improve clarity in xen_mpumap_alloc_entry comment
> - Simplify if condition
> - Remove redundant ASSERT statements
> - Add check for `base >= limit`
> - Pass virt directly in map_pages_to_xen
> - Move call to `context_sync_mpu` inside locked section of `xen_mpumap_update`
> - Move _PAGE_PRESENT check before calling `mpumap_contains_region`
> ---
>  xen/arch/arm/include/asm/mpu/mm.h |  12 ++++
>  xen/arch/arm/mpu/mm.c             | 103 ++++++++++++++++++++++++++++++
>  2 files changed, 115 insertions(+)
> 
> diff --git a/xen/arch/arm/include/asm/mpu/mm.h b/xen/arch/arm/include/asm/mpu/mm.h
> index 5a2b9b498b..c32fac8905 100644
> --- a/xen/arch/arm/include/asm/mpu/mm.h
> +++ b/xen/arch/arm/include/asm/mpu/mm.h
> @@ -64,6 +64,7 @@ static inline void context_sync_mpu(void)
>   * The following API requires context_sync_mpu() after being used to modify MPU
>   * regions:
>   *  - write_protection_region
> + *  - xen_mpumap_update
>   */
>  
>  /* Reads the MPU region (into @pr_read) with index @sel from the HW */
> @@ -72,6 +73,17 @@ void read_protection_region(pr_t *pr_read, uint8_t sel);
>  /* Writes the MPU region (from @pr_write) with index @sel to the HW */
>  void write_protection_region(const pr_t *pr_write, uint8_t sel);
>  
> +/*
> + * Maps an address range into the MPU data structure and updates the HW.
> + * Equivalent to xen_pt_update in an MMU system.
> + *
> + * @param base      Base address of the range to map (inclusive).
> + * @param limit     Limit address of the range to map (exclusive).
> + * @param flags     Flags for the memory range to map.
> + * @return          0 on success, negative on error.
> + */
> +int xen_mpumap_update(paddr_t base, paddr_t limit, unsigned int flags);
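As a side note, purely to make the interface concrete for readers of the
archive (not a request for change): a caller would use this roughly as below.
The addresses are made up and PAGE_HYPERVISOR is just one example of the
PAGE_HYPERVISOR_XXX flags mentioned in mm.c:

    /* Hypothetical: map [0x40000000, 0x40100000) as normal memory for Xen. */
    int rc = xen_mpumap_update(0x40000000, 0x40100000, PAGE_HYPERVISOR);

    if ( rc )
        panic("Unable to map MPU region (rc=%d)\n", rc);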
> +
>  /*
>   * Creates a pr_t structure describing a protection region.
>   *
> diff --git a/xen/arch/arm/mpu/mm.c b/xen/arch/arm/mpu/mm.c
> index 407264a88c..d5426525af 100644
> --- a/xen/arch/arm/mpu/mm.c
> +++ b/xen/arch/arm/mpu/mm.c
> @@ -6,6 +6,7 @@
>  #include <xen/lib.h>
>  #include <xen/mm.h>
>  #include <xen/sizes.h>
> +#include <xen/spinlock.h>
>  #include <xen/types.h>
>  #include <asm/mpu.h>
>  #include <asm/mpu/mm.h>
> @@ -29,6 +30,8 @@ DECLARE_BITMAP(xen_mpumap_mask, MAX_MPU_REGION_NR) \
>  /* EL2 Xen MPU memory region mapping table. */
>  pr_t __cacheline_aligned __section(".data") xen_mpumap[MAX_MPU_REGION_NR];
>  
> +static DEFINE_SPINLOCK(xen_mpumap_lock);
> +
>  static void __init __maybe_unused build_assertions(void)
>  {
>      /*
> @@ -165,6 +168,106 @@ int mpumap_contains_region(pr_t *table, uint8_t nr_regions, paddr_t base,
>      return MPUMAP_REGION_NOTFOUND;
>  }
>  
> +/*
> + * Allocate an entry for a new EL2 MPU region in the bitmap xen_mpumap_mask.
> + * @param idx   Set to the index of the allocated EL2 MPU region on success.
> + * @return      0 on success, otherwise -ENOENT on failure.
> + */
> +static int xen_mpumap_alloc_entry(uint8_t *idx)
> +{
> +    ASSERT(spin_is_locked(&xen_mpumap_lock));
> +
> +    *idx = find_first_zero_bit(xen_mpumap_mask, max_mpu_regions);
> +    if ( *idx == max_mpu_regions )
> +    {
> +        printk(XENLOG_ERR "mpu: EL2 MPU memory region mapping pool exhausted\n");
> +        return -ENOENT;
> +    }
> +
> +    set_bit(*idx, xen_mpumap_mask);
> +
> +    return 0;
> +}
> +
> +/*
> + * Update the entry in the MPU memory region mapping table (xen_mpumap) for the
> + * given memory range and flags, creating one if none exists.
> + *
> + * @param base  Base address (inclusive).
> + * @param limit Limit address (exclusive).
> + * @param flags Region attributes (a combination of PAGE_HYPERVISOR_XXX)
> + * @return      0 on success, otherwise negative on error.
> + */
> +static int xen_mpumap_update_entry(paddr_t base, paddr_t limit,
> +                                   unsigned int flags)
> +{
> +    uint8_t idx;
> +    int rc;
> +
> +    ASSERT(spin_is_locked(&xen_mpumap_lock));
> +
> +    /* Currently only region creation is supported. */
> +    if ( !(flags & _PAGE_PRESENT) )
> +        return -EINVAL;
> +
> +    rc = mpumap_contains_region(xen_mpumap, max_mpu_regions, base, limit, &idx);
> +    if ( rc != MPUMAP_REGION_NOTFOUND )
> +        return -EINVAL;
> +
> +    /* We are inserting a mapping => Create new region. */
> +    rc = xen_mpumap_alloc_entry(&idx);
> +    if ( rc )
> +        return -ENOENT;
> +
> +    xen_mpumap[idx] = pr_of_addr(base, limit, flags);
> +
> +    write_protection_region(&xen_mpumap[idx], idx);
> +
> +    return 0;
> +}
> +
> +int xen_mpumap_update(paddr_t base, paddr_t limit, unsigned int flags)
> +{
> +    int rc;
> +
> +    if ( flags_has_rwx(flags) )
> +    {
> +        printk("Mappings should not be both Writeable and Executable\n");
> +        return -EINVAL;
> +    }
> +
> +    if ( base >= limit )
Given that limit is exclusive, to prevent empty regions you would need to check
for base >= (limit - 1), even though such a case is not really possible today,
because we require page-aligned addresses (limit must be greater than or equal
to base + PAGE_SIZE). That said, this could change one day, so I would suggest
modifying the check.
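Something along these lines is what I have in mind (untested, just to
illustrate the idea):

    /* limit is exclusive, so also reject the (limit - 1) case. */
    if ( base >= limit - 1 )
        return -EINVAL;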
With that:

Reviewed-by: Michal Orzel <michal.orzel@xxxxxxx>

~Michal




 

