Re: [PATCH v7 03/15] x86/mm: rewrite virt_to_xen_l*e
On 29.05.2020 13:11, Hongyan Xia wrote:
> From: Wei Liu <wei.liu2@xxxxxxxxxx>
>
> Rewrite those functions to use the new APIs. Modify their callers to unmap
> the pointer returned. Since alloc_xen_pagetable_new() is almost never
> useful unless accompanied by page clearing and a mapping, introduce a
> helper alloc_map_clear_xen_pt() for this sequence.
>
> Note that the change of virt_to_xen_l1e() also requires vmap_to_mfn() to
> unmap the page, which requires including the domain_page.h header in vmap.
>
> Signed-off-by: Wei Liu <wei.liu2@xxxxxxxxxx>
> Signed-off-by: Hongyan Xia <hongyxia@xxxxxxxxxx>
Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
with two further small adjustments:
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -4948,8 +4948,28 @@ void free_xen_pagetable_new(mfn_t mfn)
> free_xenheap_page(mfn_to_virt(mfn_x(mfn)));
> }
>
> +void *alloc_map_clear_xen_pt(mfn_t *pmfn)
> +{
> + mfn_t mfn = alloc_xen_pagetable_new();
> + void *ret;
> +
> + if ( mfn_eq(mfn, INVALID_MFN) )
> + return NULL;
> +
> + if ( pmfn )
> + *pmfn = mfn;
> + ret = map_domain_page(mfn);
> + clear_page(ret);
> +
> + return ret;
> +}
> +
> static DEFINE_SPINLOCK(map_pgdir_lock);
>
> +/*
> + * The virt_to_xen_lXe() functions take a virtual address and return a
> + * pointer to Xen's LX entry. The caller needs to unmap the pointer.
> + */
> static l3_pgentry_t *virt_to_xen_l3e(unsigned long v)
May I suggest s/virtual/linear/ to at least make the new comment
correct?
> --- a/xen/include/asm-x86/page.h
> +++ b/xen/include/asm-x86/page.h
> @@ -291,7 +291,13 @@ void copy_page_sse2(void *, const void *);
> #define pfn_to_paddr(pfn) __pfn_to_paddr(pfn)
> #define paddr_to_pfn(pa) __paddr_to_pfn(pa)
> #define paddr_to_pdx(pa) pfn_to_pdx(paddr_to_pfn(pa))
> -#define vmap_to_mfn(va)     _mfn(l1e_get_pfn(*virt_to_xen_l1e((unsigned long)(va))))
> +
> +#define vmap_to_mfn(va) ({ \
> + const l1_pgentry_t *pl1e_ = virt_to_xen_l1e((unsigned long)(va)); \
> + mfn_t mfn_ = l1e_get_mfn(*pl1e_); \
> + unmap_domain_page(pl1e_); \
> + mfn_; })
Just like is already the case in domain_page_map_to_mfn() I think
you want to add "BUG_ON(!pl1e_)" here to limit the impact of any
problem to DoS (rather than a possible privilege escalation).
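For illustration only, an untested sketch of what the macro might
look like with that check added (reusing the names from the hunk
above):

    #define vmap_to_mfn(va) ({                                            \
        const l1_pgentry_t *pl1e_ = virt_to_xen_l1e((unsigned long)(va)); \
        mfn_t mfn_;                                                       \
                                                                          \
        /* Crash cleanly instead of dereferencing a NULL entry pointer. */\
        BUG_ON(!pl1e_);                                                   \
        mfn_ = l1e_get_mfn(*pl1e_);                                       \
        unmap_domain_page(pl1e_);                                         \
        mfn_; })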
Or actually, considering the only case where virt_to_xen_l1e()
would return NULL, returning INVALID_MFN here would likely be
even more robust. There looks to be just a single caller, which
would need adjusting to cope with an error coming back. In fact -
it already ASSERT()s, despite NULL right now never coming back
from vmap_to_page(). I think the loop there would better be
    for ( i = 0; i < pages; i++ )
    {
        struct page_info *page = vmap_to_page(va + i * PAGE_SIZE);

        if ( page )
            page_list_add(page, &pg_list);
        else
            printk_once(...);
    }
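And if the INVALID_MFN route is taken instead, an untested sketch of
how the macro could fail gracefully (vmap_to_page() would then also
need to translate INVALID_MFN into a NULL page):

    #define vmap_to_mfn(va) ({                                            \
        const l1_pgentry_t *pl1e_ = virt_to_xen_l1e((unsigned long)(va)); \
        mfn_t mfn_ = INVALID_MFN;                                         \
                                                                          \
        /* Hand an error back to the caller instead of crashing. */       \
        if ( pl1e_ )                                                      \
        {                                                                 \
            mfn_ = l1e_get_mfn(*pl1e_);                                   \
            unmap_domain_page(pl1e_);                                     \
        }                                                                 \
        mfn_; })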
Thoughts?
Jan