[PATCH] common: guest_physmap_add_page()'s return value needs checking
The function may fail; it is not correct to indicate "success" in this
case up the call stack. Mark the function must-check to prove all cases
have been caught (and no new ones will get introduced).

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
---
In the grant-transfer case it is not really clear to me whether we can
stick to setting GTF_transfer_completed in the error case. Since a
guest may spin-wait for the flag to become set, simply not setting the
flag is not an option either. I was wondering whether we may want to
slightly alter (extend) the ABI and allow for a GTF_transfer_committed
-> GTF_transfer_completed transition (i.e. clearing
GTF_transfer_committed at the same time as setting
GTF_transfer_completed).

--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -2394,7 +2394,7 @@ gnttab_transfer(
         {
             grant_entry_v1_t *sha = &shared_entry_v1(e->grant_table, gop.ref);
 
-            guest_physmap_add_page(e, _gfn(sha->frame), mfn, 0);
+            rc = guest_physmap_add_page(e, _gfn(sha->frame), mfn, 0);
             if ( !paging_mode_translate(e) )
                 sha->frame = mfn_x(mfn);
         }
@@ -2402,7 +2402,7 @@ gnttab_transfer(
         {
             grant_entry_v2_t *sha = &shared_entry_v2(e->grant_table, gop.ref);
 
-            guest_physmap_add_page(e, _gfn(sha->full_page.frame), mfn, 0);
+            rc = guest_physmap_add_page(e, _gfn(sha->full_page.frame), mfn, 0);
             if ( !paging_mode_translate(e) )
                 sha->full_page.frame = mfn_x(mfn);
         }
@@ -2415,7 +2415,7 @@ gnttab_transfer(
 
         rcu_unlock_domain(e);
 
-        gop.status = GNTST_okay;
+        gop.status = rc ? GNTST_general_error : GNTST_okay;
 
     copyback:
         if ( unlikely(__copy_field_to_guest(uop, &gop, status)) )
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -268,7 +268,8 @@ static void populate_physmap(struct memo
             mfn = page_to_mfn(page);
         }
 
-        guest_physmap_add_page(d, _gfn(gpfn), mfn, a->extent_order);
+        if ( guest_physmap_add_page(d, _gfn(gpfn), mfn, a->extent_order) )
+            goto out;
 
         if ( !paging_mode_translate(d) &&
              /* Inform the domain of the new page's machine address. */
@@ -765,8 +766,8 @@ static long memory_exchange(XEN_GUEST_HA
             }
 
             mfn = page_to_mfn(page);
-            guest_physmap_add_page(d, _gfn(gpfn), mfn,
-                                   exch.out.extent_order);
+            rc = guest_physmap_add_page(d, _gfn(gpfn), mfn,
+                                        exch.out.extent_order) ?: rc;
 
             if ( !paging_mode_translate(d) &&
                  __copy_mfn_to_guest_offset(exch.out.extent_start,
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -307,10 +307,9 @@ int guest_physmap_add_entry(struct domai
                             p2m_type_t t);
 
 /* Untyped version for RAM only, for compatibility */
-static inline int guest_physmap_add_page(struct domain *d,
-                                         gfn_t gfn,
-                                         mfn_t mfn,
-                                         unsigned int page_order)
+static inline int __must_check
+guest_physmap_add_page(struct domain *d, gfn_t gfn, mfn_t mfn,
+                       unsigned int page_order)
 {
     return guest_physmap_add_entry(d, gfn, mfn, page_order, p2m_ram_rw);
 }
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -583,8 +583,8 @@ int guest_physmap_add_entry(struct domain *d, gfn_t gfn,
                             p2m_type_t t);
 
 /* Untyped version for RAM only, for compatibility and PV. */
-int guest_physmap_add_page(struct domain *d, gfn_t gfn, mfn_t mfn,
-                           unsigned int page_order);
+int __must_check guest_physmap_add_page(struct domain *d, gfn_t gfn, mfn_t mfn,
+                                        unsigned int page_order);
 
 /* Set a p2m range as populate-on-demand */
 int guest_physmap_mark_populate_on_demand(struct domain *d, unsigned long gfn,
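
(For reference: __must_check boils down to the compiler's
warn_unused_result attribute, so any caller that silently drops the
return value becomes a build-time diagnostic, and with -Werror, as the
hypervisor build normally uses, a hard build failure. Below is a
minimal standalone sketch of that effect; map_one_page() is a made-up
stand-in for guest_physmap_add_page(), purely for illustration and not
part of the patch.

/*
 * Illustrative only -- mirrors the effect of the __must_check
 * annotation, i.e. __attribute__((warn_unused_result)).
 */
#include <stdio.h>

#define __must_check __attribute__((warn_unused_result))

/* Hypothetical stand-in for guest_physmap_add_page(); may fail. */
static int __must_check map_one_page(unsigned long gfn, unsigned long mfn)
{
    return mfn ? 0 : -1; /* pretend a zero MFN means "out of memory" */
}

int main(void)
{
    /* Fine: the result is checked and acted upon. */
    if ( map_one_page(0x1000, 0x2000) )
        fprintf(stderr, "mapping failed\n");

    /*
     * Not fine: the result is dropped.  GCC/clang warn via
     * -Wunused-result; with -Werror this line does not build.
     */
    map_one_page(0x1000, 0);

    return 0;
}

This is what makes the annotation "prove" that no caller is missed:
any existing or future call site that ignores the result is rejected
by the compiler rather than silently reporting success.)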