[Xen-devel] [PATCH 1/2] fix locking in offline_page()
Coverity ID 1055655
Apart from the Coverity-detected lock order reversal (a domain's
page_alloc_lock taken with the heap lock already held), calling
put_page() with heap_lock held is a bad idea too, as a possible
descendant of put_page() is free_heap_pages(), which wants to take
this very lock.
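To illustrate the deadlock hazard in isolation, here is a standalone
sketch (not Xen code; the *_stub() helpers and the pthread mutex standing
in for heap_lock are placeholders): a callee that tries to take a
non-recursive lock its caller already holds cannot make progress, which
is exactly what happens if put_page() descends into free_heap_pages()
while heap_lock is held.

/* Standalone illustration, not Xen code: taking a non-recursive lock in a
 * callee while the caller already holds it is a self-deadlock. An
 * error-checking mutex is used so the program reports it instead of hanging. */
#include <pthread.h>
#include <stdio.h>
#include <string.h>

static pthread_mutex_t heap_lock;

/* Stand-in for free_heap_pages(): wants the heap lock for itself. */
static void free_heap_pages_stub(void)
{
    int rc = pthread_mutex_lock(&heap_lock);

    if ( rc )   /* EDEADLK: this thread already owns the lock */
        printf("free_heap_pages: %s\n", strerror(rc));
    else
        pthread_mutex_unlock(&heap_lock);
}

/* Stand-in for put_page() dropping the last reference. */
static void put_page_stub(void)
{
    free_heap_pages_stub();
}

int main(void)
{
    pthread_mutexattr_t attr;

    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
    pthread_mutex_init(&heap_lock, &attr);

    pthread_mutex_lock(&heap_lock);
    put_page_stub();                  /* the would-be self-deadlock */
    pthread_mutex_unlock(&heap_lock);

    return 0;
}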
From all I can tell, the region over which heap_lock was held was far
too large: all we need to protect are the calls to mark_page_offline()
and reserve_heap_page() (and I'd even question the need for the
former). Hence, by slightly re-arranging the if/else-if chain, we can
drop the lock much earlier, in particular no longer covering the two
put_page() invocations.
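For reference, the control flow this yields, reduced to a sketch (the
*_stub() helpers are placeholders, not the real functions): the heap
lock covers only the page-state update and the reservation, and any
put_page() happens only after the lock has been dropped.

/* Illustrative sketch of the narrowed critical section; stubs only. */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t heap_lock = PTHREAD_MUTEX_INITIALIZER;

static void mark_page_offline_stub(void) { puts("mark_page_offline"); }
static void reserve_heap_page_stub(void) { puts("reserve_heap_page"); }
static void put_page_stub(void)          { puts("put_page (may take heap_lock)"); }

static int offline_page_sketch(bool already_offlined)
{
    pthread_mutex_lock(&heap_lock);

    mark_page_offline_stub();

    if ( already_offlined )
    {
        reserve_heap_page_stub();
        pthread_mutex_unlock(&heap_lock);
        return 0;
    }

    /* Lock dropped before anything that might free pages runs. */
    pthread_mutex_unlock(&heap_lock);

    put_page_stub();   /* safe now: heap_lock is no longer held */

    return 0;
}

int main(void)
{
    offline_page_sketch(false);
    offline_page_sketch(true);

    return 0;
}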
While at it, do a little further cleanup: put the "pod_replace"
code path inline rather than at its own label, and drop the effectively
unused variable "ret".
Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -957,7 +957,6 @@ int offline_page(unsigned long mfn, int
 {
     unsigned long old_info = 0;
     struct domain *owner;
-    int ret = 0;
     struct page_info *pg;
 
     if ( !mfn_valid(mfn) )
@@ -1007,16 +1006,28 @@ int offline_page(unsigned long mfn, int
     if ( page_state_is(pg, offlined) )
     {
         reserve_heap_page(pg);
-        *status = PG_OFFLINE_OFFLINED;
+
+        spin_unlock(&heap_lock);
+
+        *status = broken ? PG_OFFLINE_OFFLINED | PG_OFFLINE_BROKEN
+                         : PG_OFFLINE_OFFLINED;
+        return 0;
     }
-    else if ( (owner = page_get_owner_and_reference(pg)) )
+
+    spin_unlock(&heap_lock);
+
+    if ( (owner = page_get_owner_and_reference(pg)) )
     {
         if ( p2m_pod_offline_or_broken_hit(pg) )
-            goto pod_replace;
+        {
+            put_page(pg);
+            p2m_pod_offline_or_broken_replace(pg);
+            *status = PG_OFFLINE_OFFLINED;
+        }
         else
         {
             *status = PG_OFFLINE_OWNED | PG_OFFLINE_PENDING |
-                (owner->domain_id << PG_OFFLINE_OWNER_SHIFT);
+                      (owner->domain_id << PG_OFFLINE_OWNER_SHIFT);
             /* Release the reference since it will not be allocated anymore */
             put_page(pg);
         }
@@ -1024,7 +1035,7 @@ int offline_page(unsigned long mfn, int
     else if ( old_info & PGC_xen_heap )
     {
         *status = PG_OFFLINE_XENPAGE | PG_OFFLINE_PENDING |
-            (DOMID_XEN << PG_OFFLINE_OWNER_SHIFT);
+                  (DOMID_XEN << PG_OFFLINE_OWNER_SHIFT);
     }
     else
     {
@@ -1043,21 +1054,7 @@ int offline_page(unsigned long mfn, int
     if ( broken )
         *status |= PG_OFFLINE_BROKEN;
 
-    spin_unlock(&heap_lock);
-
-    return ret;
-
-pod_replace:
-    put_page(pg);
-    spin_unlock(&heap_lock);
-
-    p2m_pod_offline_or_broken_replace(pg);
-    *status = PG_OFFLINE_OFFLINED;
-
-    if ( broken )
-        *status |= PG_OFFLINE_BROKEN;
-
-    return ret;
+    return 0;
 }
 
 /*
Attachment: offline-page-locking.patch