
[Xen-devel] [RFC] x86/hpet: Correct -ENOMEM actions in hpet_fsb_cap_lookup()



These changes are entirely from inspection, discovered while investigating
another problem.

* Don't leak the previously allocated cpumasks
* Don't leave num_hpets_used > 0.  It would fool hpet_broadcast_init() into
  believing that broadcast mode had been set up despite the underlying data
  structure having been freed, subsequently resulting in a NULL pointer fault
  (see the sketch below).
* Unconditionally deallocate hpet_events.  hpet_broadcast_init() will then
  try to allocate a single hpet_event_channel instead.
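
For clarity, the hazard in outline (a sketch only, not the actual Xen code;
setup_channel() is a made-up stand-in for the per-channel initialisation):

    hpet_fsb_cap_lookup();

    if ( num_hpets_used == 0 )
        /* Nothing usable found - fall back to a single channel. */
        hpet_events = xzalloc(struct hpet_event_channel);

    /*
     * If -ENOMEM left num_hpets_used > 0 with hpet_events already freed,
     * the loop below dereferences a NULL pointer.
     */
    for ( i = 0; i < max(num_hpets_used, 1u); i++ )
        setup_channel(&hpet_events[i]);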

Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
CC: Keir Fraser <keir@xxxxxxx>
CC: Jan Beulich <JBeulich@xxxxxxxx>

---

This patch is RFC as I didn't actually encounter the problem, nor can I think
of an easy way of actually testing the correctness of this codepath.  Chances
are that if -ENOMEM occurs here, Xen is not actually going to complete booting
anyway.  One possible (untested) way to exercise the path is sketched below.
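
A temporary fault-injection hack could force the allocation to "fail"; the
"hpet-enomem-after" parameter below is entirely made up for illustration,
only integer_param() is an existing Xen facility:

    /* Hypothetical debug-only knob: fail the Nth cpumask allocation. */
    static unsigned int __initdata hpet_enomem_after = ~0U;
    integer_param("hpet-enomem-after", hpet_enomem_after);

    /* ... and in hpet_fsb_cap_lookup(), in place of the plain allocation: */
    if ( num_hpets_used >= hpet_enomem_after ||
         !zalloc_cpumask_var(&ch->cpumask) )
    {
        /* ... existing -ENOMEM cleanup from this patch runs here ... */
        break;
    }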
---
 xen/arch/x86/hpet.c |   11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/hpet.c b/xen/arch/x86/hpet.c
index 7e0d332..bad2d68 100644
--- a/xen/arch/x86/hpet.c
+++ b/xen/arch/x86/hpet.c
@@ -415,11 +415,12 @@ static void __init hpet_fsb_cap_lookup(void)
 
         if ( !zalloc_cpumask_var(&ch->cpumask) )
         {
-            if ( !num_hpets_used )
-            {
-                xfree(hpet_events);
-                hpet_events = NULL;
-            }
+            /* Out of mem.  Clean up and bail. */
+            for ( i = 0; i < num_hpets_used; ++i )
+                free_cpumask_var(hpet_events[i].cpumask);
+            xfree(hpet_events);
+            hpet_events = NULL;
+            num_hpets_used = 0;
             break;
         }
 
-- 
1.7.10.4

