
RE: [Xen-devel] Mysql inside Xen crashes during live migration.



 I don't know whether your system already includes the fix below from Edwin 
Zhai. Without it, the guest can crash during migration if it is running a 
disk-intensive workload. Hope it fixes your issue!
Xiantao

________________________________

From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx 
[mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Vatche Isahakian
Sent: Friday, December 04, 2009 6:07 AM
To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] Mysql inside Xen crashes during live migration.


Hi All,

I have a problem with Xen live migration and was wondering whether anyone has 
faced the same issue.
I have a VM running on Xen (3.3) that hosts both a web server (Apache) and a 
MySQL database. When I try to live-migrate it, the system crashes.
I then separated the database from the web server and gave each its own VM. 
Now when I migrate the web server VM, everything runs smoothly, but when I 
migrate the MySQL VM, it crashes.

Has anyone experienced this type of problem with MySQL and Xen?
--- Begin Message ---
 [IOEMU]: fix the crash of HVM live migration with intensive disk access

Intensive disk access during HVM live migration, e.g. computing the checksum 
of a big file, can cause guest errors or even file system corruption. The 
guest dmesg shows:
"attempt to access beyond end of device
hda1: rw=0, want=10232032112, limit=10474317"

The map cache currently used by qemu for DMA does not mark pages dirty, so 
these pages (which probably hold DMA data structures) are not transferred in 
the last iteration of live migration.

This patch fixes the problem; it also merges qemu's original dirty bitmap, 
used by other devices such as VGA, into the log-dirty bitmap. A standalone 
sketch of the bitmap arithmetic follows.
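
For readers following the arithmetic, here is a minimal self-contained sketch 
of the two operations the patch performs: setting one frame's bit in a 
long-word bitmap, and walking every frame spanned by a DMA mapping. The names 
(PAGE_BITS, mark_pfn_dirty, mark_range_dirty) are illustrative stand-ins, not 
the actual qemu/Xen identifiers, and the byte-sized bitmap length mirrors the 
"pfn / 8" bounds check in the patch.

#include <limits.h>
#include <stdio.h>

#define PAGE_BITS       12   /* 4 KiB pages, standing in for TARGET_PAGE_BITS */
#define HOST_LONG_BITS  (sizeof(unsigned long) * CHAR_BIT)

/* Set the bit for one guest page-frame number. bitmap_bytes is the
 * bitmap length in bytes, hence the "pfn / 8" bounds check. */
static void mark_pfn_dirty(unsigned long *bitmap, unsigned long bitmap_bytes,
                           unsigned long pfn)
{
    if (pfn / 8 >= bitmap_bytes) {
        fprintf(stderr, "dirtying pfn %lx >= bitmap size %lx\n",
                pfn, bitmap_bytes * 8);
        return;
    }
    /* Word index picks the unsigned long; bit index picks the bit. */
    bitmap[pfn / HOST_LONG_BITS] |= 1UL << (pfn % HOST_LONG_BITS);
}

/* Walk every frame overlapped by [addr, addr + len), as the
 * cpu_physical_memory_map() hunk does for a DMA mapping. */
static void mark_range_dirty(unsigned long *bitmap, unsigned long bitmap_bytes,
                             unsigned long addr, unsigned long len)
{
    unsigned long pfn = addr >> PAGE_BITS;
    do {
        mark_pfn_dirty(bitmap, bitmap_bytes, pfn);
        pfn++;
    } while ((pfn << PAGE_BITS) < addr + len);
}

int main(void)
{
    unsigned long bitmap[4] = { 0 };  /* covers 4 * HOST_LONG_BITS frames */
    /* An 8 KiB mapping starting mid-page dirties frames 2, 3 and 4. */
    mark_range_dirty(bitmap, sizeof(bitmap), 0x2800, 0x2000);
    printf("word 0 = %lx\n", bitmap[0]);  /* prints 1c on 64-bit hosts */
    return 0;
}

Note that marking a frame past the end of the bitmap only logs a warning 
instead of writing out of bounds, which is the same defensive choice the 
exec-dm.c hunk makes.
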

Signed-off-by: Zhai Edwin <edwin.zhai@xxxxxxxxx>


Index: hv/tools/ioemu-remote/cpu-all.h
===================================================================
--- hv.orig/tools/ioemu-remote/cpu-all.h
+++ hv/tools/ioemu-remote/cpu-all.h
@@ -975,6 +975,16 @@ static inline int cpu_physical_memory_ge
 static inline void cpu_physical_memory_set_dirty(ram_addr_t addr)
 {
     phys_ram_dirty[addr >> TARGET_PAGE_BITS] = 0xff;
+
+#ifndef CONFIG_STUBDOM
+    if (logdirty_bitmap != NULL) {
+        addr >>= TARGET_PAGE_BITS;
+        if (addr / 8 < logdirty_bitmap_size) {
+            logdirty_bitmap[addr / HOST_LONG_BITS]
+                |= 1UL << addr % HOST_LONG_BITS;
+        }
+    }
+#endif
 }
 
 void cpu_physical_memory_reset_dirty(ram_addr_t start, ram_addr_t end,
Index: hv/tools/ioemu-remote/i386-dm/exec-dm.c
===================================================================
--- hv.orig/tools/ioemu-remote/i386-dm/exec-dm.c
+++ hv/tools/ioemu-remote/i386-dm/exec-dm.c
@@ -806,6 +806,24 @@ void *cpu_physical_memory_map(target_phy
     if ((*plen) > l)
         *plen = l;
 #endif
+#ifndef CONFIG_STUBDOM
+    if (logdirty_bitmap != NULL) {
+        /* Record that we have dirtied this frame */
+        unsigned long pfn = addr >> TARGET_PAGE_BITS;
+        do {
+            if (pfn / 8 >= logdirty_bitmap_size) {
+                fprintf(logfile, "dirtying pfn %lx >= bitmap "
+                        "size %lx\n", pfn, logdirty_bitmap_size * 8);
+            } else {
+                logdirty_bitmap[pfn / HOST_LONG_BITS]
+                    |= 1UL << pfn % HOST_LONG_BITS;
+            }
+
+            pfn++;
+        } while ( (pfn << TARGET_PAGE_BITS) < addr + *plen );
+
+    }
+#endif
     return qemu_map_cache(addr, 1);
 }
 

--- End Message ---
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

