
[Xen-devel] blkfront failure on migrate


A friend of mine was having weird occasional crashes on migration, and I
took a look at the problem.

The VM is a very stripped down ubuntu 12.04 environment (3.2.0 kernel)
with a total of 96MB of RAM, but this appears to be a generic driver
problem still present in upstream.

The symptoms were that on about 5% of migrations, one or more block
devices would fail to come back on resume.

The relevant snippets of dmesg are:

[6673983.756117] xenwatch: page allocation failure: order:5, mode:0x4430
[6673983.756123] Pid: 12, comm: xenwatch Not tainted 3.2.0-29-virtual
[6673983.756155]  [<c01fdaac>] __get_free_pages+0x1c/0x30
[6673983.756161]  [<c0232dd7>] kmalloc_order_trace+0x27/0xa0
[6673983.756165]  [<c04998b1>] blkif_recover+0x71/0x550
[6673983.756168]  [<c0499de5>] blkfront_resume+0x55/0x60
[6673983.756172]  [<c044502a>] xenbus_dev_resume+0x4a/0x100
[6673983.756176]  [<c048a2ad>] pm_op+0x17d/0x1a0
[6673983.756737] xenbus: resume vbd-51712 failed: -12
[6673983.756743] pm_op(): xenbus_dev_resume+0x0/0x100 returns -12
[6673983.756759] PM: Device vbd-51712 failed to restore: error -12
[6673983.867532] PM: restore of devices complete after 182.808 msecs

Looking at the code in blkif_recover():

In "Stage 1" as commented, we make a copy of the shadow map.  We then
reset the contents of the real shadow map, and selectively copy the
in-use entries back from the copy to the real map.

It appears possible to do this rearranging in place in the real shadow
map, without requiring any memory allocation.

Is this a sensible suggestion, or have I overlooked something?  This
order-5 allocation is a disaster lying in wait for VMs under high memory
pressure.
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com

Xen-devel mailing list
