[Xen-devel] [PATCH] xen-blkfront: use old rinfo after enomem during migration
Hi,

Feel free to suggest/comment on this. I am trying to do the following at
the dst side during migration now:

1. Don't clear the old rinfo in blkif_free(). Instead, just clean it.
2. Store the old rinfo and nr_rings in temp variables in negotiate_mq().
3. Let nr_rings get re-calculated based on backend data.
4. Try allocating new memory based on the new nr_rings.
5. a. If the memory allocation succeeds, free the old rinfo and proceed
      to use the new rinfo.
   b. If the memory allocation fails, use the old rinfo and adjust
      nr_rings to the lower of the new and old nr_rings.

-Thanks,
Manjunath

--
During migration, the dst side device resume can fail to allocate the
rinfo struct. Each rinfo is about 80K in size, and allocating 4 (typical)
such rings needs an order-7 allocation (a 512KB chunk), thereby
increasing the chance of memory allocation failure. Device resume
failure during migration leaves the processes accessing the device in a
hung state.

This patch aims to reuse the old rinfo in case of memory allocation
failure.

Signed-off-by: Manjunath Patil <manjunath.b.patil@xxxxxxxxxx>
---
 drivers/block/xen-blkfront.c | 46 +++++++++++++++++++++++++++++++++++------
 1 files changed, 39 insertions(+), 7 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 0ed4b20..041ba67 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -1353,9 +1353,17 @@ static void blkif_free(struct blkfront_info *info, int suspend)
 	for (i = 0; i < info->nr_rings; i++)
 		blkif_free_ring(&info->rinfo[i]);
 
-	kfree(info->rinfo);
-	info->rinfo = NULL;
-	info->nr_rings = 0;
+	if (unlikely(info->connected == BLKIF_STATE_SUSPENDED)) {
+		/* We are migrating. You may reuse it. Just clean. */
+		for (i = 0; i < info->nr_rings; i++) {
+			memset(&info->rinfo[i], 0,
+			       sizeof(struct blkfront_ring_info));
+		}
+	} else {
+		kfree(info->rinfo);
+		info->rinfo = NULL;
+		info->nr_rings = 0;
+	}
 }
 
 struct copy_from_grant {
@@ -1903,6 +1911,16 @@ static int negotiate_mq(struct blkfront_info *info)
 {
 	unsigned int backend_max_queues;
 	unsigned int i;
+	struct blkfront_ring_info *rinfo_old = NULL;
+	unsigned int nr_rings_old = 0;
+
+	/* Migrating. We did not free old rinfo. Reuse it if possible. */
+	if (unlikely(info->connected == BLKIF_STATE_SUSPENDED)) {
+		nr_rings_old = info->nr_rings;
+		rinfo_old = info->rinfo;
+		info->rinfo = NULL;
+		info->nr_rings = 0;
+	}
 
 	BUG_ON(info->nr_rings);
 
@@ -1918,10 +1936,24 @@ static int negotiate_mq(struct blkfront_info *info)
 			      sizeof(struct blkfront_ring_info),
 			      GFP_KERNEL);
 	if (!info->rinfo) {
-		xenbus_dev_fatal(info->xbdev, -ENOMEM, "allocating ring_info structure");
-		info->nr_rings = 0;
-		return -ENOMEM;
-	}
+		if (unlikely(nr_rings_old)) {
+			/* We might waste some memory if
+			 * info->nr_rings < nr_rings_old
+			 */
+			info->rinfo = rinfo_old;
+			if (info->nr_rings > nr_rings_old)
+				info->nr_rings = nr_rings_old;
+			xenbus_dev_fatal(info->xbdev, -ENOMEM,
+				"reusing old ring_info structure(new ring size=%d)",
+				info->nr_rings);
+		} else {
+			xenbus_dev_fatal(info->xbdev, -ENOMEM,
+				"allocating ring_info structure");
+			info->nr_rings = 0;
+			return -ENOMEM;
+		}
+	} else if (unlikely(nr_rings_old))
+		kfree(rinfo_old);
 
 	for (i = 0; i < info->nr_rings; i++) {
 		struct blkfront_ring_info *rinfo;
-- 
1.7.1

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel