
Re: [Xen-devel] blkfront failure on migrate



On 22/11/12 13:46, Jan Beulich wrote:
>>>> On 22.11.12 at 13:57, Andrew Cooper <andrew.cooper3@xxxxxxxxxx> wrote:
>> In "Stage 1" as commented, we make a copy of the shadow map.  We then
>> reset the contents of the real shadow map, and selectively copy the
>> in-use entries back from the copy to the real map.
>>
>> Looking at the code, it appears possible to do this rearrangement in
>> place in the real shadow map, without requiring any memory allocation.
>>
>> Is this a sensible suggestion or have I overlooked something?  This
>> order-5 allocation is a disaster lying in wait for VMs with high memory
>> pressure.
> While merging the multi-page ring patches, I think I tried to make
> this an in-place copy operation, and it didn't work (I don't recall
> the details, though). This, and/or the need to deal with a shrinking
> ring size across migration (maybe that was what really didn't work),
> made me move stage 3 to kick_pending_request_queues(), and
> allocate entries that actually need copying one by one, sticking
> them on a list.
>
> Jan
>
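To make the in-place suggestion concrete, here is roughly what I have
in mind: a single pass that slides live entries down over free slots
and rebuilds the free list behind them.  This is only a sketch using
simplified stand-ins for the driver's types (the real struct
blk_shadow also carries grant frames and segment state, omitted here),
and it may of course trip over the same details, such as a shrinking
ring size, that sank your attempt:

#include <stddef.h>
#include <string.h>

/* Simplified stand-ins for the driver's types. */
struct blkif_request { unsigned long id; };
struct blk_shadow {
        struct blkif_request req;
        void *request;          /* back-pointer; NULL means slot is free */
};

/*
 * One pass over the shadow array: slide each live entry down to the
 * next free slot, then rebuild the free list over the tail.  No copy,
 * no allocation.
 */
static void shadow_compact_inplace(struct blk_shadow *shadow, size_t n)
{
        size_t i, j = 0;

        for (i = 0; i < n; i++) {
                if (!shadow[i].request)
                        continue;               /* already free: skip */
                if (i != j)
                        shadow[j] = shadow[i];  /* slide live entry down */
                shadow[j].req.id = j;           /* its new slot id */
                j++;
        }
        for (i = j; i < n; i++) {               /* free list on the tail */
                memset(&shadow[i], 0, sizeof(shadow[i]));
                shadow[i].req.id = i + 1;
        }
}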

Where are your multi-page ring patches?  Are you saying this code is
going to change very shortly?

If the copy and copy-back really can't be avoided, then making
"sizeof(info->shadow)/PAGE_SIZE" allocations of order 0 would be
substantially friendlier to environments with high memory pressure, at
the cost of slightly more complicated indexing in the loop, as sketched
below.
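As a rough illustration, reusing the simplified blk_shadow from the
sketch above, with userspace stand-ins for PAGE_SIZE and the kernel
allocators: keep each shadow entry whole within a page and index entry
i as chunk[i / PER_PAGE][i % PER_PAGE], at the cost of a little slack
at the end of each page:

#include <stdlib.h>

#define PAGE_SIZE 4096UL
#define PER_PAGE  (PAGE_SIZE / sizeof(struct blk_shadow))

/* Snapshot an n-entry shadow array into page-sized (order-0) chunks
 * instead of one large contiguous buffer.  Returns the chunk table,
 * or NULL if any allocation fails. */
static struct blk_shadow **shadow_snapshot(const struct blk_shadow *shadow,
                                           size_t n)
{
        size_t pages = (n + PER_PAGE - 1) / PER_PAGE;
        struct blk_shadow **chunk = calloc(pages, sizeof(*chunk));
        size_t i;

        if (!chunk)
                return NULL;
        for (i = 0; i < pages; i++) {
                chunk[i] = malloc(PAGE_SIZE);
                if (!chunk[i])
                        goto fail;
        }
        for (i = 0; i < n; i++)         /* the "more complicated indexing" */
                chunk[i / PER_PAGE][i % PER_PAGE] = shadow[i];
        return chunk;

fail:
        while (i--)
                free(chunk[i]);
        free(chunk);
        return NULL;
}

Each of those allocations is order 0, so any free page will satisfy
it, whereas a single order-5 copy needs 32 physically contiguous pages.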

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com

